Al Gore, Atari Democrats, and the “Invention” of the Internet
Posted on July 9, 2022
This is the fourth part of a narrative about how the Internet changed from a military network to a wide-scale global system of interconnected networks. Part I discussed the impact of the Strategic Defense Initiative (SDI), or “Star Wars,” on funding for the National Science Foundation’s adoption of the ARPANET. In Part II, I looked at how fears about nuclear war and Japan’s artificial intelligence (AI) push propelled early funding for the Internet. Finally, Part III introduced the “Atari Democrats” and their early role in crafting the legislation that created the Internet.
This post is a follow-up that makes some points about Al Gore and the Atari Democrats’ involvement in the success of the Internet and about how political leadership is needed for technological innovation and implementation. What is particularly needed is to draw lessons for future infrastructure, open-source software, and other enabling systems of innovation, social connectedness, and civic-minded entrepreneurship. These include smart energy grids, mobility networks, healthcare systems, e-commerce platforms, and crypto-blockchain exchanges.
The story of Al Gore “inventing” the Internet started after CNN’s Wolf Blitzer interviewed the Vice-President in 1999 and gained traction during the 2000 Presidential campaign against George W. Bush. The accusation circulated that Gore claimed he “invented the Internet,” and the phrase was used to tag the Vietnam vet and Vice President as a “liar” and someone who couldn’t be trusted. The issue says a lot about how election campaigns operate, the role of science and technology in the economy, and especially about the impact of governance and statecraft in economic and technological development. Here is what he actually said:
Of course, the most controversial part of this interview about Vice President Gore’s plans to announce his presidential candidacy was this statement: “During my service in the United States Congress, I took the initiative in creating the Internet.” That actual quote was turned into “inventing the Internet” and used against him in the 2000 presidential election. The meanings are quite different.
“Inventing” suggests a combination of technical imagination and physical manipulation usually reserved for engineers. We do, after all, want our buildings to remain upright and our tires to stay on our cars as we ride down the road. To “create” has a much more flexible meaning, indicating more of an art or a craft. There was no reason to say he invented the Internet except to frame it in a way that suggested he designed it technically, which does sound implausible.
Gore never claimed engineering prowess but could never adequately counter this critique. Gore would win the popular vote in 2000 but failed in his bid for the Presidency. The Supreme Court ruled he had lost Florida’s electoral votes in a close and controversial election in the “Sunshine” state. It’s hard to say how much this particular meme contributed to the loss, but the “inventing” narrative stuck and has persisted in modern politics in subtle ways.
The controversy says more about how little we understand innovation and technological development, and how impoverished our conversations have been about developing data networks and information technologies. The history of information technologies, particularly communications networking, has been one of interplay among technical innovation, market dynamics, and intellectual leadership that guides policy actions, including military and research funding.
The data networking infrastructure, undoubtedly the world’s largest machine, required a set of political skills, both collective and individualized, to be implemented. In addition to the engineering skills that created the famed data packets and their TCP/IP (Transmission Control Protocol and Internet Protocol) protocols, political skills were needed for the funding, the regulatory changes, and the global power needed to guide the international frameworks that shape what is now often called Information and Communications Technologies (ICT). These frameworks included key developments at the International Telecommunications Union (ITU), the World Intellectual Property Organization (WIPO), and the World Trade Organization (WTO).
Al Gore got support from those generally considered the “real inventors” of the Internet. While Republicans continued to ridicule and “swiftboat” Gore for supposedly claiming he “invented the Internet,” many in the scientific community, including the engineers who designed the Internet, verified Gore’s role. Robert Kahn and Vinton Cerf acknowledged Gore’s initiatives as both a Congressman and Senator.
As far back as the 1970s Congressman Gore promoted the idea of high speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship. Though easily forgotten now, at the time this was an unproven and controversial concept. Our work on the Internet started in 1973 and was based on even earlier work that took place in the mid-late 1960s.
But the Internet, as we know it today, was not deployed until 1983. When the Internet was still in the early stages of its deployment, Congressman Gore provided intellectual leadership by helping create the vision of the potential benefits of high speed computing and communication. As an example, he sponsored hearings on how advanced technologies might be put to use in areas like coordinating the response of government agencies to natural disasters and other crises. – Vint Cerf
Senator Gore was heavily involved in sponsoring legislation to research and connect supercomputers. Gore was an important member of the “Atari Democrats.” Along with Senator Gary Hart, Robert Reich, and other Democrats, they pushed forward “high tech” ideas and legislation for funding and research. The term’s meaning varied, but “Atari Democrat” generally referred to a pro-technology and pro-free trade social liberal Democrat.
Atari was a successful arcade game, home game console, and software company in the 1970s. Started by Nolan Bushnell, it gave a start to Steve Jobs and Steve Wozniak, among others. The term began to stick to some Democrats around 1982 and generally linked them to the Democrats’ Greens and an emerging “neo-liberal” wing. It also suggested they were “young moderates who saw investment and ‘high technology’ as an answer and complement to the New Deal.” [1]
The New York Times discussed the Atari Democrats and tensions that emerged during the 1980s between the traditional Democratic liberals and the Atari Democrats. The latter attempted to find a middle ground on the economy and international affairs with the Republicans, while the former courted union workers, many of whom had shifted to Reagan and the Republicans.[2]
One of the emerging issues of the time was the trade deficit with Japan, whose cars and electronics were making significant inroads into the US economy. Gore and other Democrats were particularly concerned about losing the race for artificial intelligence. The Japanese “Fifth Generation” AI project was launched in 1982 by Japan’s Ministry of International Trade and Industry (MITI), which had a reputation at the time for guiding Japan’s very successful export strategy.
Known as the national Fifth Generation Computer Systems (FGCS) project, it was carried out by ICOT, later the Advanced IT Research Group (AITRG), part of the Japan Information Processing Development Corporation (JIPDEC). This research institute brought in the Japanese computer manufacturers (JCMs) and a few other electronics industry firms. A primary US concern was that the combination of government involvement and the keiretsu corporate/industrial structure of the Japanese political economy would give Japan a significant advantage in advanced computing innovations.
Congress was concerned about the competition over high-speed processors and new software systems that were recognized at the time as crucial components in developing many new military armaments, especially the space-based “Star Wars” missile defense system that President Reagan had proposed as the Strategic Defense Initiative (SDI). Any system of satellites and weaponry forming a defensive shield against nuclear attack would need advanced microprocessors and supercomputing capabilities. It would require artificial intelligence (AI).
The likely vehicle for this research was the National Science Foundation (NSF), the brainchild of Vannevar Bush, who managed science and technology for the US during World War II, including the Manhattan Project that created the atomic bomb. The NSF was formed during the 1950s with established research areas in biology, chemistry, mathematics, and physics. In 1962, it set up its first computing science program within its Mathematical Sciences Division. At first, it encouraged the use of computers in each of these fields; later, it provided a general computing infrastructure, including setting up university computer centers in the mid-1960s that would be available to all researchers. In 1968, the Office of Computing Activities began subsidizing computer networking. It funded some 30 regional centers to help universities efficiently use scarce computer resources and timesharing capabilities.
In 1984, a year after the military institutionalized TCP/IP, the NSF created the Office of Advanced Scientific Computing, whose mandate was to create several supercomputing centers around the US. [2] Over the next year, five centers were funded by the NSF:
General Atomics — San Diego Supercomputer Center, SDSC
University of Illinois at Urbana-Champaign — National Center for Supercomputing Applications, NCSA
Carnegie Mellon University — Pittsburgh Supercomputer Center, PSC
Cornell University — Cornell Theory Center, CTC
Princeton University — John von Neumann National Supercomputer Center, JvNC
However, it soon became apparent that these centers alone would not adequately serve the scientific community. Gore began to support high-performance computing and networking projects, particularly through the National Science Foundation Authorization Act, to which he added two amendments: one calling for more research on the “Greenhouse Effect” driving climate change, and the other calling for an investigation of future options for communications networks connecting research computers. This Computer Network Study Act would specifically examine requirements for data transmission over fiber optics, data security, and software capability. The NSF’s decision in 1985 to choose TCP/IP as the protocol for the planned National Science Foundation Network (NSFNET) would pave the way for the Internet.
In the Supercomputer Network Study Act of 1986, Gore proposed directing the Office of Science and Technology Policy to study critical issues and options regarding communications networks for supercomputers at universities and Federal research facilities in the United States, and to report the results to Congress within a year. The proposal was attached to Senate Bill S. 2184, the National Science Foundation Authorization Act for Fiscal Year 1987, but it was never passed.
Still, a report was produced that pointed to the potential role of the NSF in networking supercomputers, and in 1987, the NSF agreed to manage the NSFNET backbone with Merit and IBM. In October 1988, Gore sponsored additional legislation for “data superhighways” in the 100th Congress: S.2918, the National High-Performance Computer Technology Act of 1988. It was followed by H.R.3131, the National High-Performance Computer Technology Act of 1989, sponsored by Rep. Doug Walgren to amend the National Science and Technology Policy, Organization, and Priorities Act of 1976. It directed the President, through the Federal Coordinating Council for Science, Engineering, and Technology (Council), to create a National High-Performance Computer Technology Plan and to fund a 3-gigabit backbone network for the NSFNET.
It paved the way for S.272, High-Performance Computing and the National Research and Education Network (1991-1992), sponsored by Al Gore, which passed and was signed by President George H.W. Bush on December 9, 1991. Often called the Gore Bill, it led to the development of the National Information Infrastructure (NII) and the funding of the National Research and Education Network (NREN).
The Gore Bill began a discussion of the “Information Superhighway” that enticed cable, broadcast, telecommunications, satellite, and wireless companies to start developing their digital strategies. It also provided the groundwork for Gore’s international campaign for a Global Information Infrastructure (GII) that would lead to the relatively seamless and cheap data communications of the World Wide Web.
Its $600 million appropriation also funded the National Center for Supercomputing Applications (NCSA) at the University of Illinois, where graduate student Marc Andreessen and others created Mosaic, the early Web browser that became Netscape. The Netscape IPO started the Internet’s commercial boom of the 1990s.
As chairman of the Senate Subcommittee on Science, Technology, and Space, Gore held hearings on these issues. During a 1989 hearing colloquy with Dr. Craig Fields of ARPA and Dr. William Wulf of NSF, Gore solicited information about what constituted a high-speed network and where technology was headed. He asked how much sooner NSFnet speed could be enhanced 30-fold if more Federal funding was provided. During this hearing, Gore made fun of himself during an exchange about high-speed networking speeds: “That’s all right. I think of my [1988] presidential campaign as a gigaflop.” [The witness had explained that “gigaflop” referred to one billion floating point operations per second.]
Al Gore is interesting because he has been a successful legislator and a mover of public opinion on climate change and the global Internet. He can take credit for much of the climate change discussion. He has worked hard to bring the topic to the public’s attention, mobilize action on markets for carbon credits, and accelerate developments in alternative energy. His actions and tactics are worth studying as we need more leaders, perhaps “Atari Democrats,” who can create positive futures rather than obsessing about tearing down what we have.
Summary
The transition of the Internet from a military network to a global infrastructure was shaped by key political and technological influences. This article examines the role of Al Gore and the “Atari Democrats” in fostering Internet development through policy, funding, and legislative initiatives.
The notion that Al Gore claimed to have “invented the Internet” originated from a mischaracterization of his 1999 CNN interview, where he stated, “During my service in the United States Congress, I took the initiative in creating the Internet.” This phrase was misinterpreted and politicized during the 2000 presidential election. Gore never asserted engineering credit but played a significant role in promoting the early legislative framework supporting the Internet’s expansion.
Al Gore’s contributions were validated by key Internet pioneers, such as Robert Kahn and Vinton Cerf, who acknowledged his leadership in recognizing the importance of high-speed networking and supercomputing for economic and educational development. Gore’s legislative efforts date back to the 1970s, when he advocated for advanced computing and communication networks, influencing the development of NSFNET, which became the foundation of the modern Internet.
The “Atari Democrats,” including Gore, were a faction of the Democratic Party in the 1980s that focused on technological innovation, economic growth, and global competitiveness. They sought to leverage advancements in computing and networking to maintain U.S. leadership in technology. Concerns about Japan’s artificial intelligence (AI) advancements in the 1980s further motivated U.S. policymakers to invest in high-performance computing.
Key legislative milestones include:
The 1986 Supercomputer Network Study Act, which promoted research on networking supercomputers.
The National High-Performance Computer Technology Act of 1988, which aimed to develop a national research and education network.
The High-Performance Computing Act of 1991 (commonly called the “Gore Bill”), which laid the foundation for the modern Internet by expanding the NSFNET backbone and influencing commercial Internet growth.
Gore’s leadership extended to advocating for a “Global Information Infrastructure,” emphasizing digital connectivity and international collaboration worldwide. His work not only contributed to the technological expansion of the Internet but also played a role in shaping public awareness of climate change and the importance of sustainability in innovation.
The history of the Internet highlights the interplay between technical expertise, political vision, and regulatory frameworks. Gore’s influence demonstrates the necessity of political leadership in fostering technological advancements that benefit society.
Citation APA (7th Edition)
Pennings, A.J. (2022, Jul 09) Al Gore, Atari Democrats, and the “Invention” of the Internet. apennings.com https://apennings.com/how-it-came-to-rule-the-world/al-gore-atari-democrats-and-the-invention-of-the-internet/
Notes
[1] E. J. Dionne, Special to The New York Times. WASHINGTON TALK; Greening of Democrats: An 80’s Mix of Idealism And Shrewd Politics. The New York Times, 14 June 1989, www.nytimes.com/1989/06/14/us/washington-talk-greening-democrats-80-s-mix-idealism-shrewd-politics.html. Accessed April 24th, 2019.
[2] Wayne, Leslie. Designing a New Economics for the “Atari Democrats.” The New York Times, 26 Sept. 1982, www.nytimes.com/1982/09/26/business/designing-a-new-economics-for-the-atari-democrats.html.
Linked References (APA 7th Edition)
Pennings, A. J. (2022). How “Star Wars” and the Japanese Artificial Intelligence (AI) Threat Led to the Internet. apennings.com. Retrieved from https://apennings.com/how-it-came-to-rule-the-world/how-star-wars-and-the-japanese-artificial-intelligence-ai-threat-led-to-the-internet-japan/
Pennings, A. J. (2022). NSFNET and the Atari Democrats. apennings.com. Retrieved from https://apennings.com/how-it-came-to-rule-the-world/how-star-wars-and-the-japanese-artificial-intelligence-ai-threat-led-to-the-internet-part-iii-nsfnet-and-the-atari-democrats/
Blitzer, W. (1999). Inventing the Internet [Video]. YouTube. Retrieved from https://youtu.be/HpMgne-X-Ns?t=689
Cerf, V. (2015). The Role of Al Gore in the Development of the Internet. BBC News. Retrieved from http://www.bbc.com/news/science-environment-31450389
New York Times. (1989, June 14). Greening Democrats: 80’s Mix Idealism & Shrewd Politics. The New York Times. Retrieved from https://www.nytimes.com/1989/06/14/us/washington-talk-greening-democrats-80-s-mix-idealism-shrewd-politics.html
New York Times. (1982, September 26). Designing a New Economics for the Atari Democrats. The New York Times. Retrieved from https://www.nytimes.com/1982/09/26/business/designing-a-new-economics-for-the-atari-democrats.html
Stanford University. (1982). The Japanese Fifth Generation AI Project. Stanford Digital Repository. Retrieved from https://stacks.stanford.edu/file/druid:wt917by4830/wt917by4830.pdf
ScienceDirect. (1993). Fifth Generation Computer Systems (FGCS). ScienceDirect Journal. Retrieved from https://www.sciencedirect.com/science/article/pii/0167739X93900038
National Science Foundation (NSF). (1985). NSFNET’s Decision to Choose TCP/IP. First Monday. Retrieved from https://firstmonday.org/ojs/index.php/fm/article/view/799/708
U.S. Congress. (1991). High-Performance Computing and the National Research and Education Network Act. Congress.gov. Retrieved from https://www.congress.gov/bill/102nd-congress/senate-bill/272/all-info#latestSummary-content
Wikipedia. (n.d.). National Center for Supercomputing Applications (NCSA). Retrieved from https://en.wikipedia.org/wiki/National_Center_for_Supercomputing_Applications
Pennings, A. J. (2022). Contending Information Superhighways. apennings.com. Retrieved from https://apennings.com/global-communications/contending-information-superhighways/
Pennings, A. J. (2022). Engineering TCP/IP Politics and the Enabling Framework of the Internet. apennings.com. Retrieved from https://apennings.com/telecom-policy/engineering-tcp-ip-politics-and-the-enabling-framework-of-the-internet/
Pennings, A. J. (2022). The Netscape IPO and the Internet’s commercial boom. apennings.com. Retrieved from https://apennings.com/democratic-political-economies/from-new-deal-to-green-new-deal-part-3-its-the-infrastructure-stupid/
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the State University of New York, South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He began his teaching career at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii in the 1990s.
Tags: Al Gore > Artificial Intelligence (AI) > Atari Democrats > Fifth Generation > Netscape IPO > NSF > NSFNET
U.S. Internet Policy, Part 6: Net Neutrality, Broadband Infrastructure and the Digital Divide
Posted on June 4, 2022
In the era of the COVID-19 pandemic, the Internet proved to be more critical than ever. Parents telecommuted to work, kids telelearned at home, and streaming media services entertained both. Sick people began to use telemedicine instead of visiting the doctor or hospital. Many people applied for jobs online. Families also embraced the relative safety and convenience of e-commerce delivering commodities to the home.
Yet broadband services in the US were often not available, affordable, or up to the task of giving everyone in a home the bandwidth they needed. Previously, I examined the decline of ISPs and the dominance of traditional telcos over broadband. I also wrote about the Trump administration’s support for ending net neutrality at the FCC. This post looks at plans to increase funding for broadband infrastructure in the US and lists some of the challenges facing the Biden Administration’s Internet policy.
The digital divide proved to be more consequential than ever as the K-shaped recovery took shape, exacerbating income divisions. The divide has been particularly stressful for American families as schools and other activities for kids closed down during the COVID-19 pandemic. Some 20 million Americans had no Internet service or very slow service, while another 100 million could not afford broadband.
Inequalities were deepened by the types of jobs affected, as contact jobs in service industries were particularly hard hit while more professional jobs that could be conducted online did well. Also, financial assets continued to appreciate due to the Federal Reserve’s low interest rates, while quantitative easing kept mortgages cheap and raised home prices.
During his first 100 days, Biden proposed a $2.25 trillion infrastructure package focused on updating transportation infrastructure as well as funding to fight climate change and other provisions to prop up American families. It also incorporated the Accessible, Affordable Internet for All Act, introduced in early 2021 by US Senator Amy Klobuchar (D-MN), Co-chair of the Senate Broadband Caucus, and House Majority Whip James E. Clyburn (D-SC). This plan to modernize underserved communities involved:
– $80 billion to deploy high-speed broadband infrastructure;
– $5 billion for a related secured loan program; and a
– New office within the National Telecommunications and Information Administration (NTIA) to monitor and ensure transparency in these projects.
In early November 2021, the House passed the Infrastructure Investment and Jobs Act 223-202, allocating over US$1 trillion in spending for much-needed infrastructure, with $579 billion in new spending, including US$65 billion for broadband.
The Senate had organized and passed the bill in the summer, but it was held up in the House of Representatives. The House progressives wanted the bill tied to a third phase of the Build Back Better program with increased social spending for healthcare, new housing, and climate change. Eventually, Speaker of the House Nancy Pelosi organized enough votes to pass the measure independently, and President Biden signed the bill on November 15, 2021.
The infrastructure bill allocates money to three major areas: direct payments to consumers to pay for broadband services; support for municipal networks, including those in tribal areas; and subsidies for major companies to build out more broadband infrastructure such as fiber optic lines and wireless base stations. The money is destined for states that can demonstrate groups in need.
It allocates $14 billion to help low-income Americans pay for service at about $30 a month. President Biden announced progress in the Affordable Connectivity Program, an extension and revision of the Emergency Broadband Benefit, in the spring of 2022.
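A rough back-of-envelope check shows what a $14 billion fund paying about $30 per household per month can cover. The Python sketch below uses only the two figures cited above; it is an illustration of the scale involved, not an official program projection.

```python
# Rough capacity of a $14B fund paying ~$30/month per household.
# Illustrative arithmetic only, based on the figures cited above.

FUND = 14_000_000_000   # total allocation, in USD
MONTHLY_BENEFIT = 30    # subsidy per household per month, in USD

# Total household-months the fund can cover
household_months = FUND // MONTHLY_BENEFIT  # 466,666,666

# Equivalent number of households supported for five years
households_five_years = household_months // (5 * 12)  # 7,777,777
```

In other words, the allocation alone could subsidize roughly 7 to 8 million households for five years, a useful yardstick against the 100 million Americans who reportedly could not afford broadband.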
The digital divide reemerged as a pressing policy issue due to the priorities of the COVID-19 pandemic and changes in education and work. More and better access, especially in rural areas, became a high priority. Major broadband providers have organized to take advantage of infrastructure spending and to comply with the administration’s push to provide $30 monthly broadband subsidies for eligible households.
Several issues linger in the public’s consciousness depending on media attention. Some have come to the forefront of public scrutiny. These include:
– Even more, better, and cheaper broadband access through mobile, satellite, and wireline facilities, especially in rural areas. Broadband strategies must also consider the implications of SpaceX’s Starlink satellite network.
Also, we should be looking at a wider range of Internet issues such as:
– Antitrust concerns about cable and telco ISPs, including net neutrality. [1]
– Privacy and the collection of behavioral data by platforms to predict, guide, and manipulate online user actions.
– Section 230 reform for Internet platforms and content producers, including assessing social media companies’ legal responsibilities for user-generated content.
– Security issues, including ransomware and other threats to infrastructure.
– Deep fakes, memes, and other issues of misrepresentation, including fake news.
– eGovernment and digital money, particularly the role of blockchain and cryptocurrencies as the Internet moves to Web 3.0.
Citation APA (7th Edition)
Pennings, A.J. (2022, Jun 4). U.S. Internet Policy, Part 6: Broadband Infrastructure and the Digital Divide. https://apennings.com/telecom-policy/u-s-internet-policy-part-6-broadband-infrastructure-and-the-digital-divide/
Notes
[1] Newman, R. (2016). The Debate Nobody Knows: Network Neutrality’s Neoliberal Roots and a Conundrum for Media Reform. International Journal of Communication, 10, 5969–5988.
Tags: Affordable Internet for All Act > broadband policy > Infrastructure Investment and Jobs Act > Net Neutrality > Starlink > The Accessible > Title II Net Neutrality Rules
Analyzing YouTube Channels
Posted on May 2, 2022
The video capabilities of the Internet have made possible a new era of media analysis. The traditions of film and television can be brought to the new online media, including YouTube. The challenge will be to both apply traditional media analysis as well as suggest new modes of televisual criticism for this relatively new medium.[1]
This post reviews some of the techniques used in cinematic and television production and suggests a strategy to analyze YouTube channels based on work in my Visual Rhetoric and IT class. It uses a semiotic (study of meaning) framework to examine the channel imagery (mise-en-scène) and editing (montage) to determine what might make such videos successful. It applies denotative and connotative strategies to describe the meaning-making practices with vocabulary and explain what YouTube creators do to inform/entertain the audience.[2]
Story-telling is an important consideration. Who is telling what story, and how? What narrative devices are being used? How does it engage the audience? The channel Lessons from the Screenplay (LFTS), for example, discusses the narrative of how the new James Bond was introduced in 2006. Note who is speaking, whether you actually see them, and how the story is being told.
What are the main signifying practices that make the YouTube channel a success? A semiotic approach looks closely at the details visible in the video (denotation). It then connects the content with connotative meanings such as social myths and narratives. These details would include various “signs” such as indexical metrics, mainly subscribers, views, likes, and the money YouTubers make. It also looks at the typographies, logos, and other meaning-making practices, such as camera pictures, and how those images are spliced together.
The shot continues to be the major unit of analysis, reflecting the relationship between the camera and the scene. The primary visual grammar holds for the most part: establishing or long shot (LS), medium shot (MS), closeup (CU). The wider shot creates a meaningful context for the tighter images that provide detail and usually heightened emotion. The smartphone now offers an extraordinarily high-quality camera for single shots or high-definition video that can get YouTube channelers started.
Ask more about the mise-en-scène. What do you see in the image? The lighting? The props? The backgrounds? The special effects (FX)? Is a drone used for bird’s-eye shots? How is the camera used to create the shot: zooms, pans, tilts? Who is doing the shooting? And why? What is the motivation or reason for each technique?
A bit more challenging is the editing. Applications like iMovie, or simply hiring an editor on Fiverr.com, have been helpful for YouTube channelers. The analytical challenge is to keep up with the vocabulary and understand what the montage is doing in terms of “building” meaning in the narrative.
A narrative is a series of connected events edited for chronological significance. Montage editing can include parallel editing, flashbacks, and flash-forwards in time as well. What about the montage is noteworthy? What is the pace of the editing? What transitions are being used, and once again, what is the motivation?
Ask what drives the narrative. Who is the storyteller? Who is the narrator? Are they like the anchor in a news program? What is the mode of address: voice-over, talking to the camera, or a combination? Do we see him or her? Or is it all told from a voice of anonymous authority, the voice of “God” booming over the montage of images?
Also relevant are the microeconomics of the channel and the overall political economy of YouTube. Understanding how YouTube channels make money, whether through AdSense, brand arrangements, or, more recently, patronage, helps explain a channel’s messaging. In the latter model, individuals and organizations become “patreons” by pledging to support a channel financially, including smaller payments of gratitude at websites such as Buy Me a Coffee.
Recommendation engines are key to understanding viewer captivity on YouTube. Using a sophisticated computer algorithm and data-collection system, the platform finds content related to your search or to the content you are watching. These programs reduce what could become complex search and decision processes to just a few recommendations, listing a series of video thumbnails based on metadata from the currently viewed video and on information gathered about your past viewings and searches.
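The core idea, ranking candidate videos by how closely their metadata matches the one being watched, can be sketched in a few lines of Python. This is a toy illustration with made-up titles and tags, not YouTube’s actual algorithm, which also weighs watch history, collaborative-filtering signals, and engagement metrics:

```python
# Toy content-based recommender: score videos by tag overlap with the
# currently viewed one. Titles and tags are hypothetical examples.

def jaccard(a, b):
    """Overlap between two tag sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b)

def recommend(current, catalog, top_n=3):
    """Rank the other videos by tag similarity, most similar first."""
    scores = [(title, jaccard(catalog[current], tags))
              for title, tags in catalog.items() if title != current]
    return [title for title, _ in
            sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]]

catalog = {
    "Semiotics 101":      {"semiotics", "media", "theory"},
    "Reading Film Shots": {"film", "media", "editing"},
    "Montage Explained":  {"film", "editing", "montage"},
    "Cooking Pasta":      {"food", "cooking"},
}

print(recommend("Reading Film Shots", catalog))
```

A real engine would replace the tag sets with learned embeddings and fold in per-viewer history, but the reduction of a large catalog to a short ranked list is the same move.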
In February 2005, Chad Hurley, Steve Chen, and Jawed Karim established YouTube in San Bruno, California. YouTube has since become the second-most visited website worldwide, with 14 billion monthly visits. Only Google, from the same parent company, Alphabet, has more. By 2022, almost 5 billion videos were being watched on YouTube every single day, and 694,000 hours of video were being streamed each minute. More than half of those YouTube views occur on mobile devices.
This particular form of YouTube analysis asks: What signifying practices make a YouTube channel a success?[3] It applies semiotic methodology, using denotation to describe and connotation to explain what creators do in their videos to educate and/or entertain the audience. This process trains students to understand what techniques are used and their impact. Hopefully, it also contributes to the growing area of YouTube Studies in academia.
Citation APA (7th Edition)
Pennings, A.J. (2022, May 2) Analyzing YouTube Channels. apennings.com https://apennings.com/media-strategies/analyzing-youtube-channels/
Notes
[1] Pennings, A. (2020, Feb 2). YouTube Meaning-Creating (and Money-Creating) Practices. Retrieved from https://apennings.com/media-strategies/youtube-meaning-creating-practices/ Accessed May 1, 2022.
[2] Our class uses a series of web pages by David Chandler to learn basic televisual grammar and vocabulary, semiotics and signs, as well as denotation, connotation and myth. His book Semiotics: The Basics is also very useful.
[3] According to Karol Krol, there are several features or qualities that make a YouTube channel successful: continuous posting, using an angle, content quality, and content that is international.
© ALL RIGHTS RESERVED

Tags: Meaning-making practices > mise-en-scène > montage > Narrative theory > Semiotics > YouTube Studies
Wireless Charging Infrastructure for EVs: Snack and Sell?
Posted on | April 22, 2022 | No Comments
I recently chaired the defense of a PhD dissertation on patents for electric vehicle (EV) charging. Soonwoo (Daniel) Chang, with his advisor, Dr. Clovia Hamilton, did a great job mapping trends associated with the technology patents central to wireless electric charging, including resonant induction power transmission (RIPT).[1] The research got me interested in exploring more about the possibilities of wireless charging as part of my Automatrix series, especially since the US government is investing billions of dollars in developing electric charging infrastructure throughout the country.
This post discusses some of the issues related to the wireless charging of EVs. Cars, as well as buses, vans, and long-haul trucks, are increasingly using electric batteries for locomotion instead of petroleum-fueled internal combustion engines (ICE). It suggests building wireless charging infrastructure that would allow EVs to partially replenish their batteries (“snack”) and, in some cases, sell electricity back to the grid. Currently, not many EV original equipment manufacturers (OEMs) are designing their automobiles for wireless charging. Why not?
Most likely, OEMs are quite aware of what they need to succeed in the short term. Tesla, for example, has developed an EV charging infrastructure viable for personal autos using cables that plug into a vehicle. But we should also be wary of some of the limitations of plug-in chargers and prepare to build a more dynamic infrastructure that would allow charging in multiple locations and without getting out of the vehicle. These are some of my speculations, and they do not necessarily reflect the results of the soon-to-be Dr. Chang, whose research sparked a lot of my thinking.
You may have used wireless charging for your smartphone. It’s helpful to get a quick charge without dealing with plugs and wires. Inductive charging has some limitations though. While you can play music or a podcast, or talk over a speaker mic, it’s typically immobile. Your phone also needs to be very close to the charging device, its coils aligned properly, and it gets hot. It’s generally not that energy-efficient either, often losing more than 50% of its electricity while charging. With several devices connected in nearly every home, the losses can add up, putting strain on a community’s electrical grid.
In EV wireless charging, a receiver with a coiled wire is placed underneath the vehicle and connected to the battery. This “near field” charging requires proximity: the receiver must come close to a similar charging coil, usually a charging plate on the ground, for the energy to transfer.
However, advances in magnetic resonance technologies have increased the distances and energy efficiencies involved in the wireless charging of EVs. Power transfer is increased by electromagnetically tuning the devices to each other through a shared magnetic flux, allowing convenient replenishing of a vehicle’s battery. Electric devices generally contain conductive wire wound into a coil, which initially stabilizes the instrument by resisting or storing the flow of current. Coils are classified by the frequency of the current flowing through them: direct current (DC), audio frequency (AF), or radio frequency (RF). These frequencies can be managed and directed to transfer power over a short distance to another electrical coil. The fields involved are not radio waves, nor do they emit ionizing radiation with sufficient energy to detach electrons, so they appear to be relatively safe.
WiTricity, for example, uses frequencies around 85 kHz to tune the two coils and extend the charging range from a few millimeters to tens of centimeters. The Massachusetts company has spearheaded the development of this technology and opened up many possibilities, particularly its use in the public sphere. This may also include dynamic charging, which allows a vehicle to charge while moving. See the video below about WiTricity:
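The tuning involved rests on a basic relationship: a coil of inductance L paired with a capacitance C resonates at f = 1/(2π√(LC)), and power transfers efficiently when both coils share that frequency. A quick sketch, with component values that are purely illustrative and not WiTricity’s actual design:

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency (Hz) of an LC pair: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values only: a 35 microhenry coil with a 100 nanofarad
# capacitor resonates close to the ~85 kHz band used for EV charging.
L = 35e-6   # inductance in henries
C = 100e-9  # capacitance in farads

f = resonant_frequency(L, C)
print(f"resonant frequency: {f / 1e3:.1f} kHz")  # roughly 85 kHz
```

Tuning both the pad and the vehicle receiver to the same frequency is what lets the system bridge tens of centimeters rather than a few millimeters.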
It’s no secret that charging issues are a limiting factor for EV diffusion. Drivers of ICE vehicles have resigned themselves to getting out of their cars, organizing a payment, inserting the nozzle into the gas tank, and waiting patiently for a few minutes while the liquid hydrocarbons pour into their tanks. Unfortunately, EV owners are still dealing with a lack of available charging locations as well as challenges with nozzle standards, payment systems, long lines, and lengthy charging periods. Currently, most EV owners in the US charge at home with L2 chargers that can readily be bought online or at a Home Depot. EV owners in urban areas need to find other locations, such as parking garages, and may face fines for staying too long.
Standards both permit and restrict technological solutions. The Society of Automotive Engineers (SAE) published the J2954 standard in 2020 for wireless charging, with three power levels: WPT1 (3.7 kW), WPT2 (7 kW), and WPT3 (11 kW), for transfer at distances up to 10 inches. At these rates, a full charge may take three and a half hours or more.[2]
Yes, these are not impressive numbers given the competition from high-end EV superchargers. Even the EVgo chargers at my local Walmart in Austin support several standards (J-1772, CHAdeMO, CCS/SAE) that charge at 14.4 to 50 kW. Note that EVs have onboard chargers with varying acceptance rates (in kW) that convert the AC electricity found in homes to the DC a car battery needs to store. A Chevy Volt with a 3.3 kW acceptance rate will not charge as fast as a Tesla Model S with 10 kW, no matter where it is plugged in. Other factors include charging cables, which can often be awkward to handle. The experimental charging “snake” that reaches out and automatically finds and connects to the auto’s charging port doesn’t seem to be viable yet. So, charging remains a limiting factor for EV adoption.
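The arithmetic behind these comparisons is simple: the effective charging rate is capped by the slower of the station’s output and the car’s onboard acceptance rate, then discounted by transfer losses. A minimal sketch, with assumed figures:

```python
def hours_to_charge(energy_needed_kwh, station_kw, acceptance_kw, efficiency=0.9):
    """Hours to deliver the needed energy: the rate is capped by the
    slower of station output and the car's acceptance rate, then
    discounted by an assumed transfer efficiency."""
    effective_kw = min(station_kw, acceptance_kw) * efficiency
    return energy_needed_kwh / effective_kw

# Assumed scenario: topping up 40 kWh on a 7 kW L2-class charger.
# A low acceptance rate, not the station, becomes the bottleneck.
print(hours_to_charge(40, station_kw=7, acceptance_kw=3.3))  # Volt-like car
print(hours_to_charge(40, station_kw=7, acceptance_kw=10))   # Model S-like car
```

With these numbers, the low-acceptance car takes more than twice as long on the same station, which is the point about acceptance rates above.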
The strategy behind wireless charging will probably focus more on what WiTricity has coined “power snacking” than full meals of electricity. Snacking is actually better for your battery, as longevity improves if you don’t let the battery capacity run down to below 20% or recharge it to 100 percent. Keeping the electric ions in equilibrium across the battery reduces strain and increases the number of charge cycles before degradation occurs.
The snacking can be done by a waiting taxi, a bus stopping for a queue of passengers, a quick stop at a convenience store, and perhaps EVs waiting at a red light. Shopping centers are likely to “capture” customers with charging stalls, especially if they can reduce costs by having micro-grids with solar panels on roofs.
Many countries have tested infrastructure for charging EVs in motion, although this will require substantially more investment. “Dynamic” wireless charging appears to be feasible but comes with high costs as it needs to be embedded in existing infrastructure such as bridges and highways.
The major issues for wireless charging are the infrastructure changes needed and OEM buy-in. Wireless charging will require more planning and construction than current charging stations. It will also require monitoring and payment applications and systems. Most importantly, it will require electricity, and without significant capacity coming from renewable sources, the purpose will be largely defeated. Vehicle manufacturers will need to include the wireless charging pads and ensure their safety. On the positive side, they can use smaller batteries for many models, as constant recharging will reduce range anxiety.
Wireless technologies have also been successfully tested for “vehicle-to-grid” (V2G) transmission. This innovation means a car or truck can sell electricity back to the grid. It might be particularly useful for charging locations off the grid or in places challenging to connect, for instance, charging pads at a red light. So we might see a “snack or sell” option in future cars. Prices are likely to vary by time, location, and charging speed, but this setup will present some arbitrage opportunities.
The arbitrage economics are based on ‘valley filling’ when EVs charge at low-demand hours, often overnight, and ‘peak shaving’ when an EV transmits stored energy back into the grid during high-demand hours. So, for example, a vehicle charging at home overnight with cheaper grid electricity or excess from solar panels can sell it on the way to work or at the office parking lot. You might not get rich, but considering the money currently spent on diesel or petrol, it could still help your wallet.
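The valley-filling and peak-shaving arithmetic can be sketched directly. The prices, the 20 kWh traded, and the 85% round-trip efficiency below are all assumptions for illustration, not measured figures:

```python
def daily_arbitrage_profit(kwh_traded, buy_price, sell_price, round_trip_eff=0.85):
    """Profit from buying energy at off-peak prices and selling the
    usable portion back at peak prices (V2G). Battery and conversion
    losses mean only round_trip_eff of the stored energy is resold."""
    cost = kwh_traded * buy_price
    revenue = kwh_traded * round_trip_eff * sell_price
    return revenue - cost

# Assumed prices in $/kWh: charge 20 kWh overnight at 12 cents,
# sell during the evening peak at 30 cents.
print(f"${daily_arbitrage_profit(20, 0.12, 0.30):.2f} per day")
```

Note that round-trip losses eat into the spread: if peak and off-peak prices are equal, the trade is a guaranteed loss, so the peak premium has to exceed the efficiency penalty before snacking-and-selling pays.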
Effective infrastructure like highways or the Internet provides indirect network effects, allowing different entities to use the system and expanding that network’s possibilities. The Global Positioning System (GPS), for example, uses some 27 satellites that transmit pulsed time codes. These signals allowed the invention of multiple devices that compute a latitude, longitude, and altitude position and provide different location services. In this case, an effective wireless charging infrastructure enables many different vehicles to use the electrical network. Much of the wireless charging infrastructure will be built by corporate fleets like Amazon, Best Buy, and Schindler Elevator. Hopefully, the US Post Office will catch up.
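As an aside on how those location devices work: GPS receivers compute position by trilateration, measuring distances from satellites via signal travel times, rather than by measuring angles. A toy two-dimensional version with made-up anchor points shows the principle (real GPS solves in three dimensions plus a receiver clock-bias term):

```python
def trilaterate_2d(p1, d1, p2, d2, p3, d3):
    """Toy 2-D trilateration: find the point at distance d1 from p1,
    d2 from p2, and d3 from p3. Subtracting the circle equations
    pairwise yields a 2x2 linear system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Made-up anchors and distances; the receiver actually sits at (3, 4).
print(trilaterate_2d((0, 0), 5.0, (10, 0), 65**0.5, (0, 10), 45**0.5))
```

The broadcast time codes are the shared infrastructure; any device that can hear three or four satellites can run this computation, which is exactly the indirect network effect described above.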
Meanwhile, the US government made a down payment on EV charging stations in the 2021 infrastructure bill. The Infrastructure Investment and Jobs Act initially targeted $15 billion for “a national network of electric vehicle (EV) chargers along highways and in rural and disadvantaged communities.” That figure was cut in half to $7.5 billion, partly to provide much-needed funding for electric bus fleets for schools and municipalities.[3] Should future infrastructure spending target wireless charging?
Once we move beyond the internal combustion engine in vehicles, we will see a lot more flexibility in the design of autos, buses, vans, trams, and other vehicles, which require fewer parts and are easier to construct. We see it on the lower end with electric bikes and even those controversial electric scooters. New forms of electric autonomous trolleys and vans are needed to revive urban transportation in a quiet and sustainable way. All these changes in mobility will require changes in the electrical infrastructure.
The term “Smart Mobility” has emerged as an important sociological and technical construct. The city of Austin, Texas, describes it this way:
Smart Mobility involves deploying new technology to move people and goods through the city in faster, safer, cleaner, more affordable and more equitable ways. Our mission is “to lead Austin toward its mobility future through meaningful innovation, collaboration, and education.”
Smart devices have expanded to smart vehicles. Autonomy is becoming prevalent, and some cars and trucks offer full self-driving (FSD) options, with or without a passenger. Wireless charging is central to this process. Auto-valet services, for example, will allow your car to drop you off and park itself, likely at a stall that can provide charging. Who is going to get out to plug it in?
Notes
[1] Gaining Competitive Advantage with a Performance-Oriented Assessment using Patent Mapping and Topic Trend Analysis: A Case for Comparing South Korea, United States and Europe’s EV Wireless Charging Patents. A 2022 PhD Dissertation by Soonwoo (Daniel) Chang for Stony Brook University in New York. He can be reached at sdchang8@gmail.com
[2] A kilowatt (kW) is 1,000 watts of electrical power, and a kilowatt-hour (kWh) is the energy delivered by one kilowatt over one hour. Named after James Watt, who developed the modern steam engine, a watt is the unit of electrical power equal to one ampere flowing under the pressure of one volt: Volts x Amps = Watts. The time it takes to charge a car depends on both the car’s acceptance rate and the amount of electricity sent by the charging station.
[3] Known officially as the Infrastructure Investment and Jobs Act, it authorized over half a trillion dollars in spending for airports, bridges, broadband, rail, roads, and water systems. It also included up to $108 billion in spending for public transportation such as rail as part of the largest federal investment in public transit in the nation’s history. Another $73 billion was for upgrades to the electrical grid to transmit higher loads while efficiently collecting and allocating energy.
Citation APA (7th Edition)
Pennings, A.J. (2022, Apr 22). Wireless Charging Infrastructure for EVs: Snack and Sell? apennings.com https://apennings.com/mobile-technologies/wireless-charging-infrastructure-for-evs-snack-and-sell/

Tags: dynamic wireless charging > electric vehicles (EVs) > energy snacking > full self-driving options (FSD) > Infrastructure Investment and Jobs Act > OEMs > resonant induction power transmission > Smart Grids > smart mobility > WiTricity
The Great Monetary Surge Since 2020 and the Return of Inflation
Posted on | March 4, 2022 | No Comments
COVID-19 hit dramatically in early 2020, sending the financial markets around the world reeling and unemployment soaring. The immediate response in the US was a monetary surge led by the Federal Reserve Bank and legislation by the US Congress.
The rising money tide reversed the economic collapse, alleviated some of the harshness of the K-shaped recovery, and helped many companies weather the sudden downturn.[1] These monetary and fiscal spikes continued to reverberate through the economy, increasing the value of financial assets and reducing unemployment, but also adding to inflationary pressures not seen since the 1970s.[2]
This post examines the 2020 monetary surge and compares it with additional spending in 2021 and the slight declines entering 2022. What impact did the second Trump stimulus have in December 2020? What additional liquidity did the Biden “Build Back Better” bills in 2021 introduce? Also, how much did the Fed’s continuing Quantitative Easing (QE), zero reserve ratios, and low interest rates contribute to the rise of inflation?
The chart below shows the amount of money in the economy over the last few decades and the dramatic jump in 2020.[3]
The Fed moved first as COVID-19 emerged. It slashed interest rate targets and let banks lend without restriction. The Fed Funds interest rate went down to nearly zero again while bank reserve ratios were reduced from 10 percent to 0 percent. That meant that banks could lend out freely without holding a fraction of deposits in reserve. Meanwhile, the Fed called desperately for government spending to assist the emergency and recovery process. Senate Leader Mitch McConnell responded.
Fiscal policy took shape quickly in the Senate. All US spending bills must originate in the House of Representatives, so the Senate used a previous House bill to create the CARES Act. The Middle Class Health Benefits Tax Repeal Act, originally introduced in the U.S. Congress on January 24, 2019, was used as a “shell bill” to begin working on economic and public health relief for the accelerating COVID-19 pandemic.
The Senate added content to combat the virus and protect faltering businesses and the unemployed. Then, on March 27, 2020, President Trump signed the $2.2 trillion CARES (Coronavirus Aid, Relief, and Economic Security) Act into law. Jared Kushner, the President’s son-in-law, became the “coronavirus czar” in charge of personal protective equipment (PPE) and other medical equipment needed to treat Covid patients.
The bill also included the controversial Paycheck Protection Program (PPP) and its famous Section 1106, which provided forgiveness of up to the full principal amount of qualifying loans. The PPP was replenished the next month with an additional $484 billion.
The chart below shows the amounts of M1 money in the economy (currency, demand deposit bank accounts, and liquid deposits, such as OCDs and savings deposits, and money market deposit accounts) including the dramatic jump in the spring of 2020 and extends the above chart into 2021 and the beginning of 2022.
So, 2020 saw about $3.7 trillion in additional fiscal spending for COVID-19 relief, on top of the Fed’s QE and its reduction of the Fed Funds rate from 1.5% to near zero. These numbers show the dramatic economic stimulus that quickly turned the economy around, as indicated partially by the Dow Jones Industrial Average (DJIA) of 30 leading companies (shown below) and suppliers’ pricing power, as well as the turnaround in unemployment.
It was an impressive response in the spring of 2020. The Fed hit hard with the reserve ratio, the Fed Funds rate, and QE. Having done about all it could to expand the money supply, the Fed pleaded for fiscal support, and the Senate and House of Representatives delivered. GOP Senate Leader Mitch McConnell pulled together impressive legislation with the CARES Act, but the COVID-19 pandemic was still raging around the world, and it was not affecting everyone the same.
Compare 2020 with the pandemic spending in 2021. Biden’s proposed Build Back Better plan was conceived to address COVID-19’s K-shaped recovery, specifically to strengthen the middle class, provide care for the elderly, increase affordable housing, and reduce child poverty. First on the agenda was the American Rescue Plan, passed and signed in March 2021 for about $1.9 trillion.[4]
Second was the bi-partisan infrastructure bill with $579 billion in new spending added to what had already been planned. While not specifically Coronavirus relief, the Infrastructure Investment and Jobs Act (IIJA) added $1.2 trillion over ten years ($120 billion a year) to the economy. This included money for roads and bridges, broadband, mass transit, and even for a network of EV charging stations.
Concerns about inflation increased during 2021, especially after President Biden signed a new bill for COVID-19 relief in the spring and the bi-partisan infrastructure bill in the summer. A third bill, the American Families Plan, or BBB III, was pushed by the progressive wing of the Democratic Party. It proposed additional spending of up to $6 trillion and was heavily debated, but it did not get the support of two Democratic Senators and was never voted on. The media narrative was very negative, not about the bill itself, but about the bickering within the Democratic Party.
So roughly $2 trillion was added in 2021, about half as much as the 2020 stimulus spending. It was still a lot of money, plus the bank money created by continued QE and low interest rates. The roughly $5 trillion in 2020 for domestic relief spending raised concerns about excess demand, especially in light of global reductions in the supply of goods and services. It was a lot to add to an economy whose supply of goods and services had been depleted by unemployment, closed factories, seaport bottlenecks, and rising energy prices.
But at the risk of sounding like Condoleezza Rice after 9/11, no one ever imagined the supply of goods and services would come under such pressure as COVID-19 continued. Inflation needs money but is also a matter of supply, and market failures can dramatically affect rising prices.
Inflation is a complex interplay between effective demand and the provision of goods and services. Inflation can be demand-pull or cost-push. Demand-pull inflation is when an increase in demand (too much money) is greater than the ability of production of goods and services to keep up and typically results in higher prices. Its calculation also needs to consider the pricing power of major suppliers. After all, prices only go up when suppliers change their price sheets.
Cost-push inflation happens when prices increase due to rising costs of raw materials and wages. The Russian invasion of Ukraine and the sanctions imposed created shortages of essential resources from both countries, including aluminum, fertilizers, natural gas, nickel, and crude oil. It is difficult to assess the impact of food shortages that might result from the degradation of the Ukrainian infrastructure. Overall, rising costs of inputs, measured by the Producer Price Index (PPI), pressure suppliers to raise prices.
But most of the current problems come from supply shortages due to the limits of the COVID-19 pandemic. Calculating the delays in factory production, the chip shortages, the reduction in oil production, restaurant closures, and shipping containers waiting at the docks, etc., will give us a tally of what has gone missing in the US economy and what is producing inflationary forces.
We had a fantastic shot of money into the US economy during 2020. The legislated increases continued into 2021, but nowhere near the amounts of the previous year.[5] The stimulus helped forestall an economic collapse but became part of the complex equation of supply and demand forces increasing prices. The global inflation that emerged from COVID-19 resulted from a “perfect storm” of economic and political issues.
Postscript 2024
A few weeks after the passage of the CHIPS and Science Act, Biden signed the Inflation Reduction Act (IRA) on August 16, 2022. The “Chips Act” appropriated $52.7 billion and authorized roughly $280 billion in new funding to boost domestic research and manufacturing of semiconductors in the United States. The IRA name was initially ridiculed by the Republicans as inflation reached 9.1% that summer, but the inflation rate halved over the next year and declined to 3.9% by early 2024.
In a surprising move, the IRA was sponsored by Senators Chuck Schumer (D-NY) and Joe Manchin (D-WV), who had previously opposed the progressives’ multi-trillion-dollar Build Back Better III version, primarily due to climate change regulations and the scale of spending. The fact that all Democrats in the Senate and House voted for the bill and all Republicans voted against it underscored the complex political dynamics at play.
It focused mainly on healthcare, continuing subsidies for the Affordable Care Act and reinstating the government’s ability to negotiate drug prices, especially insulin, which had been eliminated by the Bush-Cheney administration in 2003. The most significant fiscal impact was the $663 billion in tax benefits for energy, along with climate investments in agriculture, energy, manufacturing, transportation, environmental justice, and pollution reduction and resilience.[6]
Another critical institution is the Federal Reserve, whose Chairman Jerome Powell insisted until late 2021 that the inflation rate was “transitory.” As shown in the chart above, the Fed raised its target from near-zero interest rates (the Fed Funds Rate) beginning in March 2022, through 11 rate hikes, to a range of 5.25 to 5.5 percent in July 2023, where the actual rate has stayed at about 5.3 percent for at least a year.
Covid is over (fingers crossed). Inflation has returned to George W. Bush-era levels (a high of 4.1%), marking what Bloomberg called a remarkable turnaround, given that it took 8.5 years to get back to 3 percent after the 1970s. Federal spending decreased by $137 billion, or 2.2 percent, in fiscal year 2023. Oil is up about 9% from a year ago due to geopolitical concerns, mainly shipping through the Red Sea. One last note: China’s production and tariffs on imports remain a long-term stress on prices in the US.
Citation APA (7th Edition)
Pennings, A.J. (2022, Mar 4). The Great Monetary Surge Since 2020 and the Return of Inflation. apennings.com https://apennings.com/dystopian-economies/the-great-monetary-spike-of-2020/
Notes
[1] The K-shaped recovery indicated a split between groups of society that were able to adjust fairly rapidly economically and those subject to long-term unemployment and poverty.
[2] The US has had four significant periods of high inflation since WWII. The first, industrial demand inflation, occurred with the massive Cold War development, including the interstate road system and the buildout of the telephone system. Then, the birth of the baby boom generation led to consumer demand inflation in the 1960s, with additional spending for the Vietnam War. The third spike came from the global oil crisis due to the rise of OPEC in the 1970s and the redistribution of petrodollars worldwide via bank lending. In the US, you also had the Baby Boomers buying homes for the first time. Globalization set in during the 1980s due to advances in air transportation, maritime containerization, and international communications, increasing supply and leading to meager inflation. We are now in the fourth age of US inflation due to COVID-19 spending and the associated reduction of global shipping, as well as the shutdown of services and manufacturing capacity.
[3] Money supply statistics from St. Louis FRED.
[4] The Trump government increased deficits almost as much as the Obama administration did, but in half the time. It also spent more on stimulus in 8 months of 2020 than Biden did in all of 2021. But it had the COVID-19 virus and had to respond. Biden’s expenditures, while prescient because of the Delta and Omicron variants, were a bit more elective, and Biden III was stopped by Sen. Manchin (D-WV). The federal deficit totaled nearly $2.8 trillion in 2020, about $360 billion more than in 2021. So I consider the two about equal and, in total, about half the monetary part of inflation. The Fed, through QE, a 0% reserve ratio, and near-zero interest rates under Trump appointee Jerome Powell, added the other half.
[5] The Fed added $4.66 trillion to the money supply in 2020 and another $10 trillion up to February 2022. So that is a lot of money. Biden stopped the stimulus for the most part, but the Fed was still adding money.
[6] IRA interestingly from Wikipedia and Fed Funds Rate chart from St. Louis FRED.

Tags: CARES Act > Cost-push inflation > COVID-19 > COVID-19 Stimulus > demand-pull inflation > Infrastructure Investment and Jobs Act (IIJA)
US Technology Diplomacy
Posted on | December 31, 2021 | No Comments
Edited Remarks to the Global Technology Diplomacy Forum 2021. Hosted by the Ministry of Foreign Affairs of the Republic of Korea. November 30, 2021, 13:30 (KST)
Ladies and Gentlemen, Ambassador Min,
I’d like to cover three areas today as I talk about US technology diplomacy. First, I want to talk about America’s domestic renewal and how it relates to its international diplomatic agenda. Americans across the political spectrum recognize that the US needs to change. However, they differ on the causes and the solutions. Then I will address some of the major US institutions managing technology diplomacy, including the private sector.
Lastly, I want to end with comments on “norms” in multilateral technology diplomacy. Norms provide guidance and processes for mutual understanding and diplomatic leadership. As the increasing centrality of technology has disrupted the State Department’s mission objectives and brought other departments and agencies into the mix, diplomatic leadership has become even more crucial. Issues such as the “securitization” of cyber technology – state actors involved in cyberattacks, and the tensions they are creating with tech companies, are of particular concern.
So, America is rediscovering itself, undergoing domestic renewal, and once again sorting out its friends and enemies around the world. Like the rest of the world, we are experiencing dramatic technological change, pressured by the pandemic, climate pollution, decentralized finance, and changes in the global media ecosystem. I could add to this list, but I think you get the point.
America is divided. It was rocked by the MAGA movement, otherwise known as Make America Great Again. President Trump stressed border security, carbon utilization, and tax cuts while alienating many traditional international allies. He withdrew from many international forums, including the Paris Climate Accords, the World Health Organization, UNESCO, the Human Rights Council, and the Iranian nuclear negotiations.
Now the US is embarking on the Build Back Better plan. Instead of focusing on tax cuts for the more affluent, it focuses on money to end the pandemic, building US infrastructure, and creating the conditions for families to return to work. It allocated $3.1 trillion in spending on top of some $5 trillion in 2020 for domestic spending, raising concerns about excess demand in light of global reductions in the supply of goods and services.
The primary goal of both administrations has been to get through the pandemic. Recent spending has shifted to infrastructure, mainly roads and bridges, mass transit, renewable energy, and even broadband. It is, in a sense, a “smart new deal,” literally a grand experiment that, if phase 3 is passed, will support parents going back to work and address climate concerns.
A new influence is called MMT, or modern monetary theory. Rather than the pained austerity theory that hampered the Obama administration’s attempts to recover from the Great Recession, MMT tries to delink deficits from borrowing and taxes. It encourages government spending where appropriate, because only national governments can issue the official currency, while cautioning against the real risks of inflation.
So, the US is looking inwards. But it recognizes international threats and opportunities. I am not in any way a speaker for the Biden administration, but let me echo some concerns here today and start by recognizing the multiple branches and institutions with international reach that influence mediations regarding science and technology.
The actors involved in US diplomacy are many. The Departments of Agriculture, Homeland Security, Defense, Treasury, even the Federal Reserve Bank all have some roles in international diplomacy. However, the Departments of Commerce and State are most relevant to today’s topic, as is the trade negotiation representative of the executive office.
Since diplomacy starts with the President, let me first draw attention to the United States Trade Representative (USTR), a Cabinet-level position currently headed by Katherine Tai, an attorney fluent in Mandarin. Tai is the President’s principal trade advisor, negotiator, and spokesperson on trade issues and develops and coordinates U.S. trade policy in consultation with other federal agencies and Congress. Her nomination was confirmed in the Senate without dissent, which is indicative of the Biden administration’s concerns about China.
Remember that Biden was still Vice-President in 2015 when the Made in China 2025 report was released, a plan that simultaneously angered and struck fear into American industry. It created a backlash in American politics, including the failed tariffs on Chinese goods. The event is reminiscent of Japan’s Fifth Generation AI project in the early 1980s. Luckily, the fear of Japanese AI led to investments in supercomputers and networking, and ultimately the Internet, thanks to the “Atari Democrats,” a group that included Al Gore and Gary Hart.
The Department of Commerce is also critical. “Commerce” includes the International Trade Administration (ITA) and the NTIA. The latter includes the Office of International Affairs (OIA) that advises the US President on international telecommunications and information policy issues. The OIA plays an important role in the formulation of international information and communications technology (ICT) policies including ICANN and DNS issues. Domestically, the NTIA is in charge of the $65 billion in spending for broadband development that was designated in the 2021 infrastructure bill.
Commerce is engaged in technology and patent protection to help American businesses maintain their edge over global competitors. Secretary Gina Raimondo is particularly interested these days in legislation that will allow her to entice foreign companies to the US to reduce supply chain worries. She was very pleased recently when Samsung announced a $17 billion investment in a semiconductor fabrication site in Texas. The site is, by the way, about 30 kilometers from my home in northern Austin.
Secretary Raimondo is confident that “Commerce has tools to level the global playing field.” Major concerns include ensuring access to foreign markets, enforcing export controls, protecting intellectual property (IP), and being an advocate for American business. Commerce is awaiting final passage of the United States Innovation and Competition Act (USICA), which will allocate $10 billion to the Department of Commerce to invest in regional tech hubs across the US.
Speaker Pelosi recently agreed with Majority Leader Schumer to conference to reconcile the Act. But in light of the supply chain shocks, particularly in semiconductors, several other bills might be included. The CHIPS Act has proposed another $52 billion to help the semiconductor industry manufacture on US soil. It was passed by the Senate but held up a bit by the House as it pushed through the bipartisan infrastructure act.
Another relevant bill that might be connected is the Facilitating American-Built Semiconductors Act (FABS). As part of the USICA, it would double the number of “fabs” from 9 to 18. Fabs are enormous multi-billion dollar facilities needed to fabricate or manufacture advanced chips. Currently, Taiwan’s TSMC is the world leader, but Samsung is nearly as potent.
As mentioned earlier, Samsung is investing in Austin, Texas, while TSMC is building new facilities in Phoenix, Arizona. Intel is also building there, as the former world innovator and leader looks to regain stature. But the number of fabs throughout Asia could grow to over 50 in the upcoming decade. As a result, total investment in the US should be in the range of $500 billion to regain a significant presence in the global production of chips and ensure supplies for domestic use, including the automobile industry.
Antony Blinken is our current Secretary of State and has had a busy year consolidating international coalitions that were pulled apart during the last administration. Blinken presented the Biden administration’s vision of a strategic approach to shaping the rules for technology and science policy in several forums this year. He recognizes that US diplomacy needs to “shape the strategic tech landscape, not just react to it.” He has recently taken an interest in Artificial Intelligence (AI), citing important work reported in the OECD 2019 Recommendations on AI. He stressed the importance of AI that respects human rights and democratic values.
He has also been working on the Quad (Australia, India, Japan, US) Framework for Technology, in which they committed to integrating human rights and democratic values into the ways “technology is designed, developed, governed, and used,” particularly in wireless 5G and beyond.
You can see here what he calls “pillars” representing the State Dept’s science and technology priorities.
Build US capacity and expertise in global health, cybersecurity, climate, and emerging technologies through multilateral diplomacy;
Ensure US technology leadership by encouraging more initiative and more innovation;
Protect the Internet and US analytical capabilities;
Set standards and norms for emerging technologies;
Make technology work for democracy;
Develop cooperative relationships through “friend-shoring” and “near-shoring” to help secure resilient supply chains;
Protect “innovative ecosystems” and talent “pipelines.”
Let me start to conclude by bringing to your attention cybersecurity and the private sector. The Department of State is paying increasing attention to computer and network security threats and weaknesses. But tensions are emerging between nation-states that increasingly see technology as a security issue and tech companies that feel the pressure from customers for additional protection and support.
Microsoft called for a “Digital Geneva Convention” in February 2017 to protect citizens and companies from cyber attacks. The proposal was not warmly received. But they followed up after the May 2017 WannaCry ransomware cyberattacks by gathering broad corporate support with the Cybersecurity Tech Accord.
These companies want governments to protect their customers and not cajole them into attacking innocent people or companies. They also want to empower developers to build more resilient ICT systems. They see value in developing strategic partnerships across many sectors of society to achieve their objectives.
CTA signatories agree to commitments in four key areas:
Stronger defense against cyberattacks by pledging to protect all customers globally regardless of the motivation for attacks online;
No offense by choosing to not help governments or other actors launch cyberattacks against innocent civilians or enterprises and protecting against the tampering or exploitation of products and services through every stage of technology development, design, and distribution;
Empowering developers and the people and businesses that use their technology by helping them build and improve capacity for protecting themselves; and
Building upon existing relationships and taking collective action together to establish new formal and informal partnerships with industry, civil society, and security researchers.
The goals of the Cybersecurity Tech Accord are to improve technical collaboration, coordinate vulnerability disclosures, and share information about threats. Most of all it is to minimize the introduction of malicious code into the Internet and other aspects of cyberspace. Microsoft suggested five important “norms” that should inform international discussions of cybersecurity:
Harmonization;
Risk reduction;
Transparency;
Proportionality, and;
Collaboration.
Norms are common in multilateralism, as they are used to work out approaches to common concerns. Norms are better seen as processes, not goals. It is not so much a matter of norm acceptance as a struggle over precise meanings, eventually favoring the member states’ prerogative.
Norms are not treaties that need to be ratified by the Senate, nor alliances like NATO, whose Article 5 provides that if a NATO Ally is the victim of an armed attack, it is an attack on all members. They are not specific agreements like those achieved at the WTO meetings in 1996 and 1997 that gave us a truly worldwide Web. While norms do not have the force of law, they still can have weight. Norms can provide positive guidance. For example, the ITU called for an end to facial recognition technologies. You may have noticed Facebook stopped using it. Conversely, norms can also result in some states being labeled “bad actors.” Consequently, sanctions can be applied.
Technological strength is now recognized as a prime determinant of economic, political, and military power. The U.S. is rebuilding its industrial strength and returning wealth to its citizens. This transformation will require working with key international partners to encourage investment and innovation while protecting the Internet and needed supply chains, as well as critical intellectual property. A significant challenge will be to make technology work for democracy while safeguarding personal data.
Citation APA (7th Edition)
Pennings, A.J. (2021, Dec 31). US Technology Diplomacy. apennings.com https://apennings.com/how-it-came-to-rule-the-world/digital-monetarism/us-technology-diplomacy/
Ⓒ ALL RIGHTS RESERVED

Tags: Cybersecurity > NTIA > USTR
Symbolic Economies in the Virtual Classroom: Dead Poets and the Lawnmower Man
Posted on | November 4, 2021 | No Comments
The stark contrasts between the closed moral community of the preparatory Welton Academy in Dead Poets Society (1989) and the emotional and intellectual capers of its new teacher, John Keating, afford the opportunity to query the processes of energetic investments and signification in modernity’s educational spaces.
Likewise, the representation of educational subjectivity in The Lawnmower Man (1992) provides an ancillary contrast for exploring the technicalization of educational space. In particular, an interrogation of its operations on the body and its intellects could prove helpful in an analysis of the symbolic dynamics that operate in the “virtual classrooms”[1] emerging through new multimedia communications technology and telecinematic simulation equipment.
This post examines two films that address the production of modern educational spaces and subjectivities. Through them we can begin to figure the symbolic and energetic configurations in the “virtual classroom” and other technological environments for learning and training. Note that this is from my PhD dissertation Symbolic Economies and the Politics of Global Cyberspaces (1993).
The boarding school’s repressed libidinous and spiritual “economies” invite a reading of Dead Poets Society that focuses on socio-signifying practices. Notably, it figures the role of the teacher as what Goux termed a symbolic third. Drawing on his quest for a general economics based on symbolic energies, we can figure the teacher as representative of not only patriarchal but also logocentric significance. Like money, a condensation of value occurs that raises the teacher to the position of privileged subject and evaluator, while the chosen text rises to the select mode of signifying. The teacher becomes the mediator and arbitrator of intellectual values and texts and, consequently, develops a monopoly on the construction of facticity and truth.
The teacher, played by Robin Williams, is a “media event” in the sense that, by elaborating a series of emotionally and intellectually rich forms of signification, he disrupts the school’s anti-erotic sovereignties and traditional forms of educational worship. John Keating is a carefully constructed teacher-character who maintains a credible front to his peers while engaging his students in a series of revaluing exercises. His invoking of the philosophy of “carpe diem,” for example, disrupts the ascetic delays of pleasure and self-gratification that otherwise channel emotional and intellectual investments into the subjectivities prescribed by the school’s bourgeois govern-mentality.
His unusual behavior and pedagogy invoke a curiosity in his students that addresses their subjugated desires and self-construction. His former pact with a secret society of self-proclaimed poets awakens their dormant dreams of social adventure and expressive identities. This secret knowledge, time-tested by the ancients of their alma mater, promises sexual conquest and alternative forms of imagination. “Spirits soared, women swooned, and gods were created.” By re-presenting literary classics of Shakespeare and Milton, but with the voice of macho film star and arch-American John Wayne, he distorts the distinctions between “high” and “low” cultures and encourages the dissolution of aesthetic boundaries that work to solidify not only class distinctions but the socio-symbolic rigidifications of emotional affect.
The reincarnated “Dead Poets Society” holds its meetings in a cave located off campus in a nearby forest. There they read unauthorized poetry, smoke cigarettes, and mix with women – all activities that are forbidden at the school. As Gebauer points out, the symbology of the cave has never been about the outside world, but about the inside one: “Our imagination remains captive in the cave. We do, in fact, repeatedly seek out the cave in a different form.” Our ontology has its commencement in the topography of the cave: “In one way or another, all our notions of paradise are linked with situations of the cave.”[2] This is also the encapsulating trajectory of Virilio’s last vehicle.
However, Keating’s enthusiastic ideations soon conflict with other domains of symbolic control, including the potent Oedipal dynamics that maintain too tight a grip on one of his students. In his quest to act in a community play, the student goes against his father’s demands to cut down on his extracurricular activities, forges a permission slip, and performs the leading role of Puck in A Midsummer Night’s Dream.
The father inadvertently discovers the disobedience and shows up at the play to observe. Despite the acclaim and evident success, he fiercely pulls his son away from the backstage party. After a confrontation at home, where, among other things, the mother’s disappointment is invoked to punish the son, the son is forbidden to act again, at least until he finishes medical school. Faced with this paternal injunction, he takes his own life.
The death of the student presents a moral catastrophe that overpowers Keating’s privileged text of spontaneity and impunity. These are now recoded as degenerate improprieties, and their “unproductive” forms of expenditure are tallied against the teacher as infractions within the Calvinistic ledgers of the schoolmasters. The aggrieved father is easily able to organize the dismissal of the teacher.
As Keating collects his things from the classroom, the students respond by pledging their allegiance to the teacher and the teachings of the Dead Poets Society. They stand on top of their class desks and recite “O Captain! My Captain!” from Walt Whitman’s 1865 tribute to the recently assassinated U.S. president Abraham Lincoln. With this, they honor Keating’s role as their navigator through the uncharted course of adolescent squanderings and discoveries.
The Dead Poets Society reflects the profound symbolic and historic investments structuring traditional education and how the currency of the teacher can facilitate new types of energetic and intellective exchanges. What will occur in new virtual environments of the Metaverse? If educational space is to become cyberspace in a socially and politically responsive way, then it behooves us to mark its inception with at least one strategy that is sensitive to the “economies” which mediate and control its symbolic investments.
Suppose we view education as the inscription of subject sovereignties and the socialization of new moral and administrative subjectivities required by the post-industrial information society (“proto-sovereignties”). In that case, the virtual classroom presents an alluring new vehicle for liberating expressive capabilities, massaging sensory intelligences, and prescribing new competencies in terms of workplace requirements or prevailing art and intellectual practices.
An instructive approach has been taken up by writers developing a history of computer technology around the theme of the “military information society.”[4] They rightly point to the military’s significant influence on the development of computer-generated simulation environments and information technology in general. Noble, for example, writes about the militarization of learning and the production of what he calls “mental materiel.”[5]
The merging of educational technology and the cognitive sciences received its impetus from recognizing that behaviorist theory had reached diminishing returns. Technical advances in cognitive/instructional technology would be more fruitful.[5] This combination emerged metaphorically in popular culture as the ‘cyborg’ imagery. Machinery is directly implanted into the corporeality of the docile body or, in most cases, designed to interface effectively. Cognitive science since its beginnings has been the “science of the artificial,” with the production of prescribed mental processes modeled on computer procedures and systems foremost on its laboratorial agenda.[6]
The Lawnmower Man presented a “cyberpunk” vision of the new technology. While the film has been criticized for its overbearing Frankensteinish narrative, its visual and technological settings drew from industry leaders, and it became a showcase for the potential of virtual reality (VR) technology. Its poster subheading, “Nature made him an idiot, science made him a god,” allows us a foray into the disciplining aspect of the new technology.
Virtual reality uses computer-controlled 3-D graphics and an interactive environment oriented from a learner’s or viewer’s perspective, one that tends to suspend the viewer’s awareness that the environment is produced. In virtual reality, the body is encased in a computer-mediated and, as Zuboff called it, “informated” environment that continuously records performative data. The user wears a sensor-laden set of goggles and often gloves tied to a megacomputing system capable of tracking and responding to the movements and commands of the user. This system is still in the process of transformation, and it is likely that a variety of user interfaces will be marketed and brought into use.
In this story, Dr. Angelo of Virtual Space Industries has major contracts with the US government to experiment with VR to produce better fighting and technology-competent soldiers. His initial work is with a chimpanzee, which is fitted into a sensory bodysuit and helmet and hangs suspended in a gyroscopic device that allows the body to turn 360 degrees in any direction. In combination with constant injections of vitamins and neurotropic drugs, the chimp is subjected to long training hours of fighting within various electronically simulated environments.
When Dr. Angelo’s chimp escapes and kills a guard, it is hunted down and killed. The investigator then turns to a human subject to continue his work “on the evolution of the human mind.” Jobe is a dim-witted ward of St. Anthony’s Church who makes his living caring for the church and mowing lawns, one of which belongs to Dr. Angelo. Cajoled by the doctor’s argument that he could become smarter and thus avoid “people taking advantage of him,” Jobe agrees to undergo some tests and participate in the VR training.
Unfortunately, the government liaison tampers with the serums and computer learning programs. He installs “Project Five” formulas that were designed to produce extreme forms of aggression for warfare. The continuous work on Jobe had originally transformed him into an attractive, socially graceful, and intelligent subject, but the new program transforms him into a symbolic authority figure and a despotic shaman. Through his electronically enhanced and meticulous training, Jobe becomes a “cyberchrist” and enters the world’s telecommunications networks with the promise that he will give us what we yearn for — a figurehead to lead us.
The Lawnmower Man counters the mythic tendency that VR is becoming a liberation technology, that it will soothe our souls and free our consciousness. Instead, it suggests VR’s trajectory is one of efficiency and training that presents its own positivities and productions. The movie lacks the moral subtlety that might have made it more successful. Still, it serves to pick up on some of the discourse that VR has fit into and exposes a large audience to questions regarding VR technology.
Educational “visionaries” are “tripping over themselves to transform the schools, unwittingly, into a staging ground for playing out militarized scenarios.”[7] Combined with the new imperatives of international capital, which has become totally dependent on the new information technologies, mechanized learning “becomes a site for the actual production of ‘mental materiel’ – for the design and manufacture of ‘intellectual capital.’” Public education is implicated as both a laboratory and a site of legitimization for the new technical learning. A new “cognitivist agenda” was responding to the demands of corporations for “problem-solving” skills and the ability to interpret and construct “abstract symbolizations.”
The lessons of these film analyses textualize both the romantic and disciplinary notions of education to inform contemporary circulations and ideations of educational policy and practice. A long tradition of involving the viewer in a cinematic experience of suspended disbelief has resulted in a rich body of textual interpretation that may be helpful for the analysis of virtual reality applications in educational spaces.
Postscript
The Covid-19 era is giving VR new life due to the urgency of social distancing and the possibilities of the technology. In Synthetic Worlds: The Business and Culture of Online Games (2005), author Edward Castronova pointed to three trends transforming the gaming version of VR that are relevant to education. Most people associate VR with hardware: goggles, gloves, and other haptic devices. But the first trend is advancement in software and network protocols; in particular, software engines like Unreal have accelerated speeds and increased resolution for both augmented and virtual reality. The second is the enhancement of communities and collaboration: it’s not just about individuals, but individuals working together. The third is the development of commercial markets for virtual environments, items, and even avatars. Particularly with the Metaverse now the focus of corporations like Facebook and Nvidia, we are entering a wild west of virtual life.
Citation APA (7th Edition)
Pennings, A.J. (2021, Nov 04) Symbolic Economies in the Virtual Classroom: Dead Poets and the Lawnmower Man. apennings.com https://apennings.com/meaningful_play/symbolic-economies-in-the-virtual-classroom-dead-poets-and-the-lawnmower-man/
Notes
[1] Hiltz, S.R. (1986) “The Virtual Classroom,” Journal of Communication. Spring.
[2] Gebauer, G. (1989) “The Place of Beginning and End: Caves and Their Systems of Symbols,” In Kamper & Wulf (eds.) Looking Back on the End of the World. (NY: Semiotext(e) Foreign Agents Series). p. 28.
[3] See Chapter 5.
[4] Levidow, L. and Robins, K. (1989) Cyborg Worlds: The Military Information Society. (London: Free Association Books).
[5] Noble, D. D. (1989) “Mental Materiel: The Militarization of Learning and Intelligence in US Education,” in Levidow, L. and Robins, K. Cyborg Worlds: The Military Information Society. (London: Free Association Books). p. 22.
[6] Quote from H.A. Simon (1981) “Cognitive science: the newest science of the artificial,” in D.A. Norman, ed. Perspectives on Cognitive Science. (Hillsdale, NJ: Ablex/Erlbaum). pp. 13-25.
[7] Noble, D. D. (1989) “Mental Materiel: The Militarization of Learning and Intelligence in US Education,” in Levidow, L. and Robins, K. Cyborg Worlds: The Military Information Society. (London: Free Association Books). p. 35.
Ibid., p. 34.

Building Dystopian Economies in Facebook’s Metaverse
Posted on | October 31, 2021 | No Comments
Facebook’s new name “Meta” and its interest in a VR metaverse has me flashing back to a talk I gave on April 17, 2008 about Second Life, an online virtual world that was popular before social media began to dominate the Web. This post examines the dynamics and economies of virtual worlds while considering the hybrid conversions and interactions with the world outside it.
It’s no mistake that I’m writing this on Halloween, the holiday of expressive meanings and facades. Virtual environments like Fortnite, Minecraft, and Roblox explode with creative representations, not just as places to visit virtually but as opportunities to interact, play, and conduct commerce. Now the Metaverse promises new online services like working virtually and digital goods like designable avatars and transactable NFTs (Non-Fungible Tokens). But are people ready to don VR helmets and live, play, and work in a Ready Player One universe?
The talk was held in downtown New York City at the Woolworth Building, known as the “Cathedral of Commerce” when it was built in 1913. The location was strangely appropriate given the topic, a wrap-up of a year-long project at New York University on Second Life. The project involved an animation class taught by Mechthild Schmidt-Feist, and my class, the Political Economy of Digital Media. I still have the tee-shirt my students gave me that says “Got Linden?” a reference to Second Life’s currency, the Linden. Click on the image for a larger view of the mind map notes from my talk:
The talk gave me a chance to tie in discussions from my PhD dissertation, Symbolic Economies and the Politics of Global Cyberspaces (1993). I went back to my dissertation because it looked at money in fictional virtual worlds, from Sir Thomas More’s Utopia (1516) to Samuel Butler’s Erewhon (1872) and William Gibson’s cyberspace trilogy. His classic Neuromancer (1984)[1] originally conceived “cyberspace” as a digital world, the “matrix,” that someone could “jack into” with a neural plug in their skull. I also discussed Neal Stephenson’s Snow Crash (1992), which coined the term “Metaverse.”
At the time, I was looking for a way to understand how money could become electronic. My Masters thesis was on banking and telecommunications deregulation and how it led to the “Third World debt crisis” and the privatization of public assets worldwide. But it was empirical and largely descriptive. Cyberpunk was intriguing as a way to theorize money in some “future” settings.
So it was useful to examine historical texts regarding money. Sir Thomas More’s Utopia (1516) was central, as it imagined a place WITHOUT MONEY. Utopia, which means “no place,” was an imaginary land ruled by its leader Utopus. The island nation outlawed gold and often ridiculed it. But it was not without certain symbolic formations, including the significance of its leader Utopus. So I used the notion of symbolic thirds from Jean-Joseph Goux to analyze the political economy of utopias and the “dys”-topias of these cyberpunk genre novels.[2] This strategy also involved a reversal: in this case, what happens to a place with an excess of money and other symbolic forms?
Dystopias are places with excesses – of money and other significations. Meanings circulate, slide about, and proliferate. Symbolic economies refer to society’s tendency to both fix meanings and elevate certain things into “symbolic thirds.” Social pressures tend to isolate and elevate a member of a category to a higher position that is then used to judge the other members of that class. Money is the most recognizable symbolic third, but others include certain artists, captains, creative works, monarchs, and other political leaders. Donald Trump, it can be argued, rose to a type of symbolic currency for the MAGA movement. These thirds evaluate relationships between things, assign and reconcile corresponding values, and facilitate exchanges. I decided to revisit my old work that examined Second Life and now turn to Facebook’s Metaverse to investigate how economies and monies can emerge in these electronic/digital environments.
I had a helpful bridge, the work of Cory Ondrejka, a Second Life co-founder and Chief Technical Officer for Linden Labs, the developer of Second Life. His “Escaping the Gilded Cage: User Created Content and Building the Metaverse” also used the cyberpunk genre as a point of departure and demonstrated how online infrastructures can provide opportunities for different kinds of online worlds and economies to emerge and harness the power of player creativity. He focused on the costs and strategies of producing video games (particularly massively multiplayer online games, or MMOGs) and virtual environments to build online spaces to match the richness and complexity of the real world.[3]
Ondrejka addressed problems that plagued virtual reality and identified five issues with creating content in digital worlds like Second Life, as well as video games for consoles such as PlayStation and Xbox. They were difficulties in:
- creating first-class art;
- managing the lengthy development cycles needed;
- producing the required hours of gameplay;
- accommodating the many players involved; and
- hiring and effectively managing the large teams needed to create digital content.
These technical issues have continued to be addressed by software engines like Unity and Unreal. They have combined with innovations in hardware such as the Oculus Rift headset and the HTC Vive virtual reality system to create significant advances in virtual environments and interaction. Increases in computation speed and more facile scripting languages have also empowered professional designers and players to create new immersive experiences.
The resultant tools and techniques of these VR engines include:
- better scripts for avatar and player locomotion;
- responsive user interface synchronization and haptic (feeling) controls;
- editor techniques to create virtual 3D environments such as landscapes and urban architectures;
- adjustable stereo media for VR Head Mounted Displays (HMD);
- advanced plugins to expand and facilitate VR interactions and gameplay elements, and;
- high-volume high-speed/low-latency data networking that connects the temporal and spatial dimensions of a VR/AR environment with the user navigation actions.
These can be combined to continually improve the quality of the virtual experience for the end user, despite expected limitations in available system resources. However, they are insufficient on their own to sustain the gameplay and psychic investments necessary to ensure long-term engagement in a virtual world like the Metaverse.
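To make the interplay of such engine features more concrete, here is a minimal, purely illustrative sketch of an avatar-locomotion tick in Python. The function names, constants, and simple lerp-toward-server smoothing are my own assumptions for illustration; real engines like Unity or Unreal implement these features through their own scripting APIs.

```python
import math

TICK_RATE = 60    # simulation ticks per second (hypothetical value)
MAX_SPEED = 5.0   # avatar speed cap in metres per second (hypothetical)
SMOOTHING = 0.2   # blend factor toward the network-authoritative position

def locomotion_tick(pos, input_dir, server_pos, dt=1.0 / TICK_RATE):
    """Advance an avatar one tick: move along the normalised input
    direction at a clamped speed, then blend toward the server's
    authoritative position to hide network latency."""
    x, y = pos
    ix, iy = input_dir
    length = math.hypot(ix, iy)
    if length > 0:                    # normalise the input vector
        ix, iy = ix / length, iy / length
    x += ix * MAX_SPEED * dt          # local client-side prediction
    y += iy * MAX_SPEED * dt
    sx, sy = server_pos               # lerp toward the server state
    x += (sx - x) * SMOOTHING
    y += (sy - y) * SMOOTHING
    return (x, y)
```

Called once per frame, this kind of loop is what ties player input, locomotion scripts, and the low-latency networking listed above into a single responsive experience.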
Creating successful virtual worlds and economies with the traditional model of a single development group is highly unlikely. User-created content is a key ingredient of a dynamic online world with a vibrant economy, and that requires vested user participation. Do-it-yourself (DIY) tools and techniques need to be tied to online and real-world rewards.
One of Ondrejka’s inspirations was The Mystery of Capital by Hernando de Soto, a popular economist throughout the “Third World,” who viewed capitalism as a system of representations and rights. De Soto argued that connecting poor people to property via a system of legal representations was the most effective way to empower them to build wealth. Likewise, tying the construction of online wealth to individuals would be crucial to successful virtual economies.[4]
Ondrejka suggested that online virtual worlds with vibrant economies are really only possible if:
- users are given the power to collaboratively create the content within them;
- each of those users receives broad rights to their creations, primarily property rights over virtual land, in-world games, avatar clothes, etc., and;
- users can convert those creations into real-world capital and wealth.
Online virtual worlds need a system of incentives and symbolic currencies to propel them. The software tools to create and program virtual content are readily available, but legal systems need to be refined for online environments to protect property rights both within the virtual sphere and outside it.
Player-created content was not entirely new to virtual environments. id Software, a small Texas-based company, used the ego-centric perspective to create the first-person shooter (FPS) game Wolfenstein 3D in May of 1992. id followed with the extraordinarily successful DOOM in December 1993. DOOM combined a shareware business model with the distribution capabilities of the emerging Internet. Even before Netscape introduced its first browser as freeware over the Web, DOOM enthusiasts by the droves were downloading the game by FTP to their PCs, many over just a 14.4 kbit/s modem. In a prescient move, id kept the game’s level data in separate WAD files that players could replace, and later released DOOM’s source code. This openness allowed new modifications to the game called “mods.”
This innovation allowed fans to create their own 2.5D (not quite 3-D) levels and distribute them to other players. A popular mod placed characters from the Simpsons animated TV show in the DOOM environment; Homer Simpson could renew his health by finding and eating donuts. The US military created a version called Marine Doom, designed to introduce soldiers to urban fighting and the idea of killing. Many of id’s new employees were recruited because of the excellence of their mods, and the extra help allowed the company to create the next stage of its innovative online gameplay, QUAKE.
Second Life was born in June 2003 and offered users the ability to create content using built-in tools. They could develop objects and give them scripted behaviors (e.g., a tree with leaves swaying in the wind). They could create their own avatars (representations of themselves or entirely fictitious personas) and architectural structures. They could buy and sell land and any other objects they made because the world had an in-world currency and sought to protect intellectual property. Some 99% of the new world was user-created, and no permits, pre-approval processes, or separate submissions were required. The key was the ability to perform transactions and maintain property rights.
It will be interesting to watch these dystopian virtual worlds emerge. Gibson used the motif of “the biz” to refer to the dangerous circulations of various currencies in his cyberspace trilogy, both online and in the gritty urbanscape where natural bodies existed. But virtual markets for currency exchanges have existed since at least 1973, when Reuters introduced its Monitor Money Rates service.
We’ve been living in this online/offline world for a while. Databases and spreadsheets structure the “real” world, but we don’t often perceive the changes or attribute causality to online dynamics. Amazon’s impact on retailing is one of the more pronounced effects, but you must go to the mall or see a local store close to conceptualize its effects in the offline world. Amazon excels in its offline infrastructure, but the integration of its online capabilities gives it special power. Big data collection and search, recommendation algorithms, and one-click payment capabilities (a patent it licensed to Apple), combined with its delivery information and extensive logistics and warehouse system, give it massive economic power. These trends suggest that the Metaverse, or Nvidia’s Omniverse, will have synergistic economic effects in the offline world.
The dynamism of virtual worlds will depend on the ability of participants to communicate, collaborate, and sell items to each other for in-game virtual currency, or to barter effectively for such things. This market environment will require a legal framework and rules for purchasing and owning in-game items and properties. Converting virtual currencies into real-world currencies, and vice versa, is also crucial in the dystopic economies of the metaverse vision.
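That conversion step can be sketched as a simple exchange function. The rate and fee below are invented placeholders for illustration, not actual Linden Dollar or platform figures:

```python
def to_real_currency(virtual_amount, rate=0.004, fee=0.025):
    """Convert an in-world balance to a real-world amount.

    rate: real-currency units per unit of virtual currency (hypothetical).
    fee:  fractional exchange commission kept by the platform (hypothetical).
    """
    gross = virtual_amount * rate
    return round(gross * (1 - fee), 2)

def to_virtual_currency(real_amount, rate=0.004):
    """Inverse conversion: real currency bought back into virtual units."""
    return round(real_amount / rate)
```

Even this toy version shows why a legal framework matters: the exchange rate, the commission, and the right to cash out at all are set by the platform, not the participants.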
The type of personal interactions via avatars is what makes the metaverse intriguing, and controversial. Second Life ultimately wasn’t that impressive because people were just walking around in their avatar “costumes” most of the time and wound up insulting or hitting on people with sexual innuendos. I think successful participation will be a matter of how these environments and interactions are organized. Multiplayer games elicit a lot of “trash talk,” but they are structured around the gameplay. Productive online places might need dress codes and other parameters for behavior, depending on their purpose.
Ironically, it was Facebook (and the digital camera) that largely displaced Second Life. People quickly became bored of living through avatars in a world of impersonal, jerky, low-resolution graphics. Second Life became a sort of continuous online Halloween party, with people hidden in costumes and behind constructed facial and body facades. Instead, people gravitated to Facebook and then Instagram because they preferred a representational style that was closer to real life. They wanted to post pictures of friends and family, and of course, their cats, dogs, and dinner foods. They wanted to construct stories of themselves and share memes and narratives about what they found funny and important in the world.
So are people ready for the Ready Player One universe? That remains to be seen.
Citation APA (7th Edition)
Pennings, A.J. (2021, Oct 31) Building Dystopian Economies in Facebook’s Metaverse https://apennings.com/meaningful_play/dystopian-economies-in-facebooks-metaverse/
Notes
[1] Gibson, W. (1984) Neuromancer. New York: Ace Books.
[2] Goux, J.-J. (1990) Symbolic Economies. Ithaca: Cornell University Press.
[3] Ondrejka, C. (2004) Escaping the Gilded Cage: User Created Content and Building the Metaverse. SSRN: https://ssrn.com/abstract=538362
[4] De Soto, H. (2000) The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else. New York: Basic Books.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the State University of New York, Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He has been examining the emergence of digital money and monetary policy since he wrote his Masters thesis on telecommunications and banking deregulation and how it led to the “Third World Debt Crisis” in the early 1980s.
Tags: Cory Ondrejka > cyberspace > Jean-Joseph Goux > metaverse > Neuromancer > Political Economy of Digital Media > Second Life > Snow Crash > Symbolic Economies > symbolic thirds