Anthony J. Pennings, PhD


Digital Spreadsheets as Remediated Technologies

Posted on | May 5, 2021

In his classic Understanding Media: The Extensions of Man (1964), Marshall McLuhan argued that “the content of any medium is always another medium.”[1] For example, the content of print is the written word, and the content of writing is speech. Likewise, the content of the telex was writing, and the content of television was radio and cinema. The book was notable for coining the phrase “the medium is the message” and for pointing to the radical psychological and social impacts of technology.

McLuhan focused on the effects of each medium rather than the content it transmitted. He probed how new forms of technology extended the human senses and changed the activities of societies. He invited us to think of the lightbulb not so much in terms of its luminous content but in the way it influenced modern society, creating new environments and changing lifestyles, particularly at night. This post will examine the media technologies embedded in the digital spreadsheet that have made it a transformative technology and changed modern life.

Mediating “Authentic” Realities

In Remediation: Understanding New Media, Jay Bolter and Robert Grusin extended McLuhan’s ideas to a number of “new media,” including television, computer games, and the World Wide Web. They argued new media technologies are designed to improve upon or “remedy” prior technologies in an attempt to capture or mediate a more “authentic” sense of reality. They used the term “remediation” to refer to this innovation process in media technologies.[2] For example, VR remediates perspectival art, which remediates human vision. TV not only remediates the radio and film but now the windowed look of computers, including the ticker-tape scrolling of information across the screen.

Unfortunately, but understandably, they neglected the spreadsheet.

And yet, the digital spreadsheet is exemplary of the remediation process. Several years ago, I initiated an analysis of the spreadsheet that focuses on the various “media” components of the spreadsheet and how they combine to give it its extraordinary capabilities. To recap, these are:

  1. writing and numerals;
  2. lists;
  3. tables;
  4. cells; and
  5. formulas.

The digital spreadsheet refashioned these prior media forms (writing, numerals, lists, tables, cells, and formulas) to create a dynamic meaning-producing technology. Writing and lists have rich historical significance in the organization of palaces, temples, and monasteries, as well as armies and navies. Indo-Arabic numbers replaced Roman numerals and, with the introduction of zero and the positional place-value system, expanded the realm of numerical calculation. Numbers and ledgers led to the development of double-entry accounting systems and the rise of merchants and, later, modern businesses.

Tables helped knowledge disciplines emerge as systems of inquiry and classification, initially in areas like accounting, arithmetic, and political economy. Later, areas such as astronomy, banking, construction, finance, insurance, and shipping depended on printed tables to replace constant calculation. Charles Babbage (1791-1871), a mathematician and an early innovator in mechanical computing, expressed his frustration with constructing tables when he famously said, “I wish to God these calculations had been executed by steam.”

First with VisiCalc and then Lotus 1-2-3, these media elements worked together to form the gridmatic intelligibility of the spreadsheet. Bolter and Grusin proposed a “double logic of remediation” for the representation of reality: transparent immediacy and hypermediacy. Both work to produce meaning. The former tries to efface the mediation at work and produce transparent immediacy, such as watching a live basketball game on television. The latter foregrounds the medium, especially through computer graphics. Financial news programs on TV such as Bloomberg Surveillance mix the immediacy of live news, using hosts and guests, with hypermediated indexes of stock markets (DJIA, S&P 500, NASDAQ, etc.) and other economic indicators such as GDP. How do spreadsheets attempt to perceive, display, and produce reality? How do they “heal” our perception of reality?

Windows to the World Wide Web

It was the personal computer (PC) that brought the spreadsheet to life. The Apple II brought us VisiCalc in 1979 with a screen of 40 columns and 25 rows, a small area that could be navigated quickly using the arrow keys. One of the first formulas developed for the spreadsheet was net present value (NPV), which calculated the return on investment (ROI) for projects, including large purchases of equipment. Microsoft’s Disk Operating System (MS-DOS) was the technical foundation for Lotus 1-2-3 as the IBM PC and “IBM compatibles” proliferated during the 1980s. The spreadsheet became known as the “killer app” that made buying the “microcomputer” worthwhile. But it was the graphical user interface (GUI) that popularized the PC, and thus the spreadsheet.
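The NPV calculation that early spreadsheet users built into cell formulas can be sketched in a few lines (the cash flows here are hypothetical, and this is the textbook definition; spreadsheet NPV functions like Excel's differ slightly by discounting the first flow one period):

```python
def npv(rate, cash_flows):
    """Net present value: discount each cash flow back to time zero.

    cash_flows[0] is the initial outlay (usually negative); flow t is
    divided by (1 + rate) ** t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical equipment purchase: $10,000 out, $4,000 back per year for 3 years
flows = [-10_000, 4_000, 4_000, 4_000]
print(round(npv(0.08, flows), 2))  # -> 308.39 (positive NPV: worth doing at 8%)
```

A positive result says the discounted returns exceed the outlay, the go/no-go signal that made the spreadsheet indispensable for capital-purchase decisions.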

The Apple Mac marked the shift to the GUI and the new desktop metaphor in computing. GUIs replaced the typed ASCII characters of the command-line interface with a more “natural” immediacy provided by the interactivity of the mouse, the point-able cursor, and drop-down menus. The desktop metaphor drew on the iconic necessities of the office: the file, inboxes, trash cans, etc. (p. 23). A selection of fonts and typographies remediated both print and handwriting. The use of the Mac required some suspension of disbelief, but humans have been conditioned for this alteration of reality by storytelling and visual narratives in movies and TV.

Microsoft’s Excel was the first spreadsheet to use the GUI developed at Xerox PARC and popularized by Apple. Designed for the Apple Macintosh, it became a powerful tool that combined the media elements of the spreadsheet to produce more “authentic” versions of reality. An ongoing issue is the way it became a powerful tool for organizing that reality in ways that benefitted certain parties and not others.

Excel was central to Microsoft’s own shift to GUIs, which began with Windows in 1985 and made spreadsheets a key part of its Office software package. Microsoft had captured the IBM-compatible PC market with DOS and initially built Windows on top of that OS. Windows 2.0 changed the OS to allow for overlapping windows. Excel became available on Windows in 1987 and soon became the dominant spreadsheet. Lotus had tried to make the transition to the GUI with Jazz but missed the market by aiming too low and treating the Mac as a toy.

Windows suggested transparent views for the individual onto different realities. But while the emerging PC was moving towards transparent immediacy, the spreadsheet delved into what Bolter and Grusin would call hypermediacy, an alternate strategy for attaining authentic access to the real. Windows promised transparent views of the world, but the spreadsheet, by remediating prior media, offered new extensions of the senses: a surveying and calculative gaze.

Spreadsheets drew on the truth claims of both writing and arithmetic while combining them in powerful ways to organize and produce practical information. They combined and foregrounded the media involved to present or remediate a “healed” version of reality. Each medium provides a level of visibility or signification. The WYSIWYG (What You See Is What You Get) environment of the desktop metaphor provided a comfortable level of interactivity for defining categories, inputting data, organizing formulas, and displaying that information in charts and graphs.

The Political Economy of PC-based Spreadsheets

How has the digital spreadsheet changed modern society? Starting with VisiCalc and Lotus 1-2-3, the spreadsheet created new ways to see, categorize, and analyze the world. It combined and remediated previous media to create a signifying and pan-calculative gaze that enhanced the powers of accounting, finance, and management. Drawing on Bolter and Grusin, can we say that digital spreadsheets as remediated technology became a “healed” media? What was its impact on the modern political economy? What was its impact on capitalism?

The spreadsheet amplified existing managerial processes and facilitated new analytical operations. Its grid structure allowed a tracking system to monitor people and things. It connected people with tasks and results, creating new methods of surveillance and evaluation. It could register millions of items as assets in multiple categories. It itemized, tracked, and valued resources while constructing scenarios of future opportunity and value.

Digital spreadsheets introduced a major change of pace and scale to the financial revolution that started with Nixon’s decision to go off gold and onto an “information standard.” The spreadsheet facilitated quick analysis and the recalculation of loan payment schedules in an era of inflation and dynamic interest rates. It started with accountants and bookkeepers, who quickly realized that they could do their jobs with new precision and alacrity.
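The kind of recalculation involved can be sketched with the standard amortization formula; the principal and the jump from 8% to 14% are hypothetical figures meant to evoke the rate swings of the era:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized loan payment: P * r / (1 - (1 + r)**-n),
    with a monthly rate r and n total monthly payments."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return principal / n  # no interest: just divide the principal
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $100,000, 30-year mortgage repriced as rates climbed
print(round(monthly_payment(100_000, 0.08, 30), 2))  # roughly $734/month at 8%
print(round(monthly_payment(100_000, 0.14, 30), 2))  # roughly $1,185/month at 14%
```

Rebuilding a payment schedule like this by hand meant hours with printed tables; in a spreadsheet, changing one rate cell recalculated the entire schedule instantly.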

PCs and spreadsheets started to show up in corporate offices, sometimes to the chagrin of the IT people. The IBM PC legitimized the individual computer in the workplace, and new software applications emerged, including new types of spreadsheet applications such as Borland’s Quattro Pro. Spreadsheet capabilities increased dramatically through the 1980s. These new processes of analyzing assets allowed for the shift to a new era of spreadsheet capitalism.

Reaganomics’ emphasis on financial resurgence and the globalization of news meant that money-capital could flow more freely. It is no surprise that the digital spreadsheet ushered in the era of leveraged buyouts (LBOs) and wide-scale privatization of public assets. Companies could be analyzed and their assets separated into different categories or companies. Spreadsheets could determine NPV, and plans could be presented to investment bankers for short-term loans. Then certain assets could be sold off to pay back the loans and cash in big rewards.

Similarly, the assets of public agencies could be itemized, valued, and sold off or securitized and listed on share markets/stock exchanges. The “Third World Debt Crisis” created by the oil shocks of the 1970s created new incentives to find and sell off public assets to pay off government loans. This logic happened to telecommunications companies worldwide. Previously, PTTs (Post, Telephone, and Telegraph) were government-owned operations that returned profits to the Treasury. But the calculative rationality of the spreadsheet was quickly turned to analyzing the PTTs and summing the value of all the telephone poles, maintenance trucks, switches, and other assets. At first, these companies were turned into state-owned enterprises (SOEs), but they were eventually sold off to other companies or listed on share markets. By 2000, the top companies in most countries, in terms of market capitalization, were former PTTs, now transformed into privatized “telcos.”

Capitalism is highly variable. Regulations, legislation, and technologies change the political economy and shape the flows of information and money. World Trade Organization (WTO) meetings in 1996 and 1997 reduced tariffs on computers and other IT-related products and pressured countries to liberalize telecommunications and complete PTT privatization. By the late 1990s, these telcos were adopting the new Internet Protocols (IP) that allowed for the World Wide Web. Cisco Systems and Juniper Networks were instrumental in developing new switching and routing systems that allowed telcos to convert into broadband providers and dominate the ISP markets.

A spreadsheet is a tool, but it is also a worldview: a reality framed by categories, data sets, and numbers. As the world moved into the financialization and globalization of the post-oil-crisis Reagan era, the PC-based spreadsheet was forged into a powerful new “remediated” technology in which combinations of media, framed by computer windows, guided and shaped the perceptions of a new era of capitalism. Today, we have Apple’s iWork Numbers, Google Sheets, and LibreOffice Calc, but Microsoft Excel is still the dominant spreadsheet. How, though, has Microsoft scaled Excel, particularly the use of lists and tables, with Access and the SQL language?


[1] McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964. Print.
[2] Bolter, J. D, and Richard A. Grusin. Remediation: Understanding New Media. Cambridge, Mass: MIT Press, 1999. Print.


Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Internet Policy, Part 4: Obama and the Return of Net Neutrality, Temporarily

Posted on | March 26, 2021

The highly competitive Internet service provider (ISP) environment of the 1990s was significantly altered by the Federal Communications Commission (FCC) during the Bush Administration. Two Bush appointments to the FCC Chair guided ISP policies towards a more deregulated environment. The result, however, was a more oligopolistic market structure and less competition in the Internet space. Furthermore, these policies raised concerns that powerful ISPs could influence the flow of data through the Internet and discriminate against competing content providers to the detriment of consumers.

The FCC is an independent commission but can lean in political directions. Under the leadership of Michael Powell (January 22, 2001 – March 17, 2005), a Republican from Virginia and son of General Colin Powell, FCC decisions favored cable companies. In the summer of 2005, the FCC, now led by Chairman Kevin J. Martin (March 18, 2005 – January 19, 2009), a Republican from North Carolina, made decisions that favored telcos. The FCC classified cable modem services and then the broadband services of telecommunications companies as Title I unregulated “information services.” This raised ongoing concerns that powerful ISPs could influence the flow and speed of data through the Internet and discriminate against competing content providers or users to the detriment of consumers.[1]

This post examines the Obama administration’s approach to Internet regulation and the issue of net neutrality. This involved reviving “Title II” regulation that works to guarantee the equal treatment of content throughout the Internet. Previously, I examined the legal and regulatory components of common carriage and the emergence of net neutrality as an enabling framework for Internet innovation and growth.

Comedian John Oliver explained net neutrality on his Last Week Tonight show published on June 1, 2014.

The Internet’s political and social impact became more apparent with the social media presidential campaign of Barack Obama in 2008. The Pew Research Center found that some 74% of Internet users interacted with election information that year. Many citizens received news online, communicated with others about the elections, and received information from campaigns via email or other online sources.

In 2010, the Obama administration began to write new rules that would require ISPs to treat all traffic equally. In what were called the “Open Internet” rules, FCC Chairman Julius Genachowski, a Democrat from Washington, D.C. (June 29, 2009 – May 17, 2013), sought to restrict telecom providers from blocking or slowing down specific Internet services. Verizon sued the agency to overturn those rules in a case that was finally decided in early 2014. The court determined that the FCC did not have the power to require ISPs to treat all traffic equally, given their new Title I designations. The judge was sympathetic to the consumer’s plight, though, and directed the ISPs to inform subscribers when they slow traffic or block services.

After the appeal by Verizon, the DC Circuit court sent the FCC back to the drawing board. Judge David Tatel said that the FCC did not have the authority under the current regulatory classifications to treat telcos as “common carriers” that must pass data content through their networks without interference or preference. The result of Verizon v. FCC was that without a new regulatory classification, the FCC lacked the authority to stop the big ISPs from blocking legal websites, throttling or degrading traffic on the basis of content, or offering “paid prioritization” of Internet services. The latter, the so-called “fast lanes” for companies like Google and Netflix, were particularly contentious.[2]

So, on November 10, 2014, President Obama went on the offensive and asked the FCC to “implement the strongest possible rules to protect net neutrality” and to stop oligopolistic ISPs from blocking, slowing down, or otherwise discriminating against lawful content. Tom Wheeler, the incoming FCC Chairman, from California (November 4, 2013 – January 20, 2017), sought a new classification from the legacy of the Communications Act of 1934 by invoking Title II “common carrier” distinctions for broadband providers.

To its credit, the FCC had been extremely helpful in creating data communications networks in the past. Its classification of data services in Computer I as “online” services and not “communications” provided timely benefits. For example, it allowed early PCs with modems to connect to ISPs over telephone lines for hours without paying toll charges to the providers of local telephone service. But with the Internet now competitive, extending deregulated broadband classifications to the telcos seemed excessive.

“Information services” under Title I is a more deregulatory classification that allows the telcos to impose more control over the Internet. It refers to “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications.” As mentioned previously, under George W. Bush’s FCC, cable companies in 2002 and then telcos in 2005 were classified as Title I information services. This led to a major consolidation of US broadband service, which came to be dominated by large integrated service providers such as AT&T, Comcast, Sprint, and Verizon. These companies began trying to merge with content providers, raising the specter of monolithic companies controlling information and invading privacy.

On February 26, 2015, the FCC’s new “Open Internet” rules went into effect based on Title II of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996. The latter gave the FCC authority to regulate broadband networks, including imposing net neutrality rules on Internet service providers. Section 706 directs the FCC and state utility commissions to encourage the deployment of advanced telecommunications capability to all Americans by removing barriers to infrastructure investment and promoting competition in the local telecommunications markets.

But Section 706 authority only kicks in when the FCC finds that “advanced telecommunications capability” is “not being deployed to all Americans in a reasonable and timely fashion.”

In other words, the case needs to be made that the US Internet infrastructure is lacking. For example, the FCC established 25 Mbps download/3 Mbps upload as the new standard for “advanced telecommunications capability” for residential service. This is actually a fairly low benchmark for urban broadband users, as only 8% of America’s city dwellers lack access to that level of service. But that still left some 55 million Americans behind, as rural areas were largely underserved, especially in tribal lands.

In early 2015, President Obama began to direct attention towards broadband access. Consequently, Chairman Wheeler announced that the FCC’s Connect America Fund would disburse $11 billion to support modernizing Internet infrastructure in rural areas. The FCC also reformed the E-rate program to support fiber deployment and Wi-Fi service in the nation’s schools and libraries.[3]

The Open Internet rules were meant to protect the free flow of content and promote innovation and investment in America’s broadband networks. They were grounded in multiple sources of authority, including Title II of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996. In addition to providing consumer protections by restricting the blocking, throttling, and paid prioritization of Internet services, the FCC strove to promote competition by ensuring that all broadband providers have access to poles and conduits for their physical plant.

The rules did not require providers to get the FCC’s permission to offer new rate plans or introduce new services. Nor did they require companies to lease access to their networks, a key provision that had promoted ISP competition, although the FCC did monitor interconnection complaints. A key dilemma was how to promote the ubiquity of the Internet while exempting broadband customers from universal service fees.

The election of Donald Trump presented new challenges to net neutrality and the potential for a reversal. Tom Wheeler resigned from the FCC, allowing Trump to elevate a Republican majority. The new FCC voted 3-2 to begin eliminating Obama’s net neutrality rules and reclassifying home and mobile broadband service providers as Title I information services. The new FCC Chairman, Ajit Pai, argued that the web was too competitive to regulate effectively and that throttling some web applications and services might even help Internet users. The FCC began seeking comments about eliminating the Title II classification. Replacing the Obama net neutrality rules was put to a vote by the end of the year, and the FCC once again returned to Title I deregulation through a declaratory ruling.


[1] Ross, B.L. and Shumate, B.A., Rein, W. “Regulating Broadband Under Title II? Not So Fast.” Bloomberg BNA. N.p., 25 June 2014. Web. 18 June 2017.
[2] Finley, Klint. “Internet Providers Insist They Love Net Neutrality. Seriously?” Wired. Conde Nast, 18 May 2017. Web. 18 June 2017.
[3] “What Section 706 Means for Net Neutrality, Municipal Networks, and Universal Broadband.” Benton Foundation, 13 Feb. 2015. Web. 18 June 2017.



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Will Offshore Wind Power Print Money?

Posted on | March 15, 2021

Research is showing that offshore wind farms can increase biodiversity in oceans. Like sunken ships, windmill installations present unique opportunities for facilitating marine life. These new habitats can create artificial reefs and marine life-protection areas. Undersea hard surfaces rapidly collect a wide range of marine organisms that build and support local ecosystems. They also provide some refuge from trawlers and other industrial fishing operations.

This post will examine the prospects of wind energy, one of the promising renewable energies that will work with hydropower, solar, and even small-scale nuclear energy to power the smart electrical grids of the future. Is offshore wind feasible? What are the downsides? Will it be profitable? Can media economics help us understand the economics of wind power?

Personally, wind power hasn’t impressed me in the past. In graduate school in Hawaii, I remember a big windmill near the North Shore surf spots that didn’t seem to do much. Driving towards San Francisco along Interstate 5, I found the windmills big and slow. Flying over Oklahoma, the wind farms are a bit more impressive. But I didn’t understand the engineering and science of wind energy.

The physics of windmills means big is better. The swept area of the rotor grows with the square of the blade length, so larger propellers capture disproportionately more wind, and the power available in the wind grows with the cube of the wind speed. More captured wind produces more torque, and torque and angular speed together determine the mechanical power that the generator transforms into electricity.

[Figure: torque and angular speed]
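This scaling can be sketched with the standard rotor-power formula; the 40% power coefficient below is a hypothetical value for illustration:

```python
import math

def rotor_power_watts(blade_length_m, wind_speed_ms, air_density=1.225, cp=0.40):
    """P = 0.5 * rho * A * v**3 * Cp, with swept area A = pi * r**2.

    Cp is a hypothetical 40% power coefficient; the Betz limit caps
    any turbine at about 59.3%.
    """
    swept_area = math.pi * blade_length_m ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

# Doubling blade length quadruples swept area, and thus power
print(rotor_power_watts(100, 10) / rotor_power_watts(50, 10))   # -> 4.0
# Doubling wind speed multiplies power by eight
print(rotor_power_watts(50, 20) / rotor_power_watts(50, 10))    # -> 8.0
```

The square and cube terms are why offshore sites, with room for enormous blades and stronger, steadier winds, pay off so disproportionately.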

Unlike solar, wind power is not directly contingent on the sun’s rays but on larger climatic events. The US Department of the Interior‘s Bureau of Ocean Energy Management (BOEM) has been conducting environmental impact studies and is giving conditional permission to build offshore wind farms. Contracts to provide wind electricity as low as 5.8 cents per kilowatt-hour are being negotiated. Massachusetts, Virginia, and the far coast of Long Island, New York are among the major sites under development. While previously a global laggard, the US is expected to become a major offshore electricity contributor after 2024.

The future of US offshore wind energy depends on several economic variables. One is power purchase agreements (PPAs), which businesses and other organizations use to solidify long-term purchases of electricity. Another is renewable portfolio standards (RPSs), which obligate US states to procure a certain percentage of renewable energy. RPSs have contributed to nearly half of the growth in renewable energies since 2000. Tax incentives are important and depend on political winds; the US Treasury extended safe-harbor tax credits for renewable energies, including offshore wind, in light of the COVID-19 pandemic. Offshore wind auctions are also crucial, as the cry “location, location, location” resonates soundly in this industry.

Renewable critics like the Manhattan Institute have been critical of offshore windmills, arguing that they decline some 4.5% in efficiency every year. Another concern is who will pick up the decommissioning costs of deconstructing and recycling the windmills. But the technology is new, as are the maintenance and regulatory practices.

Wind could be a significant boost for coastal communities. Major cities that were wedded to the ocean due to shipping are likely to benefit, as offshore wind might provide cheap electricity and much-needed economic benefits. In terms of jobs and the revitalization of shore-based businesses, a wide range of services will be needed. Energy control centers, undersea construction, equipment supply, and maintenance operations are just some of the opportunities emerging around ocean-based renewable energy sources.

The economics of offshore wind energy are very much like media economics: high upfront costs and low marginal costs. Book publishing requires editors and pays author royalties. It also needs paper, printing presses, and the distribution capabilities required to produce fiction and non-fiction works. While some books may not be profitable, a best-seller can provide significant returns for the publisher. Movies require extensive upfront expenses in production and post-production, but each showing in cinemas worldwide costs relatively little. Wind power requires a major capital influx to set up. But the wind is free, so once operational, the windmill begins to produce electricity. Lubrication and other maintenance activities are needed at times, but electricity is created as long as the wind blows. If the infrastructure is set up efficiently, it will print money.
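This high-fixed-cost, low-marginal-cost structure can be sketched in a few lines; the capex and operating cost below are hypothetical round numbers, not figures for any actual project:

```python
def average_cost_per_kwh(capex, marginal_cost, kwh):
    """Average cost falls toward the marginal cost as output grows:
    (fixed costs + marginal_cost * output) / output."""
    return (capex + marginal_cost * kwh) / kwh

# Hypothetical farm: $500M upfront, half a cent per kWh to operate
CAPEX = 500_000_000
for kwh in (1e9, 5e9, 20e9):
    cents = average_cost_per_kwh(CAPEX, 0.005, kwh) * 100
    print(f"{kwh:.0e} kWh -> {cents:.1f} cents/kWh")
```

Under these assumptions, average cost drops from 50.5 cents per kWh at one billion kWh to 3.0 cents at twenty billion: the same dynamic that makes a best-seller or blockbuster so profitable once the upfront costs are covered.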


Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Born in New York, he had a chance to teach at Marist College near his home town of Goshen before spending most of his academic career at New York University. Before joining SUNY, he moved to Austin, Texas and has taught in the MBA program at St. Edwards University. He started his academic career at Victoria University in New Zealand. He has also spent a decade as a Fellow at the East-West Center in Honolulu, Hawaii.

COVID-19 and US Economic Policy Responses

Posted on | March 8, 2021

COVID-19 was recognized in early 2020 and began to spread rapidly in March. The World Health Organization (WHO) identified the virus in January, and later that month, the CDC confirmed the first US coronavirus case. On March 13, President Trump declared the spreading coronavirus a national emergency as the US registered its 100th death. Many restaurants and other high-contact industries began to shut down. Transportation and tourism ground to a halt. As a result, the US’s economic policymakers worked to design a response. In this post, I look at how the Federal Reserve and Congress (House and Senate) addressed the economic ramifications of the emerging pandemic and produced actions that reverberated through the US economy, including the so-called K-shaped recovery.

[Chart: Stock market since 2021]

The US economy went into steep decline in the second quarter (April, May, June) while the virus spread and the Federal Reserve’s monetary policy and the CARES Act were being implemented. According to the Bureau of Economic Analysis (BEA), in the second quarter of 2020, US real Gross Domestic Product (GDP) contracted by 31.4 percent at an annualized rate (9 percent at a quarterly rate). It was the starkest economic decline since the government started keeping records in 1947.
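The relationship between those two figures is simple compounding: the BEA's headline number assumes the quarterly change repeats for four quarters. A quick check:

```python
def annualized(quarterly_rate):
    """Compound a quarterly growth rate over four quarters,
    as in the BEA's headline annualized figures."""
    return (1 + quarterly_rate) ** 4 - 1

print(round(annualized(-0.09) * 100, 1))   # -> -31.4 (Q2 2020 contraction)
print(round(annualized(0.074) * 100, 1))   # -> 33.1 (Q3 2020 rebound)
```

Note the asymmetry of compounding: the annualized rebound looks slightly larger than the annualized crash even though the economy had not fully recovered its lost output.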

Starting March 3, the FOMC reduced the Fed Funds Rate by 1.5 percentage points to 0-0.25%, making it official at its March 15 FOMC meeting. The Fed Funds Rate is the interest rate at which banks borrow money from each other, with transfers settled over the Fed’s FEDWIRE network. This gives them the reserves that can be lent out at higher rates for car loans, home mortgages, and industrial capacity. The loans can also be invested in appreciating financial assets such as Bitcoin, currencies, equities, and gold. Rather surprising was the Fed’s decision to reduce the reserve ratio to 0 from its traditional 10%. This meant banks no longer had to hold a percentage of their deposits in their vaults or at the Federal Reserve. The Fed also offered a narrative framework, or “forward guidance,” on interest rates, stating they would remain low until unemployment receded and inflation rose to roughly 2 percent.

[Chart: COVID unemployment]
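The significance of dropping the reserve ratio to zero can be illustrated with the textbook deposit multiplier, a stylized model rather than a description of actual Fed operations:

```python
def simple_deposit_multiplier(reserve_ratio):
    """Textbook multiplier: each dollar of reserves can support
    1 / reserve_ratio dollars of deposits (a stylized model,
    not how the Fed actually steers the money supply)."""
    if reserve_ratio == 0:
        return float("inf")  # no reserve constraint on lending at all
    return 1 / reserve_ratio

print(simple_deposit_multiplier(0.10))  # traditional 10% requirement -> 10.0
print(simple_deposit_multiplier(0.0))   # March 2020 requirement -> inf
```

In this simple model, a zero requirement removes any reserve-based ceiling on bank lending, which is why the move was so striking, even if capital rules and loan demand still constrain banks in practice.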

The Fed simultaneously announced that it would begin to purchase securities “in the amounts needed to support smooth market functioning and effective transmission of monetary policy to broader financial conditions.” After its mid-March meeting, the Fed said it would begin buying some $500 billion in Treasury securities and $200 billion in government-guaranteed mortgage-backed securities. This is a version of the quantitative easing (QE) used, along with the $700 billion Troubled Asset Relief Program (TARP), to recover from the 2007 financial crisis.

Over the course of the year, the Fed bond portfolio increased by $2.5 trillion from $3.9 trillion to $6.6 trillion. The purchases injected money into the economy and QE kept interest rates low, helping to keep mortgages cheap and the housing industry booming. The $6.6 trillion balance is a lot, but it can also be used to draw money out of the economy to help reduce inflation. That is what distinguishes printing money from QE. Printing money puts cash into the economy without adequate means to extract it during inflationary periods. Ideally, the Fed can sell off its balances and subtract money from the economy.

Congress worked on stimulating the economy as well. The Senate drew on the House of Representatives' Middle Class Health Benefits Tax Repeal Act, originally introduced in the U.S. Congress on January 24, 2019. All spending bills must originate in the House of Representatives, so the Senate used it as a "shell bill" to begin working on economic and public health relief. It was filled in with additional content to combat the virus and protect the economy. On March 27, 2020, President Trump signed the CARES (Coronavirus Aid, Relief, and Economic Security) Act into law.

At over US$2 trillion, CARES was the largest rescue package in US history. It was more than twice the size of the American Recovery and Reinvestment Act of 2009 (ARRA), which totaled $831 billion and helped revive the stalled US economy after the credit crisis. The CARES Act expanded unemployment benefits, including for freelancers and gig workers, and gave direct payments to families. It also provided cash for grounded airlines, money for state and local governments, and half a trillion dollars in loans for corporations (while banning stock buybacks).

The result was a remarkable turnaround in GDP, not always the best economic indicator, but a key historical one. The third quarter (July, August, and September) saw dramatic growth. According to the BEA, US real GDP increased at an annual rate of 33.1 percent (7.4 percent at a quarterly rate). Compared to the 9 percent contraction in the second quarter, this was a stunning reversal, the so-called V-shaped recovery. The BEA then reported that real GDP rose again by 4% in the fourth quarter.
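The relationship between the quarterly and the headline annualized figure is simple compounding: the BEA reports quarterly growth as if it continued at the same pace for four quarters. A minimal sketch of the conversion (an illustration of the arithmetic, not the BEA's full methodology):

```python
# Convert a quarterly GDP growth rate to the compounded annual rate
# used in BEA headline figures: annual = (1 + q)^4 - 1.

def annualized(quarterly_rate: float) -> float:
    """Compound a quarterly growth rate over four quarters."""
    return (1 + quarterly_rate) ** 4 - 1

# Q3 2020: 7.4% quarterly growth compounds to roughly 33.1% annualized.
print(round(annualized(0.074) * 100, 1))  # → 33.1
```

This is why the 33.1 percent figure, while accurate, can overstate the felt recovery: the economy did not actually grow by a third in those three months.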

Instead of a V-shaped recovery, talk of a K-shaped economy emerged, meaning that the economy was diverging. The economic crash hit different sectors unevenly, and the recovery even more so. The well-off and professionals, especially those who could telework, did well. At the same time, much of the rest of the economy faltered, often along racial, gender, industrial, and geographical lines.

Fueled in part by Trump's 2017 Tax Cuts and Jobs Act, significant amounts of money moved into appreciating assets. Many well-off people simply had more money to invest. The tax cut was also consequential in that, combined with the Fed's low interest rates, it spurred unprecedented speculation and borrowing on margin for investment purposes. With these monetary and fiscal stimulus packages, the financial markets recovered quickly and continued to rise into 2021.

A year ago, the S&P 500 fell some 20% from its highs in a record 16 days. A key measure of the top 500 listed companies and the market overall, it is also a major indicator of the economy. A year later, the S&P 500 had recovered from its low of 2,304 to a near-record close of 3,931 on February 17. Overall, the S&P 500 returned 15.15% in 2020.

The Dow Jones Industrial Average (DJIA) is another important indicator of the economy and financial markets, and one of the oldest (shown above). It indexes the top 30 "blue chip" companies, that is, companies with pricing power over their products, such as Apple, Chevron, Coca-Cola, Disney, and Procter & Gamble. The "Dow" crashed to 18,591 on March 23 from a high of just over 29,300 three weeks earlier. The dollar was also down, as was crude oil and many commodities, including gold. The Dow continued to rise and recovered to nearly 31,500 two months into the Biden presidency.

On March 6, 2021, the Senate passed a new $1.9 trillion coronavirus relief package. It came at a time when stock markets were at record highs, Bitcoin had ballooned to over $50,000, and concerns about inflation, due to increased spending and especially strained supply chains, had emerged. But the K-shaped economy was still evident, as the warped imbalances of the Trump tax cut and low interest rates helped people who didn't really need help, while many others struggled.

The new COVID-19 response has three main areas: pandemic response ($400 billion), including $14 billion for vaccine distribution; direct relief to struggling families ($1 trillion), notably the $1,400 checks for individuals and unemployment benefits of $300/week; and support for communities (in multi-year tranches) and small businesses ($440 billion), especially transit systems and tourism areas hit hard by the pandemic.

We entered 2021 with an unbalanced economy: a roaring stock market but massive poverty. Years of supply-side economics gave us a highly technological society and appreciating financial assets. But it was based on globalized supply chains and highly dependent on Russia and Saudi Arabia to support petro-intensive lifestyles and economic practices. Tax cuts transferred much of US wealth to the higher income brackets. Trump's US$1.3 trillion tax cuts exacerbated the imbalances as the former president racked up US$7.8 trillion in national debt from his inauguration on January 20, 2017, to the Capitol riots on January 6, 2021, when the Electoral College votes were tallied and President Biden was declared the winner.

We had a medical emergency; the new COVID-19 legislation is paying the bill and hopefully taking some of the kick out of the K-shaped recovery. What is important in the current legislation is its support for the sick and dispossessed, including those affected by closed businesses and the 9.5 million jobs that disappeared over the last year. Will we have inflation? The best cure for inflation is stopping the pandemic and restoring the circuits of food and other vital commodities.


Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.

Five Generations of Wireless Technology

Posted on | February 8, 2021 | No Comments

The ubiquity, ease, and sophistication of mobile services have proven to be an extraordinarily popular addition to modern social and productive life. The term "generations" has been applied to wireless technology classifications as a way to refer to the major disruptions and innovations in the state of mobile technology and associated services. These innovations include the move to data and the Internet protocols associated with the convergence of multiple forms of communications media (cable, mobile, wireline) and the wide array of services that are becoming increasingly available on portable devices like laptops and smartphones. We are now on the cusp of the 5th generation rollout of wireless services with intriguing implications for enterprise mobility, "m-commerce," public safety, and a wide array of new entertainment and personal productivity services.

By 1982, the Federal Communications Commission (FCC) had recognized the importance of the emerging wireless communications market and began to define Cellular Market Areas (CMA) and to assign area-based radio licenses. It split the 40 MHz of radio spectrum it had allocated to cellular into two market segments: half would go to the local telephone companies in each geographical area and the other half to interested non-telephone companies by lottery. Although AT&T's Bell Labs had effectively begun the cellular market, it had estimated the market in 2000 would be slightly less than a million subscribers and consequently abandoned it during its divestiture of the regional phone companies. Meanwhile, financier Michael Milken began a process of helping the McCaw family buy up the other licenses, making them multibillionaires when they sold out to AT&T in the mid-1990s.

The first generation (1G) of wireless phones consisted of large analog voice devices with virtually nonexistent data transmission capability. This initial generation was developed in the 1980s through a combination of lotteries and the rollout of cellular sites and integrated networks. It used multiple base stations, each providing service to small adjoining cell areas. Its most popular phone was the Motorola DynaTAC, known sometimes as "the brick," now immortalized by fictional financier Gordon Gekko's early morning beach stroll in Wall Street (1987). 1G was hampered by a multitude of standards, such as AMPS, TACS, and NMT, that competed for acceptance. The Advanced Mobile Phone System (AMPS) was the first standardized cellular service in the world and was used mainly in the US.

The second generation (2G) of wireless technology was the first to provide data services of any significance. By the early 1990s, GSM (Global System for Mobile Communications) was introduced, first in Europe, then in the U.S. by T-Mobile, and in other countries worldwide. GSM standards were developed starting in 1982 by the Groupe Spécial Mobile committee, an offshoot of the European Conference of Postal and Telecommunications Administrations (CEPT). GSM was the standard that would allow national telecoms around the world to provide mobile services. Although voice services improved significantly, the top data speed was only 14.4 Kbps.

The second generation also marked the introduction of CDMA (Code Division Multiple Access). Multiple access technologies cram multiple phone calls or Internet connections into one radio channel. AT&T utilized Time-Division Multiple Access (TDMA)-based systems, while Bell Atlantic Mobile (later Verizon) introduced CDMA in 1996. This second generation of digital technology reduced power consumption and carried more traffic, while voice quality and security improved. The Motorola StarTAC phone was originally developed for AMPS but was sold for both TDMA and CDMA systems.

Innovations sparked the development of the 2.5G standards that provided faster data speeds. The additional "half" generation referred to the use of data packets. Known as General Packet Radio Service (GPRS), the new standards could provide 56-171 Kbps of digital service. GPRS was used for Short Message Service (SMS), otherwise known as "text messaging," MMS (Multimedia Messaging Service), WAP (Wireless Application Protocol), and Internet access. Being able to send a message with emojis, pictures, video, and even audio content to another device provided a significant boost to the mobile phone's utility.

An advanced form of GPRS called EDGE (Enhanced Data Rates for Global Evolution) was used for the first Apple iPhone. Sometimes labeled "2.75G," EDGE fell short of true 3G speeds.

Third generation (3G) network technology was introduced by Japan's NTT DoCoMo in 1998. Still, it was adopted slowly in other countries, mainly because of the difficulties obtaining the additional electromagnetic spectrum needed for the new towers and services. 3G technologies provided a range of new services, including better voice quality and faster speeds. Multimedia services like Internet access, mobile TV, and video calls became available, and telecom and application services such as file downloads and file sharing made it easy to retrieve, install, and share apps. 3G radio standards were largely specified by the International Mobile Telecommunications-2000 (IMT-2000) project of the International Telecommunication Union (ITU), but the major carriers continued to evolve their own systems, such as Sprint and Verizon's CDMA2000 and AT&T and T-Mobile's Universal Mobile Telecommunications System (UMTS). UMTS, an upgrade of GSM based on the IMT-2000 standard set, was an expensive path, as it required new base stations and frequency allocations.

A 3.5G generation became available with the introduction of High Speed Packet Access (HSPA), which promised 14.4 Mbps, although speeds of 3.5-7.2 Mbps were more typical.

Fourth generation (4G) wireless technology sought to provide mobile all-IP communications and high-speed Internet access to laptops with USB wireless modems, smartphones, and other mobile devices. Sprint unveiled the first 4G phone, the HTC EVO, in March of 2010 at the communication industry's annual CTIA event in Las Vegas. With a 4.3-inch screen, two cameras, and the Android 2.1 OS, the new phone was able to tap into the new IP environment. 4G technology was rolled out in various forms with a dedication to broadband data and Internet protocols, supporting services such as VoIP, IPTV, live video streams, online gaming, and multimedia applications for mobile users.

While 3G was based on two parallel infrastructures using both circuit-switched and packet-switched networking, 4G relied on packet-switching protocols. 4G LTE (Long Term Evolution) refers to wireless broadband IP technology developed by the Third Generation Partnership Project (3GPP). "Long Term Evolution" meant the progression from 2G GSM to 3G UMTS and into the future with LTE. The 3GPP, an industry trade group, designed the technology with the potential for 100 Mbps downstream and 30 Mbps upstream. Although always subject to various environmental influences, data rates were projected to reach 1 Gbps within ten years.[2]
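To put those nominal rates in perspective, a back-of-the-envelope calculation (ignoring protocol overhead, contention, and real-world radio conditions) shows how the generational leaps changed what was practical on a mobile device:

```python
# Rough transfer-time estimates at nominal data rates, ignoring
# protocol overhead, congestion, and radio conditions.

def transfer_seconds(size_gigabytes: float, rate_mbps: float) -> float:
    bits = size_gigabytes * 8e9      # gigabytes to bits
    return bits / (rate_mbps * 1e6)  # divide by bits per second

# A 1 GB video at LTE's nominal 100 Mbps downstream takes 80 seconds.
print(transfer_seconds(1, 100))  # → 80.0

# The same file at 2G GSM's 14.4 Kbps would take over six days.
print(transfer_seconds(1, 0.0144) / 86400)  # days
```

Numbers like these explain why video streaming and app stores only became viable mass services with 3.5G and 4G networks.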

Early 4G phones accessed WiMax (Worldwide Interoperability for Microwave Access), based on IEEE Standard 802.16, with a range of some 30 miles and transmission speeds of 75 Mbps to 200 Mbps; later 4G phones from Apple (iPhone 5-7), Samsung, and others adopted the rival LTE standard.

4G WiMax provides data rates similar to 802.11 Wi-Fi standards with the range and quality of cellular networks. The difference in technology has been the softer handoffs between base stations that allow for more effective mobility over longer distances. Going to IP enables mobile technology to integrate into the all-IP next-generation network (NGN) that is forming to offer services across broadband, cable, and satellite communication mediums.

In October 2020, Apple unveiled its first iPhones to support 5th generation (5G) connectivity with the iPhone 12. This meant Apple had to add new chips, antennas, and radio-frequency filters to the new phone. 5G wireless communications represent a major new set of challenges and opportunities. The frequencies used require higher levels of power and more base stations because the range of transmission is shorter than LTE's. But 5G also affords new opportunities, such as connections up to 10x faster than LTE and reduced latency. Faster speeds mean new and enhanced cloud-based services for games and video, virtual and augmented realities, IoT in homes and factories, and enhanced telemedicine applications.

5G uses frequencies that are 10 to 100 times higher than the radio waves used for 4G and WiFi networks. We need to know more about the power dynamics of 5G and under what conditions, if any, it can break molecular bonds or pose health risks from long-term exposure.
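One relevant physical benchmark: the energy a single radio photon carries is proportional to its frequency (E = hf), and even at millimeter-wave 5G frequencies it remains orders of magnitude below the electron-volt energies needed to break molecular bonds. A quick sketch (the 2 GHz and 28 GHz figures are illustrative band choices, and this addresses only single-photon effects, not heating):

```python
# Photon energy E = h*f, compared with typical covalent bond energies.
PLANCK = 6.626e-34  # Planck's constant, joule-seconds
EV = 1.602e-19      # joules per electron-volt

def photon_energy_ev(freq_hz: float) -> float:
    """Energy of one photon at the given frequency, in electron-volts."""
    return PLANCK * freq_hz / EV

# A 4G band (~2 GHz) and an illustrative millimeter-wave 5G band (~28 GHz).
print(photon_energy_ev(2e9))    # roughly 8e-6 eV
print(photon_energy_ev(28e9))   # roughly 1.2e-4 eV
# Covalent bonds require on the order of 3-10 eV, tens of thousands
# of times more than a millimeter-wave photon carries.
```

This is why 5G frequencies are classed as non-ionizing radiation; the open research questions concern thermal effects and long-term exposure rather than direct bond breaking.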


[1] For a history of wireless communications.
[2] This is a great review of the 4 generations of wireless technologies.



Anthony J. Pennings, Ph.D. is Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in New Zealand. He keeps his American home in Austin, Texas, where he has taught in the Digital Media MBA program at St. Edwards University. He joyfully spent 9 years at the East-West Center in Honolulu, Hawaii.

US Internet Policy, Part 3: The FCC and Consolidation of Broadband

Posted on | February 5, 2021 | No Comments

In this post, I look at the transition of Internet data communications from a competitive market structure to a few Internet Service Providers (ISPs). As digital technology allowed cable and telecommunications companies (telcos) to transition from traditional telephony to packet-switched Internet Protocol (IP) services, deregulation allowed them to dominate broadband services. It also allowed them not only to move data but to diverge from the traditional "common carriage" communications policy that separated the transfer of data from the provision of content like entertainment and news.

In Part I of this series, I looked at the emergence of the ISPs and the regulatory framework in the USA that classified them as “enhanced services.” This designation was based on the Federal Communications Commission’s (FCC) Second Computer Inquiry in 1981 that exempted online services from a number of requirements that had been imposed on telephone networks. Part II discussed the transition from dial-up modems in the early days of data communications to high-speed Digital Subscriber Lines (DSL). These “broadband” connections accelerated the business and consumer adoption of the Internet in the late 1990s. In Part 4, I will address issues of net neutrality facing the Biden administration in an era of “smart” or “edge technologies” that includes the Internet of Things (IoT) and “connected” cars.

Despite the intent of the Clinton-Gore administration to create a competitive environment, the Internet came to be increasingly controlled by a small number of ISPs. It is important to understand the policy environment and administrative actions that turned the Internet into the oligopolistic market structure that dominates broadband today. Policy changes allowed telcos to transition from neutral transmitters of communication to the communicators themselves.

Broadband services in the USA are dominated by large integrated service providers such as AT&T, Comcast, Sprint, and Verizon. These companies have pursued "triple play" service bundles, combining high-speed Internet, cable TV, and IP phone services. Some also provide mobile services. These companies have been merging with content providers to distribute entertainment, education, and news, as well as move all the other Internet traffic. AT&T merged with Time-Warner, giving it access to Warner Bros., HBO, and Turner/CNN. Comcast completed its merger with NBC, and Verizon bought AOL and Yahoo! Unfortunately, these deals have failed to deliver the huge rewards they were aiming for and have deterred sufficient broadband rollout.

The highly competitive Internet services provider environment during the 1990s was significantly compromised by the Bush administration’s Federal Communications Commission (FCC). Their decisions favored cable companies and telcos and led to a consolidation of control over the Internet. The FCC’s actions raised concerns that powerful ISPs could influence the flow of data through the Internet and discriminate against some content providers or users to the detriment of consumers.

In 2002, the FCC ruled that "cable modem service" was an information service, not a telecommunications service. Cable companies like Charter, Xfinity, Cox, and Time-Warner became unregulated broadband providers and were exempted from the common-carrier regulations and network access requirements imposed on the telcos. The Supreme Court decision in National Cable and Telecommunications Association v. Brand X Internet Services meant that cable modem services would remain Title I "information services," despite sharp criticism from Justice Scalia, who argued that cable TV clearly offered both content services and telecommunications services. The Justice had no hesitation in calling it "bad law."[2]

Then in 2005, another FCC decision effectively made telcos unregulated ISPs. FCC WC Docket 02-33 allowed their DSL broadband services also to become unregulated "information services." This effectively allowed a few telcos, such as Verizon and BellSouth, to take over what had previously been a competitive ISP industry. The ruling allowed them to offer broadband fiber and DSL Internet access transmission while relieving them of previous requirements, such as granting other ISPs "access to facilities" and interconnection. Smaller ISPs had been allowed to physically connect to the "common carrier" telco facilities so that their customers could access the larger Internet.

Internet innovation came from other sources and distracted the public from broadband carrier issues. Facebook and Flickr were launched in 2004. Twitter went online in 2006, the same year the music streaming service Spotify was founded and Microsoft expanded Xbox Live with video content. Google bought Android in 2005 and YouTube the next year. Netflix started its streaming service in 2007, and the first iPhone was also released that year.

The success of these innovations did not escape the notice of the telcos, who wanted a piece of the action. They wanted to move beyond being mere carriers of information to being providers of entertainment and informational content. This was evidenced by Verizon's introduction of FIOS (Fiber Optic Service) TV in 2005 and AT&T's U-verse in 2006. ISPs looked to dominate home broadband service by bundling TV, Internet, and telephone voice service over their high-speed IP networks.

In 2003, Columbia Law professor Tim Wu coined the term "net neutrality" to stress the importance of allowing the free flow of data for the Internet's future. It is based on the notion of "common carriage," a legal framework developed to ensure that railroads would serve all businesses and municipalities. It basically means that the network should stay neutral and let the bits flow uninterrupted from device to device at the highest speeds available. This is how the Internet was designed, but the carriers' networks date back to the telegraph and telephone, and they have developed their own legal and technical ways to survive.

The Internet's political and social impact became more apparent with the presidential campaign of Barack Obama in 2008. The Pew Research Center found that some 74% of Internet users interacted with election information. A significant number of citizens received their news online, communicated with others about elections, and received information from campaigns via email or other online sources.

In 2010, the Obama administration began to write new rules dealing with Internet providers that would require ISPs to treat all traffic equally. In what were called the “Open Internet” rules, the new administration began to design a framework to restrict telecom providers from blocking or slowing down specific Internet services.

In the next post, I will look at the development of net neutrality rules under the Obama administration. Later, the Trump administration renewed attempts to free ISPs from net neutrality rules by returning them to Title I. A major question for the Biden administration is a possible return to Title II and strengthened rules on net neutrality.


Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.

Korea in a Post Covid-19 World, Part 3: The Green New Deal

Posted on | January 29, 2021 | No Comments

This post is my third on the Korean New Deal as a response to the COVID-19 pandemic. In the first post, I discussed the origins of the New Deal in the US and its reemergence as the Green New Deal in the UK and US. In the second, I discussed Korea’s Digital New Deal and its emphasis on “DNA” – Data, Network, and Artificial Intelligence (AI) to strengthen Korea’s industrial, education, and transportation infrastructure. In a future post, I will look at Korea’s efforts to build a more extensive and inclusive social safety net for its 50+ million people.

In this post, I examine Korea's concerns about its quality of life and some of its plans for addressing related economic and environmental issues. Despite impressive economic growth and infrastructure development, the country suffers from congested highways, industrial waste, and regular episodes of high particulate matter in its air. Consequently, the Moon administration embraced a Green New Deal in mid-July 2020 to address these issues and pursue opportunities for green growth industries with export potential.

President Moon presented the argument:

    The Government will pave the way toward sustainable growth through the Green New Deal. We will create new markets, industries and jobs while actively responding to climate change as a responsible member of the international community.

Areas of particular concern are low-carbon and decentralized energy, urban and water infrastructure, and green solutions that can be commercially viable.

The Korean Green New Deal recognizes the calls for climate and environmental action as well as the opportunities inherent in the transition to a green economy. Bouts of air pollution, due to reliance on coal, heavy vehicle traffic, and proximity to industrial centers at home and in China, plague the country. Consequently, it wants to support green industries and achieve a better balance between the economy and nature.

The Moon administration plans to make way for a new generation of renewable-powered and digitally connected vehicles. These include electric vehicles (EVs), hydrogen cars, and increasingly software-driven "smart" cars. It wants to take over a million diesel vehicles off the road to reduce emissions and support the transition to renewable energy vehicles. Korea has to run to catch up with Chinese and Tesla EVs, but it has the devotion of its domestic car consumers, and Hyundai's Ioniq EV is an attractive start. More than 90% of the cars currently on Korean roads are produced domestically.[1]

One of the challenges of moving beyond the carbon economy will be replacing the taxes on petroleum imports that helped build an extraordinary road infrastructure throughout Korea. Tax revenues on fuel have been decreasing around the world as vehicles have become more efficient. Likely solutions involve increasing fuel taxes or introducing road user charges that trade a petrol tax for a fee based on kilometers traveled.
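The logic of a revenue-neutral swap can be sketched with a toy calculation: the per-kilometer charge that collects the same revenue an average vehicle now pays in fuel tax. All figures below are hypothetical placeholders, not actual Korean tax rates:

```python
# Toy calculation: a road-user charge per kilometer that replaces the
# fuel tax paid by an average vehicle. All inputs are hypothetical.

def per_km_fee(fuel_tax_per_liter: float, liters_per_100km: float) -> float:
    """Per-km fee matching the fuel tax paid per km by an average car."""
    return fuel_tax_per_liter * liters_per_100km / 100

# E.g., a 600-won/liter fuel tax on a car burning 8 L/100km raises the
# same revenue as a 48-won/km road-user charge.
print(per_km_fee(600, 8))  # → 48.0
```

The same arithmetic also shows why fuel-tax revenue erodes: as efficiency improves (or the fleet goes electric), liters per 100 km falls toward zero while road use does not.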

Korea's "fast-follower" economic strategy and capabilities will be put to the test to keep its manufacturers relevant in the rapidly evolving autonomous and connected automobile market. But it could also mimic its Android strategy and have Hyundai or Kia team up with Apple or Google for automobile data and software expertise in energy management and higher levels of autonomous driving.

Hydrogen is another automobile technology under consideration. To be viable, it needs to address issues of cost, safety, and infrastructure. Hydrogen can be produced from hydrocarbon molecules with gasification, high heat, or the addition of carbon monoxide to water. It can also be produced with fermentation or through electrolysis, the separation of water into hydrogen and oxygen with electricity. Producing this simple fuel can be expensive, but using the idle nighttime capacity of Korea's nuclear power facilities has been one strategy to produce the non-toxic fuel. Renewable sources with low marginal costs, like solar and wind, can ideally be used to make the gas in the future.[2] Despite the tragedy of the Hindenburg airship explosion, hydrogen is still safer than gasoline in most environments. It can be vented quickly and disperses away from a vehicle in case of an accident.
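The economics of electrolysis can be sketched roughly: splitting water takes on the order of 50-55 kWh of electricity per kilogram of hydrogen with current commercial electrolyzers, so the electricity price dominates the cost. The figures below are rough assumptions for illustration, not Korean market data:

```python
# Back-of-the-envelope electrolysis cost: electricity is the dominant
# input. The 52 kWh/kg figure is an assumed rough value for commercial
# electrolyzers; actual efficiency varies by technology.

KWH_PER_KG_H2 = 52  # assumed electrolyzer energy use, kWh per kg H2

def h2_cost_per_kg(electricity_price_per_kwh: float) -> float:
    """Electricity cost of one kilogram of electrolytic hydrogen."""
    return KWH_PER_KG_H2 * electricity_price_per_kwh

# Cheap off-peak power at $0.03/kWh versus daytime power at $0.10/kWh.
print(round(h2_cost_per_kg(0.03), 2))  # → 1.56
print(round(h2_cost_per_kg(0.10), 2))  # → 5.2
```

The spread between the two results is why idle nighttime nuclear capacity, and eventually low-marginal-cost solar and wind, are attractive for hydrogen production.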

The big issue has been whether to use hydrogen for combustion or for fuel-cell electricity. Although hydrogen combustion only produces water, the heat of the reaction can also produce dangerous nitrogen oxides. This does not occur in a fuel cell, which uses an electrochemical reaction involving hydrogen to release electricity that drives an electric motor. Both strategies would require pumping hydrogen into an automobile's fuel tank, and both would emit water.

Priorities from the Korean Ministry of Economy and Finance keynote speech on the Green New Deal:


The refueling infrastructure presents a "chicken or egg" dilemma for both electric and hydrogen-based vehicles. Many consumers worry that they will not be able to obtain the needed fuel conveniently and in a timely manner. A network of electric charging stations is springing up in unusual places. The shopping center next to my university campus has a Tesla charging station in the basement parking lot so its high-end consumers can shop at local boutiques and frequent the restaurants. Because they do not emit toxic fumes, EV charging stations can be located in a wide variety of locations. High-speed recharging and wireless charging capabilities will hasten the transition to electric vehicles. Hydrogen presents different challenges.

Hydrogen is increasingly used in industrial applications and is a key ingredient in decarbonization strategies. However, its future in automobile propulsion is still questionable, due primarily to the lack of refueling infrastructure. Unlike electric recharging, hydrogen requires "gas stations" for refueling because of storage issues and potential dangers due to its volatility. Hydrogen can be transported in small quantities as compressed gas in pressurized cylinders on "tube trucks" to refueling stations for light-duty vehicles. Liquefaction is expensive and requires extremely low temperatures (-253 degrees C). Compared to the US, Korea has few hydrogen gas pipelines or natural gas pipelines into which it can blend hydrogen. Producing hydrogen at the refueling station with alternative energy may be the best strategy for widespread utilization of the gas.

A crucial green response includes building smart electric grids for the energy management of traditional and new eco-friendly, low-carbon power generation systems. The transition from centralized legacy coal and nuclear plants to decentralized renewable-powered generation systems requires extensive hardware and software developments. Intelligent grids (and microgrids) implementing innovations in the management of energy production, energy storage, and energy transmission and distribution systems represent both challenges and opportunities to monetize new solutions for an "Internet of Electricity."

Smart grids need to skillfully manage intermittent sources of electricity to maintain steady flows to communities and industries. Traditional coal, oil, and nuclear power plants are notable for producing a consistent and predictable "baseload" amount of electricity throughout the day.[3] While some renewables, like hydroelectric power from dams, provide consistent electricity, other renewables may require "smart" solutions to know when to store and when to integrate additional electricity from alternative sources.
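The store-or-integrate decision can be illustrated with a toy dispatch rule: charge a battery when renewable output exceeds demand, discharge it when output falls short, and fall back on baseload only for the remainder. This is a deliberately simplified sketch with made-up hourly figures, not a real grid controller:

```python
# Toy smart-grid dispatch: balance intermittent renewable supply against
# hourly demand using a battery, drawing on baseload only when the
# battery runs dry. All quantities are illustrative energy units.

def dispatch(demand, renewable, capacity):
    """Return (battery_level, baseload_needed) after each hour."""
    battery, log = 0.0, []
    for d, r in zip(demand, renewable):
        surplus = r - d
        if surplus >= 0:
            # Store excess generation, up to the battery's capacity.
            battery = min(capacity, battery + surplus)
            shortfall = 0.0
        else:
            # Draw down storage first; the rest must come from baseload.
            draw = min(battery, -surplus)
            battery -= draw
            shortfall = -surplus - draw
        log.append((round(battery, 1), round(shortfall, 1)))
    return log

# A sunny midday surplus charges the battery; the evening peak drains it,
# and the final hour still needs baseload support.
print(dispatch(demand=[3, 3, 5, 6], renewable=[6, 5, 2, 1], capacity=4))
```

Real grid software must also forecast weather and demand, price the stored energy, and coordinate many distributed resources, which is where the "smart" in smart grids lies.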

One problem that needs to be continuously addressed is building the transmission facilities to incorporate electricity from rural solar and wind projects. Particular emphasis is on drawing power from the 42 small island regions surrounding the peninsula that might be suitable for large-scale wind, solar, or wave power. One of South Korea's biggest windfarms will be built off the southwest shore of the country. Hanwha is one of the largest solar cell producers in the world and is also active in solar power-plant construction and project financing.

Korea also hopes to capitalize on new greenhouse gas (GHG) reducing technologies and desalination process efficiencies that could come with cheap energy. While GHG capture technologies are not being used to any significant extent, other technologies can reduce emissions. The green remodeling of buildings with LEED (Leadership in Energy and Environmental Design) certified technologies will bring both jobs and savings. This includes smart meters in public housing and clean green factories and industrial complexes.

Some are concerned that the Korean New Deal is likely to be heavy on government involvement and light on government spending. President Moon updated the spending figures recently when he addressed the World Economic Forum at Davos.

Economies thrive on problems, real or even conjured. Taking on challenges and finding innovative ways to engage citizens and companies in productive activities produces wealth as well as options to shape the quality of life. The move to a post-carbon society will raise questions, create debates, and present new opportunities. The Green economy offers possibilities for cleaner air, land, and sea while ultimately producing more energy for mobility, production, and comfort.

In the next post, I will focus on the Korean New Deal’s attempt to build jobs and a social safety net.


[1] Lee, E. (2019, July 15). Car ownership in Korea hits 23.44 mn by June, import share at 9.7%. Pulse by Maeil Business News Korea.
[2] Near-zero marginal cost is an economic concept referring to the eventual production of a good or service at a very low cost per unit.
[3] Matek, B., & Gawell, K. (2015). The benefits of baseload renewables: A misunderstood energy technology. The Electricity Journal, 28(2), 101-112. ISSN 1040-6190.


Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.

Korea in a Post Covid-19 World, Part 2: Merging Digital and Green New Deals

Posted on | January 3, 2021 | No Comments

I’ve been lucky enough to ride out most of the COVID-19 pandemic here in the Republic of Korea. I miss being home in Austin, TX, but I’ve been safe and relatively free to travel and shop, even if I have to wear a mask everywhere I go. It’s a small price to pay for the relative freedom of going out to eat and exercising on my bike in the parks that remain open. Korea has, for the most part, avoided major lockdown measures and still led the OECD in economic growth during the pandemic.

Green New Deal

This is the second post on the Korean New Deal, which was recently reiterated by President Moon at the 2021 Davos World Economic Forum. In the first post, I introduced the initial New Deal and looked at the emergence of the Green New Deal in Europe and the USA. In the third post, I will go into the Korean Green New Deal in more detail.

This post discusses Korea’s recent responses to the COVID-19 pandemic and its economic repercussions by examining the Digital New Deal. These posts are less policy analyses than introductions to some of the goals and rationale behind the Korean New Deals. Case studies are difficult to generalize, but these examinations are meant to suggest strategies worth examining by other countries.

The Korean New Deal was proposed to the public by President Moon Jae-in’s administration after a convincing spring 2020 election win in the National Assembly by the ruling Democratic Party of Korea (DPK). The Korean New Deal was designed and is being implemented with a potential new wave of the COVID-19 pandemic in mind. The notion of “sleeping with the enemy” was invoked to caution against a premature return to normal activities and to accelerate a transition plan to a greener, smarter, and more sustainable growth model, with the major goal of being carbon-neutral by 2050.

Korea’s New Deal has two components: a Digital New Deal and a Green New Deal. President Moon explained:

    This Korean New Deal is a new national development strategy to leap from being a fast-follower to a pace-setter. In the belief that our country’s future hinges on it, we will resolutely push ahead with the Korean New Deal, which will erect two pillars – a Digital New Deal and Green New Deal – side by side atop the foundation of an inclusive nation and of values that put people first.

Left without North Korea’s natural resources after the Armistice Agreement of 1953 split Korea at the 38th parallel, South Korea pursued an export model with a significant emphasis on science and technology. This meant improving on products already familiar to Western society: ships, cars, semiconductors, televisions, etc. This is the “fast-follower” strategy President Moon mentions in the quote above. More recently, smartphones, popular music, and film have added to the economic mix, as well as to the soft power helpful for smooth economic and political relations.

Now South Korea wants to expand its development strategy to be a “pace-setter” by leveraging its highly trained human resources with innovation. Earlier work addressed the prospects of a Fourth Industrial Revolution (FIR) – new products and processes based on innovations in digital, biological, and materials science. The Presidential Committee on the Fourth Industrial Revolution (PCFIR) was set up after Moon was elected in 2017 and started to drive consensus-building. This would mobilize economic strategies that commercialize and implement advances in artificial intelligence (AI), the Internet of Things (IoT), 3D printing, robotics, genetic engineering, nanotechnologies, quantum computing, and other technologies. This was ideal for a high-tech society like Korea’s, but as the COVID-19 crisis emerged, the New Deal signaled a more people-oriented approach and not just economic growth.

In this post, I again draw on “Linking the Korean New Deal with Innovation and Technology in the Post Covid-19 Era,” a keynote speech by Dae Joong Lee of the Korean Ministry of Economy and Finance. It was presented at the Korea Workshop on Innovation and Digital Technology in a Post-Covid-19 World, held in November 2020 and sponsored by the World Bank’s International Development Association (IDA) and the Ministry of Economy and Finance.

The Digital New Deal

Dae Joong Lee’s presentation on the Digital New Deal introduced an acronym that was new to me – “DNA.” Not the biological Deoxyribonucleic Acid in each of our cells, but “Data, Networks, and Artificial Intelligence.” One of the Digital New Deal’s first objectives is to find ways to feed data into AI. This includes disclosing data from the public sphere and introducing an incentive system to gather data from other sectors to feed AI development.

All ministries were ordered to release non-sensitive public data over the coming year to “usher in a data economy that opens the free flow of information and ideas.” Korea, like most countries, is struggling with privacy issues and needs to improve on the Personal Information Protection Act (PIPA), which is vague and lacks punitive strength.
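One elementary building block of releasing public data while respecting privacy is stripping personally identifiable fields from records before publication. The sketch below is hypothetical, with invented field names; actual PIPA compliance involves far more than column filtering (consent, retention, de-identification standards, and so on).

```python
# Hypothetical sketch: removing personally identifiable fields from a
# record before public release. Field names are invented for illustration.

SENSITIVE_FIELDS = {"name", "resident_id", "phone", "address"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

raw = {"name": "Hong Gildong", "resident_id": "123456-1234567",
       "district": "Songdo", "energy_use_kwh": 312}
print(redact(raw))  # → {'district': 'Songdo', 'energy_use_kwh': 312}
```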

Networks are one of Korea’s core digital strengths and provide the foundation for many other infrastructure endeavors. Broadband speeds are among the highest in the world, averaging 168.26 Mbps (12th globally) for fixed lines and 166.70 Mbps for mobile (2nd, after the United Arab Emirates). 5G continues to roll out across the nation for consumer and industrial use.

With relatively high incomes and literacy, it is no surprise that the country has one of the highest mobile use rates in the world. A complication for Korea is that it is both an important supplier of 5G equipment and a chip producer for other 5G equipment manufacturers.

Reminiscent of Vice-President Gore’s E-rate program in the US during the late 1990s, digitalization of education infrastructure is a high priority. Gore’s plan taxed landline telephone users to update schools with important equipment and infrastructure. The Digital New Deal will provide Wi-Fi to schools, supply faculty with new computers, and replace old servers and network equipment in educational environments. Students in some 1,200 schools are targeted to receive 240,000 tablet PCs. Online content, particularly on the Fourth Industrial Revolution (FIR), will also be developed.
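The tablet figures above imply a simple back-of-envelope average per school:

```python
# Quick arithmetic check on the tablet rollout figures cited above.
tablets = 240_000
schools = 1_200
print(tablets // schools)  # → 200 tablets per school on average
```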

A more complicated development is the integration of “DNA” into smart communities and industrial applications. These include the goal of producing 108 smart cities and governance systems outfitted with 5G, connected management centers, and cloud computing for public information, all protected by advanced cybersecurity.

The Digital New Deal includes ten new industrial complexes with computerized control centers and 12,000 smart factories with another 10,000 workshops and 100,000 stores equipped with the newest process management technologies.[1] Korea is already a leader in industrial robotics, and, recently, Hyundai acquired Boston Dynamics, an innovator in robot manipulation, mobility, and vision.

Logistically, Korea wants to build major smart distribution systems like Amazon’s, with associated certification systems. These logistics centers would be shared by many SMEs and be part of the support infrastructure for over 300,000 microbusinesses, which would also have access to teleconferencing centers and commercial space for offices and design studios.

As part of a new infrastructure for autonomous vehicles, Korea proposes to develop Cooperative Intelligent Transport Systems (C-ITS) to upgrade its roads. These control systems would coordinate pedestrians, bicycles, automobiles, and commercial vehicles for road safety and enhanced traffic flow. Already a major automobile manufacturer, Korea is producing “automatrix” road management models for domestic use and export. Registered cars in South Korea hit nearly 23.5 million units by the summer of 2019.[2] These will eventually be replaced with connected cars powered by electric batteries or hydrogen.
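The coordination idea can be illustrated with a deliberately tiny sketch. Nothing here comes from an actual C-ITS specification; the priority ordering and function are invented to show one way a controller might rank waiting road users when allocating green time.

```python
# Illustrative only: an invented priority ranking for road users at a
# connected intersection. Real C-ITS logic is vastly more involved.

PRIORITY = {"pedestrian": 0, "bicycle": 1, "automobile": 2, "truck": 3}

def next_to_serve(waiting):
    """Pick the highest-priority (lowest-ranked) waiting road user."""
    return min(waiting, key=lambda user: PRIORITY[user])

print(next_to_serve(["truck", "automobile", "bicycle"]))  # → bicycle
```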

Korea also set out to develop a public safety network for first responders such as police officers, firefighters, public officials, and others involved in emergency management and disaster risk reduction. Several disasters, including the Sewol ferry sinking on April 16, 2014, which killed 304 people, mainly students on a field trip, as well as train fires, were exacerbated by poor communications. Technical standards, guided by the Safe-Net Forum, have led to a new public safety LTE (PS-LTE) network with versions for railroad (LTE-R) and maritime (LTE-M) communications.

In the next post on this topic, I will discuss the Korean Green New Deal.


[1] Just to reiterate, these are the goals of the Moon administration.
[2] Lee, E. (2019, July 15). Car ownership in Korea hits 23.44 mn by June, import share at 9.7%. Pulse by Maeil Business News Korea.

