Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Internet Policy, Part 4: Obama and the Return of Net Neutrality, Temporarily

Posted on | March 26, 2021 | No Comments

The highly competitive Internet service provider (ISP) environment of the 1990s was significantly altered by the Federal Communications Commission (FCC) during the Bush Administration. Two Bush appointments to the FCC Chair guided ISP policies toward a more deregulated environment. The result, however, was a more oligopolistic market structure and less competition in the Internet space.

The FCC is an independent commission but can lean in political directions. Under Chairman Michael Powell (January 22, 2001 – March 17, 2005), a Republican from Virginia and son of General Colin Powell, FCC decisions favored cable companies. In the summer of 2005, under the new Chairman Kevin J. Martin (March 18, 2005 – January 19, 2009), a Republican from North Carolina, decisions favored the telcos. The FCC classified cable modem services and telephone-company broadband services as Title I unregulated “information services.” This raised ongoing concerns that powerful ISPs could influence the flow and speed of data through the Internet and discriminate against competing content providers or users to the detriment of consumers.[1]

This post examines the Obama administration’s approach to Internet regulation and the issue of net neutrality. This involved reviving “Title II” regulation that works to guarantee the equal treatment of content throughout the Internet. Previously, I examined the legal and regulatory components of common carriage and the emergence of net neutrality as an enabling framework for Internet innovation and growth.

Comedian John Oliver explained net neutrality on his show Last Week Tonight on June 1, 2014.

The Internet’s political and social impact became more apparent with Barack Obama’s social media-driven presidential campaign in 2008. The Pew Research Center found that some 74% of Internet users interacted with election information. Many citizens received news online, communicated with others about the elections, and received information from campaigns via email or other online sources.

In 2010, the Obama administration began to write new rules for Internet providers that would require ISPs to treat all traffic equally. In what were called the “Open Internet” rules, FCC Chairman Julius Genachowski, a Democrat from Washington, D.C. (June 29, 2009 – May 17, 2013), sought to restrict telecom providers from blocking or slowing down specific Internet services. Verizon sued the agency to overturn those rules in a case that was finally decided in early 2014. The court determined that the FCC didn’t have the power to require ISPs to treat all traffic equally because of their new Title I designations. It was sympathetic to the consumer’s plight, though, and directed the ISPs to inform subscribers when they slow traffic or block services.

After the appeal by Verizon, the DC Circuit court sent the FCC back to the drawing board. Judge David Tatel wrote that the FCC did not have the authority under the current regulatory classification to treat telcos as “common carriers” that must pass data content through their networks without interference or preference. The result of Verizon v. FCC was that without a new regulatory classification, the FCC lacked the authority to stop the big ISPs from banning or blocking legal websites, throttling or degrading traffic on the basis of content, or allowing “paid prioritization” of Internet services. The latter, so-called “fast lanes” for companies like Google and Netflix, was particularly contentious.[2]

So, on November 10, 2014, President Obama went on the offensive and asked the FCC to “implement the strongest possible rules to protect net neutrality” and to stop oligopolistic ISPs from blocking, slowing down, or otherwise discriminating against lawful content. Tom Wheeler, the incoming FCC Chairman from California (November 4, 2013 – January 20, 2017), sought a new classification from the legacy of the Communications Act of 1934 by invoking Title II “common carrier” distinctions for broadband providers.

To its credit, the FCC had been extremely helpful in creating data communications networks in the past. The FCC’s classification of data services in Computer I as “data processing” and not “communications” provided timely benefits. For example, it allowed early PCs with modems to connect to ISPs over telephone lines for hours without paying toll charges to the providers of local telephone service. But with a competitive Internet at stake, extending that deregulated status to the telcos’ broadband capabilities seemed excessive.

“Information services” under Title I is a more deregulatory classification, referring to “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications.” It allows the telcos to exert more control over the Internet. As mentioned previously, under George W. Bush’s FCC, cable companies in 2002 and then telcos in 2005 were classified as Title I information services. This led to a major consolidation of US broadband service, which came to be dominated by large integrated service providers such as AT&T, Comcast, Sprint, and Verizon. These companies began trying to merge with content providers, raising the specter of monolithic companies controlling information and invading privacy.

On February 26, 2015, the FCC’s new “Open Internet” rules went into effect based on Title II of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996. The latter gave the FCC authority to regulate broadband networks, including imposing net neutrality rules on Internet service providers. Section 706 directs the FCC and state utility commissions to encourage the deployment of advanced telecommunications capability to all Americans by removing barriers to infrastructure investment and promoting competition in the local telecommunications markets.

But Section 706 authority only kicks in when the FCC finds that “advanced telecommunications capability” is “not being deployed to all Americans in a reasonable and timely fashion.”

In other words, the case needs to be made that US Internet infrastructure is lacking. For example, the FCC established 25 Mbps download/3 Mbps upload as the new standard for “advanced telecommunications capability” for residential service. This is actually a pretty low benchmark for urban broadband users, as only 8% of America’s city dwellers lack access to that level of service. But it still left some 55 million Americans behind, as rural areas were largely underserved, especially in tribal lands.
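To put the 25 Mbps/3 Mbps benchmark in rough perspective, here is a back-of-the-envelope sketch (the 5 GB file size is an illustrative assumption, not a figure from the FCC):

```python
def download_seconds(file_gb: float, link_mbps: float) -> float:
    """Ideal transfer time: gigabytes converted to megabits,
    divided by link speed in megabits per second (no overhead)."""
    megabits = file_gb * 8 * 1000  # decimal GB -> megabits
    return megabits / link_mbps

print(download_seconds(5, 25))  # 1600.0 seconds, about 27 minutes
print(download_seconds(5, 3))   # ~13333 seconds, about 3.7 hours
```

At the download benchmark a household can stream video comfortably, but over the 3 Mbps tier large transfers become hours-long affairs, which is part of why the standard was criticized as a low bar.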

In early 2015, President Obama began to direct attention toward broadband access. Consequently, Chairman Wheeler announced that the FCC’s Connect America Fund would disburse $11 billion to support modernizing Internet infrastructure in rural areas. The FCC also reformed the E-rate program to support fiber deployment and Wi-Fi service to the nation’s schools and libraries.[3]

The Open Internet rules were meant to protect the free flow of content and promote innovation and investment in America’s broadband networks. They were grounded in multiple sources of authority, including Title II of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996. In addition to providing consumer protections by restricting the blocking, throttling, and paid prioritization of Internet services, the FCC strove to promote competition by ensuring that all broadband providers had access to poles and conduits for the physical plant.

The rules did not require providers to get the FCC’s permission to offer new rate plans or services, nor did they require companies to lease access to their networks, a provision that had promoted ISP competition in the dial-up era. The FCC did, however, commit to monitoring interconnection complaints. A key dilemma was promoting the ubiquity of the Internet while exempting broadband customers from universal service fees.

The election of Donald Trump presented new challenges to net neutrality and the potential for another reversal. Tom Wheeler resigned from the FCC, allowing Trump to designate Republican commissioner Ajit Pai as the new chairman. Pai argued that the web was too competitive to regulate effectively and that allowing ISPs to prioritize some applications and services might even help Internet users. The new FCC voted 3-2 to begin eliminating Obama’s net neutrality rules and reclassifying home and mobile broadband service providers as Title I information services, and it began seeking comments on eliminating the Title II classification. Replacing the Obama net neutrality rules was put to a vote by the end of the year, and the FCC once again returned to Title I deregulation through a declaratory ruling.

Notes

[1] Ross, B.L. and Shumate, B.A., Rein, W. “Regulating Broadband Under Title II? Not So Fast.” Bloomberg BNA. N.p., 25 June 2014. Web. 18 June 2017.
[2] Finley, Klint. “Internet Providers Insist They Love Net Neutrality. Seriously?” Wired. Conde Nast, 18 May 2017. Web. 18 June 2017.
[3] “What Section 706 Means for Net Neutrality, Municipal Networks, and Universal Broadband.” Benton Foundation, 13 Feb. 2015. Web. 18 June 2017.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea, and from 2002 to 2012 he was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Will Offshore Wind Power Print Money?

Posted on | March 15, 2021 | No Comments

Research is showing that offshore wind farms can increase biodiversity in oceans. Like sunken ships, windmill installations present unique opportunities for facilitating marine life. These new habitats create artificial reefs and marine life-protection areas. Undersea hard surfaces rapidly collect a wide range of marine organisms that build and support local ecosystems. They also provide some refuge from trawlers and other industrial fishing operations.

This post will examine the prospects of wind energy, one of the promising alternative renewable energies that will work with hydropower, solar, and even small-scale nuclear energy to power the smart electrical grids of the future. Is offshore wind feasible? What are the downsides? Will it be profitable? Can it literally “print money” once it is operational?

Wind turbines convert the kinetic energy of moving air into torque, and torque into electricity. The physics of windmills means that big is better: the larger the propellers that can be built, the more efficient they become. Bigger windmills capture more wind, and that produces more torque. The more wind the propellers can harvest, the more electricity they can produce. The equation below describes the relationship between torque, rotational speed, and power output.

P = τ × ω, where P is mechanical power (in watts), τ is torque (in newton-meters), and ω is angular speed (in radians per second).
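As a sketch of those relationships (the rotor sizes, 40% power coefficient, and standard air density below are illustrative assumptions, not figures from this post):

```python
import math

def rotor_power_watts(torque_nm: float, rpm: float) -> float:
    """Mechanical power P = torque x angular speed (omega in rad/s)."""
    omega = rpm * 2 * math.pi / 60.0
    return torque_nm * omega

def wind_power_watts(rotor_diameter_m: float, wind_speed_ms: float,
                     cp: float = 0.4, air_density: float = 1.225) -> float:
    """Power captured from the wind: P = 1/2 * rho * A * v^3 * Cp,
    where Cp is the power coefficient (Betz limit ~0.593)."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density * swept_area * wind_speed_ms ** 3 * cp

# Doubling rotor diameter quadruples swept area and captured power;
# doubling wind speed multiplies available power by eight.
```

This is why “big is better”: capture grows with the square of blade length and with the cube of wind speed.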

Media economics can help us understand the economics of wind power. Media projects like books and films, and even digital software, require huge expenditures up front, but the cost of each succeeding unit of output is quite low. What is the cost of each individual copy of Windows 11? Renewable energies tend to have the same characteristics, what economists call low marginal costs. Once a windmill is manufactured and installed, the cost of each kilowatt-hour produced is low, and the average cost of each unit of output decreases as production accumulates. Granted, the costs of replacement, recycling, or reusing these large machines are valid points of concern.
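The high-fixed-cost, low-marginal-cost pattern can be sketched in a few lines (the $5 million installation cost and $2-per-unit operating cost are hypothetical numbers for illustration):

```python
def average_cost(fixed_cost: float, marginal_cost: float, units: int) -> float:
    """Average cost per unit: fixed costs spread over total output,
    plus a constant marginal cost for each unit produced."""
    return fixed_cost / units + marginal_cost

print(average_cost(5_000_000, 2.0, 1_000))      # 5002.0 per unit early on
print(average_cost(5_000_000, 2.0, 1_000_000))  # 7.0 per unit at scale
```

As output grows, the average cost converges toward the marginal cost, which is why both a best-selling book and a long-running windmill can be so profitable.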

Personally, wind power hasn’t impressed me in the past. In graduate school in Hawaii, I remember a big windmill near the North Shore surf spots that didn’t seem to do much. Driving up into San Francisco along Interstate 5, the windmills seem big and slow. Flying regularly over northern Texas and Oklahoma, the wind farms become a bit more impressive. Understanding the fundamental economics and the basic engineering and science of wind energy is useful for policy analysis.

Unlike solar, wind power is not directly contingent on solar rays but on larger climatic events. The US Department of the Interior’s Bureau of Ocean Energy Management (BOEM) has been conducting environmental impact studies and giving conditional permission to build offshore wind farms. Contracts to provide wind electricity for as low as 5.8 cents per kilowatt-hour are being negotiated. Massachusetts, Virginia, and the waters off the eastern end of Long Island, New York, are some of the major sites under development. While previously a global laggard, the US is expected to become a major offshore electricity contributor after 2024.

The future of US offshore wind energy is dependent on several economic variables. One is power purchase agreements (PPAs) that businesses and other organizations use to solidify long-term purchases of electricity. Another is renewable portfolio standards (RPSs) that obligate US states to procure a certain percentage of renewable energy. RPSs have contributed to nearly half of the growth in renewable energies since 2000. Tax incentives are important and depend on political winds. The US Treasury extended safe harbor tax credits for renewable energies, including offshore wind in light of the COVID-19 pandemic. Offshore wind auctions are also crucial as the cry “location, location, location” resonates soundly in this industry.

Renewable-energy critics like the Manhattan Institute have been critical of offshore windmills, arguing that they decline some 4.5% in efficiency every year. Another concern is who will pick up the decommissioning costs of deconstructing and recycling the windmills. But the technology is new, as are the maintenance, recycling, and regulatory practices.

Wind could be a significant boost for coastal communities. Major cities that were wedded to the ocean through fishing and shipping are likely to benefit, as offshore wind might provide cheap electricity and much-needed economic activity. Tourism will benefit from cheap electricity, as Las Vegas did when it gained access to power from the Hoover Dam. In terms of jobs and the revitalization of shore-based businesses, a wide range of services will be needed. Energy control centers, undersea construction, equipment supply, and maintenance operations are just some of the opportunities emerging around ocean-based renewable energy sources.

In summary, the economics of offshore wind energy are very much like media economics – high upfront costs and low marginal costs. Book publishing requires editors and pays author royalties. It also needs paper, printing presses, and the distribution capabilities required to produce fiction and non-fiction works. While some books may not be profitable, a best-seller can provide significant returns for the publisher. Movies require extensive upfront expenses in production and post-production, but each showing in cinemas worldwide costs relatively little. Wind power requires a major capital influx to set up. But the wind is free, so once operational, the windmill begins to produce electricity. Lubrication and other maintenance activities are needed at times, but electricity is created as long as the wind is blowing. If the infrastructure is set up efficiently, it will print money.




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Born in New York, he had a chance to teach at Marist College near his hometown of Goshen before spending most of his academic career at New York University. Before joining SUNY, he moved to Austin, Texas, and taught in the MBA program at St. Edwards University. He started his academic career at Victoria University in New Zealand. He has also spent a decade as a Fellow at the East-West Center in Honolulu, Hawaii.

COVID-19 and US Economic Policy Responses

Posted on | March 8, 2021 | No Comments

COVID-19 was recognized in early 2020 and began to spread rapidly in March of that year. The World Health Organization (WHO) identified the virus in January, and later that month, the CDC confirmed the first US coronavirus case. On March 13, President Trump declared the spreading coronavirus a national emergency as the US registered its 100th death. Many restaurants and other high-contact industries began to shut down. Transportation and tourism ground to a halt. As a result, US economic policymakers worked to design a response.

In this post, I look at how the Federal Reserve and Congress (House and Senate), as well as two administrations, addressed the economic conditions and ramifications of the emerging viral pandemic. Starting in March 2020, they produced monetary and fiscal actions that reverberated through the US economy. What impact did these have on the so-called K-shaped recovery? How, if at all, did the responses influence price deflation or inflation in the subsequent years?

Stock market since 2021

The US economy went into steep decline in the second quarter (April, May, June) of 2020, while the virus spread and the Federal Reserve’s monetary policy and the CARES Act were being implemented. According to the Bureau of Economic Analysis (BEA), US real Gross Domestic Product (GDP) contracted in the second quarter at an annualized rate of 31.4 percent (9 percent at a quarterly rate). It was the starkest economic decline since the government started keeping records in 1947.
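The annualized and quarterly figures are two views of the same contraction: an annualized rate simply compounds the quarterly rate over four quarters. A quick check:

```python
def annualized_rate(quarterly_rate: float) -> float:
    """Compound a quarterly growth rate over four quarters."""
    return (1 + quarterly_rate) ** 4 - 1

# A 9 percent quarterly contraction annualizes to about -31.4 percent.
print(round(annualized_rate(-0.09) * 100, 1))  # -31.4
```

The same arithmetic links the third quarter’s 7.4 percent quarterly growth to its 33.1 percent annualized figure.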

Starting March 3, the Federal Open Market Committee (FOMC) reduced the Fed Funds Rate by 1.5 percentage points to 0–0.25%, making the cuts official at its March 15 meeting. The Fed Funds Rate is the interest rate at which banks lend reserves to each other overnight, with transfers settled over the Fed’s FEDWIRE network. Cheaper reserves can be lent out at higher rates for car loans, home mortgages, and industrial capacity. The loans can also be invested in financial assets such as Bitcoin, currencies, equities, and gold.

Rather surprising was the Fed’s decision to reduce the reserve ratio to 0 from its traditional 10%. This reduction meant banks no longer had to hold a percentage of their deposits in their vaults or at the Federal Reserve. The Fed also offered a narrative framework, or “forward guidance,” on interest rates, stating they would remain low until unemployment receded and inflation increased to roughly 2 percent.

COVID Unemployment

The Fed simultaneously announced that it would begin to purchase securities “in the amounts needed to support smooth market functioning and effective transmission of monetary policy to broader financial conditions.” After its mid-March meeting, the Fed said it would begin buying some $500 billion in Treasury securities and $200 billion in government-guaranteed mortgage-backed securities. This version of quantitative easing (QE) had been used, along with the $700 billion Troubled Asset Relief Program (TARP), to recover from the 2007–2008 financial crisis.

Over the course of the year, the Fed bond portfolio increased by $2.7 trillion, from $3.9 trillion to $6.6 trillion. The purchases injected money into the economy, and QE kept interest rates low, helping to keep mortgages cheap and the housing industry booming. The $6.6 trillion balance is a lot, but it can also be used to draw money out of the economy to help reduce inflation. That is what distinguishes “printing money” from QE. Printing money puts cash into the economy without adequate means to extract it during inflationary periods. Ideally, the Fed can sell off its balances and subtract money from the economy. But QE and low interest rates became so embedded in the economy that it was not easy to let interest rates rise.

Congress worked on stimulating the economy as well. The Senate drew on the House of Representatives’ Middle Class Health Benefits Tax Repeal Act, originally introduced in the U.S. Congress on January 24, 2019. All spending bills must originate in the House of Representatives, so the Senate used it as a “shell bill” to begin working on economic and public health relief, filling it in with additional content to combat the virus and protect the economy. On March 27, 2020, President Trump signed the CARES (Coronavirus Aid, Relief, and Economic Security) Act into law.

At over US$2 trillion, CARES was the largest rescue package in US history. It was twice the amount of the American Recovery and Reinvestment Act of 2009 (ARRA), which totaled $831 billion and helped revive the stalled US economy after the credit crisis. The CARES Act expanded unemployment benefits, including those for freelancers and gig workers, and gave direct payments to families. It also provided cash for grounded airlines, money for state and local governments, and half a trillion dollars in loans for corporations (while banning stock buybacks).

The result was a dramatic turnaround in GDP, not always the best economic indicator but a key historical one. The third quarter (July, August, and September) grew dramatically. According to the BEA, US real GDP increased at an annual rate of 33.1 percent (7.4 percent at a quarterly rate). Compared to the 9 percent contraction in the second quarter, this was a stunning reversal, the so-called V-shaped recovery. The BEA then reported that real GDP rose again by 4 percent in the fourth quarter.

Instead of a V-shaped recovery, talk of a K-shaped economy emerged, meaning that the economy was diverging. The economic crash hit different sectors unevenly, and the recovery even more so. The well-off and professionals, especially those that could telework, did well. At the same time, many in the rest of the economy faltered, often depending on racial, gender, industrial sector, and geographical differences.

Another stimulus bill was signed by President Trump in early December of 2020. The $900 billion package averted a government shutdown and sent $600 to every eligible American. Trump had wanted $2,000 checks, but he signed anyway because the delay was holding up vaccine distribution and many people were facing eviction and the loss of unemployment benefits.

Fueled in part by Trump’s 2017 Tax Cuts and Jobs Act, significant amounts of money moved into appreciating assets, as many well-off people simply had more money to invest. Combined with the Fed’s low interest rates, this spurred unprecedented speculation and borrowing on margin for investment purposes. With these monetary and fiscal stimulus packages, the financial markets recovered quickly and continued to rise into 2021.

A year ago, the S&P 500 fell some 20% from its highs in a record 16 days. An index of 500 of the largest listed US companies, it is a key measure of the market overall and a major indicator of the economy. A year later, the S&P 500 had recovered from its low of 2,304 to a near-record close of 3,931 on February 17. Overall, the S&P 500 returned 15.15% in 2020.

The Dow Jones Industrial Average (DJIA) is another important indicator of the economy and financial markets, and one of the oldest (shown above). It indexes the top 30 “blue chip” companies, that is, companies with pricing power over their products, such as Apple, Chevron, Coca-Cola, Disney, and Procter & Gamble. The “Dow” crashed to 18,951 on March 23 from a high of just over 29,300 three weeks earlier. The dollar was also down, as were crude oil and many commodities, including gold. The Dow continued to rise and recovered to nearly 31,500 two months into the Biden presidency.

On March 6, 2021, the Senate passed a new $1.9 trillion coronavirus relief package. It came when stock markets were at record highs, Bitcoin had ballooned to over $50,000, and concerns about inflation due to increased spending and significantly diminished supply chains had emerged. The bill, known as the American Rescue Plan Act of 2021 or “Build Back Better I,” proved prophetic as a new Delta variant of the virus appeared in the summer of 2021.

The new COVID-19 response had three main areas: pandemic response ($400 billion), including $14 billion for vaccine distribution; direct relief to struggling families ($1 trillion), notably the $1,400 checks for individuals and unemployment benefits of $300 per week; and support for communities (in multi-year tranches) and small businesses ($440 billion), especially transit systems and tourism areas hit hard by the pandemic.

We entered 2021 with an unbalanced economy, a roaring stock market alongside massive poverty. Years of supply-side economics gave us a highly technological society and appreciating financial assets. But it was based on globalized supply chains and highly dependent on Russia and Saudi Arabia to support petro-intensive lifestyles and economic practices. Tax cuts transferred much of US wealth to the higher income brackets. Trump’s US$1.3 trillion tax cuts exacerbated the imbalances as the former president racked up US$7.8 trillion in national debt from his inauguration on January 20, 2017, to the Capitol riots of January 6, 2021, when Congress tallied the electoral college votes and declared Biden the winner.

A fifth major stimulus package, the $1.9 trillion American Rescue Plan, was signed into law by President Biden on March 11, 2021. It helped states, cities, counties, and tribal governments cover increased expenditures from the COVID-19 pandemic and replenish lost revenue. Vaccinations increased dramatically, but in the middle of 2021, the “Delta variant” emerged, bringing a new wave of hospitalizations and death. The K-shaped recovery took on a new meaning: the vaccinated could resume normal activities while the unvaccinated made up the majority of those hospitalized and dying.

Inflation started to rise, with the all-items Consumer Price Index increasing 6.2 percent in the year leading up to October 2021. A year of stimulus spending, tax cuts, low interest rates, and rising wages promised a booming economy, but it was met by chip shortages, clogged shipping ports, and low unemployment. Most painfully, the price of crude oil had steadily increased since hitting $25 a barrel after COVID-19 exploded a year before. At roughly $83 a barrel, it was the sharpest run-up since 2007–2008, when crude climbed to its all-time high near $147 a barrel.

Concerns about inflation entered US policy discussions during the fall of 2021. The Infrastructure Investment and Jobs Act, which passed the Senate in August, was delayed as progressives wanted it tied to the third part of President Biden’s “Build Back Better” agenda. Media pressure forced the progressives to relent and pass the infrastructure bill without the American Families Plan, which had been watered down to $1.7 trillion over ten years.

Infrastructure Bill

We had a medical emergency; the new COVID-19 legislation is paying the bill and hopefully taking a bit of the kick out of the K-shaped recovery. What is important in the current legislation is giving support to the sick and dispossessed, including those affected by closed businesses and the 9.5 million jobs that disappeared over the last year. Is inflation a major problem? The best cure for inflation is stopping the pandemic and restoring the circuits of food and other vital commodities.

Citation APA (7th Edition)

Pennings, A. J. (2021, Mar 08). COVID-19 and US economic policy responses. apennings.com. https://apennings.com/dystopian-economies/covid-19-and-the-us-economic-policy-response/




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches financial economics and sustainable development. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand, before returning to New York to teach at Marist College and New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in Korea, he lives in Austin, Texas.

Five Generations of Wireless Technology

Posted on | February 8, 2021 | No Comments

The ubiquity, ease, and sophistication of mobile services have proven to be an extraordinarily popular addition to modern social and productive life. The term “generations” has been applied to wireless technology classifications as a way to refer to the major disruptions and innovations in the state of mobile technology and associated services. These innovations include the move to data and the Internet protocols associated with the convergence of multiple forms of communications media (cable, mobile, wireline) and the wide array of services becoming increasingly available on portable devices like laptops and smartphones. We are now on the cusp of the fifth-generation (5G) rollout of wireless services, with intriguing implications for enterprise mobility, “m-commerce,” public safety, and a wide array of new entertainment and personal productivity services.

By 1982, the Federal Communications Commission (FCC) had recognized the importance of the emerging wireless communications market and began to define Cellular Market Areas (CMAs) and assign area-based radio licenses. It split the 40 MHz of radio spectrum it had allocated to cellular into two market segments; half would go to the local telephone companies in each geographical area and the other half to interested non-telephone companies by lottery. Although AT&T’s Bell Labs had effectively begun the cellular market, AT&T estimated the market would reach slightly less than a million subscribers by 2000 and consequently abandoned it during its divestiture of the regional phone companies. Meanwhile, financier Michael Milken began a process of helping the McCaw family buy up the other licenses, making them multibillionaires when they sold out to AT&T in the mid-1990s.

The first generation (1G) of wireless phones were large analog voice machines with virtually nonexistent data transmission capability. This initial generation was developed in the 1980s through a combination of license lotteries and the rollout of cellular sites and integrated networks. It used multiple base stations, each providing service to a small adjoining cell area. Its most popular phone was the Motorola DynaTAC, sometimes known as “the brick,” now immortalized by financier Gordon Gekko’s early morning beach stroll in Wall Street (1987). 1G was hampered by a multitude of standards, such as AMPS, TACS, and NMT, that competed for acceptance. The Advanced Mobile Phone System (AMPS) was the first standardized cellular service in the world and was used mainly in the US.

The second generation (2G) of wireless technology was the first to provide data services of any significance. By the early 1990s, GSM (Global System for Mobile Communications) had been introduced, first in Europe and then in other countries worldwide, and in the U.S. by T-Mobile. GSM standards were developed starting in 1982 by the Groupe Spécial Mobile committee, an offshoot of the European Conference of Postal and Telecommunications Administrations (CEPT). GSM was the standard that would allow national telecoms around the world to provide mobile services. Although voice services improved significantly, the top data speed was only 14.4 Kbps.

The second generation also marked the introduction of CDMA (Code Division Multiple Access) techniques. Multiple access technologies cram multiple phone calls or Internet connections into one radio channel. AT&T utilized Time-Division Multiple Access (TDMA)-based systems, while Bell Atlantic Mobile (later Verizon) introduced CDMA in 1996. This second-generation digital technology reduced power consumption and carried more traffic, while voice quality improved and security became more robust. The Motorola StarTAC phone was originally developed for AMPS but was sold for both TDMA and CDMA systems.

Innovations sparked the development of 2.5G standards that provided faster data speeds. The additional “half” generation referred to the use of data packets. Known as the General Packet Radio Service (GPRS), the new standards could provide 56-171 Kbps of digital service. GPRS was used for Short Message Service (SMS), otherwise known as “text messaging,” MMS (Multimedia Messaging Service), WAP (Wireless Application Protocol), as well as Internet access. Being able to send a message with emojis, pictures, video, and even audio content to another device provided a significant boost to the mobile phone’s utility.
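
To put the generational speed jumps in perspective, here is a quick back-of-the-envelope calculation of how long a 5 MB file would take to download at headline rates from this series. These are theoretical peaks; real-world throughput is considerably lower.

```python
# Time to move a 5 MB file at each generation's nominal data rate.
FILE_BITS = 5 * 8 * 1_000_000  # 5 megabytes expressed in bits

rates_kbps = {
    "2G GSM": 14.4,
    "2.5G GPRS": 171,
    "3.5G HSPA": 14_400,
    "4G LTE (peak)": 100_000,
}

download_secs = {name: FILE_BITS / (kbps * 1000)
                 for name, kbps in rates_kbps.items()}

for name, secs in download_secs.items():
    print(f"{name}: {secs:,.1f} seconds")
```

At 2G speeds the file takes roughly three-quarters of an hour; at 4G peak rates, well under a second, which is why each generation enabled qualitatively new applications.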

An advanced form of GPRS called EDGE (Enhanced Data Rates for GSM Evolution), sometimes dubbed 2.75G, was used for the first Apple iPhone, which notably launched in 2007 without 3G support.

Third generation (3G) network technology was introduced by Japan’s NTT DoCoMo in 1998. Still, it was adopted slowly in other countries, mainly because of the difficulties obtaining the additional electromagnetic spectrum needed for the new towers and services. 3G technologies provided a range of new services, including better voice quality and faster speeds. Multimedia services like Internet access, mobile TV, and video calls became available. Telecom and application services such as file downloads and file sharing made it easy to retrieve, install, and share apps. 3G radio standards were largely specified by the International Mobile Telecommunications-2000 (IMT-2000) framework of the International Telecommunication Union (ITU), but the major carriers continued to evolve their own systems, such as Sprint and Verizon’s CDMA2000 and AT&T and T-Mobile’s Universal Mobile Telecommunications System (UMTS). UMTS, an upgrade of GSM based on the ITU’s IMT-2000 standard set, was an expensive one, as it required new base stations and frequency allocations.

A 3.5 generation became available with the introduction of High Speed Packet Access (HSPA), which promised 14.4 Mbps, although 3.5-7.2 Mbps was more typical.

Fourth generation (4G) wireless technology sought to provide mobile all-IP communications and high-speed Internet access to laptops with USB wireless modems, smartphones, and other mobile devices. Sprint unveiled the first 4G phone, the HTC EVO, in March of 2010 at the communication industry’s annual CTIA event in Las Vegas. With a 4.3-inch screen, two cameras, and the Android 2.1 OS, the new phone was able to tap into the new IP environment. 4G technology was rolled out in various forms with a dedication to broadband data and Internet protocols, offering services such as VoIP, IPTV, live video streams, online gaming, and multimedia applications for mobile users.

While 3G was based on two parallel infrastructures using both circuit-switched and packet-switched networking, 4G relied entirely on packet switching. 4G LTE (Long Term Evolution) refers to wireless broadband IP technology developed by the Third Generation Partnership Project (3GPP). “Long Term Evolution” denoted the progression from 2G GSM to 3G UMTS and into the future with LTE. The 3GPP, an industry trade group, designed the technology with the potential for 100 Mbps downstream and 30 Mbps upstream. Although always subject to environmental influences, data rates were projected to reach 1 Gbps within the following decade.[2]

Early 4G took two paths. Sprint and others backed WiMAX (Worldwide Interoperability for Microwave Access), based on the IEEE 802.16 standard, with a range of some 30 miles and transmission speeds of 75 Mbps to 200 Mbps, while Apple (iPhone 5 and later), Samsung, and other manufacturers developed phones for LTE networks.

4G WiMAX provided data rates similar to 802.11 Wi-Fi standards with the range and quality of cellular networks. The key difference has been softer handoffs between base stations, which allow for more effective mobility over longer distances. Moving to IP enables mobile technology to integrate into the all-IP next-generation network (NGN) that is forming to offer services across broadband, cable, and satellite communication mediums.

In October 2020, Apple unveiled the first iPhones to support 5th generation (5G) connectivity with the iPhone 12. This meant Apple had to add new chips, antennas, and radio-frequency filters to the new phone. 5G wireless communications represent a major new set of challenges and opportunities. The frequencies used require higher power levels and more base stations because the transmission range is shorter than LTE’s. 5G also affords new opportunities, such as connections up to 10x faster than LTE and reduced latency. Faster speeds mean new and enhanced cloud-based services for games and videos, virtual and augmented reality, IoT in homes and factories, and enhanced telemedicine applications.

5G uses frequencies that are 10 to 100 times higher than the radio waves used for 4G and Wi-Fi networks. We need to know more about the power dynamics of 5G and under what conditions, if any, it can break molecular bonds or pose health risks from long-term exposure.

Notes

[1] For a history of wireless communications.
[2] This is a great review of the 4 generations of wireless technologies.


© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in New Zealand. He keeps his American home in Austin, Texas, and has taught there in the Digital Media MBA program at St. Edward’s University. He joyfully spent 9 years at the East-West Center in Honolulu, Hawaii.

US Internet Policy, Part 3: The FCC and Consolidation of Broadband

Posted on | February 5, 2021 | No Comments

In this post, I look at the transition of Internet data communications from a competitive market structure to one dominated by a few Internet Service Providers (ISPs). As digital technology allowed cable and telecommunications companies (telcos) to transition from traditional telephony to packet-switched Internet Protocol (IP) services, deregulation allowed them to dominate broadband services. It also allowed them not only to move data but also to diverge from the traditional “common carriage” communications policy that separated the transfer of data from the provision of content like entertainment and news.

In Part I of this series, I looked at the emergence of the ISPs and the regulatory framework in the USA that classified them as “enhanced services.” This designation was based on the Federal Communications Commission’s (FCC) Second Computer Inquiry in 1981 that exempted online services from a number of requirements that had been imposed on telephone networks. Part II discussed the transition from dial-up modems in the early days of data communications to high-speed Digital Subscriber Lines (DSL). These “broadband” connections accelerated the business and consumer adoption of the Internet in the late 1990s. In Part 4, I will address issues of net neutrality facing the Biden administration in an era of “smart” or “edge technologies” that includes the Internet of Things (IoT) and “connected” cars.

Despite the design and the efforts of the Clinton-Gore administration to create a competitive environment, the Internet came to be increasingly controlled by a small number of ISPs. It is important to understand the policy environment and administrative actions that changed the Internet into the oligopolistic market structure that dominates broadband today. Policy changes allowed telcos to transition from the neutral transmitters of communication to the communicators themselves.

Broadband service in the USA is dominated by large integrated service providers such as AT&T, Comcast, Sprint, and Verizon. These companies have pursued “triple play” service bundles, combining high-speed Internet, cable TV, and IP phone services; some also provide mobile services. They have been merging with content providers to distribute entertainment, education, and news, as well as move all the other Internet traffic. AT&T merged with Time Warner, giving it access to Warner Bros., HBO, and Turner/CNN. Comcast completed its merger with NBC, and Verizon bought AOL and Yahoo! Unfortunately, these deals failed to return the huge rewards the companies were aiming for and deterred sufficient broadband rollout.

The highly competitive Internet services provider environment during the 1990s was significantly compromised by the Bush administration’s Federal Communications Commission (FCC). Their decisions favored cable companies and telcos and led to a consolidation of control over the Internet. The FCC’s actions raised concerns that powerful ISPs could influence the flow of data through the Internet and discriminate against some content providers or users to the detriment of consumers.

In 2002, the FCC ruled that “cable modem service” was an information service, not a telecommunications service. Cable companies like Charter, Xfinity, Cox, and Time Warner became unregulated broadband providers, exempt from the common-carrier regulations and network access requirements imposed on the telcos. The Supreme Court decision in National Cable & Telecommunications Association v. Brand X Internet Services affirmed that cable modem services would remain Title I “information services,” despite major criticism from Justice Scalia, who argued in dissent that cable TV clearly offered both content services and telecommunications services. The Justice had no hesitation in calling it “bad law.”[2]

Then in 2005, another FCC decision effectively made telcos unregulated ISPs. FCC WC Docket 02-33 allowed their DSL broadband services to also become unregulated “information services.” This effectively allowed a few telcos such as Verizon and BellSouth to take over what had previously been a competitive ISP industry. The ruling allowed them to offer broadband fiber and DSL Internet access transmission while presenting challenges to previous requirements such as allowing other ISPs “access to facilities” and interconnection. Smaller ISPs had been allowed to physically connect to the “common carrier” telco facilities so that their customers could access the larger Internet.

Internet innovation came from other sources and distracted the public from broadband carrier issues. Facebook and Flickr were launched in 2004. Twitter launched in 2006, the same year the music streaming service Spotify was founded and Microsoft added video downloads to Xbox Live. Google bought Android in 2005 and YouTube the following year. Netflix started its streaming service in 2007, and the first iPhone was also released that year.

The success of these innovations did not escape the notice of the telcos, who wanted a piece of the action. They wanted to move beyond being mere carriers of information to become providers of entertainment and informative content. This was evidenced by Verizon’s introduction of FiOS (Fiber Optic Service) TV in 2005 and AT&T’s U-verse in 2006. ISPs looked to dominate home broadband service by bundling TV, Internet, and telephone voice service over their high-speed IP networks.

In 2003, Columbia Law professor Tim Wu coined the term “net neutrality” to stress the importance of allowing the free flow of data for the Internet’s future. It is based on the notion of “common carriage,” a legal framework developed to ensure that railroads would serve all businesses and municipalities. It basically means that the network should stay neutral and let the bits flow uninterrupted from device to device at the highest speeds available. This is how the Internet was designed, but the carrier networks have been around since the telegraph and telephone and have developed their own legal and technical ways to survive.

The Internet’s political and social impact became more apparent with the presidential campaign of Barack Obama in 2008. The Pew Research Center found that some 74% of Internet users interacted with election information. A significant number of citizens received their news online, communicated with others about the election, and received information from campaigns via email or other online sources.

In 2010, the Obama administration began to write new rules dealing with Internet providers that would require ISPs to treat all traffic equally. In what were called the “Open Internet” rules, the new administration began to design a framework to restrict telecom providers from blocking or slowing down specific Internet services.

In the next post, I will look at the development of net neutrality rules under the Obama administration. Later, the Trump administration renewed attempts to free ISPs from net neutrality obligations by returning them to Title I. A major question for the Biden administration is a possible return to Title II and strengthened net neutrality rules.


Korea in a Post Covid-19 World, Part 3: The Green New Deal

Posted on | January 29, 2021 | No Comments

This post is my third on the Korean New Deal as a response to the COVID-19 pandemic. In the first post, I discussed the origins of the New Deal in the US and its reemergence as the Green New Deal in the UK and US. In the second, I discussed Korea’s Digital New Deal and its emphasis on “DNA” – Data, Network, and Artificial Intelligence (AI) to strengthen Korea’s industrial, education, and transportation infrastructure. In a future post, I will look at Korea’s efforts to build a more extensive and inclusive social safety net for its 50+ million people.

In this post, I examine Korea’s concerns about its quality of life and some of its plans for addressing related economic and environmental issues. Despite impressive economic growth and infrastructure development, the country suffers from congested highways, industrial waste, and regular occurrences of high particle content in its air. Consequently, the Moon administration embraced a Green New Deal in mid-July 2020 to address these issues and pursue opportunities for green growth industries with export potential.

President Moon presented the argument:

    The Government will pave the way toward sustainable growth through the Green New Deal. We will create new markets, industries and jobs while actively responding to climate change as a responsible member of the international community.

Areas of particular concern are low-carbon and decentralized energy, urban and water infrastructure, and green solutions that can be commercially viable.

The Korean Green New Deal recognizes the calls for climate and environmental action as well as the opportunities inherent in the transition to a green economy. Bouts of air pollution due to its reliance on coal, heavy vehicle traffic, and proximity to industrial centers domestically and in China plague the country. Consequently, it wants to support green industries and achieve a better balance between the economy and nature.

The Moon administration plans to make way for a new generation of renewable-powered and digitally connected vehicles. These include electric vehicles (EVs), hydrogen cars, and increasingly software-driven “smart” cars. It wants to take over a million diesel vehicles off the road to reduce emissions and support the transition to renewable-energy vehicles. Korea has to run to catch up with Chinese and Tesla EVs, but it has the devotion of its domestic car consumers, and Hyundai’s Ioniq EV is an attractive start. More than 90% of the cars currently on Korean roads are produced domestically.[1]

One fiscal challenge of moving beyond the carbon economy will be replacing the taxes on petroleum imports that helped build an extraordinary road infrastructure throughout Korea. Fuel tax revenues have been decreasing around the world as vehicles have become more efficient. Likely solutions involve increasing fuel taxes or introducing road user charges that trade a petrol tax for a fee based on kilometers traveled.
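
A toy calculation illustrates why fuel-tax revenue erodes as fleets become more efficient and electrify, and why a distance-based charge is attractive. All figures below (tax rate, per-kilometer charge, efficiencies, annual distance) are illustrative assumptions, not actual Korean policy numbers.

```python
# Compare annual revenue per vehicle under a fuel tax vs. a road user charge.
FUEL_TAX_PER_L = 0.75   # assumed tax per litre of petrol
PER_KM_CHARGE = 0.04    # assumed flat road user charge per km
ANNUAL_KM = 15_000      # assumed annual distance driven

def annual_fuel_tax(litres_per_100km):
    """Fuel tax paid over a year, given the vehicle's fuel consumption."""
    litres = ANNUAL_KM * litres_per_100km / 100
    return litres * FUEL_TAX_PER_L

# An efficient petrol car pays less fuel tax than an older one for the
# same road use; an EV pays none at all.
print(annual_fuel_tax(10))  # older car, 10 L/100 km  -> 1125.0
print(annual_fuel_tax(5))   # efficient car, 5 L/100 km -> 562.5
print(annual_fuel_tax(0))   # EV -> 0.0

# A distance-based charge recovers the same amount from every vehicle.
print(f"{ANNUAL_KM * PER_KM_CHARGE:.2f}")  # -> 600.00
```

The point is structural: under a fuel tax, two vehicles imposing identical wear on the roads can pay wildly different amounts, while a per-kilometer fee ties revenue to road use itself.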

Korea’s “fast-follower” economic strategy and capabilities will be put to the test to keep its manufacturers relevant in the rapidly evolving autonomous and connected automobile market. But it could also mimic its Android strategy and have Hyundai or Kia team up with Apple or Google for automobile data and software expertise for energy management and higher levels of autonomous driving.

Hydrogen is another automobile technology under consideration. To be viable, it needs to address issues of cost, safety, and infrastructure. Hydrogen can be produced from hydrocarbon molecules through gasification, high heat, or the reaction of carbon monoxide with water. It can also be produced through fermentation or electrolysis, the separation of water into hydrogen and oxygen with electricity. Producing this simple fuel can be expensive, but using idle capacity in Korea’s nuclear power facilities at night has been one strategy for producing the non-toxic fuel. Renewable sources with low marginal costs, like solar and wind, can ideally be used to make the gas in the future.[2] Despite the tragedy of the Hindenburg airship explosion, hydrogen is still safer than gasoline in most environments: it vents quickly and disperses away from a vehicle in case of an accident.
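
A rough calculation shows why electrolysis costs hinge on cheap electricity. The 39.4 kWh/kg figure is the theoretical (higher-heating-value) minimum energy to split water into one kilogram of hydrogen; the electrolyzer efficiency and electricity price below are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope cost of producing hydrogen by electrolysis.
THEORETICAL_KWH_PER_KG = 39.4   # HHV minimum energy to electrolyze 1 kg of H2
EFFICIENCY = 0.70               # assumed overall electrolyzer efficiency
PRICE_PER_KWH = 0.05            # assumed off-peak/renewable price, USD

kwh_per_kg = THEORETICAL_KWH_PER_KG / EFFICIENCY
cost_per_kg = kwh_per_kg * PRICE_PER_KWH

print(f"{kwh_per_kg:.1f} kWh per kg of H2")   # -> 56.3 kWh per kg of H2
print(f"${cost_per_kg:.2f} per kg of H2")     # -> $2.81 per kg of H2
```

Halving the electricity price roughly halves the electricity cost per kilogram, which is why surplus nighttime nuclear output or low-marginal-cost solar and wind are the natural feedstocks.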

The big issue has been whether to use hydrogen for combustion or for fuel-cell electricity. Although hydrogen combustion produces only water, the heat of the reaction can also produce dangerous nitrous oxides. This does not occur in a fuel cell, which uses an electrochemical reaction involving hydrogen to release electricity that drives an electric motor. Both strategies would require pumping hydrogen into an automobile’s fuel tank, and both would emit water.

Priorities from the Korean Ministry of Economy and Finance keynote speech on the Green New Deal:


The refueling infrastructure presents a “chicken or egg” dilemma for both electric and hydrogen-based vehicles. Many consumers worry that they will not be able to obtain the needed fuel conveniently and in a timely manner. A network of electric charging stations is springing up in unusual places. The shopping center next to my university campus has a Tesla charging station in the basement parking lot so its high-end consumers can shop at local boutiques and frequent the restaurants. Because they do not emit toxic fumes, EV charging stations can be located in a wide variety of places. High-speed recharging and wireless charging capabilities will hasten the transition to electric vehicles. Hydrogen presents different challenges.

Hydrogen is increasingly used in industrial applications and is a key ingredient in decarbonization strategies. However, its future in automobile propulsion is still questionable due primarily to the lack of refueling infrastructure. Unlike electric recharging, hydrogen requires “gas stations” for refueling due to storage issues and potential dangers due to its volatility. Hydrogen can be transported in small quantities as compressed gas in pressurized cylinders on “tube trucks” to refueling stations for light-duty vehicles. Liquefaction is expensive and requires extremely low temperatures (-253 degrees C). Compared to the US, Korea has few hydrogen gas pipelines or natural gas pipelines into which they can blend hydrogen. Producing hydrogen at the refueling station with alternative energy may be the best strategy for widespread utilization of the gas.

A crucial green response includes building smart electric grids for the energy management of traditional and new eco-friendly, low-carbon power generation systems. The transition from centralized legacy coal and nuclear plants to decentralized renewable-powered generation requires extensive hardware and software development. Intelligent grids (and microgrids) implementing innovations in energy production, storage, transmission, and distribution represent both challenges and opportunities to monetize new solutions for an “Internet of Electricity.”

Smart grids need to skillfully manage intermittent sources of electricity to maintain steady flows to communities and industries. Traditional coal, oil, and nuclear power plants are notable for producing a consistent “baseload” amount of electricity throughout the day.[3] While some renewables, like hydroelectric power from dams, provide consistent electricity, others require “smart” solutions that determine when to store and when to integrate additional electricity from alternative sources.

One problem that needs to be continuously addressed is building the transmission facilities to incorporate electricity from rural solar and wind projects. Particular emphasis is on drawing power from the 42 small island regions surrounding the peninsula that might be suitable for large-scale wind, solar, or wave power. One of South Korea’s biggest wind farms will be built off the country’s southwest shore. Hanwha is one of the largest solar cell producers in the world and is also active in solar power-plant construction and project financing.

Korea also hopes to capitalize on new greenhouse gas (GHG) reducing technologies and desalination process efficiencies that could come with cheap energy. While GHG capture technologies are not being used to any significant extent, other technologies can reduce emissions. The green remodeling of buildings with LEED (Leadership in Energy and Environmental Design) certified technologies will bring both jobs and savings. This includes smart meters in public housing and clean green factories and industrial complexes.

Some are concerned that the Korean New Deal is likely to be heavy on government involvement and light on government spending. President Moon updated the spending figures when he addressed the World Economic Forum at Davos.

Economies thrive on problems, real or even conjured. Taking on challenges and finding innovative ways to engage citizens and companies in productive activities produces wealth as well as options to shape the quality of life. The move to a post-carbon society will raise questions, create debates, and present new opportunities. The Green economy offers possibilities for cleaner air, land, and sea while ultimately producing more energy for mobility, production, and comfort.

In the next post, I will focus on the Korean New Deal’s attempt to build jobs and a social safety net.

Notes

[1] Lee, E. (2019, July 15). Car ownership in Korea hit 23.44 million in June 2019. Less than 10 percent are imports. Import share of cars where at 9.7% – Pulse by Maeil Business News Korea.
[2] Near-zero marginal cost is an economic concept referring to the eventual production of a good or service at a very low cost per unit.
[3] Matek, B., and Gawell, K., “The Benefits of Baseload Renewables: A Misunderstood Energy Technology,” The Electricity Journal, Volume 28, Issue 2, 2015, Pages 101-112, ISSN 1040-6190, https://doi.org/10.1016/j.tej.2015.02.001.


Korea in a Post Covid-19 World, Part 2: Merging Digital and Green New Deals

Posted on | January 3, 2021 | No Comments

I’ve been lucky enough to ride out most of the COVID-19 pandemic here in the Republic of Korea. I miss being home in Austin, TX, but I’ve been safe and relatively free to travel and shop, even if I have to wear a mask everywhere I go. It’s a small price to pay for the relative freedom of going out to eat and exercising on my bike in the parks. Korea, for the most part, has avoided major lockdown measures and still led the OECD in economic growth during the pandemic.


This is the second post on the Korean New Deal, which was recently reiterated by President Moon at the 2021 Davos World Economic Forum. In the first post, I introduced the original New Deal and looked at the emergence of the Green New Deal in Europe and the USA. In the third post, I will go into the Korean Green New Deal in more detail.

This post discusses the recent responses by Korea to the COVID-19 pandemic and its economic repercussions by examining the Digital New Deal. These posts are not policy analyses as much as they introduce some of the goals and rationale involved with the Korean New Deals. Case studies are difficult to generalize. Still, these examinations are meant to be suggestive of some strategies worth examining by other countries.

The Korean New Deal was proposed to the public by President Moon Jae-in’s administration after a convincing spring 2020 National Assembly election win by the ruling Democratic Party of Korea (DPK). It was designed and is being implemented with a potential new wave of the COVID-19 pandemic in mind. The notion of “sleeping with the enemy” was invoked to caution against a premature return to normal activities and to accelerate a transition plan toward a greener, smarter, and more sustainable growth model, with a major goal of being carbon-neutral by 2050.

Korea’s New Deal has two components: a Digital New Deal and a Green New Deal. President Moon explained:

    This Korean New Deal is a new national development strategy to leap from being a fast-follower to a pace-setter. In the belief that our country’s future hinges on it, we will resolutely push ahead with the Korean New Deal, which will erect two pillars – a Digital New Deal and Green New Deal – side by side atop the foundation of an inclusive nation and of values that put people first.

Left without North Korea’s natural resources by the Armistice Agreement in 1953 that split Korea at the 38th parallel, South Korea pursued an export model with a significant emphasis on science and technology. This meant improving on products that were already familiar to western society: ships, cars, semiconductors, televisions, etc. This is the “fast-follower” strategy mentioned in the quote above by President Moon. More recently, smartphones and popular music and film have added to the economic mix as well as the soft power helpful for smooth economic and political relations.

Now South Korea wants to expand its development strategy to become a “pace-setter” by leveraging its highly trained human resources with innovation. Earlier work addressed the prospects of a Fourth Industrial Revolution (FIR) – new products and processes based on innovations in digital, biological, and materials science. The Presidential Committee on the Fourth Industrial Revolution (PCFIR) was set up after Moon was elected in 2017 and began to drive consensus-building. It would mobilize economic strategies that commercialize and implement advances in artificial intelligence (AI), the Internet of Things (IoT), 3D printing, robotics, genetic engineering, nanotechnologies, quantum computing, and other technologies. This was ideal for a high-tech society like Korea’s, but as the COVID-19 crisis emerged, the New Deal signaled a more people-oriented approach, not just economic growth.

In this post, I again draw on the keynote speech by Dae Joong Lee of the Ministry of Economy and Finance, “Linking the Korean New Deal with Innovation and Technology in the Post Covid-19 Era,” presented at the Korea Workshop on Innovation and Digital Technology in a Post-Covid-19 World held in November 2020. The workshop was sponsored by the World Bank’s International Development Association (IDA) and the Korean Ministry of Economy and Finance.

The Digital New Deal

Dae Joong Lee’s presentation on the Digital New Deal introduced an acronym that was new to me – “DNA.” Not the biological Deoxyribonucleic Acid in each of our cells, but “Data, Networks, and Artificial Intelligence.” One of the Digital New Deal’s first objectives is to find ways to feed data into AI. This includes disclosing data from the public sphere and introducing an incentive system to gather data from other sectors to feed AI development.

All ministries were ordered to release non-sensitive public data over the coming year to “usher in a data economy that opens the free flow of information and ideas.” Korea, like most countries, is struggling with privacy issues and needs to improve on the Personal Information Privacy Act (PIPA), which is vague and lacks punitive strength.

Networks are one of Korea’s core digital strengths and provide the foundation for many other infrastructure endeavors. Broadband speeds are some of the highest in the world at averages of 168.26 Mbps (12th) for fixed landlines and 166.70 Mbps (2nd) for mobile, after the United Arab Emirates. 5G continues to roll out across the nation for consumer and industry use.

With relatively high incomes and literacy, it is no surprise that the country has one of the highest mobile use rates in the world. A complication for Korea is that it is both an important supplier of 5G equipment and a chip producer for other 5G equipment manufacturers.

Reminiscent of Vice President Gore’s E-Rate program in the US during the late 1990s, digitalization of education infrastructure is a high priority. Gore’s plan taxed landline telephone users to update schools with important equipment and infrastructure. The Digital New Deal will provide Wi-Fi to schools, supply faculty with new computers, and replace old servers and network equipment in educational environments. Students in some 1,200 schools are targeted to receive 240,000 tablet PCs. Online content, particularly on the Fourth Industrial Revolution (FIR), will also be developed.

A more complicated development is the integration of “DNA” into smart communities and industrial applications. These include the goal of producing 108 smart city and governance projects outfitted with 5G, connected management centers, and cloud computing for public information, all protected by advanced cybersecurity.

The Digital New Deal includes ten new industrial complexes with computerized control centers and 12,000 smart factories with another 10,000 workshops and 100,000 stores equipped with the newest process management technologies.[1] Korea is already a leader in industrial robotics, and, recently, Hyundai acquired Boston Dynamics, an innovator in robot manipulation, mobility, and vision.

Logistically, they want to build major smart distribution systems like Amazon, with associated certification systems. These logistical centers would be shared by many SMEs and be part of the support infrastructure for over 300,000 microbusinesses that would also have access to teleconferencing centers and commercial space for offices and design studios.

As part of a new infrastructure for autonomous vehicles, the government proposes to develop a Cooperative Intelligent Transport System (C-ITS) to upgrade Korean roads. These control systems would coordinate pedestrians, bicycles, automobiles, and commercial vehicles for road safety and enhanced traffic flow. Already a major automobile manufacturer, Korea is producing “automatrix” road management models for domestic use and export. Registered cars in South Korea hit nearly 23.5 million units by the summer of 2019.[2] But these will eventually be replaced with connected cars powered by electric batteries or hydrogen.

Korea also set out to develop a public safety network for first responders such as police officers, firefighters, public officials, and others involved in emergency management and disaster risk reduction. Several disasters, including the Sewol ferry sinking on April 16, 2014, which killed 304 people, mainly students on a field trip, as well as train fires, were exacerbated by poor communications. Technical standards, guided by the Safe-Net Forum, have led to a new public safety (PS-LTE) network with versions for railroads (LTE-R) and maritime (LTE-M) communications.

In the next post on this topic, I will discuss the Korean Green New Deal.

Notes

[1] Just to reiterate, these are the goals of the Moon administration.
[2] Lee, E. (2019, July 15). Car ownership in Korea hits 23.44 mn by June, import share at 9.7% – Pulse by Maeil Business News Korea.

Ⓒ ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Originally from New York, he started his academic career at Victoria University in Wellington, New Zealand before returning to New York to teach at Marist College and spending most of his career at New York University. He has also spent time at the East-West Center in Honolulu, Hawaii. When not in the Republic of Korea, he lives in Austin, Texas.

The Two Santa Claus Theory of US Economic Growth and the Prospects of Modern Monetary Theory (MMT)

Posted on | December 27, 2020 | No Comments

So, when someone says to me how do we pay for the Green New Deal? I say well Congress appropriates the money and then the Treasury instructs the Fed to credit the appropriate accounts. And that is how it is paid for. And then the Green New Deal people say, “yeah that!” – Warren B. Mosler, Founding MMT theorist and author of Soft Currency Economics.

This quote is slightly tongue-in-cheek in its understatement and matter-of-factness. However, it is a procedural and factual statement of how the US government pays its bills. It does not tax or float bonds to pay for government spending. Likewise, it does not “print” any meaningful amounts of money, although all those activities raise money that is added to the government’s balance sheets.

The government spends money like most of us now, with online banking. The difference is, they don’t really have to “balance their checkbook.” That doesn’t mean they can spend indiscriminately and without consequence, as will be discussed below. But economic theory has largely ignored the dynamics of money and the crucial role of government spending in kickstarting the economy. The quote above does hint at a solution or a strategy to address some significant economic policy issues and environmental problems facing contemporary society.

I remember babysitting my car one January morning in 2003 (it’s a New York City alternate-side parking thing) and reading the Wall Street Journal. The article disparaged the government budget surpluses that had been built up during the Clinton administration. This wasn’t a total surprise, as I was teaching economics down the street at New York University at the time, but we hadn’t had many surpluses to critique in the previous several decades, and the article challenged many reigning economic myths. The crux of the argument, as I remember it, was that debt is a significant player in global finance.

This post examines that contention and its implications for government fiscal policy. It looks at the role of federal spending and the implications of both debt and deficits for infrastructure spending and action against climate change and global pollution. We also need to confront unemployment due to automation and new technical innovations such as artificial intelligence and the Internet of Things (IoT). The post examines the historical spending and tax practices of both the Democratic and Republican parties and the implications of a relatively new theoretical focus called Modern Monetary Theory (MMT).

The Bush administration proceeded to reverse the surplus and return to deficits with a variety of spending measures, including expanding Medicare to pay for drugs and waging wars in Afghanistan and Iraq. As US Vice-President Dick Cheney used to say, “Reagan proved deficits don’t matter.” The former Secretary of Defense and CEO of Halliburton, a major defense logistics contractor, did know that they matter to the private sector.

President Ronald Reagan faced a tough economy when he was elected, much like President Obama would inherit 28 years later. Reagan drastically cut taxes and increased government spending, primarily on defense. As a result, he nearly tripled the federal debt during his two presidential terms. Consequently, by policy or default, he followed the “Two Santa Claus Theory.”


This perspective was set forward in “Taxes and the Two Santa Claus Theory” by Wall Street Journal editorial writer Jude Wanniski. He argued that the Democrats should be the spending “Santa Claus” and redistribute wealth while the Republicans should be the tax reduction “Santa Claus” and help spur income growth.

The theory gained traction in Republican circles as Watergate came to a head and the country struggled with the vestiges of the Vietnam War. Wanniski had a meeting in 1974 with Dick Cheney, Donald Rumsfeld, and Arthur Laffer, creator of the infamous “Laffer Curve,” which hypothesized that lower tax rates would increase government revenues. A consensus was forming that would be known as “trickle-down economics” and was even nicknamed “voodoo economics” by the first President Bush. The official face of the theory was “supply-side economics,” as it was meant to reward “suppliers” of goods and services with lower taxes and decreased regulation.

It also became conflated with a new type of market fundamentalism promoted by Nobel Prize winners Friedrich Hayek and Milton Friedman of the Chicago school. Hayek wrote The Road to Serfdom at the end of World War II, a popular critique of the role of government in the economy. Friedman was also known for his anti-government stance. He championed markets and the price mechanism as more efficient forms of economic organization. His major contribution was in establishing a direct relationship between the quantity of money in the economy and price levels.
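Friedman’s link between money and prices is conventionally summarized by the equation of exchange, a standard identity from monetary economics (included here for illustration; the post itself does not spell it out):

```latex
% Equation of exchange (quantity theory of money)
% M = money supply, V = velocity of money,
% P = price level, Y = real output
\[
  M V = P Y
\]
```

If velocity V and real output Y are roughly stable, growth in the money supply M shows up as growth in the price level P, which is the core monetarist claim.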

As the economy went into the deep “stagflation” recession of the late 1970s due to the two oil crises and the subsequent growth of Eurodollar markets, Hayek and Friedman found their ideas to be very popular, immortalized by Ronald Reagan’s classic inaugural line in 1981, “In this present crisis, government is not the solution to our problem, government IS the problem.”

The Federal Reserve raised its benchmark rate to 20% by June 1981, and the prime interest rate, an important economic measure, exceeded 21%. The tightening squashed the inflation but created an even worse recession. In response, Reagan embraced both Santa Claus strategies: lower taxes and increased spending. The nation loved him for it.

It was politically expedient for Reagan to combine the two strategies. While criticizing “liberals” for their “tax and spend” policies, Reagan did little to cut overall spending. He did shame “welfare mothers” – code for unmarried black women – and in 1981 cut Aid to Families with Dependent Children (AFDC) and other programs that targeted the poor. But the move was largely symbolic, a political message to his base, including the “Reagan Democrats” who resented the black migration to the North and the employment competition they faced as the automobile and other industries lost ground to Germany and Japan.

Reagan also spent heavily on the military as he proceeded to wage a post-détente Cold War II. He championed the MX nuclear missile and the “Star Wars” program, which funded artificial intelligence research and eventually the Internet (NSFNET) in an attempt to create a space-based defensive shield around the USA. The new deficit spending helped the economy recover and also helped create a global financial superstructure with US Treasury bonds as a major hedge for the protection of traders’ positions.[1]

Reagan also pushed through two of the most extensive tax cuts in American history. Following Kennedy’s cut of the top marginal tax rate from 91% to 70%, Reagan cut it to 50% in his early years; in 1986, he further reduced the rate to 28%.

That latter point may not be entirely positive, as Reaganomics set the conditions for massive wealth inequalities and the transfer of public wealth to private hands. Starting with the striking air traffic controllers, Reagan aggressively shut down union activities. Still, Reaganomics did create a new set of economic conditions that rewarded entrepreneurship and “suppliers,” as well as stimulated technological development.

The 1980s economy was ripe to commercialize the technological developments of the Cold War and the Space Race. Intercontinental ballistic missiles and NASA’s Apollo Moon program helped launch communications satellites and refined the transistor for their guidance systems into the microprocessor “chip.” By the 1980s, CNN and MTV were using satellites to equip cable TV with new 24/7 content. Apple was started by kids from Silicon Valley because the region was initially a community, an “industrial cluster,” built on military spending, and they grew up with electronics as part of their culture. Bill Gates quit Harvard as soon as he saw that the first Intel microprocessors were being used to create the Altair microcomputer.

Fiscal policy (tax adjustments and government spending) has a significant impact on the economy. John Maynard Keynes largely laid out the theories of fiscal and macroeconomic policy in the years between the two world wars. The British economist and financial trader had been very concerned about the austerity measures imposed on Germany after World War I. He had been on the British Treasury team at the Versailles peace negotiations but soon resigned in disgust, fearing the results of the austerity measures placed on Germany.

The Allies imposed crushing reparations on Germany that drove the country into a frenzy of inflation, starvation, and disillusionment. Keynes’ book The Economic Consequences of the Peace (1919) was an extraordinary economic policy analysis and warned of major problems if the German economy was not stabilized. Keynes all but predicted the rise of Nazi Germany.

Keynes followed policy analysis with economic theory in his crowning achievement, The General Theory of Employment, Interest, and Money, published in 1936 during the height of the Great Depression. This classic book provided the rationale for government intervention in the economy. President Franklin Delano Roosevelt (FDR) was already deeply committed to the set of interventionist policies that would become known as the “New Deal,” but Keynes legitimized that intervention and provided a set of conceptual tools for analysis and policy formulation. Subsequent industrial mobilization for World War II solidified the importance of government spending, and in its successful wake solidified Keynes’ role as the dominant voice in economics. Keynesianism became the guiding star for managing the economy.

A variant “Santa Claus” policy perspective has circulated in Democratic circles for the last few years called Modern Monetary Theory (MMT). It argues that governments have a monopoly on the production of their money, and with it, the responsibility to use it effectively for policy purposes, even if it leads to larger deficits. Barring excess inflation in the economy, governments that can produce their own money should be willing to spend generously to ensure high levels of employment and a growing economy.

MMT was envisioned early in the 1990s by financial trader Warren Mosler and championed politically more recently by Stephanie Kelton, a Public Policy and Economics professor at Stony Brook University in New York. Unlike most economists, who tend to marginalize money and central bank operations, Mosler and other traders depended for their financial viability on understanding the Fed’s monetary policy. Kelton, a former Bernie Sanders policy advisor, recognized the implications of Modern Monetary Theory for progressive objectives.

Everyone who has played the board game Monopoly knows that the game starts with money handed out to each player. Likewise, the MMT argument is that government has consistently led economic development by spending money into the economy, where it can then be used for various economic activities. Government spending creates money and expands the economy, and it rarely “crowds out” private investment, one of the usual criticisms of MMT.

Mosler argues that the economy starts with a nation-state that “wants to provision itself.” It wants to pay for education, healthcare, infrastructure, military spending, etc., depending on the political consensus. So it creates a “tax liability,” which has to be paid in a specified currency. The government then creates that currency, and people look for opportunities and work to pay the tax, as well as build some savings and wealth.

This process creates “unemployment,” what MMT calls people looking for paid work in the currency they can use to pay the tax. Many people don’t work in modern society; they could be in jail, managing a family, or retired. These are not unemployed people because they are not looking for sources of income to pay their taxes.

The government’s ability to ensure a currency’s acceptance as a viable form of payment, primarily through the requirement that taxes be paid in it, makes spending US currency a likely mechanism for economic growth and guidance. The dollar is accepted as currency because it is the only tender that can be used to pay US taxes, but it is also desirable because it has ingrained itself in the market dynamics of society.

This charges government with a significant responsibility to monitor the economy and manage the money supply effectively and responsibly. It means that the government has to spend and watch the results. It is not a household that has to live within its means; the same limitations do not constrain it. Just like in the Monopoly game, it has to put some currency on the table to keep the game going.

MMT is not a license to spend indiscriminately as inflation is a significant concern. Inflation occurs when too much economic demand or too little supply of a good or service causes an increase in prices. But inflation coming from too much money is relatively easy to manage. Most hyperinflation cases come from disruptions in supply, such as the loss of manufacturing in the Weimar Republic after WWI or the decline of agriculture in Zimbabwe. Increases in taxes and regulations on business and finance can counter most inflation cases if spending deficits trigger price increases.

Other concerns about government spending involve exchange rates and debts to other countries. Dealing with the first means ensuring that the currency can float in relation to other currencies. The Nixon shock of the 1970s meant going off the gold standard and transitioning to what Walter Wriston, the former CEO of Citibank, called the “Information Standard,” a global surveillance system based on international news and virtual financial markets. These systems allowed exchange rates to float, enabling a currency to make adjustments by letting its value change in relation to other currencies.

Countries should also avoid going heavily into debt to other countries, and especially avoid borrowing money that requires repayment in a foreign currency. Walter Wriston used to say that “countries never go bankrupt.” Maybe not, but sovereign debt creates a set of other critical dynamics. These include the temptation of creditor nations to continually extend credit to avoid economic declines. The absence of a bankruptcy mechanism also means creditors exert pressure on debtor countries to “structurally adjust” their economies to the concerns of creditor nations.

An example is the “Third World Debt Crisis” that recycled OPEC petrodollars into developing countries in the 1980s. Debt resulted in pressure to privatize public assets into securities that could be listed on global financial markets. Curiously, this led to the transition of government telecommunications agencies into private or semi-state corporations and facilitated the adoption of Internet Protocols that led to the World Wide Web. However, it also led to the privatization of water and other public resources and pressure to increase taxes and reduce social services.

As Kelton points out, MMT challenges our contemporary conceptions of money, deficits, and debt. One of the most dangerous metaphors we use to conduct our public policy is the notion of a “fiscal house.” This metaphor conflates government finances with household finances and the idea of “living within our means.” Challenging it means recognizing that government is not a household that has to reconcile its checkbook. Governments should not live within their “means” but expand the realm of economic possibility.

MMT is not a socialist or utopian panacea for the economy; it is essentially an understanding of central bank operations and the role of money in the economy. However, it provides an opportunity to examine whether the Green New Deal or other Post-Covid-19 plans to address climate change’s challenges will be a drain on economic growth or an opportunity to create thriving sustainable economies.

Carbon-based combustible fuels are no longer the most efficient energy sources, but they require new smart grids and other infrastructure to be readily available. To mitigate climate change and pollution while ensuring low unemployment in an age of automation and artificial intelligence, it will be important to understand government spending. Engaging with MMT can provide insights into the fiscal spending process and challenge public policy to develop plans for sustainable economic growth and prosperity, while avoiding inflation and other negative effects of government spending.

Citation APA (7th Edition)

Pennings, A.J. (2020, Dec 27). The Two Santa Claus Theory of US Economic Growth and the Prospects of Modern Monetary Theory (MMT). apennings.com. https://apennings.com/dystopian-economies/the-two-santa-claus-theory-of-economic-growth-and-the-prospects-of-modern-monetary-theory-mmt/

Notes

[1] Remember that Alexander Hamilton traded away New York City’s status as the nation’s capital for the opportunity to have the federal government assume the states’ Revolutionary War debts as the basis for a Bank of the United States. As a result, the government moved to swampland on the Potomac that would become Washington, DC, and New York City became the nation’s financial center. Likewise, in a digital financial environment that trades globally everywhere and all the time, Treasury bonds play a crucial role in coordinating wealth and serving as a hedge against risk in volatile markets.

Ⓒ ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. He started his academic career at Victoria University in Wellington, New Zealand before returning to New York to teach at Marist College and New York University, teaching comparative political economy and digital economics. When not in the Republic of Korea, he lives in Austin, Texas.

  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr

  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.