Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Weak Domestic Dollar, Strong Global Dollar

Posted on | August 3, 2022 | No Comments

The summer of 2022 was a good time for Americans to travel overseas. Everything was expensive at home and cheap abroad. In other words, the dollar was weak domestically and strong globally. In this post, I examine the dynamics of the US dollar and why it operates differently within the domestic US and globally. This description explains how US dollars are created internally and internationally, as well as their significance as a global reserve currency and a mediating vehicle for transactions worldwide.[1]

In the wake of the coronavirus (COVID-19) pandemic, with its stimulus spending, supply shocks, and other dramatic tragedies, including the deaths of over a million people in the US alone, the world experienced significant economic disruption, including a serious bout of inflation. We can add the Russian invasion of Ukraine to that turbulence, with its prospects for reduced trade in natural gas, metals, oil, wheat, uranium, and other resources, which contributed to shortages worldwide and famine in many grain-dependent places in the global South.

In the US, we also have political divisions that have polarized policy perspectives, particularly on major spending issues such as the latest Senate budget reconciliation deal, tentatively titled the Inflation Reduction Act (IRA), which aims to address inflation through tax changes, caps on drug costs, and climate change measures.

These are some of the main issues that influence the dollar’s strength at home and in the global system. The consequences are immense as we interrogate government spending and consider alternatives to the US dollar as the world’s primary transacting currency, such as Bitcoin or the Chinese yuan.

Fed M1 Money Supply

How are Dollars Created?

In both the US and internationally, US-denominated dollars are primarily created by private banks as they make loans. The fractional reserve banking system that most of the world runs on takes deposits and lends out money at prescribed interest rates, contingent on levels of risk, compliance, and the quality of collateral. Banks are usually restricted in loan creation by a reserve requirement set by a central bank. The Federal Reserve traditionally required US banks to keep 10 percent of their deposits in their vaults or on deposit at the Fed. But when the pandemic started, it reduced that requirement to zero percent. This meant that US banks could lend out anything that came in the door.
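A minimal sketch of that lending loop, assuming every loan is redeposited in the banking system, shows how a reserve ratio caps deposit creation and how a zero requirement removes the mechanical cap:

```python
# Sketch: deposit creation under a reserve requirement. Each round, the bank
# lends out everything above the required reserve, and the loan is redeposited.
# Figures are illustrative, not actual bank data.

def total_deposits(initial_deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
    total = 0.0
    deposit = float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lendable (and re-deposited) portion
    return total

print(total_deposits(1000, 0.10))             # ~ $10,000: the classic 1/0.10 multiplier
print(total_deposits(1000, 0.0, rounds=100))  # zero requirement: bounded only by
                                              # rounds, i.e., by loan demand and risk
```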

Internationally, banks have developed a system of lending based on what are called “Eurodollars.” These are US-dollar-denominated deposits held outside the jurisdiction of the Federal Reserve and other US regulatory agencies, and they provide the majority of the world’s transacting currency. These deposits are typically made in countries with fewer regulatory restrictions on foreign currency transactions. Once deposited, the US dollars held in foreign banks can be lent out to other banks or financial institutions in the form of loans or interbank deposits.

This shadow money exists on the ledgers of banks and operates with no reserve requirements. Eurodollars have been active since the 1950s but exploded in the 1970s when three things happened: Nixon took the US off the Bretton Woods dollar-gold standard; global oil transactions were set in US dollars; and communications networks went digital. More on Eurodollars below.[2]

Financial institutions and market participants have developed various financial products and instruments that facilitate the creation and trading of Eurodollars. These innovations include Eurodollar deposits, certificates of deposit (CDs), Eurodollar loans, and Eurodollar futures contracts. Eurodollars can also be created through financial market transactions, such as the issuance of Eurodollar-denominated bonds or Eurodollar futures contracts. These instruments allow investors and institutions to gain exposure to U.S. dollar-denominated assets without directly holding physical dollars.

Overall, the creation of Eurodollars is a result of international banking activities, financial market transactions, and the global demand for U.S. dollar-denominated assets. Eurodollars play a significant role in international finance and are essential for facilitating cross-border trade, investment, and financial transactions.

The Domestic Dollar

While bank lending is the primary way dollars are created, several other processes relevant to the creation of the dollar in the US and its spending implications are covered below. These include actually printing dollars, setting interest rates and reserve requirements, and government spending.

The Fed can literally print money, which it does for the US Treasury. It keeps about $1.5 trillion in paper-based dollars circulating in the economy. That includes about 12 billion one-dollar bills, 9 billion twenty-dollar bills, and 12 billion hundred-dollar bills. Another $2 billion is minted as various coins.
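A quick back-of-the-envelope check of those denomination figures (the counts are the approximations above; fives, tens, fifties, and coins make up the remainder):

```python
# Rough sum of the paper-currency counts cited above (approximate figures).
bills = {1: 12e9, 20: 9e9, 100: 12e9}  # denomination: approximate number in circulation
total = sum(denom * count for denom, count in bills.items())
print(f"${total / 1e12:.2f} trillion")  # ~$1.39T; other denominations round it toward $1.5T
```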

But mostly, what people call “printing” money is the Fed creating bank reserves in the process of targeting interest rates and “quantitative easing” (QE). It does this by buying government bonds from banks with its magical computer mouse, an unlimited credit card. This buying increases the reserves that banks can lend out to borrowers. But bank reserves are not real money unless they are converted into loans for people and businesses. Conversely, the Fed can reduce the amount of bank reserves by selling government securities. That transaction absorbs bank reserves and can potentially reduce lending, and thus pressures on inflation. So, strictly speaking, the Fed’s role in money-making is not always predictable, and it is certainly not “money printing” unless you consider its role as the government’s bank.
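To make those mechanics concrete, here is a minimal sketch of QE as paired ledger entries, with made-up balances. It illustrates the reserve-creation idea described above, not actual Fed accounting:

```python
# Minimal sketch of QE as ledger entries. Balances are illustrative only.

fed = {"treasuries": 0.0, "bank_reserves": 0.0}       # Fed assets / liabilities
bank = {"treasuries": 100.0, "reserves_at_fed": 0.0}  # a commercial bank's assets

def qe_purchase(amount: float) -> None:
    """Fed buys Treasuries from the bank and pays by crediting reserves."""
    fed["treasuries"] += amount
    fed["bank_reserves"] += amount
    bank["treasuries"] -= amount
    bank["reserves_at_fed"] += amount

def qt_sale(amount: float) -> None:
    """The reverse: selling securities back absorbs bank reserves."""
    qe_purchase(-amount)

qe_purchase(50.0)
print(fed)   # {'treasuries': 50.0, 'bank_reserves': 50.0}
print(bank)  # reserves rise, but no new money circulates until banks lend
```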

Most people don’t understand how the US federal government creates money. It’s quite simple, although politicians don’t like talking about it. As Warren Mosler, a financial trader and author of Soft Currency Economics put it, “Congress appropriates the money and then the Treasury instructs the Fed to credit the appropriate accounts.” That’s it. The Fed just sends an electronic message changing figures on computer-based ledgers. But government spending does need Congressional approval and all US spending bills must originate from the House of Representatives.

The government doesn’t need to tax to get the money to spend. It doesn’t need to borrow the money. Taxing and borrowing have their purposes, but they are not required for the government to spend money because the US government issues its own currency. So why tax and why borrow if the government can just issue money to provision itself and pay its obligations?

Warren Mosler, the founder of MMT, argued that taxing keeps everyone on the national currency system and can also reduce inflationary pressures. The purpose of taxing is to keep the national currency relevant and to reduce the amount of something, whether spending power or a discouraged activity. Taxing creates demand for government-issued currency because taxes must be paid in US dollars. Taxes can reduce discretionary spending and alleviate social inequality. Taxes can also channel additional resources toward national activities, as they did during the Cold War and Space Race, when the top marginal rates were above 90 percent.[3]

Borrowing money by auctioning off Treasury bonds creates a monetary instrument that pays interest. It’s basically another way for the government to issue its currency and keep markets operating in a dollar system. Also important, in fact quite critical, is the Treasury bond’s role as collateral in borrowing and as a hedge against financial risk. Its larger denominations are useful for repurchase agreements (“repos”) and futures markets.

Repos are collateralized loans in which a borrower of cash provides securities such as US Treasuries (the collateral) to the lender, with an agreement to repurchase or buy them back at a specified price and time. Treasuries were critical for the Reagan Revolution’s global finance offensive, which tripled the US debt from $738 billion to $2.1 trillion and made the US the world’s largest debtor nation. This debt also provided the collateral needed for the expansion of US money-capital. US Treasuries provided the liquidity that made the modern financial sphere possible.
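To make the arithmetic concrete, here is a minimal sketch of an overnight repo; the principal, rate, and day-count convention are illustrative assumptions, not market quotes:

```python
# Sketch of a simple overnight repo with illustrative numbers.
principal = 1_000_000   # cash borrowed against Treasury collateral
repo_rate = 0.05        # annualized rate agreed between the parties (assumed)
days = 1                # overnight

# Money-market convention uses a 360-day year.
repurchase_price = principal * (1 + repo_rate * days / 360)
print(f"repurchase price: ${repurchase_price:,.2f}")  # borrower buys back the collateral
```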

Consequently, according to Stephanie Kelton, debt and deficits are a “myth” for a government that can issue currency. The COVID-19 response was unprecedented spending by the US government. The Senate used an old House bill to create the CARES Act of 2020, with $2.2 trillion in initial stimulus spending. Trump’s son-in-law Jared Kushner would manage the Paycheck Protection Program (PPP) while the COVID-19 vaccines were being developed by Operation Warp Speed. Another $900 billion was added before the end of the year, mostly for direct payments to individuals for the holiday season.

In 2021, the Democrats legislated another $2 trillion in the American Rescue Plan to address the K-shaped recovery. It expanded unemployment insurance, extended the enhanced child tax credit, and supported the Centers for Medicare & Medicaid Services (CMS) to ensure all Americans had access to free vaccinations. Was the COVID-19 spending the cause of the inflation that rose steadily, hitting 9 percent in June of 2022?

We are still in the wake of the COVID-19 pandemic. Shortages emerged from businesses shutting down, factories closing, and shipping containers stranded at sea or in ports waiting for trucks to distribute their goods. Disposable income increased from government stimulus and inflated assets. People forced to study or work at home shifted spending from services to goods. Instead of going to a restaurant, they bought food at the local supermarket. Instead of going to the cinema, they watched Hulu, Netflix, or one of the new Internet-based streaming services.

Corporations raised prices, and purchasing power decreased. As a result, the dollar weakened domestically as the money supply inflated in 2020, and to a lesser extent in 2021. The chart above shows the M1 money supply levels since 1990 (in billions of US dollars). The increase in 2020 was dramatic.

The International US Dollar

As former US Treasury Secretary John Connally once said to his counterparts around the world at an economic summit, the dollar “is our currency, but it’s your problem.” The value of the dollar is currently at a 20-year high against other major currencies, creating a massive problem for everyone outside America buying dollar-denominated goods, which is about 85 percent of all international trade. Oil, and nearly all raw materials, from aluminum to wheat and zinc, are priced in US dollars.

As the world’s “reserve” currency, dollars are in demand around the world, primarily for transactional purposes. The dollar is the mediating currency for transactions among dozens of different countries. If Argentina wants to do business with Australia, it conducts that business through US dollars. Why is the dollar currently very strong overseas? The answer lies in significant shortages of US dollars.

Two interrelated systems loosely organize US-denominated dollars overseas. The more official system is the US dollar reserves held by central banks. The US dollar became the world’s primary reserve currency due to the Bretton Woods Agreements at the end of World War II. The post-war trading system would be based on the US dollar backed by the gold in vaults at Fort Knox and the NY Federal Reserve Bank. The US dollar maintained this link with gold until the 1970s, when President Nixon ended the dollar’s convertibility into gold. Most countries stayed with the dollar and also began buying US Treasury securities as a safe store of the currency.

More than half of official currency reserves are US dollars, with the euro taking up another quarter. This is actually a historic low. The Japanese yen, pound sterling, and, to a lesser extent, the Chinese renminbi constitute the rest of the major reserves held by central banks. Bitcoin and other blockchain cryptocurrencies are possibilities for the future, but as we see, currencies have been digital for quite a while. They just haven’t been blockchain-based. Central banks are also developing central bank digital currencies (CBDCs) that may make a multilateral transacting system outside the Eurodollar system more feasible.

Central banks get US dollars through their economies’ international trade, Foreign Direct Investment (FDI), and liquidity swaps. US aid programs, military bases, and trade deficits, especially American purchases of foreign cars and oil, contribute to the spread of the US dollar. It was important for the US to run trade deficits to support the world’s need for dollars. US oil independence has been particularly problematic for the world’s currency system. Many refer to “dollar hegemony,” as it is challenging to manage, often destructive, and the US often doesn’t even seem politically motivated to try.

The other system involves the digitally created Eurodollar deposits in internationally networked banks. Eurodollars are not “Euros,” the currency of the European Union. Instead, these are US-denominated virtual currencies created and used outside the US. They operate not only outside the geography of the US but also outside its legal jurisdiction. Eurodollars are used by various big banks, money market and hedge funds, insurance companies, and many big corporations.

Eurodollars are primarily lent out (created) in exchange for collateral such as US Treasuries. Depending on the quality of the collateral, the lender also applies a discount called a “haircut” to the collateral’s value to cover any potential liquidity costs. Since the Eurodollar system works on lending, it makes sense that any bank or corporation borrowing money would want to make that transaction as cheap as possible. The price goes down when good collateral makes the loan more secure. One can envision a hierarchy of collateral types, with US Treasuries on top and corporate junk bonds, mortgage-backed securities, sovereign debt, and even gold occupying the other grades.
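That hierarchy can be sketched as a schedule of haircuts; the grades and percentages below are illustrative assumptions only, not market quotes:

```python
# Sketch: haircuts as a discount on collateral value, graded by quality.
haircuts = {
    "US Treasuries":              0.02,   # top of the hierarchy: smallest discount
    "Mortgage-backed securities": 0.05,
    "Sovereign debt (emerging)":  0.10,
    "Corporate junk bonds":       0.20,   # riskier collateral: bigger discount
}

collateral_value = 1_000_000
for grade, haircut in haircuts.items():
    cash_raised = collateral_value * (1 - haircut)
    print(f"{grade:28s} -> borrow up to ${cash_raised:,.0f}")
```

Better collateral thus translates directly into cheaper, larger funding, which is why Treasuries sit at the center of the system.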

Overseas dollar markets are unregulated and proprietary, so accurate figures are hard to come by. Because the system is based on lending, the loans are often refinanced. A major issue is liquidity: being able to respond to changing conditions in global financial markets that reach over a hundred trillion dollars. The cost and availability of the US dollar can shift for many reasons, including:

  • Changes in US interest rates
  • Shifts in global risk assessments
  • Periods of market stress such as the coronavirus pandemic.

As a result, we currently see a considerable dollar shortage across the globe, dramatically increasing the dollar’s value while causing depreciation of other currencies worldwide. While international US dollar demand remains high, we can expect energy and food shortages, as well as the transmission of economic shocks such as Sri Lanka recently experienced. Goldman Sachs calculates that dollar strength is adding about $20 to a barrel of oil in local-currency terms. These pressures will have significant effects on both the global financial system and the global economy.

Currency swaps are one way to alleviate stresses in the global monetary system. They are repo-like agreements in which a central bank sells a specified amount of a currency to another central bank in exchange for its currency at the prevailing market exchange rate. The first central bank agrees to buy back its currency at the same exchange rate (with interest) on a specified future date. A central bank initiates this swap so it can lend the currency it receives into its domestic economy. The Swiss National Bank, in conjunction with the New York Fed, regularly auctions off dollars to other central banks in liquidity swap operations. These are collateralized operations that used to be called foreign exchange swaps and have become more utilized as dollar liquidity has diminished.
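A minimal sketch of the swap’s cash flows, with an assumed size, exchange rate, and rate of interest (all illustrative):

```python
# Sketch of a central bank dollar liquidity swap. All figures are assumptions.

usd_amount = 1_000_000_000   # dollars provided by the Fed
fx_rate = 0.98               # e.g., CHF per USD at initiation (assumed)
rate = 0.033                 # annualized rate on the swap (assumed)
days = 7                     # term of the swap

foreign_leg = usd_amount * fx_rate          # counterparty posts its own currency
interest = usd_amount * rate * days / 360   # money-market day count
unwind_usd = usd_amount + interest          # same FX rate applies at the unwind

print(f"at unwind: counterparty returns ${unwind_usd:,.0f} "
      f"and receives back {foreign_leg:,.0f} CHF")
```

Because the buy-back happens at the original exchange rate, neither central bank takes currency risk; the borrowing side pays only the interest.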

Significant data gaps make assessing the risks and vulnerabilities of the global currency system challenging. Additional disclosure and data collection are needed to improve the transparency and possible regulation of the global US dollar system. In the meantime, we need to theoretically assess the dynamics of the domestic dollar administration and the possibilities of transforming the international monetary system. While it is highly unlikely that the US dollar will lose its reserve currency status, the current worldwide dollar shortage and its consequences are bringing increased scrutiny and calls for monetary reform, including more multicurrency settlements of energy.

Notes

[1] If you want a refresher on what causes inflation see my Great Monetary Surge of 2020 and the Return of Inflation from earlier this year.
[2] I did my master’s thesis on the emergence of the Eurodollar system and how it became electronic in the 1970s. I want to acknowledge Jeff Snider and the Eurodollar University for sparking the resurgence of my interest in the global currency system.
[3] Unlike monetary policy enacted by the Federal Reserve, which works to manage the economy by “lifting all boats” rather than targeting individual industries, MMT will likely have to be more targeted, although it can still focus on infrastructure and other enabling systems.

Citation APA (7th Edition)

Pennings, A.J. (2022, August 3). Weak domestic dollar, strong global dollar. apennings.com. https://apennings.com/dystopian-economies/weak-dollar-strong-dollar/

Ⓒ ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA at St. Edward’s University in Austin, Texas, where he lives when not in the Republic of Korea.

MMT in a Post-Covid-19 Environment

Posted on | July 29, 2022 | No Comments

Modern Monetary Theory (MMT) is a set of descriptions of government (fiscal) policy originating from the financial world. Traders of government bonds began to inquire about federal spending and fiscal-monetary processes in the post-Reagan environment as the US debt increased substantially. They looked behind the curtain at the processes of money creation and the roles of bonds and taxes in the political economy.

What is the role of money? How does the US government get its money? How does it spend money? What are the constraints on government spending? This post looks at MMT and its prospects by examining the economic problems associated with the COVID-19 pandemic.

We hear the term “printing money” a lot, usually from gold or bitcoin enthusiasts who believe in establishing strict financial constraints. By establishing “hard money” and limiting the quantity of money in an economy, they hope to see their assets rise in value while keeping prices down. Certainly, governments do print some of their money for public use, but the preponderance of funds are entries in digital ledger accounts. I prefer the imagery of the “magical mouse” to the printing press. But I digress.

So how does the US government spend money? In the US, pursuant to Congressional legislative action, the Federal Reserve credits the appropriate accounts at the Department of the Treasury. That’s pretty much it.

Currency-issuing governments do not have the same financial constraints as a household, a business, or even a municipal, state, or provincial government. They can use their sovereign currency to pay for whatever is available to buy in their own denominated currency. Those are pretty important points, so you may want to read them again. They don’t need to print, tax, or auction off bonds to spend. But the spending is registered.

Stephanie Kelton at Stony Brook University in New York has been a strong academic and policy proponent of MMT and its possibilities. A professor of economics, she has also been a consultant to Democratic candidates. In this video, she explains how MMT sees the spending process and some of the implications of seeing money and the deficit in a new way.

Kelton and the MMT theorists have been clear about the downsides of these spending processes. While a currency may not have the restrictions many believe, resources can face limitations. Labor, education, land, materials, and even imagination are called into service under such conditions. Shortages can mean rising prices, as we have seen in the COVID shutdown of businesses and supply chains. MMT proponents have been mindful of this stress and the potential for rising prices from the theory’s inception.

But the problem of increasing costs from resource limitations never really emerged between the 1970s and the COVID-19 pandemic. Globalization (particularly the Chinese workforce), a large labor force (including more women), and technological innovations such as IT and the Internet increased productivity and kept prices at relatively low levels. Also, spending, despite deficits, stayed within manageable levels. Then, however, the pandemic began to shut down businesses and disrupt traditional supply lines. The Federal Reserve, the US central bank, reduced interest rates and increased bank reserves. At the same time, the federal government began to spend trillions on the Paycheck Protection Program (PPP) and other stimulus spending.

Stimulus spending continued into 2021 to address the K-shaped economy, a split between those who could adjust to the pandemic and continue to benefit economically and those hit the hardest. As people were hunkering down at home, many adjusted by working online or taking classes via Zoom. Others lost their jobs or had to quit to take care of children not attending school. Some $5 trillion was spent as COVID-19 stimulus, while the pandemic severely limited work and the availability of goods and services. As a result, the economy was strained and thrown into disarray.

During 2021, prices started to rise, especially for oil, which had crashed in 2018 and fell to less than $20 a barrel in 2020 as the pandemic took hold. Oil production was shut down until the roaring return of demand in late 2021 and 2022. In February 2022, as Russia attacked Ukraine, oil access was further reduced, as was the availability of fertilizers and materials needed to fight climate change.

In addition to resources, MMT also needs the ideas and policies to put the currency to work efficiently. Wars have been the de facto spending rationalization for governments. Military spending creates jobs and innovative technologies that often spill over into the larger economy.[1]

How do we collectively decide on alternative spending strategies? Should consumers drive social demand? Can universal healthcare drive down business costs? Can renewable energy infrastructure investment kickstart a new industrial era based on cheap electricity inputs? Should increased spending go into a war on climate change?

How do we recognize the constraints in these strategies? Materials? Labor? Criticisms of centrally planned economies abounded in the 20th century. Most concerns have lamented the lack of price signals aggregated from individual consumers and profit-seeking private firms.

Others have argued that government spending would “crowd out” the private sector by absorbing available investment capital. This would decrease private sector activity. But Kelton argues current economics gets it backward. Spending comes first.

Do these strategies sufficiently allocate resources in a society? The Paycheck Protection Program (PPP) stimulus under the Trump administration was probably the wildest expenditure of government money in modern history. But did it matter if the goal was to stop the slide into a major recession?

Notes

[1] Unlike monetary policy enacted by the Federal Reserve, which works to manage the economy by “lifting all boats” rather than targeting individual industries, MMT will likely have to be more targeted.

Ⓒ ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, where he taught comparative political economy, digital economics, and traditional macroeconomics. He also taught in the Digital Media MBA at St. Edward’s University in Austin, Texas, where he lives when not in the Republic of Korea.

Al Gore, Atari Democrats, and the “Invention” of the Internet

Posted on | July 9, 2022 | No Comments

This is the fourth part of a narrative about how the Internet changed from a military network to a wide-scale global system of interconnected networks. Part I discussed the impact of the Strategic Defense Initiative (SDI), or “Star Wars,” on funding for the National Science Foundation’s adoption of the ARPANET. In Part II, I looked at how fears about nuclear war and Japan’s artificial intelligence (AI) project propelled early funding of the Internet. Finally, Part III introduced the “Atari Democrats” and their early role in crafting the legislation that created the Internet.

This post is a follow-up to make some points about Al Gore and the Atari Democrats’ involvement in the success of the Internet and how political leadership is needed for technological innovation and implementation. What is particularly needed is to draw lessons for future infrastructure, open-source software, and other enabling systems of innovation, social connectedness, and civic-minded entrepreneurship. These include smart energy grids, mobility networks, healthcare systems, e-commerce platforms, and crypto-blockchain exchanges.

The story of Al Gore “inventing” the Internet started after CNN’s Wolf Blitzer interviewed the Vice President in 1999 and gained traction during the 2000 presidential campaign against George W. Bush. The accusation circulated that Gore claimed he “invented the Internet,” and the phrase was used to tag the Vietnam vet and Vice President as a “liar” and someone who couldn’t be trusted. The issue says a lot about the way election campaigns operate, about the role of science and technology in the economy, and especially about the impact of governance and statecraft in economic and technological development. Here is what he actually said:

Of course, the most controversial part of this interview about Vice President Gore’s plans to announce his presidential candidacy was this statement, “During my service in the United States Congress, I took the initiative in creating the Internet.” That was turned into “inventing the Internet” and was used against him in the 2000 presidential elections. The meanings are quite different.

“Inventing” suggests a combination of technical imagination and physical manipulation usually reserved for engineers. We do, after all, want our buildings to remain upright and our tires to stay on our cars as we ride down the road. To “create” has a much more flexible meaning, indicating more of an art or a craft. There was no reason to say he invented the Internet except to frame it in a way that suggested he designed it technically, which does sound implausible.

Gore never claimed to have engineering prowess but was never able to adequately counter this critique. Gore would win the popular vote in 2000 but failed in his bid for the Presidency. The Supreme Court ruled he had lost Florida’s electoral votes in a close and controversial election in the “Sunshine” state. It’s hard to say how much this particular meme contributed to the loss but the “inventing” narrative stuck, and has persisted in modern politics in subtle ways.

The controversy probably says more about how little we understand innovation and technological development and how impoverished our conversations have been about the development of data networks and information technologies. The history of information technologies and particularly communications networking has been one of the interplay between technical innovation, market dynamics and intellectual leadership guiding policy actions, including military and research funding.

The data communications infrastructure, undoubtedly the world’s largest machine, required a set of political skills, both collective and individualized, to be implemented. In addition to the engineering skills that created the famed data packets and their TCP/IP (Transmission Control Protocol and Internet Protocol) protocols, political skills were needed for the funding, for the regulatory changes, and the global power needed to guide the international frameworks that shape what are now often called Information and Communications Technologies (ICT). These frameworks included key developments at the International Telecommunications Union (ITU), the World Intellectual Property Organization (WIPO) and the World Trade Organization (WTO).

Al Gore got support from those generally considered to be the “real inventors” of the Internet. While Republicans continued to ridicule and “swiftboat” Gore for supposedly claiming he “invented the Internet,” many in the scientific community, including the engineers who designed the Internet, verified Gore’s role. Robert Kahn and Vinton Cerf acknowledged Gore’s initiatives as both a Congressman and a Senator.

    As far back as the 1970s Congressman Gore promoted the idea of high speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship. Though easily forgotten, now, at the time this was an unproven and controversial concept. Our work on the Internet started in 1973 and was based on even earlier work that took place in the mid-late 1960s.

    But the Internet, as we know it today, was not deployed until 1983. When the Internet was still in the early stages of its deployment, Congressman Gore provided intellectual leadership by helping create the vision of the potential benefits of high speed computing and communication. As an example, he sponsored hearings on how advanced technologies might be put to use in areas like coordinating the response of government agencies to natural disasters and other crises. – Vint Cerf

Senator Gore was heavily involved in the 1980s in sponsoring legislation to research and connect supercomputers. Gore was an important member of the “Atari Democrats.” Along with Senator Gary Hart, Robert Reich, and other Democrats, he pushed forward “high tech” ideas and legislation for funding and research. The meaning of the term varied, but “Atari Democrat” generally referred to a pro-technology and pro-free trade social liberal Democrat.

Atari was a very successful arcade, console, and software game company in the 1970s. Started by Nolan Bushnell, it gave a start to Steve Jobs and Steve Wozniak, among others. The term began to stick to some Democrats around 1982 and generally linked them to the Democrats’ Greens and an emerging “neo-liberal” wing. It also suggested that they were “young moderates who saw investment and ‘high technology’ as an answer and complement to the New Deal.” [1]

The New York Times discussed the Atari Democrats and tensions that emerged during the 1980s between the traditional Democratic liberals and the Atari Democrats. The latter were attempting to find a middle ground on the economy and international affairs with the Republicans while the former courted union workers, many of whom had shifted to Reagan and the Republicans.[2]

One of the emerging issues of the time was the trade deficit with Japan, whose cars and electronics were making significant inroads into the US economy. Gore and other Democrats were particularly concerned about losing the race for artificial intelligence. The Japanese “Fifth Generation” AI project was launched in 1982 by Japan’s Ministry of International Trade and Industry (MITI), which had a reputation at the time for guiding Japan’s very successful export strategy.

Known as the national Fifth Generation Computer Systems (FGCS) project, the AI effort was carried out by ICOT (later the Advanced IT Research Group, AITRG), part of the Japan Information Processing Development Corporation (JIPDEC), a research institute that brought in the Japanese computer manufacturers (JCMs) and a few other electronics industry firms. A major US concern was that the combination of government involvement and the keiretsu corporate/industrial structure of the Japanese political economy would give them a major advantage in advanced computing innovations.

Congress was concerned about the competition over high-speed processors and new software systems that were recognized at the time as crucial components in developing a number of new military armaments, especially the space-based “Star Wars” missile defense system that President Reagan had proposed as the Strategic Defense Initiative (SDI). Any system of satellites and weaponry forming a defensive shield against nuclear attack would need advanced microprocessors and supercomputing capabilities. It would need artificial intelligence (AI).

The likely vehicle for this research was the National Science Foundation (NSF), the brainchild of Vannevar Bush, who managed science and technology for the US during World War II, including the Manhattan Project that created the atomic bomb. The NSF was formed during the 1950s with established areas of research in biology, chemistry, mathematics, and physics. In 1962, it set up its first computing science program within its Mathematical Sciences Division. At first, it encouraged the use of computers in each of these fields and later moved toward providing a general computing infrastructure, including setting up university computer centers that would be available to all researchers. In 1968, an Office of Computing Activities began subsidizing computer networking, funding some 30 regional centers to help universities make more efficient use of scarce computer resources and timesharing capabilities.

In 1984, a year after TCP/IP was institutionalized by the military, the NSF created the Office of Advanced Scientific Computing, whose mandate was to create several supercomputing centers around the US. [2] Over the next year, five centers were funded by the NSF.

General Atomics — San Diego Supercomputer Center, SDSC
University of Illinois at Urbana-Champaign — National Center for Supercomputing Applications, NCSA
Carnegie Mellon University — Pittsburgh Supercomputer Center, PSC
Cornell University — Cornell Theory Center, CTC
Princeton University — John von Neumann National Supercomputer Center, JvNC

However, it soon became apparent that they would not adequately serve the scientific community. Gore began to support high-performance computing and networking projects, particularly the National Science Foundation Authorization Act, to which he added two amendments: one calling for more research on the “Greenhouse Effect” and the other calling for an investigation of future options for communications networks to connect research computers. This Computer Network Study Act would specifically examine the requirements for data transmission conducted through fiber optics, data security, and software capability. The NSF’s decision in 1985 to choose TCP/IP as the protocol for the planned National Science Foundation Network (NSFNET) would pave the way for the Internet.

In the Supercomputer Network Study Act of 1986, Gore proposed directing the Office of Science and Technology Policy (the Office) to study critical issues and options regarding communications networks for supercomputers at universities and federal research facilities in the United States, and to report the results to Congress within a year. The bill was attached to Senate Bill S. 2184, the National Science Foundation Authorization Act for Fiscal Year 1987, but it was never passed.

Still, a report was produced that pointed to the potential role of the NSF in networking supercomputers, and in 1987 the NSF agreed to manage the NSFNET backbone with Merit and IBM. In October 1988, Gore sponsored additional legislation for “data superhighways” in the 100th Congress: S. 2918, the National High-Performance Computer Technology Act of 1988. It was followed by H.R. 3131, the National High-Performance Computer Technology Act of 1989, sponsored by Rep. Doug Walgren, which would amend the National Science and Technology Policy, Organization, and Priorities Act of 1976 and direct the President, through the Federal Coordinating Council for Science, Engineering, and Technology (the Council), to create a National High-Performance Computer Technology Plan and to fund a 3-gigabit network for the NSFNET.

This paved the way for S. 272, the High-Performance Computing and the National Research and Education Network bill (1991-1992), sponsored by Al Gore, which passed and was signed by President George H.W. Bush on December 9, 1991. Often called the Gore Bill, it led to the development of the National Information Infrastructure (NII) and the funding of the National Research and Education Network (NREN).

The Gore Bill began discussion of the “Information Superhighway” that enticed cable, broadcast, telecommunications, satellite, and wireless companies to start developing their digital strategies. It also provided the groundwork for Gore’s international campaign for a Global Information Infrastructure (GII) that would lead to the relatively seamless and cheap data communications of the World Wide Web.

Its $600 million appropriation also funded the National Center for Supercomputing Applications (NCSA) at the University of Illinois, where graduate student Marc Andreessen and others created Mosaic, the early Web browser that became Netscape. The Netscape IPO started the Internet’s commercial boom of the 1990s.

As chairman of the Senate Subcommittee on Science, Technology, and Space, Gore held hearings on these issues. During a 1989 hearing colloquy with Dr. Craig Fields of ARPA and Dr. William Wulf of NSF, Gore solicited information about what constituted a high-speed network and where the technology was headed. He asked how much sooner NSFNET speed could be enhanced 30-fold if more federal funding were provided. During this hearing, Gore made fun of himself in an exchange about high-speed networking speeds: “That’s all right. I think of my [1988] presidential campaign as a gigaflop.” [The witness had explained that “gigaflop” referred to one billion floating-point operations per second.]

It’s not my intention to obsess over the man or the personality. Rather, Gore is interesting because he has been a successful legislator and a mover of public opinion. He can also take credit for much of the climate change discussion. He has worked hard to bring the topic to the public’s attention and to mobilize action on markets for carbon credits and the acceleration of developments in alternative energy. His actions and tactics are worth studying, as we need more leaders, perhaps “Atari Democrats,” who can create positive futures rather than obsessing over tearing down what we have.

Notes

[1] Dionne, E. J. “Washington Talk; Greening of Democrats: An 80’s Mix of Idealism and Shrewd Politics.” The New York Times, 14 June 1989, www.nytimes.com/1989/06/14/us/washington-talk-greening-democrats-80-s-mix-idealism-shrewd-politics.html. Accessed April 24, 2019.

[2] Wayne, Leslie. Designing a New Economics for the “Atari Democrats.” The New York Times, 26 Sept. 1982, www.nytimes.com/1982/09/26/business/designing-a-new-economics-for-the-atari-democrats.html.


© ALL RIGHTS RESERVED

Anthony J. Pennings, PhD is a Professor at the State University of New York, South Korea. Previously, he taught at St. Edward’s University in Austin, Texas, and was on the faculty of New York University from 2002 to 2012. He began his teaching career at Victoria University in Wellington, New Zealand, and was a Fellow at the East-West Center in Hawaii in the 1990s.

U.S. Internet Policy, Part 6: Net Neutrality, Broadband Infrastructure and the Digital Divide

Posted on | June 4, 2022 | No Comments

In the era of the COVID-19 pandemic, the Internet proved to be more critical than ever. Parents telecommuted to work, kids telelearned at home, and streaming media services entertained both. Sick people began to use telemedicine instead of visiting the doctor or hospital. Many people applied for jobs online. Families also embraced the relative safety and convenience of e-commerce delivering commodities to the home.

Yet broadband services in the US were often not available, affordable, or up to the task of providing all the people in a home with the bandwidth they needed. Previously, I examined the decline of ISPs and the dominance of traditional telcos over broadband. I also wrote about the Trump administration’s support for ending net neutrality at the FCC. This post looks at the plans to increase funding for broadband infrastructure in the US and lists some of the challenges facing the Biden administration’s Internet policy.

The digital divide proved to be more consequential than ever as the K-shaped recovery took shape, exacerbating income divisions. The divide has been particularly stressful on American families as schools and other activities for kids closed down during the COVID-19 pandemic. Some 20 million Americans had no, or very slow, Internet service, while another 100 million could not afford broadband.

Inequalities were deepened by the types of jobs affected: contact jobs in service industries were particularly hard hit, while more professional jobs that could be conducted online did well. Also, financial assets continued to appreciate due to the Federal Reserve’s low interest rates, while quantitative easing kept mortgages cheap and raised home prices.

During his first 100 days, Biden proposed a $2.25 trillion infrastructure package focused on updating transportation infrastructure, as well as funding to fight climate change and other provisions to prop up American families. It also incorporated the Accessible, Affordable Internet for All Act, introduced in early 2021 by US Senator Amy Klobuchar (D-MN), co-chair of the Senate Broadband Caucus, and House Majority Whip James E. Clyburn (D-SC). This plan to modernize underserved communities involved:

– $80 billion to deploy high-speed broadband infrastructure;
– $5 billion for a related secured loan program; and a
– New office within the National Telecommunications and Information Administration (NTIA) to monitor and ensure transparency in these projects.

In early November 2021, the House passed the Infrastructure Investment and Jobs Act 228-206, allocating over US$1 trillion in spending for much-needed infrastructure, with $579 billion in new spending, including US$65 billion for broadband.

The Senate had organized and passed the bill in the summer but it was held up in the House of Representatives. The House progressives wanted the bill tied to a third phase of the Build Back Better program with increased social spending for healthcare, new housing, and climate change. Eventually, Speaker of the House Nancy Pelosi organized enough votes to pass the measure independently and President Biden signed the bill on November 15, 2021.

Infrastructure Bill

The infrastructure bill allocates money to three major areas: direct payments to consumers to pay for broadband services; support for municipal networks, including those in tribal areas; and subsidies for major companies to build out more broadband infrastructure, such as fiber optic lines and wireless base stations. The money is destined for states that can demonstrate groups in need.

It allocates $14 billion to help low-income Americans pay for service at about $30 a month. In the spring of 2022, President Biden announced progress in the Affordable Connectivity Program, an extension and revision of the Emergency Broadband Benefit.

The digital divide emerged as a much-needed policy issue again due to the priorities of the COVID-19 pandemic and changes in education and work. More and better access, especially in rural areas, became a high priority. Major broadband providers have organized to take advantage of infrastructure spending and to comply with the administration’s request to provide $30 monthly broadband subsidies for eligible households.

Several issues linger in the public’s consciousness depending on media attention. Some have come to the forefront of public scrutiny. These include:

– Even more, better, and cheaper broadband access through mobile, satellite, and wireline facilities, especially in rural areas. Broadband strategies must also consider the implications of SpaceX’s Starlink satellite network.

Also, we should be looking at a wider range of Internet issues such as:

– Antitrust concerns about cable and telco ISPs, including net neutrality. [1]
– Privacy and the collection of behavioral data by platforms to predict, guide, and manipulate online user actions.
– Section 230 reform for Internet platforms and content producers, including assessing social media companies’ legal responsibilities for user-generated content.
– Security issues, including ransomware and other threats to infrastructure.
– Deep fakes, memes, and other issues of misrepresentation, including fake news.
– eGovernment and digital money, particularly the role of blockchain and cryptocurrencies as the Internet moves to Web 3.0.

Citation APA (7th Edition)

Pennings, A.J. (2022, Jun 4). U.S. Internet Policy, Part 6: Broadband Infrastructure and the Digital Divide. https://apennings.com/telecom-policy/u-s-internet-policy-part-6-broadband-infrastructure-and-the-digital-divide/


Notes

[1] Newman, R. (2016). The Debate Nobody Knows: Network Neutrality’s Neoliberal Roots and a Conundrum for Media Reform. International Journal of Communication, 10, 5969–5988.

© ALL RIGHTS RESERVED



Anthony J. Pennings, Ph.D. is a Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Victoria University in New Zealand, where he studied the changes in that country’s telecommunications industry. He also researched telecommunications issues at the East-West Center in Honolulu, Hawaii, and was associated with the Pacific Telecommunications Council (PTC). He keeps his American home in Austin, Texas.

Analyzing YouTube Channels

Posted on | May 2, 2022 | No Comments

The video capabilities of the Internet have made possible a new era of media analysis. The traditions of film and television can be brought to the new online media, including YouTube. The challenge will be both to apply traditional media analysis and to suggest new modes of televisual criticism for this relatively new medium.[1]

This post reviews some of the techniques used in cinematic and television production and suggests a strategy to analyze YouTube channels based on work in my Visual Rhetoric and IT class. It uses a semiotic (study of meaning) framework to examine the channel imagery (mise-en-scène) and editing (montage) to determine what might make such videos successful. It applies denotative and connotative strategies to describe the meaning-making practices with vocabulary and explain what YouTube creators do to inform/entertain the audience.[2]

Storytelling is an important consideration. Who is telling what story, and how? What narrative devices are being used? How does it engage the audience? The channel Lessons from the Screenplay (LFTS) discusses the narrative of how the new James Bond was introduced in 2006. Note who is speaking, whether you actually see them, and how the story is being told.

What are the main signifying practices that make the YouTube channel a success? A semiotic approach looks closely at the details visible in the video (denotation). It then connects the content with connotative meanings such as social myths and narratives. These details would include various “signs” such as indexical metrics, mainly subscribers, views, likes, and the money YouTubers make. It also looks at the typographies, logos, and other meaning-making practices, such as camera pictures, and how those images are spliced together.

The shot continues to be the major unit of analysis, reflecting the relationship between the camera and the scene. The primary visual grammar holds for the most part: establishing or long shot (LS), medium shot (MS), closeup (CU). The wider shot creates a meaningful context for the tighter images that provide detail and usually heightened emotion. The smartphone now offers an extraordinarily high-quality camera for single shots or high-definition video that can get YouTube channelers started.

Ask more about the mise-en-scène. What do you see in the image? The lighting? The props? The backgrounds? The special effects (FX)? Is a drone used for bird’s-eye shots? How is the camera used to create the shot: zooms, pans, tilts? Who is doing the shooting? What camera techniques are being used? And why? What is the motivation or reason for each technique?

A bit more challenging is the editing. Applications like iMovie, or just hiring someone on Fiverr.com, have been helpful for YouTube channelers. The analytical challenge is to keep up with the vocabulary and to understand what the montage is doing in terms of “building” meaning in the narrative.

A narrative is a series of connected events edited for chronological flow and significance. Montage editing can include parallel editing, flashbacks in time, and flash-forwards as well. What about the montage is noteworthy? What is the pace of the editing? What transitions are being used, and, once again, what is the motivation?

Ask what drives the narrative. Who is the storyteller? Who is the narrator? Are they like the anchor in a news program? What is the mode of address: voice-over, talking to the camera, or a combination? Do we see him or her? Is it combined with a voice-over? Or is it all told from a voice of anonymous authority, the voice of “God” booming over the montage of images?

Also relevant are the microeconomics of the channel and the overall political economy of YouTube. Understanding how YouTube channels make money, whether through AdSense, brand arrangements, or, more recently, patronage, helps in understanding a channel’s messaging. In the latter case, individuals and organizations become patrons on platforms like Patreon by pledging to support a channel financially, including smaller payments of gratitude at websites such as Buy Me a Coffee.

Recommendation engines are key to understanding viewer captivity on YouTube. Using a sophisticated computer algorithm and data collection system, YouTube finds content related to your search or the content you are watching. These computer programs reduce what could become complex search and decision processes to just a few recommendations. The engine lists a series of video thumbnails based on metadata from the currently viewed video and information gathered about your past viewings and searches.
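YouTube’s actual recommendation system is proprietary and far more sophisticated (deep models over watch history), but a toy sketch of the metadata idea, ranking candidate videos by tag overlap with the one being watched, illustrates the basic mechanism. All titles and tags below are invented:

```python
# Toy content-based recommender: rank candidates by Jaccard similarity of tags.

def jaccard(a: set, b: set) -> float:
    """Overlap of two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

watching = {"semiotics", "film", "editing"}
candidates = {
    "Montage explained":  {"film", "editing", "history"},
    "Intro to semiotics": {"semiotics", "language", "signs"},
    "Cooking pasta":      {"food", "recipe"},
}

ranked = sorted(candidates.items(),
                key=lambda kv: jaccard(watching, kv[1]), reverse=True)
for title, tags in ranked:
    print(f"{jaccard(watching, tags):.2f}  {title}")  # highest overlap first
```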

In February 2005, Chad Hurley, Steve Chen, and Jawed Karim established YouTube in San Bruno, California. YouTube has since become the second-most visited website worldwide, with 14 billion monthly views. Only Google, from the same parent company, Alphabet, has more. By 2022, almost 5 billion videos were watched on YouTube every single day, and 694,000 hours of video were streamed on YouTube each minute. More than half of those YouTube views were on mobile devices.

This particular form of YouTube analysis asks: what signifying practices make a YouTube channel a success?[3] It applies denotative and connotative semiotic methodologies to describe and explain what creators do in their videos to educate and/or entertain the audience. This process trains students to understand what techniques are used and their impact. Hopefully, it also contributes to the growing area of YouTube Studies in academia.

Citation APA (7th Edition)

Pennings, A.J. (2022, May 2) Analyzing YouTube Channels. apennings.com https://apennings.com/media-strategies/analyzing-youtube-channels/


Notes

[1] Pennings, A. (2020, Feb 2). YouTube Meaning-Creating (and Money-Creating) Practices. Retrieved from https://apennings.com/media-strategies/youtube-meaning-creating-practices/ Accessed May 1, 2022.
[2] Our class uses a series of web pages by Daniel Chandler to learn basic televisual grammar and vocabulary, semiotics and signs, as well as denotation, connotation, and myth. His book Semiotics: The Basics is also very useful.
[3] According to Karol Krol, there are several features or qualities that make a YouTube channel successful: continuous posting, using an angle, content quality, and content that is international.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a professor at the Department of Technology and Society, State University of New York, Korea, and the Undergraduate Program Director for the BS in Technological Systems Management. Before joining SUNY, he was on the faculty of New York University, where he created and co-managed the BS in Digital Communications and Media. He began teaching cinema studies at the University of Hawaii as a PhD student at the East-West Center in Honolulu, Hawaii.

Wireless Charging Infrastructure for EVs: Snack and Sell?

Posted on | April 22, 2022 | No Comments

I recently chaired the defense of a PhD dissertation on patents for electric vehicle (EV) charging. Soonwoo (Daniel) Chang, with his advisor, Dr. Clovia Hamilton, did a great job mapping trends associated with the technology patents central to wireless electric charging, including resonant induction power transmission (RIPT).[1] The research got me interested in exploring more about the possibilities of wireless charging as part of my Automatrix series, especially since the US government is investing billions of dollars in developing electric charging infrastructure throughout the country.

This post discusses some of the issues related to the wireless charging of EVs. Cars, but also buses, vans, and long-haul trucks, are increasingly using electric batteries for locomotion instead of petroleum-fueled internal combustion engines (ICE). It suggests that wireless charging infrastructure be considered, infrastructure that would allow EVs to partially replenish their batteries (“snack”) and, in some cases, sell electricity back to the grid. Currently, not many EV original equipment manufacturers (OEMs) are designing their automobiles for wireless charging. Why not?

Most likely, OEMs are quite aware of what they need to succeed in the short term. Tesla, for example, has developed an EV charging infrastructure viable for personal autos using cables that plug into a vehicle. But we should also be wary of some of the limitations of plug-in chargers and prepare to build a more dynamic infrastructure that would allow charging in multiple locations and without getting out of the vehicle. These are some of my speculations, and they do not necessarily reflect the results of the soon-to-be Dr. Chang, whose research sparked a lot of my thinking.

You may have used wireless charging for your smartphone. It’s helpful for getting a quick charge without dealing with plugs and wires. Inductive charging has some limitations, though. While you can play music or a podcast, or talk over a speaker mic, it’s typically immobile. Your phone also needs to be very close to the charging device, with its coils aligned properly, and it gets hot. It’s generally not that energy-efficient either, often losing more than 50% of its electricity while charging. With several devices connected in nearly every home, the losses can add up, putting strain on a community’s electrical grid.

In EV wireless charging, a receiver with a coiled wire is placed underneath the vehicle and connected to the battery. This “near field” charging requires that the vehicle be near a similar charging coil: the receiver needs to come close to a charging plate on the ground to receive the energy.

However, advances in magnetic resonance technologies have increased the distances and energy efficiencies involved in the wireless charging of EVs. Power transfer is increased by electromagnetically tuning the devices to each other with a magnetic flux, allowing convenient replenishing of a vehicle’s battery. Electric devices generally have conductive wires in a coil shape that initially maintain stability for the instrument by resisting or storing the flow of current. These coils are classified by the frequency of the current that flows through them: direct current (DC), audio frequency (AF), or radio frequency (RF). The frequencies can be managed and directed to transfer power over a short distance to another electrical coil. They are not radio waves, nor do they emit ionizing radiation with sufficient energy to detach electrons, so they appear to be relatively safe.
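The tuning itself follows from basic circuit theory: a coil’s inductance and an added capacitance set the resonant frequency, f = 1/(2π√(LC)). A small sketch, with an assumed coil inductance, shows how a designer would pick the capacitance to land on the 85 kHz band used for EV charging:

```python
# Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C)).
# The coil inductance below is an assumption chosen for illustration.
import math

L = 200e-6       # coil inductance in henries (assumed)
f_target = 85e3  # target frequency in hertz

C = 1 / ((2 * math.pi * f_target) ** 2 * L)
print(f"required capacitance: {C * 1e9:.1f} nF")   # ~17.5 nF

f_check = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency: {f_check / 1e3:.1f} kHz")  # back to ~85 kHz
```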

WiTricity, for example, uses frequencies around 85 kHz to tune the two coils and extend the charging range from a few millimeters to tens of centimeters. The Massachusetts company has spearheaded the development of this technology and opened up many possibilities, particularly for its use in the public sphere. This may also include dynamic charging that allows a vehicle to charge while moving. See the video below about WiTricity:

It’s no secret that charging issues are a limiting factor for EV diffusion. Drivers of ICE vehicles have resigned themselves to getting out of their cars, organizing a payment, inserting the nozzle into the gas tank, and waiting patiently for a few minutes while the liquid hydrocarbons pour in. Unfortunately, EV owners are still dealing with a lack of available charging locations as well as challenges with plug standards, payment systems, long lines, and lengthy charging periods. Currently, most EV owners in the US charge at home with Level 2 (L2) chargers that can readily be bought online or at a Home Depot. EV owners in urban areas need to find other locations, such as parking garages, and may face fines for staying too long.

Standards both permit and restrict technological solutions. The Society of Automotive Engineers (SAE) published the J2954 standard in 2020 for wireless charging, with three power levels: WPT1 (3.7 kW), WPT2 (7 kW), and WPT3 (11 kW), for transfer across gaps of up to 10 inches. At these rates, a full charge generally may take up to three and a half hours.[2]
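
As a rough back-of-the-envelope check on that estimate, charging time is approximately battery capacity divided by charging power. Assuming an illustrative mid-size pack of about 40 kWh (my assumption, not a figure from the standard):

    t \approx \frac{E_{\text{battery}}}{P_{\text{charge}}} = \frac{40\ \text{kWh}}{11\ \text{kW}} \approx 3.6\ \text{hours}

which is consistent with the three-and-a-half-hour figure for WPT3.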

Yes, these are not impressive numbers given the competition from high-end EV superchargers. Even the EVgo chargers at my local Walmart in Austin support several standards (J-1772, CHAdeMO, CCS/SAE) and charge at 14.4 to 50 kW. Note that EVs have onboard chargers with varying acceptance rates (in kW) that convert the AC electricity found in homes to the DC a car battery needs to store. A Chevy Volt with a 3.3 kW acceptance rate will not charge as fast as a Tesla Model S with 10 kW, no matter where it is plugged in. Other factors include charging cables, which can often be awkward to handle. The experimental charging “snake” that reaches out and automatically finds and connects to the car’s charging port does not seem to be viable yet. So, charging is a limiting factor for EV adoption.
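
The acceptance-rate bottleneck is easy to see in a short calculation. Below is a minimal sketch in Python; the 60 kWh pack size and the 90 percent AC-to-DC conversion efficiency are my own illustrative assumptions, while the 3.3 kW and 10 kW acceptance rates are the ones mentioned above:

    def hours_to_charge(battery_kwh, station_kw, acceptance_kw, efficiency=0.9):
        # The onboard charger caps usable power at its acceptance rate,
        # no matter how much the station can supply; efficiency is an
        # assumed AC-to-DC conversion factor.
        usable_kw = min(station_kw, acceptance_kw) * efficiency
        return battery_kwh / usable_kw

    # Same 60 kWh pack on a 7 kW feed, different acceptance rates:
    print(hours_to_charge(60, 7.0, 3.3))   # ~20 hours at 3.3 kW acceptance
    print(hours_to_charge(60, 7.0, 10.0))  # ~9.5 hours at 10 kW acceptance

Holding the pack size and the supply constant isolates the effect: the onboard charger, not the station, often sets the pace.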

The strategy behind wireless charging will probably focus more on what WiTricity has coined “power snacking” than on full meals of electricity. Snacking is actually better for your battery, as longevity improves if you neither let the capacity run down below 20 percent nor recharge it to 100 percent. Keeping the lithium ions in equilibrium across the battery reduces strain and increases the number of charge cycles before degradation occurs.

The snacking can be done by a waiting taxi, a bus stopping for a queue of passengers, a quick stop at a convenience store, and perhaps EVs waiting at a red light. Shopping centers are likely to “capture” customers with charging stalls, especially if they can reduce costs by having micro-grids with solar panels on roofs.

Many countries have tested infrastructure for charging EVs in motion, although this will require substantially more investment. “Dynamic” wireless charging appears to be feasible but comes with high costs as it needs to be embedded in existing infrastructure such as bridges and highways.

The major issues for wireless charging are the infrastructure changes needed and OEM buy-in. Wireless charging will require more planning and construction than current charging stations. It will also require monitoring and payment applications and systems. Most importantly, it will require electricity, and without significant capacity coming from renewable sources, the purpose will be largely defeated. Vehicle manufacturers will need to include the wireless charging pads and ensure safety. On the positive side, they can use smaller batteries in many models, as constant recharging will reduce range anxiety.

Wireless technologies have been successfully tested for “vehicle-to-grid” (V2G) transmission. This innovation means a car or truck can sell electricity back to the grid. It might be particularly useful for charging locations off the grid or in places challenging to connect, such as charging pads at a red light. So we might see a “snack or sell” option in future cars. Prices are likely to vary by time, location, and charging speed, and this setup will present some arbitrage opportunities.

The arbitrage economics are based on “valley filling,” when EVs charge during low-demand hours, often overnight, and “peak shaving,” when an EV transmits stored energy back into the grid during high-demand hours. So, for example, a vehicle that charges at home overnight on cheaper grid electricity, or on excess from solar panels, can sell that energy on the way to work or in the office parking lot. You might not get rich, but considering the money currently spent on diesel or petrol, it could still help your wallet.
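
A minimal sketch of that arbitrage math follows; the prices and the 85 percent round-trip battery efficiency are illustrative assumptions of mine, not measured figures:

    def v2g_profit(kwh_bought, offpeak_price, peak_price, round_trip_eff=0.85):
        # Buy energy at off-peak rates; only the round-trip-efficient
        # fraction survives the battery cycle to be resold at peak rates.
        cost = kwh_bought * offpeak_price
        revenue = kwh_bought * round_trip_eff * peak_price
        return revenue - cost

    # Cycling 20 kWh daily, $0.08/kWh overnight vs. $0.25/kWh at peak:
    print(v2g_profit(20, 0.08, 0.25))  # ~$2.65 per day

A couple of dollars a day is modest, but it accumulates over a year of commuting, which is the point about helping your wallet.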

Effective infrastructure like highways or the Internet provides indirect network effects, allowing different entities to use the system and expanding the network’s possibilities. The Global Positioning System (GPS), for example, uses some 27 satellites that transmit pulsed time codes. These signals allowed a multitude of devices to be invented that compute a latitude, longitude, and altitude from signal travel times and provide different location services. In this case, an effective wireless charging infrastructure would enable many different vehicles to use the electrical network. Much of the wireless charging infrastructure will be built by corporate fleets like Amazon, Best Buy, and Schindler Elevator. Hopefully, the US Post Office will catch up.
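
As a toy illustration of the positioning idea, here is a two-dimensional sketch that assumes perfectly synchronized clocks and no measurement noise, which is far simpler than real GPS (receivers actually solve in three dimensions and use a fourth satellite to cancel their own clock error):

    import numpy as np

    C = 299_792_458.0  # speed of light in m/s

    def locate_2d(anchors, delays):
        # Convert signal travel times to ranges, then linearize the
        # range equations against the first anchor and solve the
        # resulting system by least squares.
        anchors = np.asarray(anchors, dtype=float)
        r = C * np.asarray(delays, dtype=float)  # range = speed x time
        A = 2 * (anchors[1:] - anchors[0])
        b = (r[0]**2 - r[1:]**2
             + (anchors[1:]**2).sum(axis=1) - (anchors[0]**2).sum())
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Three transmitters at known spots; travel times from a device at (30, 40):
    anchors = [(0, 0), (100, 0), (0, 100)]
    delays = [np.hypot(30 - x, 40 - y) / C for x, y in anchors]
    print(locate_2d(anchors, delays))  # approximately [30. 40.]

The same logic applies to any shared signaling infrastructure: once the network exists, many independent devices can be built on top of it.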

However, the US government made a down payment on EV charging stations in the 2021 infrastructure bill. Legislators originally targeted $15 billion in the Infrastructure Investment and Jobs Act for “a national network of electric vehicle (EV) chargers along highways and in rural and disadvantaged communities.” That amount was cut in half, to $7.5 billion, with funds redirected to provide much-needed electric bus fleets for schools and municipalities.[3] Should future infrastructure spending target wireless charging?

Once we move beyond the internal combustion engine, we will see a lot more flexibility in the design of autos, buses, vans, trams, etc. Electric vehicles require fewer parts and are easier to construct. We see it at the lower end with electric bikes and even those controversial electric scooters. New forms of electric autonomous trolleys and vans are needed to revive urban transportation in a quiet and sustainable way. All these changes in mobility will require changes in the electrical infrastructure.

The term “Smart Mobility” has emerged as an important sociological and technical construct. The city of Austin, Texas, describes it this way:

    Smart Mobility involves deploying new technology to move people and goods through the city in faster, safer, cleaner, more affordable and more equitable ways. Our mission is “to lead Austin toward its mobility future through meaningful innovation, collaboration, and education.”

Smart devices have expanded to smart vehicles. Autonomy is becoming prevalent, and some cars and trucks offer full self-driving (FSD) options, with or without a passenger. Wireless charging is central to this process. Auto-valet services, for example, will allow your car to drop you off and park itself, likely in a stall that can provide charging. Who is going to get out to plug it in?

Notes

[1] Gaining Competitive Advantage with a Performance-Oriented Assessment using Patent Mapping and Topic Trend Analysis: A Case for Comparing South Korea, United States and Europe’s EV Wireless Charging Patents. A 2022 PhD dissertation by Soonwoo (Daniel) Chang at Stony Brook University in New York. He can be reached at sdchang8@gmail.com.

[2] A kilowatt (kW) is 1,000 watts of electrical power; a kilowatt-hour (kWh) is the energy delivered by one kilowatt flowing for one hour. The watt, named after James Watt, whose improvements made the steam engine practical, is the unit of electrical power equal to one ampere under the pressure of one volt (Volts x Amps = Watts). The time it takes to charge a car depends on both the car’s acceptance rate and the amount of power sent by the charging station.

[3] Known officially as the Infrastructure Investment and Jobs Act, it authorized over half a trillion dollars in spending for airports, bridges, broadband, rail, roads, and water systems. It also included up to $108 billion for public transportation such as rail, part of the largest federal investment in public transit in the nation’s history. Another $73 billion went to upgrades of the electrical grid to transmit higher loads while efficiently collecting and allocating energy.

Citation APA (7th Edition)

Pennings, A.J. (2022, Apr 22). Wireless Charging Infrastructure for EVs: Snack and Sell? apennings.com https://apennings.com/mobile-technologies/wireless-charging-infrastructure-for-evs-snack-and-sell/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at St. Edwards University in Austin, Texas. Originally from New York, he taught at Marist College and from 2002-2012 was on the faculty of New York University, where he taught digital economics and information systems management. His first academic job was at Victoria University in New Zealand. He was also a Fellow at the East-West Center in Honolulu, Hawaii.

The Great Monetary Surge of 2020 and the Return of Inflation

Posted on | March 4, 2022 | No Comments

COVID-19 hit dramatically in early 2020, sending the financial markets around the world reeling and unemployment soaring. The immediate response in the US was a monetary surge led by both the Federal Reserve Bank and legislation by the US Congress.

The rising money tide reversed the economic collapse, alleviated some of the harshness of the K-shaped recovery, and helped many companies weather the sudden downturn.[1] These 2020 monetary and fiscal spikes continued to reverberate through the economy, increasing the value of financial assets and reducing unemployment, but also adding to inflationary pressure.

This post examines the 2020 monetary surge and compares it with the additional spending in 2021 and the slight declines entering 2022. What impact did the second Trump stimulus have in December 2020? How much additional liquidity did the Biden “Build Back Better” bills introduce in 2021? Finally, how much did the Fed’s continuing Quantitative Easing (QE), zero reserve ratios, and low interest rates contribute to the rise of inflation?

The chart below shows the amount of money in the economy over the last few decades and the dramatic jump in 2020.[2]

The Fed moved first as COVID-19 emerged. It slashed its interest rate targets and let banks lend without restriction. The Fed Funds rate went down to nearly zero again, while bank reserve ratios were reduced from 10 percent to 0 percent. That meant banks could lend freely without holding a fraction of deposits in reserve. Meanwhile, the Fed called desperately for government spending to assist the emergency and recovery process. Senate Leader Mitch McConnell responded.

Fiscal policy took shape quickly in the Senate. All US spending bills must originate in the House of Representatives, so the Senate used a previous House bill to create the CARES Act. The Middle Class Health Benefits Tax Repeal Act, originally introduced in the U.S. Congress on January 24, 2019, was used as a “shell bill” to begin working on economic and public health relief for the accelerating COVID-19 pandemic.

The Senate added content to combat the virus and protect faltering businesses and the unemployed. Then, on March 27, 2020, President Trump signed the $2.2 trillion CARES (Coronavirus Aid, Relief, and Economic Security) Act into law. Jared Kushner, the President’s son-in-law, became the “coronavirus czar” in charge of personal protective equipment (PPE) and other medical equipment needed to treat Covid patients.

The bill also included the Paycheck Protection Program (PPP) and its famous Section 1106, which provided forgiveness of up to the full principal amount of qualifying loans. The PPP was replenished the next month with an additional $484 billion.

The chart below extends the one above into 2021 and the beginning of 2022, showing the amounts of M1 money in the economy (currency, demand deposit bank accounts, and other liquid deposits such as other checkable deposits (OCDs), savings deposits, and money market deposit accounts), including the dramatic jump in the spring of 2020.

Money supply

So, 2020 saw about $3.7 trillion in additional fiscal spending for COVID-19 relief, on top of the Fed’s QE and its reduction of the Fed Funds rate from 1.5% to near zero. These numbers show the dramatic economic stimulus that quickly turned the economy around, as indicated partially by the Dow Jones Industrial Average (DJIA) of 30 major US companies (shown below), by firms’ pricing power, and by the turnaround in unemployment.

Stock market since 2020

It was a pretty impressive response that spring of 2020. The Fed hit hard with the reserve ratio, the Fed Funds rate, and QE. GOP Senate Leader Mitch McConnell pulled together some impressive legislation with the CARES Act. Having done about all it could to jack up the money supply in 2020, the Fed begged for fiscal support. The Senate and House of Representatives delivered.

Compare 2020 with the pandemic spending in 2021. Biden’s proposed Build Back Better plan was conceived to address COVID-19’s K-shaped recovery, specifically to strengthen the middle class, provide care for the elderly, increase affordable housing, and reduce child poverty. First on the agenda was the American Rescue Plan, passed and signed in March 2021 for about $1.9 trillion.[3]

Second was the bipartisan infrastructure bill, with $579 billion in new spending added to what had already been planned. While not specifically coronavirus relief, the Infrastructure Investment and Jobs Act (IIJA) added $1.2 trillion over ten years ($120 billion a year) to the economy. This included money for roads and bridges, broadband, mass transit, and even a network of EV charging stations.

Infrastructure Bill

Concerns about inflation increased during 2021, especially after President Biden signed a new bill for COVID-19 relief in the spring and the bipartisan infrastructure bill in the summer. A third bill, the American Families Plan, or BBB III, was pushed by the progressive wing of the Democratic Party. It proposed additional spending of up to $6 trillion and was heavily debated, but it did not get the support of two Democratic Senators and was never voted on. The media narrative was very negative, not about the bill itself, but about the bickering within the Democratic Party.

So roughly $2 trillion was added in 2021, about half as much as the 2020 stimulus spending. It was still a lot of money, on top of the bank money created through continued QE and low interest rates. The roughly $5 trillion in domestic relief spending in 2020 had already raised concerns about excess demand, especially in light of global reductions in the supply of goods and services. It was a lot to add to an economy whose supply of goods and services had been depleted by unemployment, closed factories, seaport bottlenecks, and rising energy prices.

But at the risk of sounding like Condoleezza Rice after 9/11, no one ever imagined the supply of goods and services would come under such pressure as COVID-19 continued. Inflation needs money but is also a matter of supply, and market failures can dramatically affect rising prices.

Inflation is a complex interplay between effective demand and the provision of goods and services. Its calculation also needs to consider the pricing power of major suppliers. Inflation can be demand-pull or cost-push. Demand-pull inflation occurs when an increase in demand (too much money) outpaces the ability of the production of goods and services to keep up, typically resulting in higher prices.
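
One compact way to frame the demand-pull side is the textbook quantity-of-money identity, which I add here for illustration (a stylized accounting relationship, not a formula the post itself invokes):

    M \times V = P \times Q

where M is the money supply, V its velocity of circulation, P the price level, and Q real output. If M surges while Q is held down by supply shocks, and V does not fall enough to compensate, P has to rise. The 2020-2022 episode put pressure on both sides of the identity.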

Cost-push inflation happens when prices increase due to the rising costs of raw materials and wages. The Russian invasion of Ukraine and the sanctions imposed created shortages of essential resources from both countries, such as aluminum, nickel, and carbon-based products like fertilizers, natural gas, and crude oil. It is difficult to assess the impact of food shortages that might result from the degradation of Ukrainian infrastructure. Overall, rising costs of inputs, measured by the Producer Price Index (PPI), pressure suppliers to raise prices.

But most of the current problems come from supply shortages due to the limits imposed by the COVID-19 pandemic. Tallying the delays in factory production, the chip shortages, the reduction in oil production, the restaurant closures, the shipping containers waiting at the docks, and so on gives a sense of what has gone missing from the US economy and is producing inflationary forces.

We had a fantastic shot of money into the US economy during 2020. The legislated increases continued into 2021, but nowhere near the amounts of the previous year.[3] The stimulus helped forestall an economic collapse but became part of the complex equation of supply and demand forces increasing prices. The global inflation that emerged from COVID-19 resulted from a “perfect storm” of economic and political issues.

Notes

[1] The K-shaped recovery indicated a split between groups of society that were able to adjust fairly rapidly economically and those subject to long-term unemployment and poverty.

[2] Money supply statistics from St. Louis FRED.

[3] The Trump government increased deficits almost as much as the Obama administration, except in half the time. It also spent more on stimulus in eight months of 2020 than Biden did in all of 2021. But it had the COVID-19 virus and had to respond. Biden’s expenditures, while prescient because of the Delta and Omicron variants, were a bit more elective, and Biden III was stopped by Senator Manchin (D-WV). The federal deficit totaled nearly $2.8 trillion in 2020, about $360 billion more than in 2021. So I consider the two years roughly equal and, in total, about half of the monetary part of inflation. The Fed, through QE, a 0% reserve ratio, and near-zero interest rates under Trump appointee Jerome Powell, added the other half. The Fed added $4.66 trillion to the money supply in 2020 and another $10 trillion up to February 2022. So that is a lot of money. Biden stopped the stimulus for the most part, but the Fed is still adding money.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught in the Digital MBA program at St. Edwards University in Austin, Texas. Originally from New York, he taught at Marist College and from 2002-2012 was on the faculty of New York University, where he taught comparative and digital economics. His first academic job was at Victoria University in New Zealand. He was also a Fellow at the East-West Center in Honolulu, Hawaii.

US Technology Diplomacy

Posted on | December 31, 2021 | No Comments

Edited Remarks to the Global Technology Diplomacy Forum 2021. Hosted by the Ministry of Foreign Affairs of the Republic of Korea. November 30, 2021, 13:30 (KST)

Ladies and Gentlemen, Ambassador Min,

I’d like to cover three areas today as I talk about US technology diplomacy. First, I want to talk about America’s domestic renewal and how it relates to its international diplomatic agenda. Americans across the political spectrum recognize that the US needs to change. However, they differ on the causes and the solutions. Then I will address some of the major US institutions managing technology diplomacy, including the private sector.

The State Department has the prerogative of diplomatic leadership. Still, the increasing centrality of technology has not only disrupted State’s mission objectives but has also brought other departments and agencies into the mix. Lastly, I want to end with some comments on “norms” in multilateral technology diplomacy. The “securitization” of cyber technology, with state actors involved in cyberattacks and the tensions they are creating with tech companies, is of particular concern.

So, America is rediscovering itself, undergoing domestic renewal, and once again sorting out its friends and enemies around the world. Like the rest of the world, we are experiencing dramatic technological change, pressured by the pandemic, climate pollution, decentralized finance, and changes in the global media ecosystem. I could add to this list, but I think you get the point.

America is divided. It was rocked by the MAGA movement, otherwise known as Make America Great Again. President Trump stressed border security, carbon utilization, and tax cuts while alienating many traditional international allies. He withdrew from many international forums, including the Paris Climate Accords, the World Health Organization, UNESCO, the Human Rights Council, and the Iranian nuclear negotiations.

Now the US is embarking on the Build Back Better plan. Instead of focusing on tax cuts for the more affluent, it directs money to ending the pandemic, building US infrastructure, and creating the conditions for families to return to work. It allocated $3.1 trillion in spending on top of some $5 trillion in 2020 for domestic spending, raising concerns about excess demand in light of global reductions in the supply of goods and services.

The primary goal of both administrations has been to get through this pandemic. Recent spending has shifted to infrastructure, mainly roads and bridges, mass transit, renewable energy, and even broadband. It is, in a sense, a “smart new deal,” literally a grand experiment if phase 3 is passed, one that will support parents going back to work and address climate concerns. A new influence is MMT, or Modern Monetary Theory. Rather than the pained austerity theory that hampered the Obama administration’s attempts to recover from the Great Recession, MMT delinks deficits from borrowing and taxes. It encourages government spending where appropriate while cautioning against the real risks of inflation.

So, the US is looking inwards. But it recognizes international threats and opportunities. I am not in any way a speaker for the Biden administration, but let me echo some concerns here today and start by recognizing the multiple branches and institutions with international reach that mediate science and technology issues.

The actors involved in US diplomacy are many. The Departments of Agriculture, Homeland Security, Defense, and Treasury, and even the Federal Reserve Bank, all have some role in international diplomacy. However, the Departments of Commerce and State are most relevant to today’s topic, as is the trade negotiation representative in the executive office.

Since diplomacy starts with the President, let me first draw attention to the United States Trade Representative (USTR), a Cabinet-level position currently held by Katherine Tai, an attorney fluent in Mandarin. Tai is the President’s principal trade advisor, negotiator, and spokesperson on trade issues; she develops and coordinates U.S. trade policy in consultation with other federal agencies and Congress. Her nomination was confirmed in the Senate without dissent, which is indicative of the Biden administration’s concerns about China.

Remember that Biden was still Vice President in 2015 when the Made in China 2025 report was released, simultaneously angering and striking fear into American industry. It created a backlash in American politics, including the failed tariffs on Chinese goods. The episode is reminiscent of Japan’s Fifth Generation computing project in the early 1980s. Luckily, the fear of Japanese AI led to investments in supercomputers and networking, and ultimately the Internet, thanks to the “Atari Democrats,” a group that included Al Gore and Gary Hart.

The Department of Commerce is also critical. “Commerce” includes the International Trade Administration (ITA) and the National Telecommunications and Information Administration (NTIA). The latter includes the Office of International Affairs (OIA), which advises the US President on international telecommunications and information policy issues. The OIA plays an important role in the formulation of international information and communications technology (ICT) policies, including ICANN and DNS issues. Domestically, the NTIA is in charge of the $65 billion in spending for broadband development designated in the 2021 infrastructure bill.

Secretary of Commerce

Commerce is engaged in technology and patent protection to help American businesses maintain their edge over global competitors. Secretary Gina Raimondo is particularly interested these days in legislation that will allow her to entice foreign companies to the US to reduce supply chain worries. She was very pleased recently when Samsung announced a $17 billion investment in a semiconductor fabrication site in Texas. The site is, by the way, about 30 kilometers from my home in northern Austin.

Secretary Raimondo is confident that “Commerce has tools to level the global playing field.” Major concerns include ensuring access to foreign markets, enforcing export controls, protecting intellectual property (IP), and being an advocate for American business. Commerce is awaiting final passage of the United States Innovation and Competition Act (USICA), which would allocate $10 billion to the Department of Commerce to invest in regional tech hubs across the US.

Speaker Pelosi recently agreed with Majority Leader Schumer to go to conference to reconcile the Act. But in light of the supply chain shocks, particularly in semiconductors, several other bills might be included. The “CHIPS Act” has proposed another $52 billion to help the semiconductor industry manufacture on US soil. It was passed by the Senate but held up a bit in the House as it pushed through BBB II, the bipartisan infrastructure act.

Another relevant bill that might be attached is the Facilitating American-Built Semiconductors (FABS) Act. As part of the USICA, it would double the number of US “fabs” from 9 to 18. Fabs are the enormous multi-billion-dollar facilities needed to fabricate, or manufacture, advanced chips. Currently, Taiwan’s TSMC is the world leader, but Samsung is nearly as potent.

As mentioned earlier, Samsung is investing in Austin, Texas, while TSMC is building new facilities in Phoenix, Arizona. Intel is also building there, as the former world innovator and leader looks to regain stature. But the number of fabs throughout Asia could grow to over 50 in the upcoming decade. As a result, total investment in the US should be in the range of $500 billion to regain a significant presence in the global production of chips and ensure supplies for domestic use, including the automobile industry.

Antony Blinken is our current Secretary of State and has had a busy year consolidating international coalitions that were pulled apart during the last administration. Blinken presented the Biden administration’s vision of a strategic approach to shaping the rules for technology and science policy in several forums this year. He recognizes that US diplomacy needs to “shape the strategic tech landscape, not just react to it.” He has recently taken an interest in Artificial Intelligence (AI), citing important work reported in the OECD’s 2019 Recommendations on AI. He stressed the importance of AI that respects human rights and democratic values.

He has also been working on the Quad (Australia, India, Japan, US) Framework for Technology, in which they committed to integrating human rights and democratic values into the ways “technology is designed, developed, governed, and used,” particularly in wireless 5G and beyond.

You can see here what he calls “pillars” representing the State Dept’s science and technology priorities.

    Build US capacity and expertise in global health, cyber security, climate and emerging technologies through multilateral diplomacy;

    Ensure technology leadership by encouraging more initiative and more innovation;

    Protect the Internet and US analytical capabilities;

    Set standards and norms for emerging technologies;

    Make technology work for democracy;

    Develop cooperative relationships through “friend-shoring” and “near-shoring” to help build secure and resilient supply chains;

    Protect “innovative ecosystems” and talent “pipelines.”

Let me start to conclude by bringing to your attention cybersecurity and the private sector. The Department of State is paying increasing attention to computer and network security threats and weaknesses. But tensions are emerging between nation-states that increasingly see technology as a security issue and tech companies that feel the pressure from customers for additional protection and support.

Microsoft called for a “Digital Geneva Convention” in February 2017 to protect citizens and companies from cyber attacks. The proposal was not warmly received. But they followed up after the May 2017 WannaCry ransomware cyberattacks by gathering broad corporate support with the Cybersecurity Tech Accord.

These companies want governments to protect their customers and not cajole them into attacking innocent people or companies. They also want to empower developers to build more resilient ICT systems. They see value in developing strategic partnerships across many sectors of society to achieve their objectives.

CTA signatories agree to commitments in four key areas:

    Stronger defense against cyberattacks by pledging to protect all customers globally regardless of the motivation for attacks online;

    No offense by choosing to not help governments or other actors launch cyberattacks against innocent civilians or enterprises and protecting against the tampering or exploitation of products and services through every stage of technology development, design, and distribution;

    Empowering developers and the people and businesses that use their technology by helping them build and improve capacity for protecting themselves; and

    Building upon existing relationships and taking collective action together to establish new formal and informal partnerships with industry, civil society, and security researchers.

The goals of the Cybersecurity Tech Accord are to improve technical collaboration, coordinate vulnerability disclosures, and share information about threats. Most of all it is to minimize the introduction of malicious code into the Internet and other aspects of cyberspace. Microsoft suggested five important “norms” that should inform international discussions of cybersecurity:

    Harmonization;
    Risk reduction;
    Transparency;
    Proportionality, and;
    Collaboration.

Norms are common in multilateralism, as they are used to work out approaches to common concerns. Norms are better seen as processes rather than goals. It is not so much a matter of norm acceptance as a struggle over precise meanings, one that eventually favors the member states’ prerogative.

Norms are not treaties that need to be ratified by the Senate, nor alliances like NATO, whose Article 5 provides that an armed attack on one Ally is an attack on all members. They are not specific agreements like those achieved at the WTO meetings in 1996 and 1997 that gave us a truly World Wide Web. While norms do not have the force of law, they can still carry weight. Norms can provide positive guidance. For example, the ITU called for an end to facial recognition technologies, and you may have noticed that Facebook stopped using them. Conversely, norms can result in some states being labeled “bad actors,” and sanctions can consequently be applied.

Technological strength is now recognized as a prime determinant of economic, political, and military power. The U.S. is rebuilding its industrial strength and returning wealth to its citizens. This transformation will require working with key international partners to encourage investment and innovation while protecting the Internet and needed supply chains, as well as critical intellectual property. A significant challenge will be to make technology work for democracy while safeguarding personal data.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor and Undergraduate Director at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at St. Edwards University in Austin, Texas, where he keeps his US home. Most of his career was spent on the faculty of New York University. His first faculty position was at Victoria University in New Zealand, and he also spent time as a Fellow at the East-West Center in Honolulu, Hawaii, where he obtained his PhD.
