Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Connecting a Dangerous World: Border Gateway Protocol (BGP) and National Concerns

Posted on | December 9, 2024 | No Comments

This post discusses what the Border Gateway Protocol (BGP) does and some of the risks it poses. While BGP is necessary for the functioning of the Internet, it also presents several security concerns due to its decentralized and trust-based nature. These risks raise national concerns and have drawn the attention of regulatory agencies such as the Federal Communications Commission (FCC) in the US as part of their cybersecurity efforts. This post examines how governments and malicious actors can exploit BGP vulnerabilities for censorship, surveillance, and traffic control. Techniques include implementing blacklists, rerouting traffic, and partitioning networks, as seen with China's Great Firewall. Such actions enable monitoring, filtering, or isolating traffic and raise concerns about privacy, Internet freedom, and global access.

The Internet is considered a network of interconnected networks. People with devices like PCs or mobile phones (hosts) connect to the Internet via networked Internet Service Providers (ISPs), which in turn are connected to other ISPs. Applications running on devices work with other devices through a maze of interconnections that pass data from router to router within an ISP's domain and through or to other ISPs, enterprises, campuses, etc. The resulting network of networks is quite complex, but it connects the world's Internet traffic with the help of specific routing protocols.

Border Gateway Protocol (BGP) is one such protocol and is integral to the global Internet. Sometimes known as the "Post Office of the Internet," it was developed in the 1980s to help networks exchange information about how to reach other networks. It basically determines the best path for data packets to reach their destination. BGP enables network administrators to manage and optimize network traffic by advertising the Internet routes they offer and the ones they can reach. With BGP, they can prioritize certain routes, influence the path selection process to balance traffic loads between different servers, and adapt to changing network conditions.

Despite its usefulness, many nations are worried about BGP’s security vulnerabilities.

In the US, the Federal Communications Commission (FCC) is concerned that the Internet presents security threats to the US economy, defense capabilities, public safety, and critical utilities such as energy, water, and transportation. These concerns are echoed by other countries. Malicious or sometimes incompetent actors can exploit or mismanage BGP vulnerabilities through hijacking or spoofing attacks. They can also reroute traffic to intercept or disrupt data flows. The FCC says that while efforts have been made to mitigate the Internet’s security risks, more work needs to be done, especially on BGP.

How Does BGP Work?

Jim Kurose does a great job explaining BGP and its importance in this video:

BGP connects the world by enabling communication and cooperation among autonomous systems (ASes), ensuring that data packets can traverse the vast and interconnected network of networks that make up the Internet. It bridges separate but networked ASes such as campuses, companies, and countries. BGP works to ensure that data packets originating from one location can cross over between ISPs and other WANs (Wide Area Networks) to reach their destination anywhere else on the planet. Network routers read the destination prefixes on packets to determine which way they will be sent.

BGP's ability to adapt to changing network conditions, manage data traffic, and facilitate redundant paths is crucial for the stability and reliability of the global Internet, but it also poses several dangers. Without the continuing implementation of software-defined networking (SDN), BGP organizes routing tables locally throughout its network, and the information for the routing calculations is based on trust relationships with other ASes. Ideally, this results in connections that can quickly reroute traffic through alternative paths to maintain network integrity. If one route becomes unavailable due to network outages, maintenance, or policy, BGP can quickly find other routes.
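
A minimal sketch of this failover behavior (in Python, with hypothetical prefixes and neighbors and a heavily simplified decision process; real BGP uses many more attributes and tie-breakers) might look like this:

```python
# Minimal sketch of BGP-style best-path selection with failover.
# Illustrative only: the attributes and tie-breaking order are simplified
# assumptions, not a real BGP implementation.

from dataclasses import dataclass

@dataclass
class Route:
    prefix: str            # e.g., "203.0.113.0/24"
    next_hop: str          # neighbor that advertised the route
    as_path: list          # sequence of AS numbers toward the origin
    local_pref: int = 100  # operator policy: higher is preferred

class RoutingTable:
    def __init__(self):
        self.candidates = {}  # prefix -> list of candidate Routes

    def announce(self, route: Route):
        self.candidates.setdefault(route.prefix, []).append(route)

    def withdraw(self, prefix: str, next_hop: str):
        self.candidates[prefix] = [
            r for r in self.candidates.get(prefix, []) if r.next_hop != next_hop
        ]

    def best_path(self, prefix: str):
        routes = self.candidates.get(prefix, [])
        if not routes:
            return None
        # Prefer higher local preference, then the shorter AS path.
        return min(routes, key=lambda r: (-r.local_pref, len(r.as_path)))

rt = RoutingTable()
rt.announce(Route("203.0.113.0/24", "peer-A", [64500, 64510]))
rt.announce(Route("203.0.113.0/24", "peer-B", [64501, 64520, 64530]))
print(rt.best_path("203.0.113.0/24").next_hop)  # peer-A (shorter AS path)

# An outage: peer-A withdraws the route, and traffic fails over to peer-B.
rt.withdraw("203.0.113.0/24", "peer-A")
print(rt.best_path("203.0.113.0/24").next_hop)  # peer-B
```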

BGP is designed for interdomain routing, which means it focuses on routing decisions between different autonomous systems. This is in contrast to intradomain routing protocols like Open Shortest Path First (OSPF) or Enhanced Interior Gateway Routing Protocol (EIGRP), which operate within a single AS. BGP is the protocol of choice for interdomain routing, which can mean routing between countries and even between large-scale Tier-1 ISPs.

Who Uses Border Gateway Protocols?

BGP users include telecommunications companies such as ISPs, Content Delivery Networks (CDNs) such as Akamai and Cloudflare, and Internet Exchange Points (IXPs) that bypass multiple networks and allow specific networks to interconnect directly rather than routing through various ISPs. Cloud providers like Amazon’s AWS, Google Cloud, and Microsoft Azure also use BGP to manage the traffic between their data centers and ISPs, allowing them to provide reliable cloud services globally.

Many large enterprises with extensive networks operate their own ASes and have BGP access to control routing policies across their internal networks and connections to external services. Universities and research institutions often employ their own ASes and use BGP when connecting to national and international research networks supporting scientific collaboration.

Top "Tier-1" commercial Internet Service Providers (ISPs) use BGP as well. Tier-1 ISPs are considered the top-tier providers in the global Internet hierarchy. They own and operate extensive networks and are responsible for a significant portion of the global Internet's infrastructure. BGP is crucial for them to route and exchange network reachability information with other ASes, and it plays a central role in how these Tier-1 ISPs manage their networks and interact with other ASes. Tier-1 ISPs use BGP to implement routing policies that align with their business strategies and network management goals.

A Tier-1 ISP can reach an entire Internet region, such as Singapore, solely via its free and reciprocal peering agreements, with BGP as the glue. Examples include AT&T in the US and KDDI in Japan. BGP allows them to announce their IP prefixes to the wider Internet and receive routing updates from other ASes. Tier-1 ISPs use BGP to make routing decisions based on various criteria, including network policies, path attributes, and reachability information. BGP allows them to determine the best path for routing traffic through their networks, considering factors like latency, cost, and available capacity.

Tier-1 ISPs can establish BGP peer relationships with other ASes. These relationships can take the form of settlement-free peering or transit agreements. Peering involves the mutual exchange of traffic between two ASes, while transit agreements typically involve the provision of Internet connectivity to a customer AS in exchange for a fee. Network effects increase the importance and centrality of existing network hubs, giving them a stronger “gravitational pull,” making it more difficult for new entrants to establish themselves in the market. Effective relationships enable the global Internet to function as a connected network of networks.

BGP allows the managers of autonomous systems to consider various factors when selecting the best path, including network policies, routing metrics, and the reliability and performance of available routes. BGP helps maintain Internet reachability by constantly updating routing tables and responding to changes in network topology. It identifies the most efficient path for data packets to travel from source to destination and allows ISPs to advertise what routes they are able to offer other ISPs. BGP empowers network managers to control how data is routed, manage traffic, and enforce security policies.

How Governments Use and Abuse BGP

Military agencies usually maintain BGP access within their data infrastructure, especially to secure sensitive networks or manage national Internet traffic and, in some cases, control public Internet access to their networks. BGP allows militaries to define specific routing policies, such as prioritizing certain types of traffic (e.g., command-and-control data) or restricting traffic to trusted allies. In field operations, militaries use deployable communication systems that rely on satellite links and mobile base stations. BGP allows these systems to dynamically integrate into broader military networks. Militaries increasingly rely on drones and Internet of Things (IoT) devices, which require efficient routing of data. BGP works to ensure that data from these systems is routed optimally within military infrastructures.

A study of the early Russian-Ukrainian conflict revealed that Russian and separatist forces modified BGP routes to establish a “digital frontline” that mirrored the military one. This strategy involved diverting local internet traffic from Ukraine, the Donbas region, and the Crimean Peninsula. The research focused on analyzing the strategies employed by actors manipulating BGP, categorizing these tactics, and mapping digital borders at the routing level. Additionally, the study anticipated future uses of BGP manipulations, ranging from re-routing traffic for surveillance to severing Internet access in entire regions for intelligence or military objectives. It underscored the critical role of Internet infrastructure in modern conflict, illustrating how BGP manipulations can serve as tools for strategic control in both cyber and physical domains.

Government and other malicious actors can manipulate the Internet through multiple techniques, including BGP hijacking, IP blacklisting and filtering, network partitioning and isolation, content monitoring and traffic analysis, traffic throttling and prioritization, shutdowns and access control, and border routing policies and compliance requirements.

By influencing or manipulating BGP routes, governments or actors with access to BGP-enabled networks can reroute traffic to go through specific regions or servers. This is often done by injecting false BGP announcements to redirect traffic to a specific router. This can allow governments to block, intercept, or monitor certain data flows. Such an approach was seen in incidents in various countries where traffic was rerouted through state-managed systems.
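
As a rough, hypothetical illustration of why injected announcements work, the sketch below relies on longest-prefix matching: routers forward along the most specific matching prefix, so a bogus, more-specific announcement captures the traffic it covers. The prefixes and AS numbers here are made up.

```python
# Sketch of why a false, more-specific BGP announcement can capture traffic.
# Hypothetical prefixes and AS numbers; real routers use far richer logic.

import ipaddress

# The legitimate origin announces a /16; a hijacker later injects a /24
# covering part of the same address space.
announcements = [
    ("198.51.0.0/16", "AS64500 (legitimate origin)"),
    ("198.51.100.0/24", "AS64666 (hijacker)"),
]

def route_for(destination: str):
    dest = ipaddress.ip_address(destination)
    matches = [
        (ipaddress.ip_network(prefix), origin)
        for prefix, origin in announcements
        if dest in ipaddress.ip_network(prefix)
    ]
    # Longest-prefix match: the most specific announcement wins, which is
    # exactly the behavior a hijacker exploits.
    return max(matches, key=lambda m: m[0].prefixlen)

print(route_for("198.51.100.7"))  # captured by the hijacker's /24
print(route_for("198.51.200.7"))  # still follows the legitimate /16
```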

Governments worldwide can influence or control national ASes and various network providers. They can use BGP to dictate the paths data takes across the Internet on a large scale to manage, manipulate, or filter traffic for their own ends. This capability provides a point of control that governments can leverage for regulatory, security, or censorship purposes.

Governments can mandate that ISPs refuse to announce or accept specific IP prefixes or routes associated with restricted sites or content. By implementing BGP blacklists, they can prevent access to certain websites or services entirely by removing or altering the BGP routes that lead to these destinations, effectively blocking them at the network level.
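
A minimal sketch of such a blacklist, assuming a simple route-import filter and using documentation prefixes as stand-ins for real ones, might look like this:

```python
# Sketch of a BGP-style import filter that drops announcements for
# blacklisted prefixes. The prefixes are hypothetical placeholders.

import ipaddress

BLACKLIST = [ipaddress.ip_network("192.0.2.0/24")]  # routes to be suppressed

def accept_announcement(prefix: str) -> bool:
    """Reject any announcement that overlaps a blacklisted prefix."""
    net = ipaddress.ip_network(prefix)
    return not any(
        net.subnet_of(blocked) or blocked.subnet_of(net)
        for blocked in BLACKLIST
    )

incoming = ["192.0.2.0/25", "203.0.113.0/24", "192.0.2.0/24"]
accepted = [p for p in incoming if accept_announcement(p)]
print(accepted)  # only 203.0.113.0/24 survives the filter
```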

Some governments impose strict routing policies that partition national networks from the global Internet. By requiring ISPs to use BGP filtering rules that isolate local traffic, they can keep Internet activity confined within national borders. China’s Great Firewall is an example, where BGP filtering and routing policies prevent certain global routes and confine users to government-approved internet spaces.

Governments can influence routing so that Internet traffic passes through surveillance or monitoring points. By injecting specific BGP routes, traffic can be directed to infrastructure where deep packet inspection (DPI) or other monitoring techniques are applied. This enables governments to analyze or even censor content in real time.

Through BGP route manipulation, governments can slow down or prioritize traffic to specific networks. For example, they may route traffic through slower networks or specific filtering points to control Internet speeds to certain services or prioritize government-approved traffic sources.

In extreme cases, governments can mandate ISPs to withdraw BGP routes to cut off access entirely, effectively disconnecting regions, communities, or entire countries from the global Internet. This can be seen in certain political scenarios or during unrest when governments initiate BGP route withdrawals, isolating the local Internet temporarily.

Governments can also enforce policies that restrict data to specific geographic boundaries, requiring ISPs to adjust BGP configurations to comply with data residency or border policies. This limits data flows outside national borders and aligns with regulatory frameworks on data sovereignty.

Concerns

Nations worldwide have growing concerns regarding the security and resilience of BGP, which is fundamental to Internet routing. While critical for directing Internet traffic between ASes, BGP has vulnerabilities that can pose significant risks to national security, data integrity, and overall network resilience.

Through these mechanisms, governments can exercise significant influence over network behavior and access at a national level, using BGP as a powerful tool for traffic control, monitoring, and regulation. Such actions raise concerns over Internet freedom, privacy, and access rights on a global scale.

Citation APA (7th Edition)

Pennings, A.J. (2023, Dec 10). Connecting a Dangerous World: Border Gateway Protocol (BGP) and National Concerns. apennings.com https://apennings.com/global-e-commerce/connecting-a-dangerous-world-border-gateway-protocol-bgp-and-national-concerns/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, teaching broadband policy and ICT for sustainable development. He is also a Research Professor at Stony Brook University. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

The Framing Power of Digital Spreadsheets

Posted on | December 7, 2024 | No Comments

Digital spreadsheets like Excel have framing power because they shape how information is chosen, organized, interpreted, and presented. These capabilities directly influence decision-making and resource prioritization within organizations. The power of framing arises from the ability to define what data is included, how it is processed by the spreadsheet's functions or formulas, and the visual or numerical emphasis placed on specific inputs and outcomes. Spreadsheets exert framing power through selecting and prioritizing data, building formula logic and embedded assumptions, standardizing norms and templates, simplifying complex realities, and selectively presenting results.

This post continues my inquiry into the remediation of digital spreadsheets and the techno-epistemological production of organizational knowledge. This includes a history of spreadsheet technologies, including VisiCalc, Lotus 1-2-3, and Microsoft's Excel, as well as the functions and formulas they integrated over time. Digital spreadsheets built on the history of alphabetical letteracy and the incorporation of Indo-Arabic numerals, including zero (0), and on calculative abilities built up through administrative, commercial, and scientific traditions.

Spreadsheets frame discussions by determining which data is included or excluded, consequently controlling narratives and resource decisions. For instance, only presenting revenue figures without costs can create a biased perspective on the financial health of an organization. By selecting and emphasizing certain key performance indicators (KPIs) over others, spreadsheets prioritize specific organizational goals (e.g., profitability over sustainability). A budget sheet that highlights “Cost Savings” as a primary metric frames spending decisions in a cost-minimization mindset. Those designing spreadsheets gain control and power by deciding what aspects of reality are quantified and analyzed.

Spreadsheet formulas embed certain assumptions about relationships between variables (e.g., Profit = Revenue – Costs assumes no other factors like opportunity costs). The logic built into formulas can obscure biases or simplify complexities, shaping decision-making paths. For example, a financial projection using =RevenueGrowthRate * PreviousRevenue assumes linear growth and potentially oversimplifies future uncertainties. What-if scenario analysis in spreadsheets often reflects the biases or priorities of the person constructing the formulas. These biases can frame potential outcomes in specific ways.
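
To make the stakes of that embedded assumption concrete, here is a small sketch with hypothetical figures. It compares a projection that applies the same growth rate every period, the assumption baked into a formula like the one above, with an alternative scenario in which growth slows each period:

```python
# Sketch of how a projection formula embeds assumptions. Applying the same
# growth rate every period is one choice; letting the rate decay is another.
# All figures are hypothetical.

def constant_growth(revenue, rate, periods):
    series = [revenue]
    for _ in range(periods):
        series.append(series[-1] * (1 + rate))  # same rate each period
    return series

def decaying_growth(revenue, rate, decay, periods):
    series = [revenue]
    for _ in range(periods):
        series.append(series[-1] * (1 + rate))
        rate *= decay  # growth slows over time
    return series

base, rate = 1_000_000, 0.10
print([round(x) for x in constant_growth(base, rate, 5)])
print([round(x) for x in decaying_growth(base, rate, 0.7, 5)])
# The two "projections" diverge quickly: the framing is in the formula,
# not in the underlying data.
```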

Spreadsheets can become templates for recurring processes, setting standards for what is considered “normal” or “important” in an organization. Those who create and control spreadsheet templates have the power to define organizational norms and expectations and codify power dynamics. Establishing standardization reinforces certain frames, perpetuating specific ways of viewing and evaluating organizational performance over time. A standardized sales report may continuously emphasize gross sales, neglecting other factors like customer churn.

Spreadsheets distill complex realities into numbers, tables, and graphs, reducing nuanced issues to quantifiable elements. An example is reducing employee performance to numeric scores (e.g., Productivity = TasksCompleted / HoursWorked) that may overlook qualitative contributions. Critical contextual factors, such as market volatility or employee morale, may be excluded, shifting focus to what is easily measurable. By reducing complexity, spreadsheets prioritize some perspectives while marginalizing others.

Those who design spreadsheets often decide how data is framed and can influence organizational decision-making disproportionately. Those who gain access and control over the spreadsheet design have an inside track to the power dynamics of an organization. Control over spreadsheet design shapes how others interpret and interact with the data, reinforcing the designer’s influence and power. In an organization with limited transparency, complex formulas or hidden sheets can obscure understanding of the data. This can disadvantage other users and consolidate power with those with the skills and tools to create the organization’s spreadsheets.

The order in which data is presented (e.g., revenue before costs) in spreadsheets influences how stakeholders mentally process the information. Structural and layout choices affect how data is understood and what conclusions are drawn. Visualizations like pie charts, bar graphs, and trend lines direct focus to certain comparisons or patterns, and frame how priorities are perceived.

Finally, presentation decisions about formatting (e.g., conditional highlights, bold text) and visualization (e.g., charts) draw attention to specific elements. For example, highlighting a profit margin cell in green or showing a declining revenue trend in red emphasizes success or urgency. Choosing to present aggregated data (e.g., total revenue) versus granular details (e.g., revenue by product) influences how complexity is perceived and addressed. Presentation choices affect data interpretation, often steering stakeholders toward certain conclusions.

In conclusion, digital spreadsheets are powerful technologies that frame knowledge in organizations and consequently they produce power by determining how data is selected, processed, and presented. Their influence lies not just in the data they hold but in how they structure understanding and shape decision-making. Those who design spreadsheets can wield significant control over organizational narratives and perspectives, making it critical to use these tools with awareness of their potential limitations and strengths.

Citation APA (7th Edition)

Pennings, A.J. (2024, Dec 7) The Framing Power of Digital Spreadsheets. apennings.com https://apennings.com/meaning-makers/the-framing-power-of-digital-spreadsheets/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and Research Professor at Stony Brook University. He teaches digital economics and sustainable development. Previously, he taught macroeconomics and digital media management at New York University. He also taught in the MBA program at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

Google: Monetizing the Automatrix – Rerun

Posted on | November 21, 2024 | No Comments

I originally wrote this in October 2010 but recently showed it to a student working on a paper about the economics of FSD (full self-driving). It was my first post on autonomous driving.

Google recently announced its work on a driverless car to mixed reviews. While a technical success, with only one mishap in 140,000 miles of testing, many felt that Google was losing its focus. I think this latter view underestimates Google’s strategy – to monetize the road. As we move towards the “Automatrix,” the newly forming digital environment for self-driving and wireless charging transportation, Google looks to situate its search/advertising business at its center.

Let’s face it; the car is almost synonymous with shopping and consumerism. Whether going to the mall to buy some new shoes, picking up groceries, or going out to look for a new washing machine – the car transports both our bodies and our booty.


Nothing in the fridge? Drive out to the nearest Applebee's, Denny's, or Olive Garden for some nachos and a diet Coke. Got kids? Try the drive-in for a Happy Meal or some Chuck E. Cheese's pizza after a day at the water park. You get the point: have car, will spend. It's American.

Google, which “wants to organize the world’s information”, clearly sees your car as a major generator of that data and the car occupants as major traffic generators – the good kind of traffic – on the web, not the road. They want the passenger to focus on the navigation, not the road. They want to provide destinations, pit stops, and other places to rest and refresh. The car will provide the movement while “the fingers do the walking,” to draw on a famous Yellow Pages ad.

While Nielsen, famous for its ratings business, has championed the three-screen advertising measurement (TV, PC, mobile phone), you could say Google is going for a four-screen strategy: PC, mobile, TV, and now the dashboard. Talk about a captured audience! It has the potential to pay off big, adding billions more to Google’s bottom line by tying moving passengers to the web.

Can driving through downtown Newark, sitting at a light, or leaving a movie theater parking lot really compete with the latest user-generated video on YouTube? As you drive to the airport, wouldn’t you rather be making dinner reservations or checking out entertainment on your flight or destination? No, Route 66 is going to be route66.com because, well, Pops Restaurant bought the ad word, and you would rather be enjoying a Coke and burger anyway.

Actually, I'm all for computers driving my car, as long as they are doing it for other drivers as well. Yes, I enjoy the occasional thrill of driving and, probably more, the relaxing feel from the directed focus of the activity. However, I prefer looking out the window, listening to music, or even reading a book. I'm good at reading in a moving vehicle.

GPS has already rescued me from the travel maps, and I now need reading glasses to see them anyway. Besides, the road is dangerous. It’s really scary passing that zigzagging car because the driver is zoning out in a conversation with his ex-wife, or some teenager is texting the girl he has a crush on.

Sure, I have mixed feelings about sliding into the Automatrix. Taking over the steering wheel seems like a bit of a stretch, even for Moore's Law and modern-day microprocessors. It will require a whole new framework for car safety testing. However, it has been over 50 years since computers guided the Apollo spacecraft to the Moon, so it makes sense to replace the haphazard meat grinders we currently use.

The next post in this series is "Google, You Can Drive My Car."

Citation APA (7th Edition)

Pennings, A.J. (2024, Nov 21) Google: Monetizing the Automatrix – Rerun. apennings.com https://apennings.com/global-e-commerce/google-monetizing-the-automatrix-2/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and Research Professor at Stony Brook University. He teaches ICT for sustainable development. Previously, he was on the faculty of New York University where he taught digital economics and media management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

The Lasting Impact of ALOHAnet and Norman Abramson

Posted on | November 13, 2024 | No Comments

Professor Norman Abramson was a pioneering engineer and computer scientist best known for creating ALOHAnet, one of the first wireless packet-switched networks, at the University of Hawaii’s College of Engineering in 1971. His work laid the foundation for local area networks (LANs), wireless networking, satellite transmission, and data communication protocols, all crucial to modern digital communications.[1]

When I was doing my internship at the East-West Center’s Communication Institute and eventually graduate work at the University of Hawaii, I had the chance to interact with Professor “Norm” Abramson. At first, it was while interning with the National Computerization Policy Program, and then I audited his Satellite Communications class as that was a significant focus of my MA thesis. As an intern and master’s degree student, I did not exactly command his attention, but it was good to meet him and occasionally cross paths on the beaches and in the waters off of Diamond Head and Waikiki. Now, I teach a broadband course that covers local area network protocols and wireless communications where ALOHA and slotted ALOHA protocols are fundamental technologies.

Here, Dr. Abramson gives a talk at the East-West Center on the history of ALOHAnet, including how it used Intel's first microprocessor, the 4004. He also talks about how it will continue to influence new developments in wireless communications. He is introduced in the video by the CIO and future president of the University of Hawaii, David Lassner.

Originally designed as a radio communication network, ALOHAnet, which connected computers on different Hawaiian Islands, pioneered techniques for handling data collisions over shared communication channels. Using microwave radio transmissions, ALOHAnet's protocols allowed multiple devices to share the same communication channel. It introduced a simple but effective way of managing "collisions" that might occur when two devices transmit data simultaneously.

If a collision occurred, the Aloha protocol allowed devices to detect and retry transmission after a random delay, significantly improving the efficiency of shared communication channels. The ALOHA protocol was also one of the first “random access” methods, enabling devices to send data without waiting for authorization or a fixed schedule. This innovation was a breakthrough for networks where many devices, such as wireless and satellite networks, shared the same medium.
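
As a toy illustration of the random-access idea (a slotted-style simulation with arbitrary parameters, not a model of the original ALOHAnet hardware), a frame gets through only when exactly one station happens to transmit in a slot; anything else is a collision that would trigger a randomized retry:

```python
# Toy simulation of the ALOHA idea: stations transmit whenever they have a
# frame, and a slot succeeds only if exactly one station transmits in it.
# Station count and probabilities are arbitrary, for illustration only.

import random

random.seed(1)

NUM_STATIONS = 5
NUM_SLOTS = 10_000
TX_PROB = 0.1     # chance a station (re)transmits in a given slot
successes = 0

for _ in range(NUM_SLOTS):
    transmitters = sum(1 for _ in range(NUM_STATIONS) if random.random() < TX_PROB)
    if transmitters == 1:
        successes += 1  # exactly one sender: the frame gets through
    # transmitters > 1 is a collision; the random TX_PROB here stands in
    # for each station retrying after a random backoff delay.

print(f"Channel utilization: {successes / NUM_SLOTS:.2%}")
# Slotted ALOHA tops out at about 37% (1/e) in theory; the original
# unslotted ALOHA peaks near 18%.
```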

Ethernet

Bob Metcalfe cited the ALOHA protocol as directly influencing Ethernet's design. While working on his PhD at Harvard, Metcalfe went to Hawaii to study the ALOHA protocols. He then went to Xerox PARC in Palo Alto, California, to develop networking technology for the Alto computer systems they were developing with Stanford University. More famous for its graphical user interface, which Apple would "pirate" for the Lisa and Mac a few years later, the Alto would adapt and enhance ALOHAnet's channel-sharing techniques for wired connections. With this, Metcalfe laid the groundwork for what became the Ethernet standard, which went on to dominate LAN technology.

Ethernet's approach to transmitting data in packets was inspired by the ARPAnet's packet-based structure, which sent data in discrete units. ALOHAnet demonstrated how packet switching could enable more efficient and flexible communication among multiple users over a shared channel, a principle central to Ethernet and other LAN technologies. ALOHAnet's use of radio channels showed how digital data could be transmitted wirelessly, paving the way for both Ethernet's early development and later wireless local area network (WLAN) technologies.

Although the ALOHA protocols initially handled collisions simply by retransmitting after a random delay, this concept was expanded upon by Ethernet's CSMA/CD (Carrier Sense Multiple Access with Collision Detection) protocol. CSMA/CD built on ALOHA's collision-handling mechanism to allow faster, more reliable data transfer in wired networks. Rather than transmitting indiscriminately, CSMA "listens" to make sure no one else is transmitting before sending out its packets. Collisions can still occur, primarily because of signal propagation delays, so CSMA/CD was developed to handle the collisions that still happen. To free up the channel more quickly, CSMA/CD is also designed to abort a transmission once a collision is detected, so at times only part of a packet (called a frame in LANs) is actually sent.
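
A simplified sketch of that listen-then-back-off logic, with illustrative probabilities standing in for real carrier sensing and collision detection, might look like the following:

```python
# Sketch of CSMA/CD-style behavior: listen before sending, and after a
# collision wait a random number of slot times drawn from a window that
# doubles with each attempt (binary exponential backoff). Simplified and
# illustrative, not an Ethernet implementation.

import random

def backoff_slots(attempt: int) -> int:
    """Random wait (in slot times) after the nth collision; window capped at 2**10."""
    window = 2 ** min(attempt, 10)
    return random.randrange(window)

def try_send(channel_busy, max_attempts=16):
    for attempt in range(1, max_attempts + 1):
        if channel_busy():
            continue  # carrier sense: medium busy, defer (counted as an attempt here for simplicity)
        collided = random.random() < 0.2  # stand-in for detecting a collision mid-frame
        if not collided:
            return f"frame sent on attempt {attempt}"
        # Collision: abort the frame, then wait a random backoff before retrying.
        slots_to_wait = backoff_slots(attempt)
        print(f"collision on attempt {attempt}; backing off {slots_to_wait} slot(s)")
    return "gave up after too many collisions"

random.seed(4)
print(try_send(lambda: random.random() < 0.3))
```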

Ethernet has become indispensable for high-speed networking and high-capacity applications in data centers, corporate networks, and manufacturing automation. Ethernet speeds have grown from 10 Mbps and 100 Mbps to 1 Gigabit, 10 Gigabit, and higher-speed variants up to 400 Gbps, primarily for advanced networks and data centers. While cabling can be cumbersome and expensive, Ethernet supports high data transfer rates with low latency and minimal data loss and can scale from small home networks to enterprise-level setups while remaining compatible across devices from different manufacturers.

Wireless Communications

ALOHAnet also demonstrated the potential of wireless networking, inspiring future wireless data transmission systems. By showing how to handle contention and collisions in a shared radio network, it became a foundational model for mobile and wireless data communication systems where devices contend for the same wireless spectrum. Many modern wireless technologies, including Wi-Fi networks, rely on concepts that originated with ALOHAnet for managing access to shared channels and retransmitting after collision. Many principles used in Wi-Fi, such as carrier-sensing and collision handling, were modeled on ALOHA’s methods, adapted for higher-speed and more complex wireless environments.

Techniques derived from ALOHA's collision handling have been integrated into cellular network standards. Modern mobile networks, from 2G and 3G through LTE and 5G, have adapted variants of ALOHA for handling data requests at base stations and managing multiple simultaneous transmissions over limited spectrum. For example, spread spectrum techniques in cellular networks and CSMA/CA methods in Wi-Fi borrow principles directly from ALOHA, adapted to high-density environments. Spread spectrum systems spread a signal over a wider frequency band than the original message while transmitting at the same signal power, making it harder to intercept or jam.

Abramson’s approach to data transmission over ALOHAnet helped popularize the concept of decentralized network architectures. Unlike traditional telecommunications systems, which often relied on centralized control, the ALOHA protocol allowed for a more flexible and robust form of data transmission. This decentralized model influenced satellite communications, where satellite networks often need to function independently in distributed environments. This model was influential in shaping the architecture of distributed and resilient communication networks, where remote or isolated nodes (e.g., satellites or base stations in remote areas) must handle communication independently.

Satellite Communications

Abramson’s work also contributed to the advancement of satellite Internet, including Starlink, where random access protocols allow for the efficient use of satellite bandwidth, especially in remote and rural areas where terrestrial infrastructure is limited. Random access techniques became crucial for enabling multiple users to communicate through a shared satellite link. Slotted ALOHA and spread spectrum ALOHA also allowed for more efficient and flexible satellite communication systems. These methods have been used extensively in satellite-based messaging, data collection from remote sensors, and early VSAT (Very Small Aperture Terminal) systems.

Many IoT systems that use satellite networks for low-cost, wide-coverage communication rely on techniques similar to ALOHA for uplink data transmission from devices to satellites. This is particularly important for applications like remote sensing, environmental monitoring, and asset tracking, where many devices need to transmit sporadic bursts of data over long distances.

Lasting Impact

Abramson’s work at the University of Hawaii inspired a generation of engineers and researchers in telecommunications to explore new, more efficient methods of managing bandwidth and addressing congestion. His contributions continue to inform R&D in telecommunications, from satellite communications and mobile networks to next-generation IoT and 5G protocols, which employ refined versions of random access and collision avoidance methods. Abramson’s work effectively laid the groundwork for these standard practices. His protocols became foundational in textbooks and engineering curricula, influencing fields as diverse as digital communications theory, networking equipment, and data transmission protocols.

Robert X. Cringely interviewed Norm Abramson about why he moved to the University of Hawaii.

The success and influence of ALOHAnet proved that multiple devices could share the same communication medium effectively, ultimately helping shape the modern landscape of wired and wireless networking. Abramson’s impact on satellite communications and telecommunications can be seen in the widespread adoption of random access protocols, the development of mobile and wireless standards, and the rise of decentralized communication models. His foundational work with the ALOHA protocol allowed for efficient use of shared communication channels and inspired innovations that are integral to modern satellite networks, mobile communications, and IoT applications.

Citation APA (7th Edition)

Pennings, A.J. (2024, Nov 14) The Lasting Impact of ALOHAnet and Norman Abramson. apennings.com https://apennings.com/how-it-came-to-rule-the-world/the-lasting-impact-of-alohanet-and-norman-abramson/

Notes

[1] Abramson, N. (2009, Dec) THE ALOHAnet — Surfing for Wireless Data. IEEE Communications Magazine. https://www.eng.hawaii.edu/wp-content/uploads/2020/06/THE-ALOHANET-%E2%80%94-SURFING-FOR-WIRELESS-DATA.pdf


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and Research Professor at Stony Brook University. He teaches broadband policy and ICT for sustainable development. Previously, he was on the faculty of New York University where he taught digital economics and media management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

Legal Precedents and Perturbations Shaping US Broadband Policy

Posted on | November 3, 2024 | No Comments

When I was in graduate school, my thesis advisor recommended that I read Ithiel de Sola Pool's Technologies of Freedom (1984). I soon used it to teach my first university-level course in Communication Policy and Planning at the University of Hawaii. One of the major lessons I learned from Pool's book was the importance of legal precedent in communication policy, and particularly telecommunications policy. It was an important lesson that I applied to my MS thesis, Deregulation and the Telecommunications Structure of Transnationally Integrated Financial Industries (1986).[1]

While Pool highlighted the importance of legal precedents to provide stability, he cautioned against applying old telecommunications regulations to emerging digital services, arguing that such rigidity could hinder innovation, infringe on freedom of expression, and prevent society from fully benefiting from technological advancements. Regulations must be flexible enough to adapt to new technologies. His work influenced later debates on technological neutrality and regulatory flexibility, especially as the Internet and digital communications became more central to society.

This post examines the legal precedents and perturbations shaping Internet broadband policy, going back to railroad law, telegraph law, and the Communications Act of 1934. The FCC's Computer Inquiries and the Telecommunications Act of 1996 are particularly relevant, and the distinction between Title I and Title II remains paramount in broadband policy.

The legal precedents for Internet policy are deeply rooted in transportation and communications regulations in the US, which began with railroad law, expanded to telegraph law, and were later codified in major pieces of legislation such as the Communications Act of 1934 and the Telecommunications Act of 1996. These laws established principles that would later be applied to the Internet and digital communication networks.

Railroad Law and Common Carriers

Railroad law in the 19th century provided an early legal foundation for regulating services that were considered essential public utilities. Railroads, which were the first major national infrastructure, were regulated under the Interstate Commerce Act of 1887. This law created the Interstate Commerce Commission (ICC) and established key principles, such as reasonable tariffs, that would influence later communications regulation.

Railroads and the trains that ran on them were to be treated as “common carriers,” meaning they had to offer services to all customers without discrimination, and rates had to be “just and reasonable.” Railroads had to stop at all major towns in the Midwest and pick up commodities from the farmers bringing cows, pigs, timber, wheat, etc. to the major markets.

The laws determined that government had the right to regulate industries deemed essential to the public interest, ensuring accessibility and fairness. These principles of nondiscrimination, accessibility, and regulation in the public interest would later be applied to communications networks, including the telegraph, telephone, and eventually the Internet.

Telegraph Law

The Telegraph Act of 1860 and subsequent regulations were important precursors to modern communications law. Telegraph companies were also treated as “common carriers” under this legal framework. The Pacific Telegraph Act of 1860 was designed to promote the construction of a telegraph line across the US, establishing early rules for how telecommunications infrastructure would be managed. Like railroads, telegraph companies had to provide equal access to their networks for all customers.

The telegraph was seen as a national necessity, further embedding the idea that communication networks are critical public infrastructure requiring federal regulation.

The Communications Act of 1934

The Communications Act of 1934 was a landmark law that created the Federal Communications Commission (FCC) and consolidated the regulation of all electronic communications, including telephone, telegraph, and radio. The Act aimed to establish a comprehensive framework for regulating interstate and foreign communication.

The FCC was tasked with ensuring that communications networks served the public interest, convenience, and necessity. Key provisions included common carrier regulation and a public interest standard. Telephone companies were to be regulated as common carriers, requiring them to provide service to everyone on nondiscriminatory terms, with rates subject to regulation by the FCC. The Act also introduced the idea that communications services, particularly telephony, should be universally accessible to all Americans, which laid the groundwork for later universal broadband goals.

This Act established a regulatory foundation that persisted for much of the 20th century, governing how telecommunications were managed and ensuring public access to these services. This included addressing the introduction of computer services into the telecommunications networks.

The Communications Act of 1934 created a regulatory framework for U.S. telecommunications and introduced the concepts that later became known as Title II and Title I services. These classifications would shape the structure of the Internet Service Provider (ISP) industry.

Title II: Common Carrier Services

Title II of the Communications Act defined and regulated “common carrier” services, treating telecommunications services (like traditional telephone service) as essential public utilities. Common carriers are required to provide service to all customers in a non-discriminatory way, under just and reasonable rates and conditions.

Title II imposed significant regulatory obligations on these services to ensure fair access, affordability, and reliability. It allowed the Federal Communications Commission (FCC) to enforce rate regulations, mandate service quality standards, and prevent discriminatory practices. Common carriers under Title II were required to operate as public utilities, adhering to principles similar to those governing railroads and utilities. The FCC was given authority to ensure these services operated in the public interest.

Title I: “Ancillary” Authority

A crucial provision of the Communications Act provides the FCC with "ancillary authority" over all communication by wire or radio that does not fall under specific regulatory categories like Title II or Title III (broadcasting). This means Title I covers services that are not classified as common carriers, allowing the FCC some jurisdiction but without the strict regulatory requirements of Title II.

Title I services are not subject to common carrier regulations. Instead, the FCC’s regulatory power over these services is limited to actions necessary to fulfill its broader regulatory goals. Title I gives the FCC flexibility to regulate communication services that may not fit into the traditional telecommunications (Title II) or broadcasting (Title III) categories. The FCC uses its Title I authority to support its mission, although its powers are limited, making it harder to impose the same level of regulatory oversight on Title I services.

This distinction between Title I and Title II services has significant implications, as services under Title I remain less regulated, which has encouraged innovation and rapid growth in Internet services but has also limited the FCC's authority to impose rules (such as net neutrality). Title II, by contrast, gives the FCC stronger regulatory powers, which is why network policy debates often focus on reclassifying broadband as a Title II service to enforce stricter oversight. The term "online" was invented by the computer processing industry to avoid the FCC regulation that might be incurred by using the term "data communications."

Computer Inquiry Proceedings (1970s–1980s)

In the 1960s, the FCC began examining how to regulate computer-related services in its Computer Inquiries, which laid the foundation for differentiating traditional telecommunications services from emerging computer-based services.

Computer Inquiry I aimed to establish a regulatory distinction between “pure” data processing services (considered competitive and not subject to FCC regulation) and regulated communication services. The FCC considered separating computer functions from traditional telephone network operations, allowing for the development of the data processing industry without heavy regulatory burdens while still overseeing communication services provided by carriers like AT&T. It marked the first attempt by the FCC to grapple with the emerging intersection of computers and telecommunications by defining which aspects would fall under their regulatory jurisdiction.

In Computer II (1980), the FCC created a significant legal distinction between “basic” telecommunications services (transmitting data without change in form) and “enhanced” computer services (services that involve processing, storage, and retrieval, such as email or database services). Basic services remained subject to common carrier regulations, while enhanced services were left largely unregulated.

The FCC refined these distinctions further in Computer III (1986) and established that “enhanced” or “information” services would not be regulated like traditional telecommunications. This deregulation fostered the growth of the computer services industry and allowed for innovation without strict regulatory oversight. When the Internet started to take off, this distinction allowed PC users to connect to an ISP over their phone lines with a modem for long periods of time without paying line charges.

The Telecommunications Act of 1996

The Clinton-Gore administration attempted the first major overhaul of communications law since 1934. The Telecommunications Act of 1996 was designed to address the emergence of new digital technologies and services, including the Internet. It sought to promote competition and reduce regulatory barriers, with the assumption that market competition would benefit consumers by reducing prices and increasing innovation.

The Telecom Act encouraged competition in local telephone service, long-distance service, and cable television, which had previously been dominated by monopolies. The aim was to foster competition among service providers. It also updated the universal service mandate to include access to advanced telecommunications services, which would later include broadband Internet access.

The 1996 Telecom Act distinguished between “information services” and “telecommunications services,” in line with the Communications Act of 1934’s distinction between Title I and Title II, but left the Internet’s regulatory status ambiguous.

A crucial provision in the 1996 Telecom Act is Section 230, which grants Internet publishing platforms immunity from liability for content posted by their users. This protection has allowed social media platforms like Facebook and X to flourish, but it has also raised debates over platform responsibility for harmful content.

The Internet world was shocked in 2002 when the FCC under Chairman Michael Powell ruled that cable modem service was an information service and not a telecommunications service. Cable companies such as Comcast became lightly regulated broadband providers and were exempted from the common-carrier regulation and network access requirements imposed on the incumbent local exchange carriers (ILECs).

Then, in 2005, ILECs like AT&T, BellSouth, Hawaiian Telecom, Qwest, and Verizon were granted Title I deregulated status. They were quickly able to take advantage of their new status to take over large market shares of the ISP business. After the FCC's 2005 decision, content providers and Internet access providers (IAPs) began negotiating over paid prioritization and fast lanes. The FCC attempted to implement net neutrality principles under Title I, but these principles were apparently unable to protect web users from IAPs that throttled traffic.

Internet Policy and the Net Neutrality Debates

Over time, as the Internet grew globally in importance, regulatory debates focused on how it should be governed, particularly regarding principles of common carriage and net neutrality. Net neutrality is the stance that ISPs should treat all data on their networks equally, without favoring or discriminating against certain content or services.

In 2015, the FCC under the Democrats passed regulations that classified broadband Internet as a Title II telecommunications service, subjecting it to common carrier obligations. This meant ISPs were required to adhere to net neutrality rules, treating all traffic equally. President Obama personally advocated for the change.

However, the Republican FCC repealed these rules in 2017, classifying broadband as an information service rather than a telecommunications service, thus removing common carrier obligations and weakening net neutrality protections. Chairman Pai compared regulating ISPs with regulating websites, a clear deviation from the regulatory layers set out in the Computer Inquiries. He stressed that net neutrality would restrict innovation.

In 2024, the FCC under the Democrats returned broadband services to Title II, bringing back net neutrality.

Conclusion

The historical frameworks from railroad, telegraph, and early telephone regulation have carried through into the digital era. The legal precedents established the key principles of common carriage, public interest regulation, universal access, competition, and deregulation. The idea that certain services, including telecommunications and potentially the Internet, should serve all users equally and fairly was codified in common carrier law. Legal precedent also solidified the principle that communications networks must serve the broader public interest, ensuring access to all, protecting consumers, and encouraging innovation.

The legal frameworks governing communications, from the regulation of railroads and telegraphs to the Communications Acts of 1934 and 1996, have laid the foundation for modern Internet broadband policy. The principles of common carrier status, public interest, universal service, and regulated competition have influenced the ongoing debates over how to govern the Internet and ensure equitable access in the digital age. These legal precedents continue to shape policies around net neutrality, ISP regulation, and the expansion of broadband access.

In Technologies of Freedom, Pool highlighted that while legal precedents can provide stability, they must be flexible enough to adapt to new technologies. He cautioned against applying old telecommunications regulations to emerging digital services, arguing that such rigidity could hinder innovation, infringe on freedom of expression, and prevent society from fully benefiting from technological advancements. His work influenced later debates on technological neutrality and regulatory flexibility, especially as the Internet and digital communications became more central to society.

Citation APA (7th Edition)

Pennings, A.J. (2024, Nov 3) Legal Precedents and Perturbations Shaping US Broadband Policy. apennings.com https://apennings.com/telecom-policy/legal-precedents-shaping-us-broadband-policy/


Notes

[1] Pool, I. (1984) Technologies of Freedom. Harvard University Press. Written at the University of Hawaii Law Library in the early 1980s.
[2] List of Previous Posts in this Series
Pennings, A.J. (2022, Jun 22). US Internet Policy, Part 6: Broadband Infrastructure and the Digital Divide. apennings.com https://apennings.com/telecom-policy/u-s-internet-policy-part-6-broadband-infrastructure-and-the-digital-divide/

Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-5-trump-title-i-and-the-end-of-net-neutrality/

Pennings, A.J. (2021, Mar 26). Internet Policy, Part 4: Obama and the Return of Net Neutrality, Temporarily. apennings.com https://apennings.com/telecom-policy/internet-policy-part-4-obama-and-the-return-of-net-neutrality/

Pennings, A.J. (2021, Feb 5). US Internet Policy, Part 3: The FCC and Consolidation of Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-3-the-fcc-and-consolidation-of-broadband/

Pennings, A.J. (2020, Mar 24). US Internet Policy, Part 2: The Shift to Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-2-the-shift-to-broadband/

Pennings, A.J. (2020, Mar 15). US Internet Policy, Part 1: The Rise of ISPs. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-1-the-rise-of-isps/


© ALL RIGHTS RESERVED (Not considered legal advice)



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and Research Professor at Stony Brook University. He teaches broadband policy and ICT for sustainable development. Previously, he taught at New York University, primarily digital economics and media management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

All Watched over by “Systems” of Loving Grace

Posted on | October 10, 2024 | No Comments

A few years ago, I started to address an interesting set of BBC documentary videos by Adam Curtis entitled All Watched Over by Machines of Loving Grace, based on the poem by Richard Brautigan. The series focuses on the technologies we have built and the type of meanings we have created around them, particularly as they relate to our conceptions of political governing, and the state of the world.

I started these essays with a post called "All Watched Over by Heroes of Loving Grace," addressing "Episode 1: Love and Power" and the "heroic" trend that emerged in the 1970s with the philosophies of Ayn Rand, Friedrich Hayek, and George Gilder. The influence of these philosophies has been acknowledged as contributing to the emergence of the "personal empowerment" movement of the 1980s and the Californian Ideology, a combination of counter-culture, cybernetics, and free-market "neo-liberal" economics. With Reaganomics, these movements became encapsulated in a worldwide phenomenon contributing to the export of US industrial jobs, the privatization of public assets, and the globalization of finance, information, and news.

In the second of the series, “The Use and Abuse of Vegetational Concepts,” Adam Curtis criticizes the elevation and circulation of natural metaphors and environmental ideas in political thinking. He recounts a history of systems thinking from the introduction of ecology to Buckminster Fuller’s Synergetics. He goes on to examine the concepts of “systems” and “the balance of nature” and their relationship to machine intelligence and networks. The documentary continues with networks and perhaps more importantly, the concept of “system” as both a tool and an ideology.[1]

In the 1950s, systems ideas were projected on to nature by scientists such as Jay Wright Forrester and Norbert Wiener. By introducing the idea of feedback and feedforward loops, Forrester framed people, technology, nature, and social dynamics in terms of interacting information flows. He was largely responsible for the investments in the North American early warning defense network in the 1950s called SAGE that created many computer companies like Burroughs and Honeywell and helped IBM transition from a tabulating machine company to a computer company. Wiener saw control and communications as central to a new technical philosophy called “cybernetics” that placed humans as nodes in a network of systems. His Cybernetics: Or Control and Communication in the Animal and the Machine (1948) was a benchmark book in the area.[2]

All three episodes can be seen at Top Documentary Films.

This second part of the series also looks at how the notion of "systems" emerged and how it conflated nature with machine intelligence. Its major concern is that in systems conceptions, humans become just one cog in a machine, one node in a network, one dataset in a universe of big data. Or, to preview Buckminster Fuller, just one astronaut on Spaceship Earth. Systems thinking fundamentally challenged the Enlightenment idea of the human being as separate from nature and master of its own destiny.

Adam Curtis starts with Buckminster Fuller (one of my personal favorites), who wrote the Operating Manual for Spaceship Earth in the late 1960s. Fuller viewed humans as “equal members of a global system” contained within “Spaceship Earth.” Inspired by the Apollo moonshot, with its contained bio-support system, Fuller stressed doing more with less. The video argues that the concept displaced the centrality of humanity and instead emphasized the importance of the “spaceship.”

Fuller is mostly known for his inventive design of the geodesic dome, a superstrong structure based on principles derived from studying forms in nature. His dome was used for radar installations in the North American Aerospace Defense Command (NORAD) defensive shield because of its strength in rough weather conditions. It is also used for homes and other unique buildings such as the Epcot Center at Walt Disney World in Florida. The dome is constructed from triangles and tetrahedrons that Fuller considered the most stable energy forms based on his science of “synergetics.”

This second episode also explores the origins of the term “ecology” as it emanated from the work of Sir Arthur George Tansley, an English botanist who pioneered the science of ecology. He coined one of ChatGPT’s favorite terms, “ecosystem,” in the 1930s, and was also one of the founders of the British Ecological Society and the editor of the Journal of Ecology. Tansley came up with his ideas after studying the work of Sigmund Freud, who conceived of the brain in terms of interconnecting but contentious electrical dynamics known roughly as the id, ego, and superego. From these neural psychodynamics, Tansley extrapolated a view of nature as an interlocking, almost mechanical governing system that could absorb shocks and tribulations and return to a steady state.

Later, Eugene Odum wrote a textbook on ecology with his brother, Howard Thomas Odum, then a graduate student at Yale. The Odum brothers’ book, Fundamentals of Ecology (1953), was the only textbook in the field for about ten years. The Odum brothers helped to shape the foundations of modern ecology by advancing systems thinking, emphasizing the importance of energy flows in ecosystems, and promoting the holistic study of nature. Curtis criticized their methodology, saying they took the metaphor of systems and used it to provide a simplistic version of reality. He called it “a machine-like fantasy of stability,” perpetuating the myth of the “balance of nature.”

The story moves on to Jay Forrester, an early innovator in computer systems and one of the designers of the Whirlwind computer, which led to the SAGE computers that connected the NORAD hemispheric defense radar network in the 1950s. SAGE jump-started the computer industry and helped the telephone and telegraph system prepare for the Internet by creating modems and other network innovations. Forrester also created the first random-access magnetic-core memory, organizing grids of magnetic cores so that digital information could be stored and retrieved.

Forrester had worked on “feedback control systems” at MIT during World War II, developing servomechanisms for the control of gun mounts and radar antennas. After Whirlwind, he wrote Industrial Dynamics (1961), about how systems thinking could be applied to the corporate environment, and Urban Dynamics (1969), about cities and urban decay.

In 1970, Forrester was invited to Berne, Switzerland, to attend a meeting of the newly formed Club of Rome. Having been promised a grant of $400,000 by the Volkswagen Foundation for a research project on the “problematique humaine” (the future of civilization), the group struggled to find a research methodology until Forrester suggested they come to MIT for a week-long seminar on the possibilities of systems theory. Shortly afterward, Forrester finished World Dynamics (1971) and contributed to Donella H. Meadows’ Limits to Growth (1972).

Limits to Growth presented Forrester’s computer-aided analysis of, and a set of possible responses to, the Earth’s environmental and social problems. It modeled the Earth mathematically as a closed system with numerous feedback loops. The MIT team, including Dennis and Donella Meadows, ran a wide variety of computer-based scenarios examining the interactions of five related factors: the consumption of nonrenewable resources, food production, industrial production, pollution, and population.

The model tested causal linkages, structural-behavioral relationships, and positive and negative feedback loops using different sets of assumptions. One key assumption was exponential growth. They tested different rates of change but stuck with the idea of capitalist development and compound growth: as the economy grows, its absolute rate of growth accelerates. As they tested their model, they didn’t really like what they saw, as it didn’t present an optimistic picture of the future.

A truism that emerged in the data processing era was “Garbage in, garbage out.” In Models of Doom: A Critique of the Limits to Growth (1973), one author wrote “Malthus in, Malthus out.” Thomas Robert Malthus was an early economist who predicted mass starvation because food production would not be able to keep up with population growth. His idea was that population would continue to grow exponentially while the food supply would grow only arithmetically. In the early 1970s, the world was going through a number of dramatic changes, and The Limits to Growth reflected that.
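
The Malthusian arithmetic behind this critique is easy to demonstrate. Below is a minimal sketch in Python with purely illustrative numbers (a 2% compound growth rate against a fixed annual increment); none of the values come from the Limits to Growth model itself. It simply shows that an exponentially growing series eventually overtakes any arithmetically growing one.

```python
# Illustrative only: exponential (compound) growth versus arithmetic (linear) growth.
# Starting values and rates are hypothetical, not taken from the World3/Limits to Growth model.

def first_shortfall(pop0=100.0, growth_rate=0.02, food0=150.0, food_increment=2.0, years=300):
    """Return the first year in which compounding population exceeds a linearly growing food supply."""
    population, food = pop0, food0
    for year in range(1, years + 1):
        population *= 1 + growth_rate   # exponential: grows by a fixed percentage
        food += food_increment          # arithmetic: grows by a fixed amount
        if population > food:
            return year, round(population, 1), round(food, 1)
    return None

print(first_shortfall())   # with these made-up numbers, the crossover arrives around year 44
```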

Summary and Conclusion

Adam Curtis’ documentary series, “All Watched Over by Machines of Loving Grace,” delves into the relationship between technology, political ideologies, and human agency. Inspired by Richard Brautigan’s poem, Curtis explores how technology shapes our governance systems and worldview. In “Love and Power,” Curtis examines the influence of thinkers like Werner Erhard (another of my favorites) and Ayn Rand (not so much) and their role in shaping the personal empowerment movement of the 1980s. The rise of the hero contributed to tax cuts and the rise of neoliberalism, globalization, and privatization under Reaganomics.

In “The Use and Abuse of Vegetational Concepts,” Curtis critiques the adoption of natural systems thinking in political and technological contexts, tracing the origins of ecological systems thinking back to the work of figures like Jay Forrester, Norbert Wiener, Buckminster Fuller, and the Odum brothers. These ideas, initially intended to describe natural “ecosystems,” were later applied to human societies and governance, often conflating nature with machine intelligence.

“Systems” is more of an engineering concept than a scientific one, meaning that it is useful for connecting rather than dissecting. However, one is left thinking: What is democracy if not a system of human nodes providing feedback in a dynamic mechanism?

This leads us to Curtis’ anti-Americanism and Tory perspective. American democracy was designed with checks and balances reflecting the Founding Fathers’ belief that human nature could lead to abuse of power. By distributing authority across different branches and levels of government, the US Constitution creates a system where power is constantly monitored and balanced, fostering accountability and preventing any single entity from becoming too powerful. This system is intended to protect democratic governance, individual rights, and the rule of law.

The documentary raises questions about the consequences of seeing human and natural systems as mechanistic, potentially leading to a distorted understanding of complex, dynamic realities. Curtis raises concerns about how these systems-based frameworks reduce humans to mere nodes in networks, challenging the Enlightenment view of humanity as autonomous and separate from nature.

Citation APA (7th Edition)

Pennings, A.J. (2024, Oct 10). All Watched over by Systems of Loving Grace. apennings.com https://apennings.com/how-it-came-to-rule-the-world/all-watched-over-by-systems-of-loving-grace/

Notes

[1] This post addresses Episode 2 (see video) of the series All Watched Over by Machines of Loving Grace, “The Use and Abuse of Vegetational Concepts.” A transcript of the Brautigan poem is available on Chris Hunt’s blog.
[2] Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. Paris; Cambridge, MA: MIT Press. ISBN 978-0-262-73009-9; 2nd revised ed., 1961.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor at Stony Brook University. He teaches broadband policy and ICT for sustainable development. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

US Legislative and Regulatory Restrictions on Deficit Spending and Modern Monetary Theory (MMT)

Posted on | September 29, 2024 | No Comments

I’m a cautious MMTer. But, in the age of climate chaos and boomer retirement, I think it’s particularly relevant. Modern Monetary Theory (MMT) is a framework for understanding government spending that suggests opportunities for using national spending to enhance employment and other desirable social goals. For instance, MMT can be used to fund sustainable initiatives such as renewable energy projects or resilient infrastructure in the wake of disaster. Stony Brook University professor Stephanie Kelton’s The Deficit Myth: Modern Monetary Theory and the Birth of the People’s Economy (2020) challenged many of the myths and truisms we associate with federal deficits, such as the often-cited claims that nation-state economics are equivalent to household economics and that national debt is just like the debt of individuals.

In this post, I want to outline the promise of MMT for the US political economy while digging into several legislative and regulatory problems associated with enacting MMT policies long-term. I critique the MMT approach in the US because the movement has yet to adequately dissect what hurdles and limits keep the government from embracing MMT spending strategies. The major obstacles appear to be a series of legislative actions restricting deficit spending without corresponding borrowing through Treasury auctions.

The Case for MMT

Kelton drew on the observations of Warren Mosler, a former government bond trader who challenged the dominant understandings of government spending in his 1994 paper “Soft Currency Economics.” Drawing on his first-hand experience in financial markets, he challenged traditional views of government spending, taxation, and monetary policy in the US and in other sovereign countries that issue their own fiat currencies. As Mosler writes on page 3, “Fiat money is a tax credit not backed by any tangible asset.”

Mosler’s core ideas emphasize that governments that issue their own currency operate under different rules than households or businesses. He argued that such governments (like the US with the dollar) cannot “run out of money” in the same way a household or business can. Their constraints are not financial but real: the availability of labor, materials, and productive capacity. Also, government spending does not depend on revenue from taxes or borrowing. Instead, the government spends money by crediting business and citizen bank accounts, and this process creates money.[1]

Government deficits are normal and not inherently harmful; in fact, they add net financial assets (money) to the private sector. Borrowing is a way to manage interest rates, not a necessity for funding the national government. Mosler acknowledges that government spending can be problematic if it causes inflation, which can happen when spending outstrips the economy’s capacity to produce goods and services.
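
This claim is usually grounded in the sectoral-balances identity from standard national income accounting, which MMT writers frequently invoke. Setting the two familiar expressions for GDP equal to each other and rearranging gives (a sketch of the standard derivation, not anything specific to Mosler’s paper):

```latex
% Y = C + I + G + (X - M)   (expenditure view of GDP)
% Y = C + S + T             (income view: consumption, saving, taxes)
% Equating the two and rearranging:
(S - I) = (G - T) + (X - M)
```

In words: the private sector’s net saving of financial assets equals the government deficit plus the net export balance. Setting the foreign sector aside, the private sector as a whole can only accumulate net financial assets to the extent that the government runs a deficit.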

Government borrowing through issuing bonds can provide a safe asset for investors and collateral for additional borrowing or obtaining cash liquidity. US Treasuries are a major source of stable collateral around the world, allowing corporations and countries to acquire the dollars they need for international transactions. Eurodollar lending requires high-quality collateral like US Treasuries to keep borrowing costs low.

Taxes create demand for the government’s currency, but they also remove money from economic circulation. Following Mosler and Kelton, MMTers argue that since people must pay taxes in the national denomination, taxation creates demand for the official currency. You need US dollars to pay US taxes. At the macroeconomic level, taxes remove money from circulation and can impede economic activity. Consequently, they provide a major tool to manage inflation. Taxes can also reduce demand for targeted goods and services. On the other hand, tax cuts can stimulate economic activity, such as the Inflation Reduction Act’s (IRA) credits for renewable energies. In an era of international capital mobility, however, tax cuts can create surpluses that are moved offshore.

Deficits are useful for the non-government sector (households and businesses) to accumulate savings. They inject money into the economy that helps promote growth, especially when there is underutilized capacity. For example, capable and healthy people out of work are a waste of productive resources, and government policies should aim to achieve full employment, including Job Guarantee programs that would provide a stable wage floor, reduce poverty, and help maintain price stability.

Following Mosler, Kelton argued that governments like the US legislate money into existence to introduce currency for the economy to thrive and to fund the activities of the government. Governments need to spend first to generate an official economy; spending comes first, and taxing and borrowing come later, after liquidity has been added to the economy. Furthermore, national administrations that issue money do not really need to tax populations or borrow money to run the nation’s political economy; they do so mainly when they need to cool the economy or disincentivize certain activities.

MMT and COVID-19 Pandemic-Related Spending

But most of this argument runs against major narratives used by traditional economists and politicians, who often use US federal spending numbers to panic the public. Their fears were reinforced by what might be called a quasi-MMT experiment during the COVID-19 pandemic. Some $8 trillion in spending under the Trump and Biden administrations helped save the US economy, and much of the world economy, from a significant recession, but it added substantially to US deficit numbers. The accumulated US debt, as measured by the issuance of government financial instruments, reached $35 trillion in late 2024 and continued to raise concerns.

This level of debt in the context of MMT raises several questions:

1) What are the limits, if any, to US government spending?
2) Can US spending be severed from current debt accounting metrics?
3) What are the optimum spending choices for MMT?

As popularly construed, the spreadsheet-tallied government debt and annual deficit numbers are angst-producing. The voting public is repeatedly exposed to the narrative that the debt is unsustainable and a sign of inevitable US decline. They are constantly reminded that the national debt is like household debt, to be discussed over the kitchen table (or Zoom), with inevitable sacrifices to be made from the family budget. Bitcoiners and other “hard money” enthusiasts echo these myths, looking to cash in on panic trades into cryptocurrency or gold.

But Kelton and other MMTers argue that a national money issuer is a different animal, so to speak, with different objectives and responsibilities. They see that it can allow politicians to become more proactive in addressing social concerns. Climate resilience, healthcare, housing construction, income equality, and tax holidays can be deliberately addressed with less political noise and pushback. MMT can also support education and good jobs, particularly relevant in an age of expanding AI.[2]

The problem worth examining is that the US government has tied debt to borrowing over the years through several legal and legislative entanglements. This intertwining means that deficit expenditures are constrained by requirements to borrow through prescribed Treasury auctions, with the proceeds deposited in specific accounts at the Federal Reserve. The resulting accounting tallies raise concerns, justifiably or not, about the dangers of the national government borrowing too much.

The US Spending Apparatus

Spending money is what governments do. They purchase goods and services by “printing” money that enters the economy, hopefully, as an acceptable currency, facilitating trade and savings. For example, the “greenback” emerged as the established US currency during the Civil War when Lincoln worked with Congress to pass the Legal Tender Act of 1862, which authorized the printing of $150 million in paper currency with green ink to finance the Union war effort and stabilize the economy.

Additional legislation authorized further issuances of greenbacks, bringing the total to about $450 million by the war’s end. The Funding Act of 1866 ordered the Treasury to retire them, but Congress rescinded the order after complaints from farmers looking for currency to pay off debts. As a result, the US “greenback” dollar stayed in circulation with a “Gold Room” set up on Wall Street to reconcile the price relationship between greenbacks and gold.

The US Constitution vested Congress with the power to create money and regulate its value. In 1789, George Washington was elected the first president and soon created three government departments: State (led by Thomas Jefferson), War (led by Henry Knox), and Treasury (led by Alexander Hamilton). The US Treasury was established in 1789 by the First Congress of the United States, meeting in New York. The institution has played a vital role in US monetary policy ever since, primarily through the use of the Treasury General Account (TGA) to fund government operations and pay off maturing debt.[3]

The TGA is now held at the Federal Reserve (Fed), which became the government’s banker with the Federal Reserve Act of 1913. This Act established the Fed as a major manager of the country’s monetary policy. Over time, however, legislation tied spending from this account to the deposit of tax revenues, tariffs, and the proceeds from issuing debt instruments.

So the TGA is the main account at the Fed that the Treasury uses to deposit its receipts and to make the government’s disbursements for resources, services rendered, and social programs. Congress appropriates the money to spend through legislation, and the Treasury instructs the Fed to credit the appropriate accounts. The TGA handles daily financial operations, defense spending, and government obligations like Social Security.

The way the system actually works is that when the Treasury makes a payment, the Federal Reserve credits the recipient’s bank account but also has to debit the TGA. When the Federal Reserve credits the bank account of a vendor or transfer-payment recipient with digital dollars, it effectively increases the money supply. To balance this increase, bankers and monetary authorities have required that an equivalent amount be debited from another account holding dollars garnered through taxes or borrowing, which reduces the money supply. The TGA serves this dual purpose. Before 1981, the Treasury could often overdraw its account at the Federal Reserve; the elimination of this privilege required the Treasury to ensure that sufficient funds from authorized sources were available in the TGA before making payments.[4]
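
The flow described above can be sketched as a toy double-entry ledger. This is illustrative only (the account names and balances are hypothetical, and real settlement involves many more accounts and intermediaries), but it captures the basic constraint: Treasury payments credit bank reserves and debit the TGA, while taxes and auction proceeds refill the TGA by draining reserves.

```python
# Toy sketch of Treasury General Account (TGA) flows; account names and balances are hypothetical.
ledger = {
    "TGA": 500.0,             # Treasury's account at the Fed
    "bank_reserves": 1000.0,  # commercial banks' reserve accounts at the Fed (aggregated)
}

def treasury_payment(amount):
    """Treasury pays a vendor or benefit recipient: the recipient's bank is credited, the TGA is debited."""
    ledger["bank_reserves"] += amount
    ledger["TGA"] -= amount

def tax_or_auction_settlement(amount):
    """Tax receipts or auction proceeds flow the other way: the TGA is credited, reserves are debited."""
    ledger["bank_reserves"] -= amount
    ledger["TGA"] += amount

treasury_payment(200.0)            # spending adds reserves to the banking system
tax_or_auction_settlement(150.0)   # taxing/borrowing drains reserves back into the TGA
print(ledger)                      # {'TGA': 450.0, 'bank_reserves': 1050.0}
```

Since 1981, the operative rule is that payments cannot drive the TGA negative, which is why every round of spending has to be matched, sooner or later, by tax receipts or auction settlements even though the credits and debits are all just accounting entries at the Fed.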

No Deficit Spending without Borrowing

The requirement that the US Treasury issue securities when spending beyond its revenues is rooted in a combination of statutory authorities and legislative acts, primarily the Second Liberty Bond Act of 1917 and its amendments. Passed to finance World War I, it established the legal framework for issuing various government securities, including Treasury bonds, notes, and bills, as well as the famous Liberty Loan Bonds. It has been continually amended to allow for different forms of securities and to adjust borrowing limits.

The Federal Reserve Act also plays a crucial role in determining the government’s borrowing protocols and the issuance of securities. Specifically, Section 14 outlined the powers of the Federal Reserve Banks, including the purchase and sale of government securities. Notably, it prohibited the direct purchase of securities from the Treasury; the Federal Reserve could only buy government securities on the open market. This restriction kept the Treasury from directly monetizing the debt and set up practices that would later be used in monetary policy by the Fed’s Federal Open Market Committee (FOMC) and its computerized trading operations at the New York Fed in Manhattan.

The Treasury-Federal Reserve Accord of 1951 reinforced the separation of the Fed and the Treasury and the need for the Treasury to issue securities to finance deficits. This Accord established a fundamental principle that continues to influence policy: the separation of monetary policy (managed by the Federal Reserve) from fiscal policy (managed by the Treasury). The agreement ended the practice of the Federal Reserve directly purchasing Treasury securities to help finance government deficits, preventing the direct monetization of debt and curbing inflationary pressures. It reinforced the need for the Treasury to auction securities to finance deficits rather than relying on direct borrowing from the central bank.

President Nixon’s August 1971 decision to take the US out of Bretton Woods dollar-gold convertibility set off several economic crises, including a significant dollar devaluation, related hikes in oil prices, and a global debt crisis. Instead of returning gold to emerging manufacturing giants such as Japan and West Germany in exchange for goods such as Sony radios and BMWs, Nixon chose to sever the US dollar from its gold backing, making it a fiat currency. The immediate ramifications were “stagflation,” a combination of stagnation and inflation, and the rise of the petrodollar: OPEC dollars held in banks outside the US and lent out worldwide as eurodollars. Rising prices became a high priority during the Ford and Carter administrations in the late 1970s, and they looked to the Federal Reserve to tackle the problem.

Carter’s pick for Fed chairman, Paul Volcker, re-committed the central bank to not monetizing the debt, meaning it would not finance government deficits by increasing the money supply. This reinforced the need for the Treasury to rely on market-based financing through bond auctions and led to the Fed’s transition from managing the money supply to managing interest rates. It would do this by buying and selling government securities through its open market operations (OMO) at the New York Fed to produce the “Fed Funds Rate,” a benchmark interest rate that the banking industry uses to price loans for cars, mortgages, and, later, credit cards.

Enter Reaganomics

During Ronald Reagan’s presidency, significant changes were instituted in the financial sphere, largely influenced by his Secretary of the Treasury, Donald Regan, the former Chairman and CEO of Merrill Lynch. They saw the need to create a new economic paradigm, a new “operating system” for the global economy that still relied on the US dollar, but one not tied to gold. Several key measures were implemented in the early 1980s and advertised as strategies to reduce the federal deficit, control spending, and increase the dollar’s strength globally.

However, the US saw a significant increase in the national debt over the next few years, mainly due to tax cuts, increased military spending, and related policy decisions. Instead of reducing spending, Reagan and Regan’s policies resulted in increased national debt and the export of US capital to China and other offshore manufacturing centers. Although they championed “supply-side” economics, promising that tax cuts and spending reforms would reduce deficits, the goal of debt reduction was never achieved; instead, a historic trend of increasing government deficits began (except in the later Clinton years, as described below).

Supply-side economics, often termed “trickle-down” economics, was meant to increase capital investment in the US. But it became “trickle-out” economics as new circuits of news and telecommunications facilitated a freer flow of information and capital to other countries. This left much of the US labor force in a tight situation while rewarding investors who owned productive facilities in other, lower-cost countries. The Economic Recovery Tax Act of 1981 lowered the top marginal tax bracket from 70% to 50%. A second tax cut, in the Tax Reform Act of 1986, cut the highest personal income tax rate from 50% to 38.5%, and it decreased again to 28% in the following years.

Consequently, the Reagan administration grew government deficits and worked to ensure they were financed through the issuance of Treasury securities sold in computerized open market auctions. This marked a significant modernization of government financing, with the Treasury shifting to more competitive and transparent auction processes. Initially, the Treasury used multiple-price auctions, where winning bidders paid the prices they bid. Later, it experimented with uniform-price auctions, where all winning bidders paid the same price. These auctions became the primary method for raising funds to cover deficits.
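
The difference between the two formats can be shown with a small sketch. The bids below are hypothetical and the prices are per $100 of face value; the point is only that a multiple-price (discriminatory) auction charges each winner its own bid, while a uniform-price auction charges every winner the lowest accepted (stop-out) price.

```python
# Hypothetical Treasury auction: $100 million on offer, bids as (bidder, price per $100, quantity in $ millions).
bids = [("A", 99.60, 40), ("B", 99.55, 30), ("C", 99.50, 50), ("D", 99.45, 60)]
offer = 100

def award(bids, offer):
    """Accept the highest-priced bids until the offering amount is filled."""
    accepted, remaining = [], offer
    for bidder, price, qty in sorted(bids, key=lambda b: b[1], reverse=True):
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((bidder, price, take))
        remaining -= take
    return accepted

accepted = award(bids, offer)
stop_out = min(price for _, price, _ in accepted)                      # 99.50: the lowest accepted price

multiple_price = sum(price / 100 * qty for _, price, qty in accepted)  # each winner pays its own bid
uniform_price = sum(stop_out / 100 * qty for _, _, qty in accepted)    # everyone pays the stop-out price

print(accepted)                       # A filled for 40, B for 30, C partially filled for 30
print(multiple_price, uniform_price)  # 99.555 vs. 99.5 ($ millions raised)
```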

Crucially, the Reagan administration further institutionalized the connection between government spending and the Treasury General Account through Title 31 of the US Code, which governs American money and finance. The US Code is the comprehensive compilation of the general and permanent federal statutes of the United States, comprising 53 titles. The Office of the Law Revision Counsel of the House of Representatives publishes it every six years. Title 31 was codified on September 13, 1982, and reinforced the need to issue bonds to finance deficits in order to avoid the inflationary pressures that could arise from printing money (debt monetization) to cover shortfalls.[4]

Under 31 U.S.C. § 3121, the Secretary of the Treasury was authorized to issue securities to finance the public debt. Title 31 specifically mandates that the US government raise funds through Treasury auctions, subjecting the government’s borrowing to market scrutiny and investor expectations. By detailing the types of securities that can be issued and the procedures for their issuance, Title 31 prevents the direct monetization (printing) of debt.[5]

Finally, Public Debt Transactions (31 U.S.C. § 3121) authorizes the Secretary of the Treasury to issue securities to finance the public debt. It emphasizes that the Treasury must manage the public debt responsibly to ensure that the government can meet its financial obligations. The code specifies that the Treasury should issue securities in a manner that attracts investors and maintains confidence in the US government’s ability to manage its debt. This meant conducting auctions and selling securities to private investors. It details the types of securities that can be issued, including Treasury bonds, notes, and bills, as well as the terms, conditions, and procedures for the issuance.

This GOP regulatory framework stressed the importance of managing the federal debt in a way that reinforces market confidence and the perception of responsible fiscal policy. The separation of fiscal and monetary policy ensured political control over spending while maintaining the monetary independence of the Fed. The separation also maintains the narrative that financing deficits through borrowing from the private sector helps sustain confidence in the role of US treasuries in the global financial system.

Clinton-Gore and the Surplus Problem

While the Reagan-Bush administrations sought to increase spending, especially deficit spending that produced financial instruments, the Clinton-Gore administration sought to bring down spending and balance the budget. Thanks to a booming “dot.com” economy and the tax increases laid out in the Omnibus Budget Reconciliation Act of 1993 (OBRA 1993), they were quite successful. The administration worked with the Republican Congress to reduce spending on welfare through the Personal Responsibility and Work Opportunity Reconciliation Act and reduced federal spending across the budget, including military reductions, the so-called “peace dividend.” Some disagree over accounting techniques, but, conservatively, surpluses of $69.2 billion in fiscal 1998, $76.9 billion in fiscal 1999, and $46 billion in fiscal 2000 were achieved.

Did the Clinton surpluses contribute to the “dot.com” and telecom crashes of 2000 and 2002? Probably not, but they did occur as cheap capital became less available. The Internet boom contributed substantially to the revenues that produced the surpluses, but the economy was in decline when George W. Bush took office in 2001. The Bush-Cheney administration worked quickly to restore the deficits. Tax cuts, Medicare drug spending, and the invasions of Afghanistan and Iraq quickly erased budget surpluses that had been projected to continue for several more years.

In June 2001, the Economic Growth and Tax Relief Reconciliation Act was signed into law, notably targeting the estate tax, which critics branded the “death tax.” The Act lowered taxes across the board, including reducing the top rate for households making over $250,000 annually from 39.6 percent to 35 percent. It also dramatically increased the estate tax exclusion, from $675,000 in 2001 up to $5.25 million for those who died during 2013.

In December 2003, President Bush signed the Medicare Prescription Drug, Improvement, and Modernization Act (MMA), a major overhaul of the 38-year-old national healthcare program. The law would add another half a trillion dollars to the decade’s deficits for the subsidization of prescription drugs, in part because a major objective of the legislation was to weaken the government’s bargaining position in drug and healthcare negotiations. Medicaid and Medicare were significantly handcuffed in their ability to drive down costs by this legislation, adding billions more to the government debt.

The tragedies of 9/11 sent the US into a collective shock and led to two major wars in Afghanistan and Iraq. Costs would total over $1.4 trillion by 2013, with additional billions spent on homeland security and the global search for Osama bin Laden and other members of Al-Qaeda. The permanent war against communism was now replaced by a permanent war against Islamic fundamentalism. A 2022 Brown University report estimated that 20 years of post-9/11 wars cost the US some $8 trillion.

Conclusion

MMT argues that the economy starts with government spending that puts currency into circulation and provisions the federal government. Debt is not necessarily a bad thing, because debt instruments become another entity’s assets and add to private savings. They also provide a valuable source of collateral. Treasury bills are just a different form of money, one that pays interest. In that regard, spending and borrowing are not just stimulative but foundational.

Still, debt makes the populace nervous. Historical evidence from countries that have experienced severe debt crises or even hyperinflation after excessive money creation fuels skepticism about MMT’s long-term viability. COVID-era spending by both the Trump and Biden administrations addressed important economic and social issues associated with the pandemic’s uneven recovery. Still, it was part of a “perfect storm” in which stimulative spending, combined with supply shocks and corporate pricing greed, was seen to push inflation to over 8 percent by mid-2022.[6]

MMT’s limits are unclear. Recent studies of post-COVID inflation in the US point predominantly to supply disruptions despite record deficit spending by both the Trump and Biden administrations. Tying deficit spending to borrowing statistics unnerves financial markets that are averse to inflation, as well as a general populace conditioned by the narrative that debt is bad. We should review the legislation on deficit spending while democratically developing a vision of where domestic and international spending (for the US) can achieve the most good. As mentioned at the beginning, addressing climate change and chaos is a worthy start.

Citation APA (7th Edition)

Pennings, A.J. (2024, Sep 29). US Legislative and Regulatory Restrictions on Modern Monetary Theory (MMT). apennings.com https://apennings.com/how-it-came-to-rule-the-world/digital-monetarism/us-legislative-and-regulatory-restrictions-on-modern-monetary-theory-mmt/

Notes

[1] Most money is still created by banks in the process of issuing debt, both domestically and internationally as eurodollars. Mosler’s contention that the government can spend without taxing or borrowing is the major focus of this post and why MMT remains a “theory.” It can, but the process is tangled up in legislation and codification. Yes, the government spends money by crediting bank accounts, but it is forced to debit the Treasury’s account at the Federal Reserve. Read a PDF of Warren B. Mosler’s seminal Soft Currency Economics.
[2] Kelton followed up on financier Warren Mosler’s 1990s explanation with her book, The Deficit Myth: Modern Monetary Theory and the Birth of the People’s Economy (2020).
[3] When Congress appropriates funds for government activities and programs through the federal budget process, they are disbursed from the Treasury General Account (TGA) to pay for government operations and maturing debt. Tax revenues are deposited into this account as are government securities, such as bonds, notes, and bills. Fees and user charges, collected from various activities like passport issuance and spectrum licensing, are deposited into the TGA as are proceeds from the sale of government assets, royalties from natural resource use, and investment income. Emergency funds are allocated and held in the TGA during crises or emergencies to facilitate rapid responses, such as the recent 2024 hurricanes. Trust funds, such as the Social Security Trust Fund and the Highway Trust Fund, are also managed within the TGA but are held separately from the general operating budget. See Investopedia for more details.
[4] Title 31 was codified September 13, 1982, as “Money and Finance,” Pub. L. 97–258, 96 Stat.
[5] Code of Federal Regulations (CFR), Title 31 – Money and Finance: Treasury.
[6] Brooks, R., Orszag, P. R., & Murdock III, R. (2024, Aug 15). COVID-19 Inflation was a Supply Shock. Brookings. https://www.brookings.edu/articles/covid-19-inflation-was-a-supply-shock/
Note: Chat GPT was used for parts of this post. Several prompts were used and parsed.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, teaching financial economics and ICT for sustainable development. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics and information systems management. When not in Korea, he lives in Austin, Texas, where he has also taught in the Digital Media MBA at St. Edwards University.

Battery Energy Storage Systems (BESS) and a Sustainable Future

Posted on | September 16, 2024 | No Comments

Battery Energy Storage Systems (BESS) are transforming renewable energy by addressing key challenges associated with intermittent power generation from sources like solar and wind. Sometimes grouped with long-duration energy storage (LDES), these systems play a critical role in supporting the transition to a low-carbon energy future by enabling the expansion of renewable energy, electrifying transportation, and providing energy resilience across many sectors. Currently, the leading companies include CATL, BYD, Hitachi, LG Chem, Panasonic, Samsung, Sumitomo, and Tesla. BESS companies are helping to integrate more renewable energy into power grids and making these sources more viable as part of the energy mix.

While lithium-ion batteries remain the cornerstone of the renewable energy storage revolution, other battery types like solid-state, lithium iron phosphate (LFP, a lithium-ion variant), and especially sodium-ion batteries are emerging as important alternatives, particularly for grid storage. These technologies are helping to address the key challenges of cost, scalability, and safety in battery storage. Innovations in these areas will make the transition to a cleaner, sustainable energy future much more feasible.

BESS technologies are most needed in applications that require energy storage to stabilize electric grids, integrate renewables, manage peak demand, and ensure reliable backup power. Solar and wind are experiencing a remarkable price decline. The sun continuously delivers about 173,000 terawatts (173 petawatts) of power to the Earth, more than 10,000 times the rate at which humans currently use energy worldwide. So it makes more and more sense to capture that energy as it arrives rather than burning the natural containers (cellulose, coal, oil) stored historically by the Earth. This is to say, the potential for solar and wind energy is immense, but it will require BESS to make it work. Here are the key ways BESS are changing renewable energy:

BESS can be used to stabilize intermittent energy supplies. Renewable energy sources like solar and wind are intermittent, producing power only when the sun shines or the wind blows. BESS store excess energy generated during peak production times (e.g., midday for solar) and release it during periods when generation is low (e.g., nighttime for solar or calm days for wind). This capability helps stabilize the energy supply, ensuring that renewable energy is available even when natural conditions are not ideal. Furthermore, it reduces reliance on fossil-fuel backup systems.
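
A minimal dispatch sketch, with made-up hourly figures, illustrates the logic: charge the battery whenever solar output exceeds demand and discharge it when output falls short, so that less of the residual demand has to be met by fossil-fuel backup.

```python
# Toy hourly dispatch: store solar surpluses, discharge to cover deficits.
# All figures (MWh per hour) are hypothetical and ignore charging losses.
solar  = [0, 0, 2, 6, 9, 10, 9, 6, 2, 0, 0, 0]
demand = [3, 3, 3, 4, 5,  5, 5, 5, 5, 6, 6, 4]

capacity, soc = 12.0, 0.0   # battery capacity and state of charge (MWh)
backup_needed = 0.0         # energy that still has to come from fossil-fuel backup

for gen, load in zip(solar, demand):
    if gen >= load:
        soc = min(capacity, soc + (gen - load))        # charge with the surplus
    else:
        discharged = min(soc, load - gen)              # cover the shortfall from the battery
        soc -= discharged
        backup_needed += (load - gen) - discharged     # whatever the battery cannot cover

print(backup_needed)   # 14.0 MWh with the battery, versus 26.0 MWh of shortfall without it
```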

BESS provide grid flexibility by offering frequency regulation, voltage support, and load balancing.[1] In traditional grids, fossil fuel plants are often used to manage fluctuations in energy demand or sudden drops in supply. Still, BESS technologies can respond much more quickly to these fluctuations. This responsiveness improves the overall reliability of the power grid, allowing for a smoother integration of renewables without causing instability or blackouts.

One of the significant benefits of BESS is peak shaving, which helps reduce the demand on the grid during high-demand periods. BESS can discharge stored renewable energy when demand is at its highest, which reduces the need for additional power plants or peaker plants (often fossil-fuel-based) to come online. This capability reduces costs and emissions while making renewable energy more competitive against traditional power generation sources.
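
Peak shaving follows a slightly different rule than simple surplus storage: cap the power drawn from the grid at a threshold and let the battery supply everything above it, recharging during off-peak hours. A minimal sketch with hypothetical numbers:

```python
# Toy peak-shaving rule: the battery covers any demand above the grid-draw cap.
# Hourly demand (MW), the cap, and the battery size are all hypothetical.
demand = [40, 45, 60, 80, 95, 100, 90, 70, 50]
cap = 70.0                  # target maximum grid draw (MW)
capacity, soc = 90.0, 90.0  # battery capacity and state of charge (MWh; 1-hour steps)

peak_energy_shaved = 0.0
for load in demand:
    if load > cap:
        discharge = min(soc, load - cap)          # shave the portion above the cap
        soc -= discharge
        peak_energy_shaved += discharge
    else:
        soc = min(capacity, soc + (cap - load))   # recharge off-peak without exceeding the cap

print(peak_energy_shaved)   # 85.0 MWh of peak demand kept off the grid (and off peaker plants)
```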

BESS are enabling greater energy independence by supporting microgrids and off-grid systems. Homes, businesses, and communities with BESS and local renewable energy sources like solar panels and wind turbines can reduce their reliance on central power grids. In rural or remote areas, BESS allow renewable energy to be stored in isolated systems, ensuring a consistent power supply without the need for extensive transmission infrastructure.

The cost of BESS, particularly lithium-ion and lithium iron phosphate batteries, has fallen dramatically in recent years, making large-scale energy storage systems more affordable. As the technology improves, storage and discharge (round-trip) efficiency has also increased, meaning less energy is lost in the process. As manufacturing facilities are optimized, costs decrease. The average price of a 20-foot DC BESS container in the US was US$180/kWh in 2023 but is expected to fall to US$148/kWh this year, according to Energy-Storage. These cost reductions make it economically viable to store renewable energy for longer durations, contributing to the overall decline in the cost of renewable energy compared to traditional fossil fuel sources.
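
To put those container prices in concrete terms, here is a back-of-the-envelope calculation. The container capacity used below (roughly 3,500 kWh for a 20-foot DC unit) is an assumed, illustrative figure, not one from the cited source.

```python
# Rough BESS container cost using the $/kWh figures cited above; the capacity is an assumption.
capacity_kwh = 3_500
price_2023, price_2024 = 180, 148       # US$/kWh

cost_2023 = capacity_kwh * price_2023   # $630,000
cost_2024 = capacity_kwh * price_2024   # $518,000
decline = (price_2023 - price_2024) / price_2023

print(cost_2023, cost_2024, f"{decline:.0%}")   # 630000 518000 18%
```

At that pace, each new container of this assumed size costs roughly $112,000 less than it would have a year earlier, the kind of decline that helps longer-duration storage projects pencil out.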

BESS play a critical role in realizing the vision of 100% sustainable energy systems. With sufficient energy storage, renewable sources can provide power 24/7, eliminating the need for fossil fuels as a backup in many locations. Long-duration energy storage solutions, which can store power for days or weeks, are emerging and could further boost the reliability of renewables, allowing them to complement or replace baseload energy generation from coal, gas, or nuclear plants.

By allowing renewable energy to be stored and used when needed, BESS help reduce reliance on the fossil fuel-based power plants that typically provide backup energy during periods of high demand or low renewable generation. This reduces greenhouse gas (GHG) emissions and supports the transition to a low-carbon economy. As more renewables are integrated into the grid, BESS can mitigate the need for fossil fuels, directly contributing to climate change mitigation efforts.

BESS are also driving change in renewable energy through their connection to other important sectors such as transportation, industry, and buildings. The electric vehicle (EV) charging infrastructure is often criticized for using electricity produced by burning fossil fuels. As EV adoption grows, battery energy storage systems can help manage the increased load on the grid by storing excess renewable energy and using it to charge vehicles during off-peak hours.[2] This integration, known as sector coupling, links the electricity grid with other sectors, such as buildings, industry, transportation, and heating and cooling, to optimize the use of resources. The goal is to reduce the use of fossil fuels by increasing the use of renewable energy across different areas of the economy, making the world and its oceans cleaner and safer.

Conclusion

BESS are revolutionizing the renewable energy landscape by addressing key issues like intermittency, grid stability, and peak demand management. They allow for greater integration of renewables into power grids, provide energy independence, and enable a more reliable and flexible energy system. As costs continue to decline and the technology improves, BESS are expected to play an increasingly crucial role in the global transition toward a sustainable, low-carbon energy future.

Notes

[1] Pennings, A.J. (2023, Sept 29). ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All. apennings.com https://apennings.com/science-and-technology-studies/icts-for-sdg-7-twelve-ways-digital-technologies-can-support-energy-access-for-all/
[2] Pennings, A.J. (2022, Apr 22). Wireless Charging Infrastructure for EVs: Snack and Sell? apennings.com https://apennings.com/mobile-technologies/wireless-charging-infrastructure-for-evs-snack-and-sell/
Note: Chat GPT was used for parts of this post. Multiple prompts were used and parsed.

Citation APA (7th Edition)

Pennings, A.J. (2024, Sep 16). Battery Energy Storage Systems (BESS) and a Sustainable Future. apennings.com https://apennings.com/smart-new-deal/battery-energy-storage-systems-bess-and-a-sustainable-future/

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a professor at the Department of Technology and Society, State University of New York, Korea, teaching broadband policy and ICT for sustainable development. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics and information systems management. He lives in Austin, Texas, when not in the Republic of Korea.

  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr


  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.