Legal Precedents and Perturbations Shaping US Broadband Policy
Posted on November 3, 2024
When I was in graduate school my thesis advisor recommended that I read Ithiel de Sola Pool’s Technologies of Freedom (1984). I soon used it to teach my first university-level course in Communication Policy and Planning at the University of Hawaii. One of the major lessons I learned from Pool’s book was the importance of legal precedent in communication policy, and particularly telecommunications policy. It was an important lesson that I applied to my MS thesis Deregulation and the Telecommunications Structure of Transnationally Integrated Financial Industries (1986).[1]
While Pool highlighted the importance of legal precedents to provide stability, he cautioned against applying old telecommunications regulations to emerging digital services, arguing that such rigidity could hinder innovation, infringe on freedom of expression, and prevent society from fully benefiting from technological advancements. Regulations must be flexible enough to adapt to new technologies. His work influenced later debates on technological neutrality and regulatory flexibility, especially as the Internet and digital communications became more central to society.
This post will examine the legal precedents and perturbations for Internet broadband policy, going back to railroad law, telegraph law, and the Communications Act of 1934. The FCC’s Computer Inquiries and the Telecommunications Act of 1996 are particularly relevant, and the distinction between Title I and Title II remains paramount in broadband policy.
The legal precedents for Internet policy are deeply rooted in transportation and communications regulations in the US, which began with railroad law, expanded to telegraph law, and were later codified in major pieces of legislation such as the Communications Act of 1934 and the Telecommunications Act of 1996. These laws established principles that would later be applied to the Internet and digital communication networks.
Railroad Law and Common Carriers
Railroad law in the 19th century provided an early legal foundation for regulating services that were considered essential public utilities. Railroads, which were the first major national infrastructure, were regulated under the Interstate Commerce Act of 1887. This law created the Interstate Commerce Commission (ICC) and established key principles, such as reasonable tariffs, that would influence later communications regulation.
Railroads and the trains that ran on them were to be treated as “common carriers,” meaning they had to offer services to all customers without discrimination, and rates had to be “just and reasonable.” Railroads had to stop at all major towns in the Midwest and pick up commodities from the farmers bringing cows, pigs, timber, wheat, etc. to the major markets.
These laws determined that the government had the right to regulate industries deemed essential to the public interest, ensuring accessibility and fairness. These principles of nondiscrimination, accessibility, and regulation in the public interest would later be applied to communications networks, including the telegraph, telephone, and eventually the Internet.
Telegraph Law
The Pacific Telegraph Act of 1860 and subsequent regulations were important precursors to modern communications law. Telegraph companies were also treated as “common carriers” under this legal framework. The Act was designed to promote the construction of a telegraph line across the US, establishing early rules for how telecommunications infrastructure would be managed. Like railroads, telegraph companies had to provide equal access to their networks for all customers.
The telegraph was seen as a national necessity, further embedding the idea that communication networks are critical public infrastructure requiring federal regulation.
The Communications Act of 1934
The Communications Act of 1934 was a landmark law that created the Federal Communications Commission (FCC) and consolidated the regulation of all electronic communications, including telephone, telegraph, and radio. The Act aimed to establish a comprehensive framework for regulating interstate and foreign communication.
The FCC was tasked with ensuring that communications networks served the public interest, convenience, and necessity. Key provisions included common carrier regulation and public interest standard. Telephone companies were to be regulated as common carriers, requiring them to provide service to everyone on nondiscriminatory terms, with rates subject to regulation by the FCC. The Act also introduced the idea that communications services, particularly telephony, should be universally accessible to all Americans, which laid the groundwork for later universal broadband goals.
This Act established a regulatory foundation that persisted for much of the 20th century, governing how telecommunications were managed and ensuring public access to these services. This included addressing the introduction of computer services into the telecommunications networks.
The Communications Act of 1934 created a regulatory framework for U.S. telecommunications and introduced the concepts that later became known as Title I and Title II services.
Title II of the Communications Act defined and regulated “common carrier” services, treating telecommunications services (like traditional telephone service) as essential public utilities. Common carriers are required to provide service to all customers in a non-discriminatory way, under just and reasonable rates and conditions.
Title II imposed significant regulatory obligations on these services to ensure fair access, affordability, and reliability. It allowed the Federal Communications Commission (FCC) to enforce rate regulations, mandate service quality standards, and prevent discriminatory practices.
Common carriers under Title II were required to operate as public utilities, adhering to principles similar to those governing railroads and utilities. The FCC was given authority to ensure these services operated in the public interest, providing universal and nondiscriminatory service.
Title I: “Ancillary” Authority
A crucial provision of the Communications Act gives the FCC “ancillary authority” over all communication by wire or radio that does not fall under specific regulatory categories like Title II or Title III (broadcasting). This means Title I covers services that are not classified as common carriers, allowing the FCC some jurisdiction but without the strict regulatory requirements of Title II.
Title I services are not subject to common carrier regulations. Instead, the FCC’s regulatory power over these services is limited to actions necessary to fulfill its broader regulatory goals. Title I gives the FCC flexibility to regulate communication services that may not fit into the traditional telecommunications (Title II) or broadcasting (Title III) categories. The FCC uses its Title I authority to support its mission, although its powers are limited, making it harder to impose the same level of regulatory oversight on Title I services.
This distinction between Title I and Title II services has significant implications, as services under Title I remain less regulated, which has encouraged innovation and rapid growth in Internet services but has also limited the FCC’s authority to impose rules (such as net neutrality). Title II, by contrast, gives the FCC stronger regulatory powers, which is why network policy debates often focus on reclassifying broadband as a Title II service to enforce stricter oversight. The term “online” was invented by the computer processing industry to avoid the FCC regulations that use of the term “data communications” might incur.
Computer Inquiry Proceedings (1970s–1980s)
In the 1960s, the FCC began examining how to regulate computer-related services in its Computer Inquiries, which laid the foundation for differentiating traditional telecommunications services from emerging computer-based services.
Computer Inquiry I aimed to establish a regulatory distinction between “pure” data processing services (considered competitive and not subject to FCC regulation) and regulated communication services. The FCC considered separating computer functions from traditional telephone network operations, allowing for the development of the data processing industry without heavy regulatory burdens while still overseeing communication services provided by carriers like AT&T. It marked the first attempt by the FCC to grapple with the emerging intersection of computers and telecommunications by defining which aspects would fall under their regulatory jurisdiction.
In Computer II (1980), the FCC created a significant legal distinction between “basic” telecommunications services (transmitting data without change in form) and “enhanced” computer services (services that involve processing, storage, and retrieval, such as email or database services). Basic services remained subject to common carrier regulations, while enhanced services were left largely unregulated.
The FCC refined these distinctions further in Computer III (1986) and established that “enhanced” or “information” services would not be regulated like traditional telecommunications. This deregulation fostered the growth of the computer services industry and allowed for innovation without strict regulatory oversight. When the Internet started to take off, this distinction allowed PC users to connect to an ISP over their phone lines with a modem for long periods of time without paying line charges.
The Telecommunications Act of 1996
The Clinton-Gore administration attempted the first major overhaul of communications law since 1934. The Telecommunications Act of 1996 was designed to address the emergence of new digital technologies and services, including the Internet. It sought to promote competition and reduce regulatory barriers, with the assumption that market competition would benefit consumers by reducing prices and increasing innovation.
The Telecom Act encouraged competition in local telephone service, long-distance service, and cable television, which had previously been dominated by monopolies. The aim was to foster competition among service providers. It also updated the universal service mandate to include access to advanced telecommunications services, which would later include broadband Internet access.
The 1996 Telecom Act distinguished between “information services” and “telecommunications services,” but left the Internet’s regulatory status ambiguous.
A crucial provision in the 1996 Telecom Act is Section 230, which grants Internet publishing platforms immunity from liability for content posted by their users. This protection has allowed social media platforms like Facebook and X to flourish, but it has also raised debates over platform responsibility for harmful content.
The Internet world was shocked in 2002 when the FCC under Michael Powell ruled that cable modem service was an information service, not a telecommunications service. Cable companies such as Comcast became lightly regulated broadband providers and were exempted from the common-carrier regulation and network access requirements imposed on the ILECs.
Then, in 2005, incumbent local exchange carriers (ILECs) like AT&T, BellSouth, Hawaiian Telecom, Qwest, and Verizon were granted Title I status. They were able to take advantage of their new status to take over the ISP business. After the FCC’s 2005 decision, content providers and IAPs began negotiating over paid prioritization and fast lanes. The FCC attempted to implement net neutrality principles under Title I, but these principles were apparently unable to protect web users from IAPs that throttled traffic.
Internet Policy and the Net Neutrality Debates
Over time, as the Internet grew globally in importance, regulatory debates focused on how it should be governed, particularly regarding principles of net neutrality. Net neutrality is the stance that ISPs should treat all data on their networks equally, without favoring or discriminating against certain content or services.
In 2015, the FCC under the Democrats passed regulations that classified broadband Internet as a Title II telecommunications service, subjecting it to common carrier obligations. This meant ISPs were required to adhere to net neutrality rules, treating all traffic equally. President Obama personally advocated for the change.
However, the Republican FCC repealed these rules in 2017, classifying broadband as an information service rather than a telecommunications service, thus removing common carrier obligations and weakening net neutrality protections. Chairman Pai compared regulating ISPs with regulating websites, a clear deviation from the regulatory layers set out in the Computer Inquiries. He stressed that net neutrality would restrict innovation.
In 2024, the FCC, under the Democrats, returned broadband services to Title II, bringing back Net Neutrality.
Conclusion
The historical frameworks from railroad, telegraph, and early telephone regulation have carried through into the digital era. These legal precedents established the key principles of common carriage, public interest regulation, universal access, and competition and deregulation. The idea that certain services, including telecommunications and potentially the Internet, should serve all users equally and fairly was codified in common carrier law. Legal precedent also solidified that communications networks must serve the broader public interest, ensuring access to all, protecting consumers, and encouraging innovation.
The legal frameworks governing communications, from the regulation of railroads and telegraphs to the Communications Acts of 1934 and 1996, have laid the foundation for modern Internet broadband policy. The principles of common carrier status, public interest, universal service, and regulated competition have influenced the ongoing debates over how to govern the Internet and ensure equitable access in the digital age. These legal precedents continue to shape policies around net neutrality, ISP regulation, and the expansion of broadband access.
In Technologies of Freedom, Pool highlighted that while legal precedents can provide stability, they must be flexible enough to adapt to new technologies. He cautioned against applying old telecommunications regulations to emerging digital services, arguing that such rigidity could hinder innovation, infringe on freedom of expression, and prevent society from fully benefiting from technological advancements. His work influenced later debates on technological neutrality and regulatory flexibility, especially as the Internet and digital communications became more central to society.
Citation APA (7th Edition)
Pennings, A.J. (2024, Nov 3) Legal Precedents and Perturbations Shaping US Broadband Policy. apennings.com https://apennings.com/telecom-policy/legal-precedents-shaping-us-broadband-policy/
Notes
[1] Pool, I. (1984). Technologies of Freedom. Harvard University Press. Written at the University of Hawaii Law Library in the early 1980s.
[2] List of Previous Posts in this Series
Pennings, A.J. (2022, Jun 22). US Internet Policy, Part 6: Broadband Infrastructure and the Digital Divide. apennings.com https://apennings.com/telecom-policy/u-s-internet-policy-part-6-broadband-infrastructure-and-the-digital-divide/
Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-5-trump-title-i-and-the-end-of-net-neutrality/
Pennings, A.J. (2021, Mar 26). Internet Policy, Part 4: Obama and the Return of Net Neutrality, Temporarily. apennings.com https://apennings.com/telecom-policy/internet-policy-part-4-obama-and-the-return-of-net-neutrality/
Pennings, A.J. (2021, Feb 5). US Internet Policy, Part 3: The FCC and Consolidation of Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-3-the-fcc-and-consolidation-of-broadband/
Pennings, A.J. (2020, Mar 24). US Internet Policy, Part 2: The Shift to Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-2-the-shift-to-broadband/
Pennings, A.J. (2020, Mar 15). US Internet Policy, Part 1: The Rise of ISPs. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-1-the-rise-of-isps/
© ALL RIGHTS RESERVED (Not considered legal advice)
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and Research Professor at Stony Brook University. He teaches broadband policy and ICT for sustainable development. Previously, he was on the faculty of New York University where he taught digital economics and media management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.
Tags: Communications Act of 1934 > Computer II > Computer Inquiries > Computer One > Telecommunications Act of 1996
All Watched over by “Systems” of Loving Grace
Posted on October 10, 2024
A few years ago, I started to address an interesting set of BBC documentary videos by Adam Curtis entitled All Watched Over by Machines of Loving Grace, based on the poem by Richard Brautigan. The series focuses on the technologies we have built and the type of meanings we have created around them, particularly as they relate to our conceptions of political governing, and the state of the world.
I started these essays with a post called “All Watched Over by Heroes of Loving Grace” addressing “Episode 1: Love and Power” and the “heroic” trend that emerged in the 1970s with the philosophies of Ayn Rand, Friedrich Hayek, and George Gilder. The influence of these philosophies has been acknowledged to have contributed to the emergence of the “personal empowerment” movement of the 1980s and the Californian Ideology, a combination of counter-culture, cybernetics, and free-market “neo-liberal” economics. With Reaganomics, these movements became encapsulated in a worldwide phenomenon contributing to the export of US industrial jobs, the privatization of public assets, and the globalization of finance, information, and news.
In the second of the series, “The Use and Abuse of Vegetational Concepts,” Adam Curtis criticizes the elevation and circulation of natural metaphors and environmental ideas in political thinking. He recounts a history of systems thinking from the introduction of ecology to Buckminster Fuller’s Synergetics. He goes on to examine the concepts of “systems” and “the balance of nature” and their relationship to machine intelligence and networks. The documentary continues with networks and perhaps more importantly, the concept of “system” as both a tool and an ideology.[1]
In the 1950s, systems ideas were projected on to nature by scientists such as Jay Wright Forrester and Norbert Wiener. By introducing the idea of feedback and feedforward loops, Forrester framed people, technology, nature, and social dynamics in terms of interacting information flows. He was largely responsible for the investments in the North American early warning defense network in the 1950s called SAGE that created many computer companies like Burroughs and Honeywell and helped IBM transition from a tabulating machine company to a computer company. Wiener saw control and communications as central to a new technical philosophy called “cybernetics” that placed humans as nodes in a network of systems. His Cybernetics: Or Control and Communication in the Animal and the Machine (1948) was a benchmark book in the area.[2]
All three episodes can be seen at Top Documentary Films.
This second part of the series also looks at how the notion of “systems” emerged and how it conflated nature with machine intelligence. Its major concern is that in systems conceptions, humans become just one cog in a machine, one node in a network, one dataset in a universe of big data. Or, to preview Buckminster Fuller, just one astronaut on Spaceship Earth. Systems thinking fundamentally challenged the Enlightenment idea of the human being as separate from nature and master of her destiny.
Adam Curtis starts with Buckminster Fuller (one of my personal favorites), who wrote the Operating Manual for Spaceship Earth in 1964. Fuller viewed humans as “equal members of a global system” contained within “Spaceship Earth.” Inspired by the Apollo moonshot, with its contained biosupport system, Fuller stressed doing more with less. The video argues that the concept displaced the centrality of humanity and instead emphasized the importance of the “spaceship.”
Fuller is mostly known for his inventive design of the geodesic dome, a superstrong structure based on principles derived from studying forms in nature. His dome was used for radar installations in the North American Aerospace Defense Command (NORAD) defensive shield because of its strength in rough weather conditions. It is also used for homes and other unique buildings such as the Epcot Center at Walt Disney World in Florida. The dome is constructed from triangles and tetrahedrons that Fuller considered the most stable energy forms based on his science of “synergetics.”
This second episode also explores the origins of the term “ecology” as it emanated from the work of Sir Arthur George Tansley, an English botanist who pioneered the science of ecology in the 1930s. He coined one of ChatGPT’s favorite terms, “ecosystem,” and was also one of the founders of the British Ecological Society and the editor of the Journal of Ecology. Tansley came up with his ideas after studying the work of Sigmund Freud – who conceived of the brain in terms of interconnecting but contentious electrical dynamics, which we know grossly as id, ego, and superego. From these neural psychodynamics, Tansley extrapolated a view of nature as an interlocking system, almost a governing mechanical system that could absorb shocks and tribulations and return to a steady state.
Later, Eugene Odum wrote a textbook on ecology with his brother, Howard Thomas Odum, then a graduate student at Yale. The Odum brothers’ book, Fundamentals of Ecology (1953), was the only textbook in the field for about ten years. The Odum brothers helped to shape the foundations of modern ecology by advancing systems thinking, emphasizing the importance of energy flows in ecosystems, and promoting the holistic study of nature. Curtis criticized their methodology, saying they took the metaphor of systems and used it to provide a simplistic version of reality. He called it “a machine-like fantasy of stability,” perpetuating the myth of the “balance of nature.”
The story moves on to Jay Forrester, an early innovator in computer systems and one of the designers of the Whirlwind computers that led to the SAGE computers that connected the NORAD hemispheric defense radar network in the 1950s. SAGE jump-started the computer industry and helped the telephone and telegraph system prepare for the Internet by creating modems and other network innovations. Forrester created the first random-access magnetic-core memory – he organized grids of magnetic cores to store digital information so that the contents of their memory could be retrieved.
Forrester had worked on “feedback control systems” at MIT during World War II developing servomechanisms for the control of gun mounts and radar antennas. After Whirlwind he wrote Industrial Dynamics (1961) about how systems could apply to the corporate environment and Urban Dynamics (1969) about cities and urban decay.
In 1970 Forrester was invited to Berne, Switzerland, to attend a meeting of the newly formed Club of Rome. Having been promised a grant of $400,000 by the Volkswagen Foundation for a research project on the “problematique humaine,” the future of civilization, the group struggled to find a research methodology until Forrester suggested they come to MIT for a week-long seminar on the possibilities of systems theory. A month later, Forrester finished World Dynamics (1971) and contributed to Donella H. Meadows’ Limits to Growth (1972).
Limits to Growth presented Forrester’s computer-aided analysis and a set of solutions to the Earth’s environment and social problems. It modeled the Earth mathematically as a closed system with numerous feedback loops. The MIT team, including Dennis and Donella Meadows, ran a wide variety of computer-based scenarios examining the interactions of five related factors: the consumption of nonrenewable resources, food production, industrial production, pollution, and population.
The model tested causal linkages, structural-behavioral relationships, and positive and negative feedback loops using different sets of assumptions. One key assumption was exponential growth. They tested different rates of change but stuck with the idea of capitalist development and compound growth – the larger the economy grows, the faster its absolute growth. As they tested their model, they didn’t really like what they saw, as it didn’t present an optimistic picture of the future.
A truism that emerged in the data processing era was “Garbage in – Garbage out.” In Models of Doom: A Critique of the Limits to Growth (1973), one author wrote “Malthus in – Malthus out.” Thomas Robert Malthus was an early economist who predicted mass starvation because food production would not be able to keep up with population growth. His idea was that population growth would continue exponentially while the growth of the food supply would only grow arithmetically. In the early 1970s, the world was going through a number of dramatic changes and The Limits to Growth reflected that.
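To see why that assumption drives the gloomy result, here is a minimal Python sketch of compound population growth against arithmetic food growth. All starting values and rates are invented for illustration and are not taken from the MIT model or from Malthus.

```python
# Toy illustration of the Malthusian assumption critics saw in the model:
# population compounds geometrically while the food supply grows arithmetically.
# All starting values and rates are hypothetical, chosen only for illustration.

population = 100.0    # arbitrary index value
food = 100.0          # arbitrary index value
growth_rate = 0.02    # 2% compound growth per period
food_increment = 3.0  # fixed arithmetic increase per period

for period in range(0, 101, 20):
    print(f"period {period:3d}: population {population:7.1f}, food {food:7.1f}")
    for _ in range(20):
        population *= 1 + growth_rate   # exponential (compound) growth
        food += food_increment          # arithmetic (linear) growth
```

With these made-up numbers, population roughly septuples over 100 periods while the food index only quadruples; changing the rates shifts the crossover point but not the shape of the divergence, which is the feature the Models of Doom critique targeted.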
Summary and Conclusion
Adam Curtis’ documentary series, “All Watched Over by Machines of Loving Grace,” delves into the relationship between technology, political ideologies, and human agency. Inspired by Richard Brautigan’s poem, Curtis explores how technology shapes our governance systems and worldview. In “Love and Power,” Curtis examines the influence of thinkers like Werner Erhard (another of my favorites) and Ayn Rand (not so much) and their role in shaping the personal empowerment movement of the 1980s. The rise of the hero contributed to the lowering of taxes and the rise of neoliberalism, globalization, and privatization under Reaganomics.
In “The Use and Abuse of Vegetational Concepts,” Curtis critiques the adoption of natural systems thinking in political and technological contexts, tracing the origins of ecological systems thinking back to the work of figures like Jay Forrester, Norbert Wiener, Buckminster Fuller, and the Odum brothers. These ideas, initially intended to describe natural “ecosystems,” were later applied to human societies and governance, often conflating nature with machine intelligence.
“Systems” is more of an engineering concept rather than a scientific one, meaning that it is useful for connecting rather than dissecting. However, one is left thinking: What is democracy if not a system of human nodes providing feedback in a dynamic mechanism?
This leads us to Curtis’ anti-Americanism and Tory perspective. American democracy was designed with checks and balances reflecting the Founding Fathers’ belief that human nature could lead to abuse of power. By distributing authority across different branches and levels of government, the US Constitution creates a system where power is constantly monitored and balanced, fostering accountability and preventing any single entity from becoming too powerful. This system is intended to protect democratic governance, individual rights, and the rule of law.
The documentary raises questions about the consequences of seeing human and natural systems as mechanistic, potentially leading to a distorted understanding of complex, dynamic realities. Curtis raises concerns about how these systems-based frameworks reduce humans to mere nodes in networks, challenging the Enlightenment view of humanity as autonomous and separate from nature.
Citation APA (7th Edition)
Pennings, A.J. (2024, Oct 10). All Watched over by Systems of Loving Grace. apennings.com https://apennings.com/how-it-came-to-rule-the-world/all-watched-over-by-systems-of-loving-grace/
Notes
[1] It addresses Episode 2 (see video) of the series All Watched Over by Machines of Loving Grace, “The Use and Abuse of Vegetational Concepts.” A transcript of Brautigan’s poem is available on Chris Hunt’s blog.
[2] Wiener, N. (1948). Cybernetics: Or Control and Communication in the Animal and the Machine. Paris/Cambridge, MA: MIT Press. ISBN 978-0-262-73009-9; 2nd revised ed., 1961.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor at Stony Brook University. He teaches broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.
Tags: All Watched Over by Machines of Loving Grace > cybernetics > cyberspace > Jay Forrester > Limits to Growth > Richard Brautigan > Spaceship Earth > The Use and Abuse of Vegetational Concepts > World Dynamics
US Legislative and Regulatory Restrictions on Deficit Spending and Modern Monetary Theory (MMT)
Posted on September 29, 2024
I’m a cautious MMTer. But, in the age of climate chaos and boomer retirement, I think it’s particularly relevant. Modern Monetary Theory (MMT) is a framework for understanding government spending that suggests opportunities for using it to enhance employment and pursue other desirable social goals. For instance, MMT can be used to fund sustainable initiatives such as renewable energy projects or resilient infrastructure in the wake of disaster. Stony Brook University professor Stephanie Kelton’s The Deficit Myth: Modern Monetary Theory and the Birth of the People’s Economy (2020) challenged many of the myths and truisms we associate with federal deficits, such as the often-cited claim that nation-state economics are equivalent to household economics and that national debt is just like the debt of individuals.
In this post, I want to outline the promise of MMT for the US political economy while digging into several legislative and regulatory problems associated with enacting MMT policies long-term. I critique the MMT approach in the US because the movement has yet to adequately dissect what hurdles and limits keep the government from embracing MMT spending strategies. The major obstacles appear to be a series of legislative actions restricting deficit spending without corresponding borrowing through Treasury auctions.
The Case for MMT
Kelton drew on the observations of Warren Mosler, a former government bond trader who challenged the dominant understandings of government spending in his (1994) “Soft-Currency Economics.” Drawing on his first-hand experience in financial markets, he challenged traditional views of government spending, taxation, and monetary policy in the US, but also other sovereign countries with fiat currencies like the US dollar. And, as Mosler says on page 3/26, “Fiat money is a tax credit not backed by any tangible asset.”
Mosler’s core ideas emphasize that governments that issue their own currency operate under different rules than households or businesses. He argued that governments that issue their own currency (like the US with the dollar) cannot “run out of money” in the same way a household or business can. The government’s constraints are not financial but real: resources such as labor and materials. Also, government spending does not depend on revenue from taxes or borrowing. Instead, the government spends money by crediting business and citizen bank accounts, and this process creates money.[1]
Government deficits are normal and not inherently harmful; in fact, they add net financial assets (money) to the private sector. Borrowing is a way to manage interest rates, not a necessity for funding the national government. Mosler acknowledges that government spending can be problematic if it causes inflation, which can happen when spending outstrips the economy’s capacity to produce goods and services.
Government borrowing through issuing bonds can provide a safe asset for investors and collateral for additional borrowing or obtaining cash liquidity. US Treasuries are a major source of stable collateral around the world that allow corporations and countries to acquire the dollars they need for international transactions. Eurodollar lending requires high-quality collateral like US Treasuries to keep borrowing costs low.
Taxes create demand for the government’s currency, but also remove money from economic circulation. Following Mosler and Kelton, MMTers argued that since people need to pay taxes in the national denomination, it creates demand for the official currency. You need US dollars to pay US taxes. At the macroeconomic level, taxes remove money from economic circulation and can impede economic activities. Consequently, they provide a major tool to manage inflation. Taxes can also reduce demand for targeted goods and services. On the other hand, tax cuts can stimulate economic activity such as the Inflation Reduction Act’s (IRA) credits for renewable energies. In an era of international credit mobility, tax cuts can create surpluses that can be moved offshore.
Deficits are useful for the non-government sector (households and businesses) to accumulate savings. They inject money into the economy that help promote growth, especially when there is underutilized capacity. For example, capable and healthy people out of work is a waste of productive resources and government policies should aim to achieve full employment, including Job Guarantee programs that would provide a stable wage floor, reduce poverty, and help maintain price stability.
Following Mosler, Kelton argued that governments like the US legislate money into existence to introduce currency for the economy to thrive and to fund the activities of the government. Governments need to spend first to generate an official economy. Spending comes first; taxing and borrowing come later, after liquidity is added to the economy. Furthermore, national administrations that issue money do not really need to tax populations or borrow money to run the nation’s political economy; they do so only when they need to cool the economy or disincentivize certain activities.
MMT and Covid’s Pandemic-related Spending
But most of this argument runs against major narratives used by traditional economists and politicians, who often use US federal spending numbers to panic the public. Their fears were supported by what might be called a quasi-MMT experiment during the COVID-19 pandemic. Some $8 trillion in spending under Trump and Biden helped save the US economy, and most of the world economy, from a significant economic recession but added significantly to US deficit numbers. The accumulated US debt tally reached $35 trillion in late 2024, as measured by the issuance of government financial instruments, and continued to raise concerns.
This level of debt in the context of MMT raises several questions:
1) What are the limits, if any, to US government spending?
2) Can US spending be severed from current debt accounting metrics?
3) What are the optimum spending choices for MMT?
As popularly construed, the spreadsheet-tallied government debt and annual deficit numbers are angst-producing. The voting public is repeatedly exposed to the narrative that the debt is unsustainable and a sign of inevitable US decline. They are constantly reminded that the national debt is like household debt, to be discussed over the kitchen table (or Zoom), with inevitable sacrifices to be made from the family budget. Bitcoiners and other “hard money” enthusiasts echo these myths, looking to cash in on panic trades for cryptocurrency or gold appreciation for personal gain.
But Kelton and other MMTers argue that a national money issuer is a different animal, so to speak, with different objectives and responsibilities. They see that it can allow politicians to become more proactive in addressing social concerns. Climate resilience, healthcare, housing construction, income equality, and tax holidays can be deliberately addressed with less political noise and pushback. MMT can also support education and good jobs, particularly relevant in an age of expanding AI.[2]
The problem worth examining is that the US government has tied debt to borrowing over the years through several legal and legislative entanglements. This intertwining has meant that deficit expenditures are challenged by the requirements of borrowing through prescribed treasury auctions and the proceeds deposited in specific accounts at the Federal Reserve. The increasing accounting accumulations raise concerns, justifiably or not, about the dangers of the national government borrowing too much.
The US Spending Apparatus
Spending money is what governments do. They purchase goods and services by “printing” money that enters the economy, hopefully, as an acceptable currency, facilitating trade and savings. For example, the “greenback” emerged as the established US currency during the Civil War when Lincoln worked with Congress to pass the Legal Tender Act of 1862, which authorized the printing of $150 million in paper currency with green ink to finance the Union war effort and stabilize the economy.
Additional legislation authorized further issuances of greenbacks, bringing the total to about $450 million by the war’s end. The Funding Act of 1866 ordered the Treasury to retire them, but Congress rescinded the order after complaints from farmers looking for currency to pay off debts. As a result, the US “greenback” dollar stayed in circulation with a “Gold Room” set up on Wall Street to reconcile the price relationship between greenbacks and gold.
The US Constitution vested Congress with the power to create money and regulate its value. In 1789, George Washington was elected the first president and soon created three government departments, State (led by Thomas Jefferson), War (led by Henry Knox), and Treasury (led by Alexander Hamilton). The US Treasury was established on March 4, 1789, by the First Congress of the United States in New York. The institution has played a vital role in US monetary policy ever since, primarily through the use of the Treasury General Account (TGA) to requisition government operations and pay off maturing debt.[3]
The TGA is now located at the Federal Reserve Bank (Fed), which became the government’s banker with the Federal Reserve Act of 1913. This Act established the Fed as a major manager of the country’s monetary policy. Over time, however, legislation tied spending from this account to the deposit of tax revenues, tariffs, and the proceeds from issuing debt instruments.
So the TGA is the main account at the Fed that the Treasury uses to deposit its receipts and to make disbursements from the government for resources, services rendered, and social programs. Congress appropriates the money to spend through legislation, and the Treasury instructs the Fed to credit the appropriate accounts. The TGA handles daily financial operations, defense spending, and government obligations like Social Security.
The way the system actually works is that when the Treasury makes a payment, the Federal Reserve credits the recipient’s bank account, but also has to debit the TGA. When the Federal Reserve credits a bank account of a vendor or transfer payment recipient with digital dollars, it effectively increases the money supply. To balance this increase, bankers and monetary authorities have required that an equivalent amount be debited from another account with dollars garnered through taxes or borrowings, which reduces the money supply. The TGA is required to serve this dual purpose. Before 1981, the Treasury could often overdraw its account at the Federal Reserve; however, eliminating this privilege required the Treasury to ensure that sufficient funds from authorized sources were available in the TGA before making payments.[4]
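As a rough sketch of that accounting constraint, the post-1981 rule can be thought of as a pre-funding check on the TGA: disbursements must be matched by prior deposits from taxes or securities sales. The toy Python model below is an illustrative simplification of that logic, not a depiction of the Treasury’s or the Fed’s actual systems.

```python
# Toy double-entry model of the Treasury General Account (TGA) constraint.
# An illustrative simplification with invented amounts, not actual Fed accounting.

class ToyTGA:
    def __init__(self):
        self.balance = 0.0  # TGA balance held at the Fed

    def deposit(self, amount, source):
        """Tax receipts or Treasury auction proceeds are credited to the TGA."""
        self.balance += amount
        print(f"+{amount:,.0f} from {source}; TGA balance = {self.balance:,.0f}")

    def disburse(self, amount, purpose):
        """Since 1981 the Treasury cannot overdraw: spending must be pre-funded."""
        if amount > self.balance:
            raise ValueError(f"Insufficient TGA funds for {purpose}: "
                             "must tax or auction securities first")
        self.balance -= amount
        print(f"-{amount:,.0f} for {purpose}; TGA balance = {self.balance:,.0f}")

tga = ToyTGA()
tga.deposit(500, "tax receipts")
tga.deposit(300, "Treasury auction proceeds")
tga.disburse(700, "Social Security payments")
# tga.disburse(200, "defense contract")  # would raise: the TGA is not pre-funded
```

The point of the sketch is simply that the debit requirement, not any physical scarcity of dollars, is what forces deficit spending to be paired with borrowing.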
No Deficit Spending without Borrowing
The requirement that the US Treasury issue securities when spending beyond revenues is rooted in a combination of statutory authorities and legislative acts, primarily the Second Liberty Bond Act of 1917 and its amendments. Enacted to finance World War I, it established the legal framework for issuing various government securities, including Treasury bonds, notes, and bills – as well as the famous Liberty Loan Bonds. It has been continually amended to allow for offering different forms of securities and to adjust borrowing limits.
The Federal Reserve Act also plays a crucial role in determining the government’s borrowing protocols and the issuance of securities. Specifically, Section 14 outlined the powers of the Federal Reserve Banks, including the purchase and sale of government securities. Notably, it prohibited the direct purchase of securities from the Treasury: the Federal Reserve could only buy government securities on the open market, not directly from the Treasury. This practice restricted the Treasury from directly monetizing the debt and set up practices that would later be used in monetary policy by the Fed’s Federal Open Market Committee (FOMC) and its computer trading operations at the New York Fed in Manhattan.
The Treasury-Federal Reserve Accord of 1951 reinforced the separation of the Fed and the Treasury and the need for the Treasury to issue securities to finance deficits. This Accord established a fundamental principle that continues to influence policy: the separation of monetary policy (managed by the Federal Reserve) and fiscal policy (managed by the Treasury). The agreement ended the practice of the Federal Reserve directly purchasing Treasury securities to help finance government deficits; this prevented the direct monetization of debt and curbed inflationary pressures. It reinforced the need for the Treasury to auction securities to finance deficits rather than relying on direct borrowing from the central bank.
President Nixon’s August 1971 decision to back the US out of the Bretton Woods dollar-gold convertibility set off several economic crises. These included a significant dollar devaluation, related hikes in oil prices, and a global debt crisis. Instead of returning gold to emerging manufacturing giants such as Japan and West Germany in exchange for goods such as Sony radios and BMWs, Nixon chose to sever the US dollar from its gold backing, making it a fiat currency. The immediate economic ramifications were “stagflation,” a combination of stagnation and inflation, and the rise of the petrodollar, meaning OPEC dollars held in banks outside the US that were lent out as eurodollars worldwide. Rising prices became a high priority during the Ford and Carter administrations in the late 1970s, and they looked to the Federal Reserve to tackle the problem.
Carter’s Fed chairman pick, Paul Volcker, re-committed to not monetizing the debt, meaning it would not finance government deficits by increasing the money supply. This reinforced the need for the Treasury to rely on market-based financing through bond auctions and led to the transition at the Fed from managing the money supply to managing interest rates. It would do this by buying and selling government securities through its open market operations (OMO) at the New York Fed to produce the “Fed Funds Rate,” a benchmark interest rate that the banking industry uses to price loans for cars, mortgages, and later credit cards.
Enter Reaganomics
During Ronald Reagan’s presidency, significant changes were instituted in the financial sphere, largely influenced by his Secretary of the Treasury, Donald Regan, the former Chairman and CEO of Merrill Lynch. They saw the need to create a new economic paradigm, a new “operating system” for the global economy that still relied on the US dollar, but one not tied to gold. Several key measures were implemented in the early 1980s and advertised as strategies to reduce the federal deficit, control spending, and increase the dollar’s strength globally.
However, the US saw a significant increase in the national debt over the next few years, mainly due to tax cuts, increased military spending, and related policy decisions. Instead of reducing spending, Reagan and Regan’s policies resulted in increased national debt and the export of US capital to China and other offshore manufacturing centers. Although they championed “supply-side” economics, promising that tax cuts and spending reforms would reduce deficits, the goal of debt reduction was never achieved; instead, a historic trend of increasing government deficits began (except in the later Clinton years, as described below).
Supply-side economics, often termed “trickle-down” economics, was meant to increase capital investments in the US. But it became “trickle-out” economics as new circuits of news and telecommunications facilitated a freer flow of information and capital to other countries. This left much of the US labor force in a tight situation while rewarding investors who owned productive facilities in other, lower-cost countries. The Economic Recovery Tax Act of 1981 lowered the top marginal tax rate from 70% to 50%. The second tax cut in the Tax Reform Act of 1986 cut the highest personal income tax rate from 50% to 38.5%. The top rate decreased again to 28% in the following years.
Consequently, the Reagan administration grew government deficits and worked to ensure they were financed through the issuance of Treasury securities sold in computerized open market auctions. This change marked a significant modernization of government financing, with the Treasury shifting to more competitive and transparent auction processes. Initially, the Treasury used multiple-price auctions, where winning bidders paid the prices they bid. Later, it experimented with uniform-price auctions, where all winning bidders paid the same price. These auctions became the primary method for raising funds to cover deficits.
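A toy Python sketch can illustrate the difference between the two formats. The bids, quantities, and prices below are hypothetical, and real Treasury auctions are conducted on yields with far more procedural detail, but the clearing logic is the same in spirit: fill the offering from the best bids down, then price the awards either at each bid (multiple-price) or at the lowest accepted “stop-out” level (uniform-price).

```python
# Toy comparison of multiple-price vs. uniform-price auction clearing.
# Bids are (price, quantity) pairs; all figures are invented for illustration.

bids = [(99.8, 40), (99.6, 30), (99.5, 50), (99.2, 60)]
offering = 100  # amount of securities to sell

def clear(bids, offering):
    """Allocate the offering to the highest-priced bids first."""
    awards, remaining = [], offering
    for price, qty in sorted(bids, reverse=True):
        if remaining == 0:
            break
        take = min(qty, remaining)
        awards.append((price, take))
        remaining -= take
    return awards

awards = clear(bids, offering)

# Multiple-price (discriminatory): each winner pays the price it bid.
multiple_price_revenue = sum(price * qty for price, qty in awards)

# Uniform-price: every winner pays the lowest accepted ("stop-out") price.
stop_out = min(price for price, _ in awards)
uniform_price_revenue = stop_out * sum(qty for _, qty in awards)

print("awards:", awards)
print("multiple-price revenue:", round(multiple_price_revenue, 1))
print("uniform-price revenue:", round(uniform_price_revenue, 1))
```

In this made-up example the discriminatory format raises slightly more from the same bids, which is one reason the Treasury experimented with both before settling on uniform-price auctions.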
Crucially, the Reagan administration further institutionalized the connection between government spending and the Treasury General Account through Title 31 of the US Code, which governs American money and finance. The US Code is the comprehensive compilation of the general and permanent federal statutes of the United States, comprising 53 titles. The Office of the Law Revision Counsel of the House of Representatives publishes it every six years. Title 31 was codified on September 13, 1982, and reinforced the need to issue bonds to finance deficits to avoid inflationary pressures that could arise from printing money (debt monetization) to cover shortfalls.[4]
Under 31 U.S.C. § 3121, the Secretary of the Treasury was authorized to issue securities to finance the public debt. Title 31 specifically mandates that the US government raise funds through Treasury auctions, subjecting the government’s borrowing to market scrutiny and investor expectations. By detailing the types of securities that can be issued and the procedures for their issuance, Title 31 prevents the direct monetization (printing) of debt.[5]
Finally, Public Debt Transactions (31 U.S.C. § 3121) authorizes the Secretary of the Treasury to issue securities to finance the public debt. It emphasizes that the Treasury must manage the public debt responsibly to ensure that the government can meet its financial obligations. The code specifies that the Treasury should issue securities in a manner that attracts investors and maintains confidence in the US government’s ability to manage its debt. This meant conducting auctions and selling securities to private investors. It details the types of securities that can be issued, including Treasury bonds, notes, and bills, as well as the terms, conditions, and procedures for the issuance.
This GOP regulatory framework stressed the importance of managing the federal debt in a way that reinforces market confidence and the perception of responsible fiscal policy. The separation of fiscal and monetary policy ensured political control over spending while maintaining the monetary independence of the Fed. The separation also maintains the narrative that financing deficits through borrowing from the private sector helps sustain confidence in the role of US treasuries in the global financial system.
Clinton-Gore and the Surplus Problem
While the Reagan-Bush administrations sought to increase spending, especially deficit spending to produce financial instruments, the Clinton-Gore administration sought to bring down spending and balance the budget. Thanks to a booming “dot.com” economy and tax increases laid out in the Omnibus Budget Reconciliation Act of 1993 (OBRA 1993), they were quite successful. The administration worked with the Republican Congress to reduce spending on welfare through the Personal Responsibility and Work Opportunity Reconciliation Act. It reduced federal spending across the budget, including military reductions, the so-called “peace dividend.” Some disagree over accounting techniques but, conservatively, surpluses of $69.2 billion in fiscal 1998, $76.9 billion in fiscal 1999, and $46 billion in fiscal 2000 were achieved.
Did the Clinton surpluses contribute to the “dot.com” and telecom crashes of 2000 and 2002? Probably not, but the crashes did occur as cheap capital became less available. The Internet boom contributed substantially to the revenues that produced the surpluses, but the economy was in decline when George W. Bush took office in 2001. The Bush-Cheney administration worked quickly to restore the deficits. Tax cuts, Medicare drug spending, and the invasions of Afghanistan and Iraq quickly erased the budget surpluses that had been projected to continue for several more years.
In June 2001, the Economic Growth and Tax Relief Reconciliation Act was signed, which particularly targeted the estate tax, or “death tax.” The Act lowered taxes across the board, including reducing the top rate for households making over $250,000 annually from 39.6 percent to 35 percent. It also dramatically increased the estate tax exclusion from $675,000 in 2001 to $5.25 million for those who died during 2013.
In December 2003, President Bush signed the Medicare Prescription Drug, Improvement, and Modernization Act (MMA), a major overhaul of the 38-year-old national healthcare program. The law would add another half a trillion dollars to the decade’s deficits for the subsidization of prescription drugs, in part because a major objective of the legislation was to reduce the government’s bargaining position in drug and healthcare negotiations. Medicaid and Medicare were significantly handcuffed in their ability to drive down costs by this legislation, adding billions more to the government debt.
The tragedies of 9/11 sent the US into a collective shock and led to two major wars in Afghanistan and Iraq. Costs would total over $1.4 trillion by 2013, with additional billions spent on homeland security and the global search for Osama bin Laden and other members of Al-Qaeda. The permanent war against communism was now replaced by a permanent war against Islamic fundamentalism. A 2022 Brown University report estimated that 20 years of post-9/11 wars cost the US some $8 trillion.
Conclusion
MMT argues that the economy starts with government spending that puts currency into circulation and provisions the federal government. Debt is not necessarily a bad thing because debt instruments become another entity’s assets and add to private savings. They also provide a valuable source of collateral. Treasury bills are just a different form of money, one that pays interest. In that regard, spending and borrowing are not just stimulative but foundational.
Still, debt makes the populace nervous. Historical evidence from countries that have experienced severe debt crises or even hyperinflation after excessive money creation fuels skepticism about MMT’s long-term viability. Covid spending by both the Trump and Biden administrations addressed important economic and social issues associated with the uneven recovery. Yet it was part of a “perfect storm” in which stimulative spending, combined with supply shocks and corporate pricing greed, was seen to increase inflation to over 8 percent by mid-2022.[6]
MMT’s limits are unclear. Recent studies of post-COVID inflation in the US point predominantly to supply disruptions despite record deficit spending by both the Trump and Biden administrations. Tying deficit spending to borrowing statistics unnerves financial markets that are averse to inflation, as well as the general populace, who are conditioned by the narrative that debt is bad. We should review the legislation on deficit spending while democratically developing a vision of where domestic and international spending (for the US) can achieve the most good. As mentioned in the beginning, addressing climate change and chaos is a worthy start.
Citation APA (7th Edition)
Pennings, A.J. (2024, Sep 29). US Legislative and Regulatory Restrictions on Modern Monetary Theory (MMT). apennings.com https://apennings.com/technologies-of-meaning/how-do-artificial-intelligence-and-big-data-use-apis-and-web-scraping-to-collect-data-implications-for-net-neutrality/
Notes
[1] Most money is still created by banks in the process of issuing debt, both domestically and internationally as eurodollars. Mosler’s contention that the government can spend without taxing or borrowing is the major focus of this post and why MMT is “theory.” It can, but the process is tangled up in legislation and codification. Yes, it spends money by crediting bank accounts, but it is forced to debit the Treasury’s account at the Federal Reserve. Read a pdf of Warren B. Mosler’s seminal Soft Currency Economics.
[2] Kelton followed up on financier Warren Mosler’s 1990s explanation with her book, The Deficit Myth: Modern Monetary Theory and the Birth of the People’s Economy (2020).
[3] When Congress appropriates funds for government activities and programs through the federal budget process, they are disbursed from the Treasury General Account (TGA) to pay for government operations and maturing debt. Tax revenues are deposited into this account as are government securities, such as bonds, notes, and bills. Fees and user charges, collected from various activities like passport issuance and spectrum licensing, are deposited into the TGA as are proceeds from the sale of government assets, royalties from natural resource use, and investment income. Emergency funds are allocated and held in the TGA during crises or emergencies to facilitate rapid responses, such as the recent 2024 hurricanes. Trust funds, such as the Social Security Trust Fund and the Highway Trust Fund, are also managed within the TGA but are held separately from the general operating budget. See Investopedia for more details.
[4] Title 31 was codified on September 13, 1982, as “Money and Finance,” Pub. L. 97–258, 96 Stat.
[5] Code of Federal Regulations (CFR), Title 31 – Money and Finance: Treasury.
[6] Brooks, R., Orszag, P.R., and Murdock III, R. (2024, Aug 15). COVID-19 Inflation was a Supply Shock. Brookings. https://www.brookings.edu/articles/covid-19-inflation-was-a-supply-shock/
Note: Chat GPT was used for parts of this post. Several prompts were used and parsed.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching financial economics and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. When not in Korea he lives in Austin, Texas, where he has also taught in the Digital Media MBA at St. Edwards University.
Tags: Modern Monetary Theory (MMT) > Reaganomics > Title 31 of the U.S. Code > Treasury General Account (TGA) > Treasury-Federal Reserve Accord of 1951
Battery Energy Storage Systems (BESS) and a Sustainable Future
Posted on | September 16, 2024 | No Comments
Battery Energy Storage Systems (BESS) are transforming renewable energies by addressing key challenges associated with intermittent power generation from sources like solar and wind. Sometimes called long-duration energy storage (LDES), their role is critical in supporting the transition to a low-carbon energy future by enabling the expansion of renewable energy, electrifying transportation, and providing energy resilience in many sectors. Currently, the leading companies are CATL, BYD, Hitachi, LGChem, Panasonic, Samsung, Sumitomo, and Tesla. BESS companies are helping to integrate more renewable energy into power grids and making these sources more viable as part of the energy mix.
While lithium-ion batteries remain the cornerstone of the renewable energy storage revolution, other battery types like solid-state, lithium iron phosphate (LFP), and especially sodium-ion batteries are emerging as important alternatives for grid storage. These technologies are helping to address the key challenges of battery storage: cost, scalability, and safety. Innovations in these areas will make the transition to a cleaner and more sustainable energy future much more feasible.
BESS technologies are most needed in applications that require energy storage to stabilize electric grids, integrate renewables, manage peak demand, and ensure reliable backup power. Solar and wind are experiencing a remarkable price decline. The sun continuously delivers about 173,000 terawatts (173 petawatts) of power to the Earth, more than 10,000 times the rate at which humans currently use energy worldwide. So it makes more and more sense to capture that energy as it arrives rather than burning the natural containers (cellulose, coal, oil) stored by the Earth over geological time. This is to say, the potential for solar and wind energy is immense, but it will require BESS to make it work. Here are the key ways BESS are changing renewable energy:
BESS can be used to stabilize intermittent energy supplies. Renewable energy sources like solar and wind are intermittent and produce power only when the sun shines or the wind blows. BESS allow for the storage of excess energy generated during peak production times (e.g., midday for solar) and then release it during periods when generation is low (e.g., nighttime for solar or calm days for wind). This capability helps stabilize the energy supply, ensuring that renewable energy is available even when natural conditions are not ideal. Furthermore, it reduces the reliance on fossil-fuel backup systems.
BESS provide grid flexibility by offering frequency regulation, voltage support, and load balancing.[1] In traditional grids, fossil fuel plants are often used to manage fluctuations in energy demand or sudden drops in supply. Still, BESS technologies can respond much more quickly to these fluctuations. This responsiveness improves the overall reliability of the power grid, allowing for a smoother integration of renewables without causing instability or blackouts.
One of the significant benefits of BESS is peak shaving, which helps reduce the demand on the grid during high-demand periods. BESS can discharge stored renewable energy when demand is at its highest, which reduces the need for additional power plants or peaker plants (often fossil-fuel-based) to come online. This capability reduces costs and emissions while making renewable energy more competitive against traditional power generation sources.
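To make the dispatch logic behind peak shaving concrete, here is a minimal Python sketch of a simple controller. All of the numbers (battery capacity, power rating, demand profile, and the peak threshold) are hypothetical placeholders, not figures from any actual installation.

```python
# Minimal peak-shaving sketch (illustrative only): the demand profile,
# battery size, and threshold below are hypothetical values.

BATTERY_CAPACITY_KWH = 2000      # assumed usable capacity
MAX_POWER_KW = 500               # assumed charge/discharge power limit
PEAK_THRESHOLD_KW = 1200         # grid draw we want to stay under

def peak_shave(demand_kw, soc_kwh):
    """Return (grid_draw_kw, new_soc_kwh) for one hour of operation."""
    if demand_kw > PEAK_THRESHOLD_KW and soc_kwh > 0:
        # Discharge to shave the peak, limited by power rating and stored energy.
        discharge = min(demand_kw - PEAK_THRESHOLD_KW, MAX_POWER_KW, soc_kwh)
        return demand_kw - discharge, soc_kwh - discharge
    elif demand_kw < PEAK_THRESHOLD_KW and soc_kwh < BATTERY_CAPACITY_KWH:
        # Recharge during off-peak hours, again limited by power and headroom.
        charge = min(PEAK_THRESHOLD_KW - demand_kw, MAX_POWER_KW,
                     BATTERY_CAPACITY_KWH - soc_kwh)
        return demand_kw + charge, soc_kwh + charge
    return demand_kw, soc_kwh

# Hypothetical 6-hour demand profile (kW).
hourly_demand = [900, 1100, 1500, 1700, 1300, 1000]
soc = BATTERY_CAPACITY_KWH / 2
for hour, demand in enumerate(hourly_demand):
    grid, soc = peak_shave(demand, soc)
    print(f"hour {hour}: demand {demand} kW -> grid draw {grid:.0f} kW, SoC {soc:.0f} kWh")
```

Even in this toy version, the grid draw stays at or below the threshold while the battery recharges during off-peak hours, which is the basic economic logic utilities and large customers exploit.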
BESS are enabling greater energy independence by supporting microgrids and off-grid systems. Homes, businesses, and communities with BESS and local renewable energy sources like solar panels and windmills can reduce their reliance on central power grids. In rural or remote areas, BESS allow for the storage of renewable energy in isolated systems, ensuring a consistent power supply without needing extensive transmission infrastructure.
The cost of BESS, particularly lithium-ion and lithium-iron-phosphate batteries, has fallen dramatically in recent years, making large-scale energy storage systems more affordable. As the technology improves, storage and round-trip charge/discharge efficiency have also increased, meaning less energy is lost in the process. As manufacturing facilities are optimized, costs decrease. The average price of a 20-foot DC BESS container in the US was US$180/kWh in 2023 but is expected to fall to US$148/kWh this year, according to Energy-Storage. These cost reductions make it economically viable to store renewable energy for longer durations, contributing to the overall decline in the cost of renewable energy compared to traditional fossil fuel sources.
BESS play a critical role in realizing the vision of 100% sustainable energy systems. With sufficient energy storage, renewable sources can provide power 24/7, eliminating the need for fossil fuels as a backup in many locations. Long-duration energy storage solutions, which can store power for days or weeks, are emerging and could further boost the reliability of renewables, allowing them to complement or replace baseload energy generation from coal, gas, or nuclear plants.
By allowing renewable energy to be stored and used when needed, BESS help reduce reliance on fossil fuel-based power plants that typically provide backup energy during periods of high demand or low renewable generation. This reduces greenhouse gas emissions (GHG) and supports the transition to a low-carbon economy. As more renewables are integrated into the grid, BESS can mitigate the need for fossil fuels, directly contributing to climate change mitigation efforts.
BESS are also driving change in renewable energy through their connection to other important sectors such as transportation, industry, and buildings. The electric vehicle (EV) charging infrastructure is often criticized for using electricity produced by burning fossil fuels. As EV adoption grows, battery energy storage systems can help manage the increased load on the grid by storing excess renewable energy and using it to charge vehicles during off-peak hours.[2] This integration, known as sector coupling, links the electricity grid with other sectors, such as buildings and industry, to optimize the use of resources. The goal is to reduce fossil fuel use by expanding renewable energy across transportation and the heating and cooling sectors, allowing for better use of renewables across different areas of the economy and making the world and its oceans cleaner and safer.
Conclusion
BESS are revolutionizing the renewable energy landscape by addressing key issues like intermittency, grid stability, and peak demand management. They allow for greater integration of renewables into power grids, provide energy independence, and enable a more reliable and flexible energy system. As costs continue to decline and the technology improves, BESS are expected to play an increasingly crucial role in the global transition toward a sustainable, low-carbon energy future.
Notes
[1] Pennings, A.J. (2023, Sept 29). ICTs for SDG 7: Twelve Ways Digital Technologies can Support Energy Access for All. apennings.com https://apennings.com/science-and-technology-studies/icts-for-sdg-7-twelve-ways-digital-technologies-can-support-energy-access-for-all/
[2] Pennings, A.J. (2022, Apr 22). Wireless Charging Infrastructure for EVs: Snack and Sell? apennings.com https://apennings.com/mobile-technologies/wireless-charging-infrastructure-for-evs-snack-and-sell/
Note: Chat GPT was used for parts of this post. Multiple prompts were used and parsed.
Citation APA (7th Edition)
Pennings, A.J. (2024, Sep 16). Battery Energy Storage Systems (BESS) and a Sustainable Future. apennings.com https://apennings.com/smart-new-deal/battery-energy-storage-systems-bess-and-a-sustainable-future/
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He lives in Austin, Texas, when not in the Republic of Korea.
Tags: Battery Energy Storage Systems (BESS) > greenhouse gas emissions (GHG) > lithium iron phosphate battery (LFP) > load balancing > long-duration energy storage (LDES) > sodium-ion batteries
ICT4D and the Global Network Transformation
Posted on | August 14, 2024 | No Comments
Edited remarks from my talk at the IGC Research Showcase, May 22, 2023 at the Incheon Global Campus, Songdo, South Korea.
This talk is a part of a more extensive discussion on ICT4D (Information and Communications Technology for Development) and Global Governance, but as I only have 8 minutes, I’m going to focus on changes in national network infrastructures worldwide and why we now have global data transfer and very cheap international voice and video calls. I want to discuss the transition in network architecture technology and telecommunications organizational models and how these led to the global Internet we have today.
In what was termed “liberalization,” “deregulation,” and “privatization” of national telecommunication systems, global pressures led to a radical transition in network architecture and organizational models. Often called “PTTs” for “Post, Telegraph, and Telephone,” these government entities underwent a transition to state-owned enterprises (SOEs) that were placed in competitive environments (liberalization), deregulated, and then sold off to private investors (privatization), in whole or part.[1] In this process, the global Internet emerged.
My analysis has been developed in the context of a larger set of ICT4D developments since 1945 that I examine elsewhere in detail, but because of limited time, I will focus primarily on networks. I got involved in ICT4D in the mid-1980s when I interned at the East-West Center (EWC) in Honolulu. The EWC was noted for its research on Communication for Development (C4D), including work on satellites for development such as India’s INSAT and Indonesia’s Palapa satellites.[2]
I arrived when they were putting together a project on National Computerization Policies, based on France’s Nora-Minc Report: The Computerization of Society (1982). We wanted to make the turn to computer technologies and their role in development (primarily agriculture, education, and health).
Because we had a good relationship with the Pacific Telecommunications Council (PTC), where I did my first internship, network infrastructure was a strong part of the East-West Center’s agenda on development. Also located in Honolulu, PTC brought a wide range of telecom professionals to Hawaii for their annual conference. This meant government representatives, corporate executives, academic researchers, etc. I remember that my first presentation on the national computerization policy project had International Telecommunications Union (ITU) Secretary-General Dr Pekka Tarjanne in the front row.
How did the transformation occur? Pressures started to build in the 1960s as global companies wanted better telecommunications, including more computer communications. Along with transportation, these were seen as “permissive” technologies, allowing expanded financial, manufacturing, and marketing capabilities. New undersea communications cables were added at a rate of roughly one a year after 1956, and the Space Race put satellites into geosynchronous orbits to facilitate international connectivity and the usability of earth stations. By the time of the Moon landing, NASA had realized Arthur C. Clarke’s vision of “rocket stations” providing global radio coverage.
The dynamics of the global economy changed when President Nixon took the US off the Bretton Woods’ gold-dollar standard in 1971, ultimately leading to significant changes in the world’s telecommunications networks. Nixon ended the convertibility between the dollar and gold, figuratively “closing the gold window” and stopping the bleeding of gold from reserves at Fort Knox and the Federal Reserve Bank in New York. The dollar subsequently crashed in value, incentivizing OPEC to raise oil prices and creating havoc in global currency and debt markets.
At the same time, new technologies were emerging with the commercialization of the Cold War’s semiconductor and telecommunication technology. Intel released the first microprocessor in 1971. Reuters historically provided a news service but created a new virtual marketplace for foreign exchange trading (FX). The monetary volatility of the 1970s oil crises made Reuters Money Monitor Rates for FX quite profitable. SWIFT provided international messaging of money information between banks.[3]
Banks also used network technologies to create syndicated loans to recycle OPEC money, soon to be called “petrodollars.” Where did they recycle them? Primarily countries that needed the money to buy oil, but also development projects worldwide borrowed the dollars. Thus grew the “Third World Debt Crisis.”
The debt crisis became the lever to “structurally adjust” countries towards a more open and globalized system. The Reagan administration tasked the IMF with ensuring that countries looking for debt relief started to follow a specific agenda that included the liberalization and privatization of their PTTs. Liberalization meant encouraging new companies to compete against the national and international incumbents, while privatization meant the process of transitioning from public to private ownership.
Sometimes called “spreadsheet capitalism,” this process often involved inventorying and valuing assets such as maintenance vehicles, telephone poles, and digital lines so the company could become a state-owned enterprise (SOE), be valued by investment banks, and eventually be sold off to private investors. These changes started to open up the PTT telecom structure to the introduction of new technologies, including the fiber-optic lines and packet-switching routers needed for the emerging World Wide Web.
The 1980s was a decade of significant changes in the global political economy, particularly in the US and Great Britain and their relationship with the rest of the world. Both Ronald Reagan and Margaret Thatcher wanted to counter the growing criticisms from the countries of the Global South while moving their economies out of the “stagflation” that rocked the oil crisis-ridden 1970s. Both were influenced by the “Austrian” economists Friedrich Hayek and Ludwig von Mises. Reagan was also influenced by George Gilder’s Wealth and Poverty, which promoted a potlatch “big man” theory that partly inspired his tax cuts. Both wanted to reduce the influence of unions.
Reagan was hesitant about breaking up the AT&T telecommunications company, but Thatcher was quite aggressive about privatizing the British PTT. The US’s “Ma Bell” telephone monopoly had come under various forms of antitrust attack in the previous decades, especially over making more spectrum available and allowing third-party terminal equipment. Spurred on by competitors like MCI and Sprint, AT&T was eventually broken up, divesting its local service to the “Baby Bells” while retaining Bell Labs, inventor of the transistor. The new AT&T held on to a lot of cash and its long-distance business, and was finally allowed into the computer business.
I moved to New Zealand in 1992 to study the transition of the country’s PTT to a State-Owned Enterprise (SOE) and then the privately owned “Telecom.” The government had started a process of organizing and valuing the “Post Office” into an SOE in the early 1980s, following the Reagan-Thatcher preference for private ownership. Then, it sold off 49% of its shares to two Baby Bells, Ameritech and Bell Atlantic (later integrated into Verizon), partially to pay off one-third of the debt it acquired as part of the Third World Debt Crisis. The majority of shares were meant for domestic control.
The “financial revolution” of the 1980s was based on dramatic changes in the 1970s and continued into the 1990s with the formation of the World Trade Organization (WTO). Headed in its early years by Renato Ruggiero of Italy, and later by Michael Moore, the former PM of New Zealand, the WTO opened up trade in all types of communications and information products and services with the Information Technology Agreement (ITA) in Singapore in 1996 and pushed for further privatization of telecommunications networks the following year in Geneva.
A speech by Vice President Al Gore to the International Telecommunications Union (ITU) on March 21, 1994, signaled the importance of building a “Global Information Infrastructure” (GII). But even more important was Gore’s participation the next month in the final negotiations of the GATT’s Uruguay Round that led to the establishment of the World Trade Organization (WTO).
The General Agreement on Tariffs and Trade (GATT) originated alongside the International Monetary Fund and the World Bank at the 1944 Bretton Woods Conference in New Hampshire. This event laid the foundation for the post-World War II financial system and established the US dollar–gold link that Nixon severed in 1971. The conference had more trouble with the proposed International Trade Organization (ITO), which the US Senate never ratified. Negotiations continued, and the GATT, signed in 1947, avoided the ITO’s fate and entered into force on January 1, 1948, though it lacked a solid institutional structure.
It did, however, sponsor eight rounds of multilateral trade negotiations between 1947 and 1994, including the Uruguay Round (1986–1994) that brought services into the negotiations. International trade negotiations had historically concentrated on physical goods; services were only seriously considered at the November 1982 GATT ministerial meeting. The Uruguay Round of trade negotiations led to the General Agreement on Trade in Services (GATS) as part of the World Trade Organization (WTO) mandate. The GATS extended the WTO into areas never previously recognized as coming under the scrutiny of trade policy.
The WTO would shape the Internet and its World Wide Web. While the Clinton-Gore administration was initially hesitant about the “Multilateral Trade Organization,” it saw the World Trade Organization as a way of enforcing key trade priorities and policies on global communications and e-commerce.
For networks, it meant replacing telecom monopolies with a more liberalized environment that included outside equipment vendors and service providers, and replacing PTT public ownership with private enterprises that would be more competitive and friendly to outside investment. For e-commerce, it meant replacing detailed bureaucratic regulations with a legal environment for fairer and more effective competition. It also meant eliminating cross-subsidies between profitable and unprofitable services, which had hindered corporate expansion through non-market pricing and subsidies based on social goals rather than market activities. The WTO would shape the Internet and, more precisely, why it could globalize and become so cheap.
Summary and Conclusion
The 1970s and 1980s saw significant technological advancements (like undersea cables, satellites, and microprocessors) and economic changes (such as the end of the gold standard and the oil crises) that reshaped global telecommunications. These factors, combined with the financial revolution and the restructuring of global trade policies, pushed countries toward a more open and competitive telecommunication environment.
The global transition from government-controlled Post, Telegraph, and Telephone (PTT) systems to privatized and liberalized telecommunications led to the creation of the global Internet. This shift involved deregulation and the sale of state-owned enterprises, as well as opening the market to competition and new technologies. The network structure is important for the development of ICT4D.[4]
The establishment of the World Trade Organization (WTO) in the 1990s played a crucial role in furthering the liberalization of global telecommunications. This included the introduction of the General Agreement on Trade in Services (GATS) and the promotion of a legal environment that favored competition, which helped globalize the Internet and reduce costs for international communication.
The liberalization and privatization of national telecommunication systems, driven by technological advancements, economic shifts, and global trade policies, were key factors in the development of the global Internet. These changes not only facilitated the creation of a more interconnected world but also made international communication more accessible and affordable.
Citation APA (7th Edition)
Pennings, A.J. (2024, Aug 14). ICT4D and the Global Network Transformation. apennings.com https://apennings.com/telecom-policy/ict4d-and-the-global-network-transformation/
Notes
[1] Herb Dordick and Deane Neubauer, “Information as Currency: Organizational Restructuring under the Impact of the Information Revolution.” Keio Review. No. 25 (1985)
[2] Meheroo Jussawalla was our resident communications development economist at the East-West Center, who also worked closely with Marcellus Snow of the University of Hawaii. Norm Abramson, the creator of ALOHANET, would also walk across the street from the Engineering School at the University of Hawaii. Herb Dordick and Deane Neubauer made a substantial contribution with “Information as Currency: Organizational Restructuring under the Impact of the Information Revolution.” This paper became very influential in my graduate studies.
[3] For my graduate work I switched focus to financial technology, particularly the telecommunications regulatory framework for international banking. I was intrigued by the emergence of new networks such as CHIPS, SWIFT, Reuters, and the shadowy world of eurodollars. Reuters was very innovative with its Stockmaster and particularly its Money Monitor Rates. By the 1980s, SWIFT was pioneering the use of packet switching, building on the earlier X.25 network and X.75 gateway protocols developed by the ITU and adopted by most of the PTTs around the world at the time.
[4] Information and Communication Technologies for Development (ICT4D) is one of the specializations for the undergraduate B.Sci. degree in Technological Systems Management here at SUNY Korea and part of my research agenda.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: AT&T > Money Monitor Rates > PTTs > SWIFT > The Society for Worldwide Interbank Financial Telecommunications (SWIFT) > WTO agreement on basic telecommunication services
Digital Borders and Authoritarianism
Posted on | July 27, 2024 | No Comments
Despite cyberspace’s early promise of a world without digital borders, nationalistic concerns started to re-emerge in the new millennium. In an age where information is a powerful tool, authoritarian regimes have increasingly leveraged digital borders to enforce their control, limit dissent, and maintain power. In the contemporary international landscape, the intersection of digital borders and authoritarianism presents a complex and dangerous dynamic for nation-states and the global order. This essay explores the mechanisms, implications, and broader consequences of this digital authoritarianism.
In the post-colonial era, national powers were often keen to limit economic and financial information flows. These communications and data movements were often seen by newly independent countries as a type of “Trojan horse,” bypassing national boundaries without administrative scrutiny. Before the leverage of petro-dollar debt opened up the networked flows of data and capital that characterized neo-liberal global financialization, nations around the world were known to police information borders, both technologically and politically. Technological innovations like Deep Packet Inspection (DPI) would continue to supply nation-states with tools to monitor populations, and even provide a “kill switch” to shut down a nation’s Internet access.
Post, Telephone, and Telegraph (PTT) monopolies operated as a type of electronic moat that restricted data communications. Other ministries also restricted capital and sometimes news flows. The push to deregulate and privatize the telecommunications environment initially liberalized transnational information flows. Those unrestricted flows would not go uncontested for long, though.
Digital Governance and Administrative Power
Anthony Giddens, in The Nation-State and Violence (1985), described nation-states as “power containers” whose effective functioning relies on the interplay between administrative power and surveillance within a “territorial delimitation.”[1] The nation-state is a quintessentially modern institution, characterized by a centralized bureaucratic authority, a defined territory, and the ability to mobilize resources and populations. Nation-states collect power from two major sources:
Administrative (Allocative) Power I: Communication and information storage giving control over space and time and, with it, material resources;
Administrative (Authoritative) Power II: Internal pacification of populations through ideology, surveillance, and the monopoly over violence, incarceration, and physical force.
Administrative power provides the framework for governance and has become increasingly sophisticated. This power involves overt monitoring, such as policing and public surveillance systems with CCTV and smart recognition systems. It increasingly relies on what has been called “dataveillance,” the collection and analysis of data about individuals and groups. Targeted surveillance is precise and aims to gather intelligence on specific people; methods can include wiretapping, geo-locational and GPS tracking, online monitoring, and physical observation.
Also, big data techniques that gather knowledge from sources such as census information, social security data, and digital footprints are powerful individual and group tracking tools. Both enable the state to respond to internal and external challenges, and both can be used to ensure compliance and to help construct narratives of legitimation.
Giddens noted that administrative power is not inherently authoritarian. In democratic contexts, it can be used to manage society effectively and uphold the rule of law. Census information is often important for allocating political representation in democratic societies. However, authoritarian regimes can co-opt the same structures to consolidate power and suppress dissent.
Authoritarianism is mostly characterized by a concentration of power in a single authority or a small group of individuals who can exercise significant control over various aspects of life, including political, social, and economic spheres.
Authoritarian nation-states usually emerge when a crisis leads to a domestic group taking power and capturing the state apparatus. They then use the power of the state to perpetuate themselves through the pacification of the domestic population. This control is achieved through various combinations of ideological persuasion (usually grievance-based), economic dominance, and violence. Xenophobic appeals are quite effective, such as fears about immigration and foreign religions. Sanctions by the global community are another tool used to play the population against a foreign threat. In the end, oppressive control comes down to how many people can be fooled or manipulated, and the strength of the regime’s policing resources.
Digital borders refer to the territorial limitations and controls imposed on the flow of digital information across national boundaries. This digital control has often resulted in isolation from global information, suppression of free speech, separation from the global economy and supply chains, and the erosion of trust in the democratic potential of digital media as a valid information and news source. Globally, this digital isolation has led to human rights concerns, geopolitical tensions, and technological fragmentation.
Mechanisms of Digital Borders
Authoritarian regimes employ various strategies to create and enforce digital borders. These methods are sophisticated and evolve with technological advancements to ensure comprehensive control over the digital sphere. These include Internet censorship and filtering, surveillance and data collection, social media manipulation, and control of the digital infrastructure.
One of the most direct methods of enforcing digital borders is the use of firewalls and filtering technologies to block access to certain websites and online services. China’s Great Firewall is a prominent example, preventing access to selected foreign news websites, social media platforms, and content deemed subversive by the state. By controlling what information citizens can access, authoritarian regimes shape public perception and suppress dissenting views.
Mass surveillance is another key component of digital authoritarianism. By monitoring mass media and online activities, governments can track and intimidate academics, activists, dissidents, and journalists. Advanced algorithms and artificial intelligence facilitate real-time monitoring of social media and other digital communications, identifying and targeting individuals and media outlets who threaten the regime’s narrative.
Mentioned above, Deep Packet Inspection (DPI) technology allows for detailed monitoring and filtering of Internet traffic, enabling regimes to block specific content and identify users accessing prohibited material.
Social Credit Systems (SoCS) have also been conceived and implemented to build evaluative ratings for citizens, businesses, and other organizations. They use big data to monitor behaviors and assign scores based on compliance with government standards. Predictive policing technologies employ AI to analyze data and predict potential criminal activity, leading to pre-emptive actions against perceived threats.
Authoritarian governments also manipulate social media to spread propaganda and disinformation. This interference includes the use of bots, trolls, memes, and state-sponsored media to flood the digital space with content that supports the regime’s objectives while drowning out opposition voices.
By owning or heavily regulating Internet Service Providers (ISPs) and telecommunications companies, authoritarian regimes ensure they have the ultimate say in who can access the Internet and how it can be used. This control extends to shutting down the Internet entirely during periods of unrest, as seen in countries like Bangladesh, Egypt, Iran, and Myanmar.
Implications of Digital Borders
The implementation and enforcement of digital borders have profound implications for the political, social, and economic landscapes of affected countries.
Digital borders can greatly limit freedom of expression. Digital repression means citizens cannot freely share information, discuss political matters, or criticize the government without fear of reprisal. This control suppresses public discourse and hinders the development of a healthy, diverse society.
By limiting access to international news and perspectives, authoritarian regimes isolate their populations from the global flow of information. This hindrance fosters controlled narratives and an insular worldview, which can be manipulated to maintain nationalistic or xenophobic sentiments.
Digital borders can also impede economic development. Flows of information are crucial for innovation and global business operations. Restrictions on Internet access and online services can discourage foreign investment, hinder technological progress, and reduce competitiveness in the global market.
Pervasive surveillance and control erode public trust in digital technologies. People become wary of expressing themselves online or using digital services, knowing their activities are being monitored. This mistrust can stymie the adoption of new technologies and hinder digital literacy and public discourse.
Broader Consequences
The intersection of digital borders and authoritarianism extends beyond individual nations, affecting global politics and international relations.
The suppression of digital freedoms raises significant human rights concerns. International organizations and civil society face challenges in addressing these violations, as authoritarian regimes often justify their actions under the guise of national security and sovereignty. Civil society, consisting of dense and diverse networks of community groups, often stands between the individual and the authoritarian state. Citizen groups, cooperating with community-based groups and associations, strengthen civic freedoms and rights such as fair elections, freedom of association, freedom of speech, and a free press.
Digital borders contribute to geopolitical tensions, particularly between authoritarian and democratic states (which also have to guard against the perils of digital control). Conflicts over cyber espionage, digital trade barriers, and information warfare are increasingly common. Democracies advocate for open Internet principles such as net neutrality. At the same time, authoritarian regimes push for cyber-sovereignty and centralized control over network management, including using “kill switches” that can immediately shut down Internet transmissions through the digital border.
The imposition of digital borders can lead to a fragmented global Internet, where different regions operate under vastly different rules and restrictions. This fragmentation threatens the foundational concept of a unified, open global Internet, complicating international collaboration and digital interoperability.
Conclusion
Authoritarian regimes’ enforcement of digital borders in the modern era represents a significant challenge to global norms of free expression, access to information, and human rights. As regimes continue to develop and refine their methods of control, the civil and international communities must navigate the delicate balance between respecting national sovereignty and advocating for digital freedoms. The future of the Internet as a space for the free exchange of ideas and information hinges on the global response to these authoritarian practices and the collective effort to preserve an open and inclusive digital world.
Citation APA (7th Edition)
Pennings, A.J. (2024, July 27). Digital Borders and Authoritarianism. apennings.com https://apennings.com/dystopian-economies/digital-borders-and-authoritarianism/
Notes
[1] Giddens, A. (1985). The Nation-State and Violence. University of California Press. p. 172.
[2] Shahbaz, A. (2018). The Rise of Digital Authoritarianism. https://freedomhouse.org/sites/default/files/2020-02/10192018_FOTN_2018_Final_Booklet.pdf
Note: Chat GPT was referenced for parts of this post. Several prompts were used and parsed.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and sustainable development. From 2002-2012 he was on the faculty of New York University and taught digital economics and information systems management. He also taught digital media management at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: Administrative power > nation-state > surveillance
AI and Remote Sensing for Monitoring Landslides and Flooding
Posted on | June 24, 2024 | No Comments
Invited remarks prepared for the 2024 United Nations Public Service Forum ‘Fostering Innovation Amid Global Challenges: A Public Sector Perspective,’ Songdo Convensia, Republic of Korea, 24–26 June 2024. Organized by the Ministry of the Interior and Safety (MOIS), Republic of Korea.
Thank you very much for this opportunity to address how science and technology can address important issues of landslides and urban flooding. A few days after I was invited to this conference, a very unfortunate landslide occurred in Papua New Guinea. Fatalities are still being tallied but are likely to be between 700 and 1200 people.
Flooding is also a tragic sign of our times. As climate change has significantly increased the amount of moisture held in the atmosphere, global weather increasingly resembles the turbulence of a child (or adult) playing in a filled bathtub. Some of the worst 2023 flooding occurred in Beijing, the Congo, Greece, Libya, Myanmar, and Pakistan. These floods took thousands of lives, displaced hundreds of thousands of people, and caused billions of dollars in property damage.
As requested, I will talk about the role of Artificial Intelligence (AI) and remote sensing of landslides and flooding. I will reference a model I use in my graduate course, EST 561 – Sensing Technologies for Disaster Risk Reduction, at the State University of New York, Korea, here in Songdo. The “Seven Processes of Remote Sensing” from the Canada Centre for Remote Sensing (CCRS) provides a useful framework for understanding how AI and remote sensing work together.[1] Additionally, AI can be implemented at several stages of the sensing process. I list the seven processes in this slide and below at [2].
Remote sensing, the detection and monitoring of an area’s physical characteristics by using sensing technologies to measure the reflected and emitted radiation at a distance, generates vast amounts of data. This data needs to be accurately collected, categorized, and interpreted for information that can be used by first responders and other decision-makers, including policy-makers.
AI algorithms, particularly those involving machine learning (ML) and deep learning (DL), can be useful at several stages. They can compensate for atmospheric conditions and automate the extraction and use of remote sensing data from target areas. They help identify characteristics of water bodies, soil moisture levels, vegetation health, and ground deformations. This intelligence can speed up analysis and increase accuracy in crucial situations. Just as AI has proven to be extremely useful in detecting cancerous cells, AI is increasingly able to interpret complex geographical and hydrological imagery.[3]
The primary sensing model involves an energy source, a platform for emitting or receiving the energy, the interaction of energy with the atmosphere, and the interaction of energy with the target. This information is then collected, processed, interpreted, and often applied in a resilience context. Let me explain.
The Energy Source (A)
Sensing technologies rely on data from an energy source that is either passive or active. AI can analyze data from passive sources like sunlight or moonlight reflected off the Earth’s surface. For example, it can use satellite imagery from reflected sunlight to detect changes in land and water surfaces that may indicate flooding or landslides. AI can also process data from active sources such as radar and LiDAR (Light Detection and Ranging). LiDAR, which uses light instead of radio waves, can measure variations in ground height with high precision, helping to identify terrain changes that may precede a landslide and measure the mass of land that may have shifted in the event.
Synthetic Aperture Radar (SAR) sensors on satellites produce microwaves in the 8–12 GHz (X-band) and 4–8 GHz (C-band, used by Sentinel-1) ranges that can penetrate cloud cover and provide high-resolution images of the Earth’s surface. This makes it possible to detect and map flooded areas even during heavy rains or at night. Also, passive instruments such as NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) and high-resolution optical satellites like Landsat and Sentinel-2 capture visible and infrared imagery that AI can use to delineate flood boundaries by distinguishing water bodies and saturated soils.
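As a simple illustration of how such imagery can be turned into a flood map, the following Python sketch thresholds SAR backscatter values: open water scatters radar energy away from the sensor, so flooded pixels appear dark. The array and the -15 dB cutoff are invented for illustration and are not taken from any Sentinel-1 processing chain.

```python
import numpy as np

# Toy flood-mapping sketch: flooded pixels tend to have low SAR backscatter
# because smooth water reflects the radar pulse away from the sensor.
backscatter_db = np.array([
    [-8.2,  -9.1, -16.5, -17.2],
    [-7.5, -15.9, -18.0, -16.8],
    [-6.9,  -8.8,  -9.5, -15.1],
])

WATER_THRESHOLD_DB = -15.0           # assumed cutoff separating water from land
flood_mask = backscatter_db < WATER_THRESHOLD_DB

print(flood_mask)                    # True where the pixel is likely inundated
print("flooded fraction:", flood_mask.mean())
```

Operational systems refine this idea with adaptive thresholds, terrain masks, and machine learning, but the underlying logic of separating dark water pixels from brighter land remains the same.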
Interaction of Energy with the Atmosphere (B)
Sensing from satellites in Earth orbit (and other space-based platforms) is highly structured by what’s in the atmosphere.[4] When analyzing remote sensing data, AI can make adjustments for atmospheric conditions such as clouds, smoke, dust, rain, fog, snow, and steam. Machine learning algorithms are trained to recognize and compensate for these atmospheric factors, improving the accuracy of flood and landslide detection.
Machine learning models can also simulate how different atmospheric conditions affect radiation, helping to better understand and interpret the data received during various weather scenarios. This monitoring is crucial for accurate flood and landslide detection.
Interaction of Energy with the Target (C)
AI can analyze how different surfaces absorb, reflect, or scatter energy. For example, water bodies have distinct reflective properties compared to dry land, which AI can use to detect and identify flooding. It is said that “water loves red,” meaning that it absorbs red electromagnetic wavelengths and reflects blue, giving us our beautiful blue oceans. Often, particulate material absorbs the blue rays too, resulting in greenish waters. AI can also identify subtle changes in vegetation or soil moisture that might indicate a potential landslide. Researchers in Japan are acutely aware of these possibilities, given the country’s often mountainous terrain and frequent heavy rains.[5]
Water and vegetation may reflect similarly in the visible wavelengths but are almost always separable in the infrared. You can see that the reflectance starts to vary considerably at about 0.7 micrometers (µm). (See image below.) The spectral response can be quite variable, even for the same target type, and can also vary with time (e.g., the “green-ness” of leaves) and location. These absorption characteristics allow for the identification and analysis of water bodies, moisture content in soil, and even snow and ice. This information can be used for monitoring lakes, rivers, and reservoirs, and for assessing soil moisture levels for irrigation management. See the more detailed explanation at [6].
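A short Python sketch can show how these absorption characteristics are exploited with spectral indices. The reflectance values below are made up, and the formulas (NDVI and McFeeters’ NDWI) are standard textbook indices used here only for illustration, not the specific method of any study cited in this post.

```python
import numpy as np

# Spectral-index sketch with invented NIR, red, and green reflectance values.
# NDVI = (NIR - Red) / (NIR + Red) highlights vegetation; NDWI =
# (Green - NIR) / (Green + NIR) highlights open water, exploiting water's
# strong NIR absorption described above.

nir   = np.array([0.45, 0.40, 0.05, 0.04])   # vegetation reflects NIR strongly
red   = np.array([0.08, 0.10, 0.04, 0.03])
green = np.array([0.10, 0.12, 0.09, 0.08])   # water reflects more green than NIR

ndvi = (nir - red) / (nir + red)
ndwi = (green - nir) / (green + nir)

print("NDVI:", np.round(ndvi, 2))   # high positive values -> healthy vegetation
print("NDWI:", np.round(ndwi, 2))   # positive values -> likely water pixels
```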
Knowing where to “look” spectrally and understanding the factors which influence the spectral response of the features of interest are critical to correctly interpreting the interaction of electromagnetic radiation with the surface.
The Platform and Recording of Energy by the Sensor (D)
Platforms can be space-based, airborne, or mobile. Much of the early research was done with satellites, but drones and mobile robots (and automobiles) use the same model. After the energy has been scattered by, or emitted from, the target, we require a sensor on a platform to collect and record the returned electromagnetic radiation. Remote sensing systems that measure naturally available energy are called passive sensors. Passive sensors can only detect energy when naturally occurring energy is available and makes it through the atmosphere.
Active sensors, like the LiDAR mentioned before, provide their own energy source for illumination. These sensors emit radiation directed toward the target, and the sensing platform detects and measures the radiation reflected from that target.
AI can analyze data from platforms like satellites for large-scale monitoring of land and water events. Satellite technologies like SAR provide extensive coverage and can track changes over time, making them ideal for detecting floods and landslides. Aircraft and drones equipped with sensors can collect detailed local data, allowing AI to process it in real time and provide immediate insights. Ground-based sensors on cell towers, IoT devices, and mobile units such as Boston Dynamics’ SPOT robots can provide continuous monitoring at locations that may not be accessible to other platforms.
AI can integrate data from these platforms for a comprehensive view of an area, such as identifying landslide-prone areas through soil and vegetation analysis. High-resolution digital elevation models (DEMs) created from LiDAR or photogrammetry help identify areas with steep slopes and other topographic features associated with landslide risk. Multispectral scanning systems that collect data over a variety of wavelengths, and hyperspectral imagers that detect hundreds of very narrow spectral bands throughout the visible, near-infrared, and mid-infrared portions of the electromagnetic spectrum, can detect soil moisture levels and vegetation health, important indicators of landslide susceptibility.
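As a rough illustration of how a DEM feeds into landslide screening, the following Python sketch computes slope angles from a tiny synthetic elevation grid and flags cells above an assumed steepness threshold. Real susceptibility mapping would combine many more factors and much finer data.

```python
import numpy as np

# Slope sketch from a tiny synthetic digital elevation model (DEM).
# Elevations are in meters on an assumed 30 m grid.
dem = np.array([
    [300., 310., 330., 360.],
    [305., 320., 350., 395.],
    [310., 335., 375., 430.],
])
CELL_SIZE_M = 30.0

dz_dy, dz_dx = np.gradient(dem, CELL_SIZE_M)            # elevation change per meter
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

STEEP_THRESHOLD_DEG = 30.0                               # assumed risk cutoff
print(np.round(slope_deg, 1))
print("cells flagged as steep:", int((slope_deg > STEEP_THRESHOLD_DEG).sum()))
```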
Transmission, Reception, and Processing (E)
Energy recorded by the sensor is transmitted as data to a receiving and processing station, where it is processed into an image (hardcopy and/or digital). Spy satellites were early adopters of digital imaging technologies such as charge-coupled devices (CCDs) and CMOS (complementary metal-oxide-semiconductor) image sensors, which are now used in smartphones and other cameras. CCDs are an older technology still used because of their superior image quality and successful efforts to reduce their energy consumption. CMOS sensors, meanwhile, have seen major improvements in image quality.
These technologies both receive electromagnetic energy immediately (unlike film, which has to be developed) and convert it into images. In both cases, a photograph is represented and displayed in a digital format by subdividing the image into small, equal-sized and shaped areas called picture elements, or pixels. The brightness of each area is represented by a numeric value or digital number. Processed images are then interpreted, visually and/or digitally, to extract information about the target.
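A minimal sketch of this pixel representation is shown below: an array of digital numbers is rescaled to reflectance with placeholder gain and offset values standing in for the calibration coefficients that would normally come from a sensor’s metadata.

```python
import numpy as np

# Digital-number (DN) sketch: each pixel's brightness is stored as an integer,
# here 16-bit values. GAIN and OFFSET are placeholder calibration coefficients.
dn = np.array([[ 7200,  9800, 21000],
               [ 8100, 15500, 30000]], dtype=np.uint16)

GAIN, OFFSET = 2.0e-5, -0.1                 # assumed rescaling coefficients
reflectance = dn.astype(float) * GAIN + OFFSET

print("raw digital numbers:\n", dn)
print("rescaled reflectance:\n", np.round(reflectance, 3))
```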
Interpretation and Analysis (F)
Making sense of information from these technologies to understand changes in land formations and flooding can benefit from several analytical approaches. One of the most important is monitoring geographical features over time and space, allowing AI to use techniques such as time-series analysis, river and stream gauging, and post-landslide assessment, especially after a catastrophic fire.
Remote sensing data over time allows for monitoring the temporal dynamics of floods, including the rise and fall of water levels and the progression of floodwaters across a landscape. The Landsat archives provide a rich library of imagery dating back to the 1970s that can be used for this purpose. Stored imagery is also helpful in assessing the damage and impacts of a landslide after it has occurred. Post-event imagery helps assess the extent and impact of landslides on infrastructure, roads, and human settlements, aiding in disaster response and rehabilitation efforts.
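The time-series idea can be illustrated with a simple change-detection sketch that compares a pre-event and a post-event water mask; both masks and the pixel size are invented for the example.

```python
import numpy as np

# Change-detection sketch comparing a pre-event and post-event water mask
# (True = water). In practice these masks would come from classified imagery
# acquired on different dates.
pre_event  = np.array([[0,0,0,0],
                       [0,0,0,0],
                       [1,1,0,0],
                       [1,1,1,0]], dtype=bool)   # the river before the flood

post_event = np.array([[0,0,0,0],
                       [1,1,0,0],
                       [1,1,1,0],
                       [1,1,1,1]], dtype=bool)   # expanded water after the flood

newly_flooded = post_event & ~pre_event
PIXEL_AREA_M2 = 100 * 100                        # assumed 100 m pixels

print("newly flooded pixels:", int(newly_flooded.sum()))
print("estimated newly inundated area (km^2):",
      newly_flooded.sum() * PIXEL_AREA_M2 / 1e6)
```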
Volume and area estimation after a fire, flood, or landslide can assess the geographic impact and support engineering and humanitarian responses. AI can help remote sensing quantify the volume of displaced material and the area affected by landslides, which is essential for understanding the scale of the event and planning recovery operations. Remote sensing supplements ground-based river and stream gauges by providing spatially extensive water surface elevation measurements and flow rates. This analysis often relies on structural geology and the study of faults, folds, synclines, anticlines, and contours. Understanding geological structures is often the key to mapping potential geohazards (e.g., landslides).
AI can classify areas affected by floods or landslides, using deep learning to recognize patterns and changes in the landscape. AI can also use predictive analytics to identify climate and geologic trends, analyzing historical and real-time data to provide forecasts of flood and landslide risks, early warnings, and insights.
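To suggest how such predictive classification might look in code, here is a small scikit-learn sketch trained on synthetic slope and soil-moisture data. It is a toy model built on invented assumptions, not a validated landslide-susceptibility method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Susceptibility-classification sketch on synthetic data. Each sample has two
# features -- slope (degrees) and soil moisture (fraction) -- and a label
# indicating whether a slide occurred. All numbers are invented.
rng = np.random.default_rng(0)
slope = rng.uniform(0, 45, 200)
moisture = rng.uniform(0.1, 0.6, 200)
# Assumed ground truth: steep, wet slopes are more likely to fail.
labels = ((slope > 28) & (moisture > 0.4)).astype(int)

X = np.column_stack([slope, moisture])
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Predict the failure probability for two hypothetical hillslopes.
candidates = np.array([[35.0, 0.5], [12.0, 0.2]])
print(model.predict_proba(candidates)[:, 1])   # probability of the "slide" class
```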
AI Integration and Applications (G)
Techniques such as data fusion can combine remote sensing data from multiple sensors (e.g., optical, radar, LiDAR) with ground-based observations to enhance the overall quality and resolution of the information. This integration allows for more accurate mapping of topography, better detection of water bodies, and detailed monitoring of environmental changes.
AI applications can analyze real-time data from sensors to detect rising water levels and predict potential flooding areas. Machine learning algorithms can recognize patterns in historical data, improving the prediction models for future flood events. AI can also incorporate data from social media and crowdsourced reports, providing a more comprehensive view of ongoing events. This information can allow policy makers and first responders to use AI systems to automatically generate alerts and warnings for authorities and the public, allowing for timely evacuations and preparations.
AI can analyze topographical data from LiDAR sensing technologies to detect ground movement and changes in terrain that precede landslides, and it can process data from ground-based sensors to monitor soil moisture levels, a critical factor in landslide risk. By learning from past landslide events, AI can identify risk factors, predict areas at risk, and suggest mitigation measures, such as reinforcing vulnerable slopes or adjusting land-use planning.
Conclusion
The integration of AI with remote sensing technologies and ground-based observations enhances the monitoring and management of landslide and flooding disasters. By combining data from multiple sources, analyzing real-time sensor data, and learning from past events, AI can provide accurate predictions, timely alerts, and effective risk mitigation strategies. This approach not only improves disaster response but also aids in long-term planning and resilience building.
By integrating AI into each of these processes, remote sensing can become more accurate, efficient, and insightful, providing valuable data for a wide range of applications supporting climate resilience. AI can contribute at each stage of the remote sensing process. As a result, the detection, monitoring, and response to floods and landslides can be significantly improved, leading to better disaster risk management and mitigation strategies. Remote sensing technologies, when combined with ground-based river and stream gauges, provide a spatially extensive and temporally rich dataset for monitoring water surface elevation and flow rates. This combination enhances the accuracy of hydrological models, improves early warning systems, and supports effective water resource management and disaster risk reduction efforts.
Citation APA (7th Edition)
Pennings, A.J. (2024, Jun 24). AI and Remote Sensing for Monitoring Landslides and Flooding. apennings.com https://apennings.com/space-systems/ai-and-remote-sensing-for-monitoring-landslides-and-flooding/
Notes
[1] Canada Centre for Remote Sensing. (n.d.). Fundamentals of Remote Sensing. Retrieved from https://natural-resources.canada.ca/maps-tools-and-publications/satellite-imagery-elevation-data-and-air-photos/tutorial-fundamentals-remote-sensing/introduction/9363 and Fundamentals of Remote Sensing
[2] The Canada Centre for Remote Sensing (CCRS) Model:
1. Energy Source or Illumination (A)
2. Radiation and the Atmosphere (B)
3. Interaction with the Target (C)
4. Recording of Energy by the Sensor (D)
5. Transmission, Reception, and Processing (E)
6. Interpretation and Analysis (F)
7. Application (G) – Information extracted from the imagery about the target in order to better understand it, reveal some new information, or assist in solving a particular problem.
[3] Zhang B, Shi H, Wang H. Machine Learning and AI in Cancer Prognosis, Prediction, and Treatment Selection: A Critical Approach. J Multidiscip Healthc. 2023 Jun 26;16:1779-1791. doi: 10.2147/JMDH.S410301. PMID: 37398894; PMCID: PMC10312208.
[4] A good illustration of which atmospheric conditions influence which electromagnetic emissions can be found at: NASA Earthdata. (n.d.). Remote sensing. NASA. Retrieved from https://www.earthdata.nasa.gov/learn/backgrounders/remote-sensing
[5] Asada H, Minagawa T. Impact of Vegetation Differences on Shallow Landslides: A Case Study in Aso, Japan. Water. 2023; 15(18):3193. https://doi.org/10.3390/w15183193
[6] The Near-Infrared (NIR) and Short-Wave Infrared (SWIR) ranges of the infrared spectrum are highly effective for sensing water, while NIR (and to some extent the red edge) is better suited for sensing vegetation. Water strongly absorbs infrared radiation in these ranges, making it appear dark in NIR and SWIR imagery. This absorption characteristic allows for the identification and analysis of water bodies, moisture content in soil, and even snow and ice, and can be used for monitoring lakes, rivers, and reservoirs and for assessing soil moisture levels for irrigation management. Vegetation strongly reflects NIR light due to the structure of plant leaves. This high reflectance makes NIR ideal for monitoring vegetation health and biomass; healthy, chlorophyll-rich vegetation reflects more NIR light than stressed or diseased plants. The transition zone between the red and NIR parts of the spectrum, known as the “red edge,” is particularly sensitive to changes in plant health and chlorophyll content. The Normalized Difference Vegetation Index (NDVI) is a commonly used index that combines red and NIR reflectance to assess vegetation health and coverage; it is calculated as (NIR – Red) / (NIR + Red). Higher NDVI values indicate healthier and denser vegetation.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching ICT for sustainable development and engineering economics. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.
Tags: Boston Dynamics > deep learning > LiDar > Machine Learning > remote sensing > SPOT robot
AI and the Rise of Networked Robotics
Posted on | June 22, 2024 | No Comments
The 2004 movie I, Robot was quite prescient. Directed by Alex Proyas and named after the short story by science fiction legend Isaac Asimov, the cyberpunkish tale set in the year 2035 revolves around a policeman, played by Will Smith. He is haunted by memories of being saved from drowning by a robot in a river after a car crash. His angst comes from seeing a young girl from the other car drown as he is being saved. The robot calculated that the girl could not be saved, but the policeman could. Consequently, the policeman develops a prejudice and hatred for robots, driving the movie’s narrative.
What was particularly striking about the movie was its then relatively new vision of robots as networked machines, in this case connected subjects of a cloud-based artificial intelligence (AI) named VIKI (Virtual Interactive Kinetic Intelligence). VIKI is the central computer of U.S. Robotics (USR), a major manufacturer of robots. One of its newest models, the humanoid-looking NS-5, is equipped with advanced artificial intelligence and speech recognition capabilities, allowing the robots to communicate fluently and naturally with humans and with the AI. “She” has been communicating with the NS-5s and sending them software updates via their persistent network connection, outside the oversight of USR management.
In this post, I examine the transition from autonomous robotics to networked, AI-enhanced robotics by revisiting Michio Kaku’s Physics of the Future (2011). I use the first two chapters, “Future of the Computer: Mind over Matter” and “Future of AI: Rise of the Machines,” as part of my Introduction to Science, Technology, and Society course. Both chapters address robotics and are insightful in many ways, but they lack a focus on networked intelligence. The book was published on the verge of the AI and robotics explosion that is now being driven by crowdsourcing, web scraping, and other networking techniques that gather information for machine learning (ML).
The book tends to see robotics and even AI as autonomous, stand-alone systems. A primary focus was ASIMO (Advanced Step in Innovative Mobility), Honda’s humanoid robot, which was recently discontinued, though not without a storied history. ASIMO was animated to be very lifelike, but its actions were entirely prescribed by its programmers.
Beyond Turing
Kaku continues with concerns about AI’s common sense and consciousness problems, including discussions of reverse engineering animal and human brains to find ways to increase computerized intelligence. Below, I recount some of Kaku’s important observations about AI and robotics and go on to stress the importance of networked AI for robotics and its potential to disrupt human labor practices in population-challenged societies.
One of the first distinctions Kaku makes is the comparison of the traditional computing model, based on Alan Turing’s conception of the general-purpose computer (input, central processor, output), with the learning models that characterize AI. NYU’s DARPA-funded LAGR project, for example, was guided by Hebb’s rule: whenever a correct decision is made, the network is reinforced.
Traditional computing is designed around developing a program to take data in, perform some function on the data, and output a result. LAGR (Learning Applied to Ground Robots), which developed long-range vision for autonomous off-road driving, instead trained convolutional neural networks (CNNs) to learn patterns and make decisions or predictions based on incoming data. Unlike the Turing computing model, which focuses on the theoretical aspects of computation, AI aims to develop practical systems that can exhibit intelligent behavior and adapt to new situations.
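To make the contrast concrete, here is a minimal, hypothetical Python sketch of Hebbian learning, the rule cited above: connections between co-active input and output units are strengthened so the network can later recall the pairing. It is an illustration only, not LAGR’s actual code, and it assumes NumPy is installed; the patterns and learning rate are invented.

# A minimal Hebbian-learning sketch (illustration only, not LAGR's code):
# connections between co-active input and output units are strengthened,
# so the network can later recall the pairing.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4

# Two input patterns and the output patterns they should evoke.
inputs  = rng.choice([0.0, 1.0], size=(2, n_in))
targets = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], dtype=float)

W = np.zeros((n_out, n_in))
lr = 0.5

# Hebb's rule: delta_W = lr * (output activity) outer (input activity).
for x, y in zip(inputs, targets):
    W += lr * np.outer(y, x)

# Recall: present an input again and see which output unit responds most.
for i, x in enumerate(inputs):
    response = W @ x
    print(f"pattern {i} -> strongest output unit {response.argmax()}")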
Pattern Recognition and Machine Learning
Kaku pointed to two problems with AI and robotics: “common sense” and pattern recognition. Both are needed for automated tasks such as Full Self-Driving (FSD). He predicted common sense would be solved with the “brute force” of computing power and by the development of an “encyclopedia of thought” through endeavors such as CYC, a long-term AI project led by Douglas B. Lenat, who founded Cycorp, Inc. CYC sought to capture common sense by assembling a comprehensive knowledge base covering basic ontological concepts and rules, with the Austin-based company focusing on the implicit knowledge humans take for granted, such as how to walk or ride a bicycle. CYC eventually developed a powerful reasoning engine and natural language interfaces for enterprise applications such as medical services.
Kaku went to MIT to explore the challenge of pattern recognition. Tomaso Poggio’s machine at MIT researched “immediate recognition,” in which an AI must quickly recognize a branch falling or a cat crossing the street. The goal was the ability to recognize an object instantly, even before it registers in our awareness, a trait that served humanity well as it evolved through its hunting stage. Life-and-death decisions are often made in milliseconds, and any AI driving our cars or other life-critical technology needs to operate within that timeframe. With some trepidation, Kaku recounts how the machine consistently scored higher than humans (including him) on a specific visual recognition test.
AI made significant advances in pattern recognition by developing and applying machine learning techniques roughly categorized as supervised, unsupervised, and reinforcement learning. These are, briefly: learning from labeled data to make predictions, identifying patterns in unlabeled data, and learning to make decisions through rewards and penalties in an interactive environment. Labeled data “supervises” the model toward the desired output. Unsupervised learning is useful when you need to discover structure, such as clusters or anomalies, in data that has no labels. Reinforcement learning is closer to human trial-and-error learning: the algorithm interacts with its environment and receives positive or negative rewards.
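The three paradigms can be illustrated with a short, hypothetical Python sketch (scikit-learn and NumPy assumed installed); the datasets, the two-armed “bandit,” and all parameter values are invented for illustration only.

# A compact, hypothetical sketch of the three learning paradigms.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# 1. Supervised learning: labeled examples "supervise" the model.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # labels provided up front
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# 2. Unsupervised learning: no labels, only structure in the data.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# 3. Reinforcement learning: learn from rewards, here a 2-armed bandit.
true_payout = [0.3, 0.7]                       # unknown to the agent
estimates, counts = [0.0, 0.0], [0, 0]
for t in range(1000):
    arm = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(estimates))
    reward = float(rng.random() < true_payout[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
print("estimated payouts:", [round(e, 2) for e in estimates])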
The need for labeled data to train machine learning algorithms dates back to the early days of AI research. Researchers in pattern recognition, natural language processing, and computer vision have long relied on manually labeled datasets to develop and evaluate algorithms. Crowdsourcing platforms made obtaining labeled datasets for machine learning tasks easier, at relatively low cost and with quick turnaround times. Further advances improved the accuracy, efficiency, speed, and scalability of data labeling.
Companies and startups emerged to provide data-labeling services to AI developers and organizations. These companies employed teams of annotators who manually labeled or annotated data according to specific requirements and guidelines, ensuring high-quality labeled datasets for machine learning applications. Improvements included semi-automated labeling tools, active learning algorithms, and methods for handling ambiguous data.
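One widely used active-learning strategy is uncertainty sampling, in which the model asks annotators to label only the examples it is least sure about. The following is a minimal, hypothetical Python sketch of the idea (scikit-learn and NumPy assumed installed); the synthetic pool of examples and the hidden ground-truth rule simply stand in for human annotators.

# Uncertainty sampling in miniature: label only what the model is unsure of.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 2))
true_labels = (X_pool[:, 0] - X_pool[:, 1] > 0).astype(int)  # stands in for annotators

# Start with a tiny labeled seed containing both classes.
labeled_idx = list(np.where(true_labels == 1)[0][:5]) + list(np.where(true_labels == 0)[0][:5])

for round_ in range(5):
    clf = LogisticRegression().fit(X_pool[labeled_idx], true_labels[labeled_idx])
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)            # 0 = most uncertain
    uncertainty[labeled_idx] = np.inf            # skip already-labeled items
    ask = np.argsort(uncertainty)[:20]           # send these to annotators
    labeled_idx.extend(ask.tolist())
    print(f"round {round_}: labeled={len(labeled_idx)}, "
          f"accuracy={clf.score(X_pool, true_labels):.2f}")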
Poggio’s machine at MIT represents an early example of machine learning and computer vision applied to autonomous driving. Tesla’s Full Self-Driving (FSD) system embodies a more modern approach based on machine learning and real-world, networked data collection. Unlike Poggio’s earlier driving machine, which relied on handcrafted features and rule-based algorithms, Tesla’s FSD system uses a combination of neural networks, deep learning algorithms, and sensor data (primarily camera video, with radar in earlier versions) to enable autonomous driving capabilities, including automated lane-keeping, self-parking, and traffic-aware cruise control. One controversial move is that FSD relies mainly on labeling video pixels from cameras, which have become the most cost-effective option.
Tesla’s approach to autonomous driving has emphasized real-world data collection and crowdsourcing, learning from millions of miles of driving data collected over the network from the fleet of Tesla vehicles. This information is used to train and refine the FSD system’s algorithms, although the system still faces challenges related to safety, reliability, regulatory approval, and edge cases. Tesla continues to leverage machine learning to acquire driving knowledge directly from the data and to improve performance over time through continuous training and updates.
Reverse Engineering the Brain
Reverse engineering became a popular concept after Compaq reverse-engineered the IBM BIOS in the early 1980s to bypass IBM’s intellectual property protections on its Personal Computer (PC). The movie Paycheck (2003) explored a hypothetical scenario of reverse engineering. MIT’s James DiCarlo describes how reverse engineering the brain can help us better understand vision, explaining that convolutional neural networks (CNNs) mimic the human visual system with layered networks that excel at finding patterns in images and recognizing objects.
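As a rough illustration of the layered pattern-finding DiCarlo describes, here is a minimal convolutional network in PyTorch (assumed installed). It is a generic sketch, not his lab’s model: two convolution-and-pooling stages extract increasingly abstract visual features, and a small classifier maps them to object categories.

# A tiny CNN sketch: layered feature extraction followed by classification.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # detect local edges/textures
            nn.ReLU(),
            nn.MaxPool2d(2),                             # shrink, keep strongest responses
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # combine into larger motifs
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)             # (batch, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))

model = TinyCNN()
fake_images = torch.randn(4, 3, 32, 32)  # a batch of 4 RGB 32x32 images
print(model(fake_images).shape)          # torch.Size([4, 10])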
Kaku addresses reverse engineering by asking whether AI should proceed by mimicking biological brain development or whether it would be more like James Martin’s “alien intelligence.” Kaku introduced IBM’s Blue Gene computer as a “quarter acre” of rows of jet-black steel cabinets, each rack about 8 feet tall and 15 feet long. Housed at Lawrence Livermore National Laboratory in California, it was capable of a combined speed of 500 trillion operations per second. Kaku visited the site because, he said, he was interested in Blue Gene’s ability to simulate thinking processes. A few years later, Blue Gene was operating at 428 teraflops.
Blue Gene worked at the scale of a mouse brain, with its 2 million neurons, as compared to the roughly 100 billion neurons of the average human. It was a difficult challenge because every neuron is connected to many other neurons, and together they make up a dense, interconnected web that takes enormous computing power to replicate. Blue Gene was designed to simulate the firing of the neurons found in a mouse, which it accomplished, but only for several seconds. It was Dawn, also based at Livermore, that in 2007 could simulate an entire rat’s brain (which contains about 55 million neurons, far more than the mouse brain). Blue Gene/L ran at a sustained speed of 36.01 teraflops, or trillions of calculations per second.
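To give a feel for what “simulating the firing of neurons” means computationally, here is a toy leaky integrate-and-fire neuron in Python with NumPy. It is only an illustration: Blue Gene’s simulations modeled millions of far more detailed units, and the constants below are generic, textbook-style values, not parameters from those runs.

# A toy leaky integrate-and-fire neuron: integrate input, leak toward rest,
# and "fire" (then reset) whenever the membrane potential crosses threshold.
import numpy as np

dt, steps = 0.001, 1000        # 1 ms time step, 1 second of simulation
tau, v_rest, v_thresh, v_reset = 0.02, -65.0, -50.0, -65.0
v = v_rest
spikes = []

rng = np.random.default_rng(1)
input_current = 20.0 + 5.0 * rng.standard_normal(steps)  # noisy drive

for t in range(steps):
    # Membrane potential leaks toward rest and integrates the input.
    dv = (-(v - v_rest) + input_current[t]) / tau * dt
    v += dv
    if v >= v_thresh:            # threshold crossed: the neuron fires
        spikes.append(t * dt)
        v = v_reset              # and resets

print(f"{len(spikes)} spikes in 1 s of simulated time")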
What is Robotic Consciousness?
Kaku suggests at least three issues to consider when analyzing AI robotic systems. One is self-awareness: does the system recognize itself? Second, can it sense and recognize the environment around it? Boston Dynamics’ robotic “dog” SPOT, for example, now uses SLAM (Simultaneous Localization and Mapping) to recognize its surroundings and uses algorithms to map its location (a minimal mapping sketch follows below).[3] SPOT uses 360-degree cameras and LiDAR to sense the surrounding environment in 3D. It is being used in industrial environments to detect chemical and fire hazards, and it carries Nvidia chips and a built-in 5G modem for the network connections that stream data from the digital canine.
Another is simulating the future and plotting strategy: can the system reason about causal relationships? If it recognizes the cat, can it predict what the cat’s next actions might be, including crossing into the street? Finally, can it ask “What if?” and, from that, develop scenarios that extrapolate into the future and strategies for reaching a desired outcome?
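The mapping sketch promised above: real SLAM jointly estimates the robot’s pose and the map, but even the mapping half can be shown in miniature. The hypothetical Python sketch below (NumPy assumed installed) fills in an occupancy grid from a few simulated range readings taken at known positions; the readings and grid size are invented for illustration and are not SPOT’s actual data.

# The mapping half of SLAM in miniature: an occupancy grid filled in from
# simulated range readings taken at known robot positions.
import math
import numpy as np

GRID = 20                      # 20 x 20 cells, 1 m per cell
occupied = np.zeros((GRID, GRID), dtype=int)

# Hypothetical sensor sweep: (robot_x, robot_y, bearing_rad, range_m)
readings = [
    (2.0, 2.0, 0.0, 5.0),
    (2.0, 2.0, math.pi / 2, 3.0),
    (10.0, 10.0, math.pi, 4.0),
    (10.0, 10.0, -math.pi / 2, 6.0),
]

for rx, ry, bearing, rng_m in readings:
    # Each return marks the cell where the beam ended as occupied.
    hit_x = rx + rng_m * math.cos(bearing)
    hit_y = ry + rng_m * math.sin(bearing)
    col, row = int(hit_x), int(hit_y)
    if 0 <= row < GRID and 0 <= col < GRID:
        occupied[row, col] += 1

print("occupied cells (row, col):", list(zip(*np.nonzero(occupied))))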
Kaku and the Singularity
Lastly, Kaku is intrigued by the concept of the “singularity.” He traces the idea to his own area of expertise, relativistic physics, where a singularity represents a point of extreme gravity from which nothing, not even light, can escape. The technological “singularity” was popularized by the mathematician and computer scientist Vernor Vinge in his 1993 essay “The Coming Technological Singularity.” Vinge argued that the creation of superintelligent AI would surpass human intellectual capacity and mark the end of the human era. The term has since been taken up by enthusiasts such as Ray Kurzweil, who believes that the exponential growth described by Moore’s Law will deliver the computing power needed for the singularity around 2045, and that humans will eventually merge with machines, leading to a profound transformation of society.
Kaku is cautious and conservative about the more extreme predictions of the singularity, particularly those that suggest a rapid and uncontrollable explosion of superintelligent machines. He acknowledges that computing power has been growing exponentially but doubts the trend will continue indefinitely, and he sees significant challenges to achieving true artificial general intelligence (AGI). Replicating or surpassing human intelligence, he argues, involves more than just increasing computational power.
Kaku believes that advancements in AI and related technologies will occur in incremental improvements that will enhance human life but not necessarily lead to a runaway intelligence explosion. Instead of envisioning a future dominated by superintelligent machines, Kaku imagines a more symbiotic relationship between humans and technology. He foresees humans enhancing their own cognitive and physical abilities through biotechnology and AI, leading to a more integrated coexistence.
But once again, he gives little attention to a networked singularity that would involve interconnected AI systems, distributed intelligence, deeper human-AI integration, and advanced data networking infrastructure. Could the networked robot become the nexus of such a singularity? This interconnected future holds immense potential for solving complex global problems and enhancing human capabilities, even as it raises issues of security, privacy, regulation, and social equity.
The Robotic Future
The proliferation of machine learning algorithms and cloud computing platforms since the 2000s has accelerated the integration of AI, and now robotics, with networking technologies. Machine learning models, trained on large datasets, can be deployed and accessed over networked systems, enabling AI-powered applications in areas such as image recognition, natural language processing, and autonomous systems. Cloud computing allows these AI models and robotic machines to be updated, maintained, and scaled efficiently, ensuring widespread access and utilization across various sectors.
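As a hypothetical illustration of what it means to deploy a model over a network, the Python sketch below serves a stand-in “model” through a tiny HTTP endpoint using only the standard library. The weights, port, and request format are invented, and production systems would use managed cloud inference services; the point is simply that clients send data over the network and receive the model’s prediction back.

# A minimal, hypothetical model-serving endpoint using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a trained model: a simple linear scorer.
WEIGHTS = [0.8, -0.3, 0.5]

def predict(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return {"score": score, "label": int(score > 0)}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        result = predict(payload["features"])
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. curl -X POST localhost:8080 -d '{"features": [1.0, 2.0, 0.5]}'
    HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()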
The rise of the Internet of Things (IoT) in recent years has further expanded the scope of AI and robot communications at the edges of the network. AI algorithms can now be deployed on networked devices, enabling real-time data processing, analytics, and decision-making in distributed environments. This real-time capability is crucial for applications such as autonomous vehicles, smart cities, and industrial automation, where immediate responses are necessary.
To advance AI-enhanced robots, robust data networking infrastructure is essential. High-speed, low-latency networks facilitate the rapid transmission of large datasets and visual information necessary for training and operating AI and robots. Networking infrastructure supports the integration of AI across various devices and platforms, allowing seamless communication and data sharing.
Cloud-based computing provides the computational power required for sophisticated AI algorithms. It offers scalable resources that can handle the intensive processing demands of AI, from training complex models to deploying them at scale. Cloud platforms also enable collaborative efforts in AI research and development by providing a centralized repository for data and models, fostering innovation and continuous improvement.
The development of AI is deeply intertwined with advancements in robotics, data networking infrastructure, and cloud-based computing. These technologies enable the deployment of robotics in real-time applications in healthcare, finance, and manufacturing by supporting decision-making and enhancing operational efficiency across sectors. The continued development of AI networking is essential for the ongoing integration and expansion of robotic technologies in our daily lives.
Kaku envisions a future where technology solves major challenges such as disease, poverty, and environmental degradation. He advocates for ongoing research and innovation while remaining vigilant about potential risks and unintended consequences, and he emphasizes the importance of a gradual, symbiotic relationship between humans and technology. Kaku also highlights the significance of Isaac Asimov’s Three Laws of Robotics, which are central to the plot of I, Robot, and praises the film for exploring these laws and their potential limitations. The Three Laws are designed to ensure that robots act safely and ethically, but the movie illustrates how they can be overridden in unexpected ways and should not be trusted on their own.
Citation APA (7th Edition)
Pennings, A.J. (2024, Jun 22). AI and the Rise of Networked Robotics. apennings.com https://apennings.com/technologies-of-meaning/the-value-of-science-technology-and-society-studies-sts/
Notes
[1] Javaid, S. (2024, Jan 3). Generative AI Data in 2024: Importance & 7 Methods. AIMultiple: High Tech Use Cases & Tools to Grow Your Business. https://research.aimultiple.com/generative-ai-data/#the-importance-of-private-data-in-generative-ai
[2] Kaku, M. (2011). Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. New York: Doubleday.
[3] Lee, Y.-M., & Seo, Y.-D. (2009). Vision-based SLAM in augmented/mixed reality. Korea Multimedia Society, 13(3), 13-14.
Note: ChatGPT was used for parts of this post. Multiple prompts were used and parsed.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: Alan Turing > artificial general intelligence (AGI) > CYC > full self-driving (FSD) > I, Robot > neural networks > Poggio’s Machine > Singularity