Anthony J. Pennings, PhD

WRITINGS ON AI POLICY, DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL E-COMMERCE

Silver, Anchored by Spreadsheet Logic

Posted on | January 11, 2026 | No Comments

Citation APA (7th Edition)

Pennings, A.J. (2026, Jan 11) Silver, Anchored by Spreadsheet Logic. apennings.com https://apennings.com/enterprise-systems/silver-anchored-by-spreadsheet-logic/

Introduction

Silver, like gold, occupies a distinctive position in the global financial system because it exists simultaneously as a physical material, an industrial input, and a financial asset. Its movement from the ground to the balance sheet illustrates with unusual clarity how the SACT layers of spreadsheet capitalism operate across the global financial infrastructure. Using the Substitution, Abstraction, Symbolic Computing, and Telecommunications Synchronization (SACT) framework, we can analyze how silver is mined, abstracted, computed, and traded through the global financial system that governs commodity metal markets.[1]

Silver has excellent electrical and thermal conductivity, as well as antimicrobial properties, which are highly valued in industrial applications such as electronics, medical devices, solar panels, and chemical processing. It is widely used in solders and brazing alloys for electrical contacts. Its antibacterial properties make it valuable for water purification and medical uses (e.g., dressings, instruments). It is an essential catalyst in producing chemicals such as ethylene oxide, used in antifreeze and plastics. Driven by clean energy technologies like solar panels and electric vehicles, industrial demand for silver is rising rapidly.

Silver’s modern life unfolds less on trading floors than inside financial terminals and the dense computational environments where spreadsheets, models, and real-time data converge. Platforms such as BlackRock’s Aladdin, Bloomberg’s “Box,” LSEG’s Workspace, and China’s Wind do not merely display silver prices; they organize how silver is seen, valued, hedged, and governed. Across all of them, the USD functions as the universal reference frame, anchoring substitution, abstraction, computation, and synchronization at a planetary scale.[2] Silver may be dug from the earth, but its economic power today is exercised throughout networked digital terminals.

In each terminal, silver appears first as a ticker (XAG/USD for spot silver, SI for silver futures), a symbol that substitutes physical metal with a continuous numerical stream. Whether it is spot silver, COMEX futures, OTC forwards, ETFs, or options, the terminal renders silver as a time-series ready for computation.
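To make that substitution concrete, the sketch below shows, in plain Python, how a terminal-style quote stream might be reduced to the kind of mid-price time series that downstream models consume. The tickers, timestamps, and prices are invented for illustration and do not reflect any vendor's actual API or data.

# A minimal sketch (not any vendor's API): hypothetical XAG/USD quotes
# reduced to a computable mid-price time series keyed by timestamp.
from datetime import datetime, timezone
quotes = [  # (timestamp, bid, ask) -- illustrative values only
    (datetime(2026, 1, 9, 14, 30, tzinfo=timezone.utc), 30.41, 30.44),
    (datetime(2026, 1, 9, 14, 31, tzinfo=timezone.utc), 30.45, 30.48),
    (datetime(2026, 1, 9, 14, 32, tzinfo=timezone.utc), 30.39, 30.43),
]
series = {ts: (bid + ask) / 2 for ts, bid, ask in quotes}  # the metal becomes a number stream
for ts, mid in series.items():
    print(ts.isoformat(), f"{mid:.3f}")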

Symbolic computing is where terminals stop being informational and become governing technologies. The metal is continuously transformed into a set of executable relationships — prices, probabilities, correlations, and constraints — that allow silver to function as a global financial instrument. When explicitly placed within the SACT framework, symbolic computing emerges as the layer in which silver’s physical reality is subordinated to calculative authority and in which spreadsheet logic actively produces market behavior rather than merely reflecting it.

It is useful to examine each of these terminals in more detail, since their surface similarities mask important differences in emphasis:

– Bloomberg prioritizes market discovery and cross-asset comparison.
– LSEG Workspace (formerly Refinitiv Workspace/Eikon) emphasizes benchmark pricing, compliance, and institutional workflows.
– Aladdin embeds silver (XAG) as a financial instrument inside portfolio-wide risk and optimization systems.
– Wind integrates silver into China’s industrial planning, hedging, and macro-financial analysis.

Despite these differences, all operate within the same underlying logic of spreadsheet capitalism. Silver is not priced simply by supply and demand in the physical sense. Instead, it is governed by how its material reality is progressively transformed into calculable, tradable, and synchronized symbols. Silver is governed in real time and denominated primarily in USD. What follows situates silver trading inside these spreadsheet-based financial terminals using the SACT framework and shows why dollar centrality is not incidental but structural for silver commodity markets.

Substitution Transforms Silver from Ore to Standardized Units

Silver begins as a geographically embedded material resource. It is mined primarily as a byproduct of copper, lead, zinc, and gold extraction. Supply originates mainly from countries such as Mexico, Peru, China, Australia, and Poland. At the mine level, silver exists as ore with varying grades, impurities, and extraction costs. None of this heterogeneity is directly tradable in global markets.

Substitution is the first step that allows silver to enter the spreadsheet economy. Assays translate ore into grams or ounces of recoverable silver. Refining processes then substitute this heterogeneous material into standardized forms such as bars, ingots, coins, or grain with certified purity, typically .999 or .9999 fine.

The physical complexities of geology and metallurgy are replaced by standardized units such as the troy ounce, the global accounting unit for precious metals. Mining output is converted into standardized measures: grams per ton of ore, ounces recovered, purity levels.

Crucially, substitution into troy ounces, the universal unit, allows silver mined under radically different conditions to become exchangeable. Once denominated in ounces, silver is no longer Peruvian or Chinese or Australian. It is globally fungible. At this point, silver exists less as metal than as a set of numbers capable of being priced, hedged, and pledged as collateral.

In SACT terms, silver becomes a quantity defined by grade, purity, recovery rate, and cost. These figures are immediately translated into spreadsheet rows. The mine itself becomes a production profile: costs per ounce, recovery rates, reserve estimates, expected mine life. Once this substitution is complete, silver is no longer tied to a specific mine or geography. It becomes globally mobile.
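The arithmetic behind this substitution is straightforward. The following sketch, with invented tonnage, grade, recovery, and cost figures, shows how ore characteristics collapse into two spreadsheet-ready numbers: payable troy ounces and cost per ounce.

# Illustrative substitution arithmetic; all figures are assumptions, not real mine data.
TROY_OUNCE_GRAMS = 31.1035
def recoverable_ounces(ore_tonnes, grade_g_per_tonne, recovery_rate):
    """Heterogeneous ore becomes a single fungible figure: payable troy ounces."""
    grams = ore_tonnes * grade_g_per_tonne * recovery_rate
    return grams / TROY_OUNCE_GRAMS
ounces = recoverable_ounces(ore_tonnes=500_000, grade_g_per_tonne=120, recovery_rate=0.88)
cost_per_ounce = 9_500_000 / ounces  # assumed all-in operating cost in USD
print(f"{ounces:,.0f} oz at ${cost_per_ounce:.2f}/oz")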

Silver Abstracted as Category, Inventory, and Balance-Sheet Entry

After substitution, silver is abstracted. Abstraction removes silver from its physical context and embeds it within financial categories. It is classified, categorized, and represented in forms that enable governance and comparison across networked space. In spreadsheets across banks, commodity traders, and hedge/sovereign funds, silver is no longer tracked as bars or coins but as:

– Spot prices
– Forward curves
– Inventory levels
– Cost curves
– Volatility measures
– Correlations with gold, copper, inflation, and currencies

Each abstraction corresponds to a different spreadsheet model. In corporate spreadsheets, silver appears as inventory, reserves, or byproduct revenue. In national accounts, it is recorded as part of mineral output and export earnings. In financial markets, it becomes a commodity class with standardized tickers and contract specifications.

Abstraction strips silver of its physical idiosyncrasies. A bar in a London vault, a futures contract in New York, and a mining reserve estimate in Peru can be compared and aggregated because they are all rendered within the same abstract categories. Spreadsheet logic governs silver not as a metal but as a set of variables: quantity, grade, price, volatility, and correlation. This abstraction allows silver to be integrated into portfolios, indexes, and risk models alongside equities, bonds, and currencies, despite its very different material origins.
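A hedged illustration of this flattening: the record format below reduces a vaulted bar, a futures position, and an unmined reserve estimate to the same handful of variables so they can be aggregated and compared. The positions, prices, and statistics are invented for illustration.

# Hedged illustration: three very different silver exposures flattened into one
# comparable record format (all values are invented, not market data).
from dataclasses import dataclass
@dataclass
class SilverExposure:
    label: str
    ounces: float          # quantity in troy ounces
    price_usd: float       # reference price per ounce
    volatility: float      # annualized, as a decimal
    corr_to_gold: float    # correlation used in portfolio models
    def notional(self):
        return self.ounces * self.price_usd
book = [
    SilverExposure("LBMA vaulted bar", 1_000, 30.4, 0.28, 0.78),
    SilverExposure("COMEX SI futures", 5_000, 30.6, 0.30, 0.78),
    SilverExposure("Reserve estimate (unmined)", 2_000_000, 30.4, 0.35, 0.78),
]
print(sum(pos.notional() for pos in book))  # one aggregate USD notional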

Mining companies are valued using discounted cash flow models based on expected silver output. Industrial users treat silver as an input cost to be minimized. Investors treat it as a macro signal tied to monetary conditions. These abstractions allow silver to circulate across institutional domains that never touch the physical metal. A solar panel manufacturer in Germany, a hedge fund in New York, and a central bank analyst in Asia all “see” silver through different spreadsheet lenses, yet they coordinate through shared abstractions.

Symbolic Computing Determines Pricing, Hedging, and Leverage

Symbolic computing is where silver’s financial life unfolds. Pricing, trading, and risk management are executed through formulas embedded in spreadsheets, trading systems, and financial models. Silver prices are derived from continuous symbolic computation of spot prices, futures curves, options models, and arbitrage relationships. A mine’s expected output feeds into discounted cash flow models. Refiners hedge exposure using forward contracts. Manufacturers lock in costs using derivatives. Investors model silver futures as a hedge against inflation, currency debasement, or geopolitical risk.
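The discounted cash flow logic mentioned here can be sketched in a few lines. The output, price, cost, discount rate, and mine life below are assumptions chosen for illustration, not a valuation of any actual operation.

# Minimal discounted cash flow sketch for a mine's expected silver output.
# All inputs are assumptions for illustration only.
def mine_npv(annual_ounces, price, cost_per_oz, discount_rate, years):
    npv = 0.0
    for t in range(1, years + 1):
        cash_flow = annual_ounces * (price - cost_per_oz)  # simple flat-production year
        npv += cash_flow / (1 + discount_rate) ** t
    return npv
print(round(mine_npv(annual_ounces=1_500_000, price=30.0,
                     cost_per_oz=14.0, discount_rate=0.08, years=10)))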

Formulas determine margin requirements, collateral haircuts, and position limits. A rise in volatility triggers automated adjustments. These calculations do not merely reflect the silver market; they shape it. Capital flows respond to spreadsheet outputs long before physical silver moves.

At the compute level, silver functions less as metal and more as a mathematical object whose value emerges from computational relationships. Silver prices are produced not by direct exchange of metal, but by continuous calculation across futures markets, options chains, and derivative contracts. Spreadsheets and trading systems compute:

– Spot prices from futures curves
– Implied volatility from options
– Margin requirements from price swings
– Hedge ratios for producers and consumers

A mining company hedges future production by locking in prices through futures contracts. A trader models silver’s sensitivity to interest rates or dollar strength. An ETF issuer balances physical holdings against share creation and redemption flows. These are not descriptive acts. They are performative. The formulas used to model silver directly influence how much is mined, stored, or sold. Symbolic computing does not merely reflect silver’s value; it helps create it.
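The producer hedge described above often reduces to a minimum-variance hedge ratio, the correlation between spot and futures returns scaled by their volatilities. The sketch below estimates it from a short synthetic return series; the numbers are illustrative, not market data.

# Hedged sketch: minimum-variance hedge ratio (corr * sigma_spot / sigma_futures)
# estimated from synthetic daily returns -- illustrative values only.
from statistics import stdev
spot_returns    = [0.012, -0.008, 0.004, -0.015, 0.009, 0.003]
futures_returns = [0.013, -0.007, 0.005, -0.016, 0.010, 0.002]
def correlation(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))
hedge_ratio = correlation(spot_returns, futures_returns) * stdev(spot_returns) / stdev(futures_returns)
print(f"futures exposure per unit of expected production: {hedge_ratio:.2f}")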

This is where silver’s dual nature becomes visible. It is one of the few commodities where “paper” claims vastly exceed physical supply, making spreadsheet representations more liquid than the metal itself.

Telecommunications Synchronization Creates One Price in Many Places

The global silver market synchronizes through telecommunications networks that link exchanges, vaults, refiners, banks, and traders in real time. Prices discovered on the COMEX in New York, the LBMA in London, and exchanges in Shanghai and Mumbai update continuously across the spreadsheet logic of globally networked financial terminals.

This synchronization enforces the “law of one price” despite vast differences in geography. A supply disruption in Mexico or a surge in industrial demand in China is instantly incorporated into global prices. Silver’s market time becomes universal.

Spreadsheet terminals drawing from live data feeds ensure that mining companies, bullion banks, and investors around the world respond simultaneously. Inventory reports, warehouse receipts, and delivery notices are transmitted instantly. Decisions made in one financial center propagate globally within seconds, coordinating behavior without direct communication.

This synchronization produces a single operational silver price despite fragmented physical markets. A mine shutdown in Mexico, a solar subsidy announcement in Europe, or a change in US interest rates is absorbed into the global price within seconds. Time–space distanciation is total. Silver mined today is priced against expectations of industrial demand, monetary policy, and speculative flows years into the future.
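The "one price" that synchronization enforces can be checked with simple unit and currency conversion. The quotes and exchange rate below are hypothetical; the point is only that a Shanghai quote in yuan per kilogram and a London quote in dollars per troy ounce become directly comparable within seconds.

# Illustrative law-of-one-price check with hypothetical quotes (not live data).
TROY_OUNCE_GRAMS = 31.1035
london_usd_per_oz = 30.45
shanghai_cny_per_kg = 7_150.0     # assumed quote
usd_cny = 7.25                    # assumed exchange rate
shanghai_usd_per_oz = shanghai_cny_per_kg / usd_cny * (TROY_OUNCE_GRAMS / 1000)
premium = (shanghai_usd_per_oz / london_usd_per_oz - 1) * 100
print(f"Shanghai implied: ${shanghai_usd_per_oz:.2f}/oz, premium {premium:+.1f}%")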

Silver as a Hybrid Asset in Spreadsheet Capitalism

Silver’s uniqueness lies in its hybrid role. It is both a monetary metal and an industrial input. This dual identity is handled through spreadsheet logic rather than political decree.

Industrial demand for electronics, solar panels, and medical applications is modeled as consumption curves. Investment demand is modeled as portfolio allocation. These distinct uses coexist in the same spreadsheets, allowing silver to oscillate between commodity, hedge, and speculative asset depending on modeled conditions.

This hybridity makes silver especially sensitive to shifts in abstraction and symbolic computing. Changes in inflation expectations, green energy policies, or monetary regimes can reclassify silver’s role almost instantly within financial models.

This substitution of local pricing by USD denomination does more than simplify trade. It places silver inside the dollar liquidity system, making access to silver markets contingent on access to USD funding, dollar clearing banks, and dollar-based credit lines.

Terminals reinforce this substitution by default. Even when local currency views are available, the primary frame is USD. Silver’s global fungibility depends on this monetary anchor.

Silver as a Dollar-Synchronized Spreadsheet Asset

Silver’s journey through Aladdin, Bloomberg, LSEG, and Wind reveals how deeply commodity markets are embedded in spreadsheet capitalism. These terminals do not merely reflect silver markets; they constitute them by defining how silver is substituted, abstracted, computed, and synchronized.

At the center of this system sits the USD, not as ideology, but as infrastructure. The dollar enables silver to function as a global financial instrument rather than a regional metal. It allows spreadsheets in New York, London, Beijing, and Zurich to speak the same numerical language.

Silver may be dug from the earth, but its power today is exercised in spreadsheet-based financial terminals. It is priced in dollars, governed by formulas, synchronized across networks, and traded less as metal than as a financial signal within the global USD-based spreadsheet order.

In this sense, silver is not a relic of pre-modern money. It is a living example of how ancient materials are absorbed into the most advanced architectures of time–space power.

Conclusion: Silver as Material Anchored by Spreadsheet Logic

Silver’s journey from mine to market illustrates how global finance governs material reality through SACT layers. The metal itself does not circulate freely; its spreadsheet representations do. Substitution standardizes it. Abstraction categorizes it. Symbolic computing prices and allocates it. Telecommunications synchronization ensures that this process operates globally and continuously.

In spreadsheet capitalism, silver remains valuable precisely because it is material, scarce, and physically constrained. Yet its power and price are determined less by where it is mined than by how it is calculated. Silver stands as a rare bridge between the physical world and the abstract financial architectures that now coordinate global political economies.

Notes

[1] I developed the SACT layers approach in July 2025 while reviewing my previous work on spreadsheets, and specifically examining the notion of substitution.
[2] Much of my application of the SACT layers has been to examine the role of the USD and the power of its network effects, including Eurodollars in global spreadsheet capitalism. See Pennings, A.J. (2025, Nov 30) The Seven Phases of Global US Dollar Transformation: Past and Future. apennings.com https://apennings.com/crisis-communications/the-seven-global-phases-of-past-and-future-us-dollar-transformations/
[3] Several ChatGPT prompts were used to gather and organize information under my SACT layers approach. Hypertext links are used to connect to sources.

© ALL RIGHTS RESERVED

Not to be considered financial advice.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012, he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.

The SACT Attack: How Spreadsheet Capitalism Conquered the World

Posted on | January 3, 2026 | No Comments

Citation APA (7th Edition)

Pennings, A.J. (2026, Jan 03) The SACT Attack: How Spreadsheet Capitalism Conquered the World. apennings.com https://apennings.com/democratic-political-economies/the-sact-attack-how-spreadsheet-capitalism-conquered-the-world/

The modern world was not conquered by force of arms, nor even by ideology alone, but by a quieter and more durable weapon that facilitated unique ways of seeing, representing, calculating, and coordinating. This weapon has been the major armament of the global financial system that now spans nearly every country and involves trillions of US dollars (USD) and other forms of modern wealth. It rests on a historically layered techno-epistemology so profoundly embedded in the modern world that it seems quiet, objective, stealthy, and even inevitable.

This technological framework can be understood through four mutually reinforcing capabilities: Substitution, Abstraction, Symbolic Computing, and Telecommunications Synchronization (SACT). These come together in the modern digital spreadsheet “logic” and the globally connected gridmatic screens. At the heart of this achievement lies a powerful, interactive human-knowledge framework that has quietly underwritten what I call “spreadsheet capitalism,” a central thrust for the modern global political economy.

The “SACT attack” is not an event but a condition. It names the historical convergence through which older media technologies, mathematical notation, and networked communication fused into a digital technology that renders the world’s political economy as a grid of interchangeable values and mobilized capital.

The assault did not arrive with a bang, nor with the rumble of tanks across a border. The conquest of the modern world was silent, illuminated early by the cool, green glow of a cathode ray tube, then by multiple high-definition screens of financial news and transacting terminals. The SACT attack was a techno-epistemological coup d’état that upset the world of open-outcry trading pits, tacit trading reflexes, and telephone/telex communication. It reorganized the planet, not according to geography or ideology, but according to the fintech logic of the visual grid, increasingly tied together with clouds of datasets.

In the process called “remediation,” new media integrate older forms of established media.[1] Spreadsheet capitalism is based on the remediation of historical numerical and media practices into the modern digital spreadsheet. These include Indo-Arabic numerals (1, 2, 3, 4, 5, 6, 7, 8, 9) with the zero (0) and a base-10 decimal system within a positional place-value notation (1,000). Media include vertical columns and horizontal rows drawing on the political/military use of lists, and extensive 2-D visual tables used for tabulating corporate and national wealth. The digital spreadsheet uses intersected cells with Unicode multilanguage inscription, and connects data in separate cells with symbolic formulas.[1a]

These came together into a new techno-knowledge that reshaped modern financial and organizational processes. But the world did not simply stumble into this global coordination; it built epistemological authority on established practices like double-entry bookkeeping and statistics. It built on the information tools shaped since the telegraph and its need for “data processing.” It also drew on accounting and financial knowledge taught in business schools, and on statistics taught across the sciences and social sciences. Many formulas came from engineering.

This techno-knowledge empowered Tom Wolfe’s “Masters of the Universe,” the bond traders, the quants, and the central bankers who run the world’s financial system in his (1987) Bonfire of the Vanities.[2] They did not invent their tools from scratch. Instead, they drew on remediated technologies, some reaching back thousands of years of media practice, fused into a single, lethal interface: the digital spreadsheet terminal. The “masters” used this technology to create a digitally enabled, and increasingly blockchain-connected, global financial system. They used it to create new debt and credit instruments that registered, loaned, and traded hundreds of trillions of dollars. They developed a computing alchemy that resulted in an endlessly recomputable, instantaneously synchronized, and globally enforceable system for coordinating the accumulation of organizational and national wealth.

This post continues a story of how spreadsheet capitalism came to govern our working reality by quietly reorganizing finance, governance, and global “time-space power.”[3] It is a narrative that frames SACT not as a conspiracy or a single technology, but as a historically layered techno-epistemological trajectory that has quietly reorganized finance, governance, and power. I’ve kept the tone critical but analytic, grounding the argument in media history and the remediation of previous tools in modern technology rather than moral panic, and explicitly linking spreadsheets, PCs, and financial terminals to the production of economic and social power.

Substitution Replaces the World with Tokens

At the foundation of spreadsheet capitalism lies substitution. The SACT epistemology recognizes that the physical world is messy, heavy, and slow. To manage it, one must replace particulars with a signifier, sometimes called a token, usually a number or a word. This process is the replacement of concrete physical goods, people, social relationships, and lived time with symbolic stand-ins. Under the spreadsheet’s logic, a house is no longer a shelter made of brick and wood; it is replaced by a cell entry containing a “Mortgage-Backed Security” (MBS) value. A barrel of oil is not fuel; it is a futures contract.

Substitution is the first step toward the constitution and coordination of meaning. Because meaning is difficult to “fix” and police, the spreadsheet provides a structural framework, drawing on historical information practices, to organize it. Digital spreadsheets remediate centuries-old writing techniques such as tallies, ledgers, and double-entry bookkeeping into a system where economic reality becomes divisible, legible, portable, and actionable. Assets become rows, contracts become entries, and future commitments become manageable obligations.

This logic draws on a long lineage of media practices: tallies, ledgers, double-entry bookkeeping. But digital spreadsheets remediate these earlier forms into a system where substitution is total and reversible. Anything that can be named can be assigned a cell; anything in a cell can be copied, moved, or deleted. Items in cells can be integrated into functions and formulas.

The signifier (the number) didn’t just represent the signified (the asset); it replaced it. The “Masters of the Universe” stopped looking at the world and started looking at the screen. Money has always done this, but the spreadsheet radicalizes the process. Assets become rows. Columns become categories. People become accounts. Futures become discounted cash flows.

In this SACT regime, land is substituted by real estate securities, labor by productivity metrics, risk by volatility measures, and governance by compliance checklists. The spreadsheet does not merely represent reality, it becomes the operational reality within which decisions are made.

Abstraction Erases Friction by Detaching Context

Substitution alone is insufficient without abstraction. The power of spreadsheet capitalism lies in its ability to strip away context while preserving calculability. Abstraction sacrifices meaning for a gain in interoperability. This process is anchored in Indo-Arabic numerals, including zero, base-10 decimals, and positional place-value notation. These notational, signifying systems, remediated into digital form, allow quantities to expand exponentially and be manipulated independently of their origins. Time, risk, and obligation are flattened into comparable units. Abstraction allows value to be detached from local contingencies and made globally comparable.

Abstraction enables scale. It allows financial instruments to circulate globally, detached from the political, ecological, or social conditions that produced them. In the spreadsheet, all debt looks the same. A dollar of debt in Detroit becomes commensurable with a dollar of debt in Jakarta. Abstraction allows billions of participants to operate within a shared economic language. The spreadsheet is the machine that performs this abstraction millions of times per second, without fatigue or reflection.

Symbolic Computing Formulates Judgment and Power

With the world substituted and abstracted, symbolic computing takes over. Through formulas and algorithms, cells began to talk to each other. The spreadsheet is not a static paper ledger; it is a computational environment. This capability remediates centuries of mathematical reasoning and accounting practice into programmable logic. Symbolic computing is the engine of the SACT attack.

Once quantities are encoded numerically and placed into cells, formulas can act upon them. What appears as a neutral function in formulas like NPV(), VAR(), and correlation matrices becomes a crystallized theory of the world and its future. Assumptions about growth, risk, and rationality are embedded in code, repeated automatically, and rarely questioned. Formulas remediate earlier practices of calculation, accounting, and statistical reasoning, but they do so with unprecedented speed and authority due to the availability of microprocessing “compute.”

These spreadsheet formulas rose to prominence during the leveraged buyout (LBO) period of the 1980s, when they were used to break down and sell off some of the largest corporations in the US. Spreadsheets had a paper-based life before the digital spreadsheet, but the speed and interactivity of the personal computer made spreadsheets like VisiCalc and Lotus 1-2-3 the “killer apps” that often justified the purchase of the new information machines. The results of spreadsheet formulas convinced investment banks to lend short-term loans to “raiders” who would buy up companies, sell off parts to repay the loans, and keep a substantial profit.

IF, VLOOKUP, and SUMPRODUCT commands allowed the spreadsheet to simulate possible futures. “If interest rates rise by 0.5%, then the value of Cell D12 drops by 40%.” This computational power allows the creation of “synthetic” value, an artificial construct that mirrors the value or performance of a real-world asset or dataset without direct ownership. Money could be begotten of money, without ever “touching the ground.” The positional place-value notation meant that shifting a decimal point, adding a zero, or applying a leverage formula could conjure trillions of dollars of notional value.
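A minimal Python sketch of the scenario logic such cells encode, using a stylized fixed cash-flow stream rather than a leveraged position: reprice the stream under a 0.5% rate shock and report the change. The cash flows and rates are assumptions for illustration only.

# Hedged sketch of "if rates rise, the cell value drops": repricing a stylized
# 30-year, 5%-coupon cash-flow stream under a +0.5% rate scenario.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
cash_flows = [5.0] * 29 + [105.0]           # assumed coupon and principal schedule
base = present_value(cash_flows, 0.05)
shocked = present_value(cash_flows, 0.055)  # +0.5% rate scenario
print(f"value change: {(shocked / base - 1) * 100:.1f}%")

A leveraged structure would amplify the same percentage move many times over, which is how modest rate scenarios translate into the dramatic cell-level swings described above.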

Symbolic computing turns finance into a self-referential system. Models generate prices that validate models. Risk systems define what counts as risk. The spreadsheet becomes an epistemic enclosure, as what cannot be symbolically computed effectively ceases to exist.

Spreadsheets do not replace human judgment; they amplify it. Symbolic computing transforms abstraction into action, a process called semiosis. Once values are encoded numerically and organized into cells, formulas can operate on them, enabling automated calculation, forecasting, and optimization. They allow institutions to model scenarios, manage risk, and allocate resources at speeds and with consistency that would otherwise be impossible.

We face a situation in which symbolic computing completes the abstraction process by automating reasoning. Judgment migrates from human deliberation to computational execution. A separate post, (SACT)AI, examines the influence of AI on symbolic computing and the other layers of spreadsheet capitalism. It addresses the future of human action and computation in global finance, particularly through inference from a wide range of data scanned, scraped, and scrubbed from the World Wide Web and from crowdsourcing and sensing via the Internet of Things (IoT).

Telecommunications Synchronization through the Global Grid

The final component of the SACT stack is telecommunications synchronization. A spreadsheet is powerful in isolation; it becomes world-shaping when synchronized across global networks. Fiber-optic cabling, Earth-orbiting satellites, Application Programming Interfaces (APIs), and financial terminals ensure that the same abstractions appear simultaneously in New York, London, Singapore, and São Paulo.

This synchronization remediates the telegraph, the ticker tape, the telex, and the newspaper into a real-time global ledger of financial terminals that computes data and news into trading algorithms. Prices, yields, and positions update continuously, enforcing a single temporal regime: market time. Local rhythms, such as ecological cycles, labor schedules, and democratic deliberations, are subordinated to synchronized global financial clocks.

Telecommunications transforms spreadsheets from terminal tools into network infrastructures. They allow assets and liabilities to circulate instantly, anchoring global trade and debt. Network effects come into play, favoring the Bloomberg terminal but also the US dollar, which operates prominently in the current spreadsheet logic as the absolute relative value in most calculations. As a result, the spreadsheet-enabled trade system has helped the USD become the major solution for global liquidity and wealth storage.

Telecom facilitates the time-space power of spreadsheet capitalism. Money can flow from a pension fund in California to a construction project in Dubai and back to a hedge fund in the Caymans in seconds. This speed eliminates the ability for human reflection. The market became a “real-time” entity, reacting to news before it was humanly understood.

The Masters of the Universe and the Dollar Grid

Together, SACT forms the epistemic backbone of modern spreadsheet capitalism. It empowers Tom Wolfe’s “Masters of the Universe.” This honor was bestowed not because they are uniquely intelligent, but because they operate fluently as the high priests of this symbolic regime.

The spreadsheet is transformed from a tool into a global infrastructure. Fiber optics and satellites synchronize financial data in real-time across the globe (New York, London, Tokyo), enforcing a single “market time” that overrides local rhythms. This speed prevents human reflection and solidifies the US Dollar as the absolute relative value in global calculations.

The global financial system serves as a meta-layer over the physical world. In this “gridmatic” infrastructure, hundreds of trillions of dollars in debt and derivatives hover above the planet’s actual GDP. With spreadsheets, terminals, and synchronized networks, financial elites constructed a global system of USD-enabled trade, credit, derivatives, and shadow banking that now exceeds the size of the real economy by an order of magnitude. Hundreds of trillions of dollars in obligations circulate as abstractions, enforceable through legal, technical, and monetary power.

This is not merely commodity or fiat money; it is spreadsheet money: money whose legitimacy derives from internal and networked consistency within symbolic systems rather than from social consensus or material backing alone.

Conclusion: An Insidious Epistemology

The modern world has been reshaped not by military force, but by a quiet, techno-epistemological “coup d’état” that I call spreadsheet capitalism.[4] It represents the victory of SACT’s digital organizational grid over the chaotic reality of the physical world. By fusing the remediation of numerical, mathematical, and writing tools with modern telecommunications, the digital spreadsheet has constructed a global “gridmatic” reality where the ability to substitute, abstract, compute, and synchronize data equates to power over human, financial, and material resources.

This system has done more than merely optimize finance; it has created an epistemic enclosure where only that which can be counted and computed is acknowledged as real. As we move toward an era of AI-driven symbolic computing, the “Masters of the Universe” no longer just read the markets; they operate the operating system of the planet itself, creating a world where time, space, and value are defined by the logic of the cell, the table, and the formula.

The danger of the SACT attack lies not in malice but in invisibility, innovation, and performance. Spreadsheet capitalism presents itself as rational, objective, and technical, while quietly reshaping what can be known, valued, and governed. Political choices appear as technical constraints. Ethical questions dissolve into optimization problems. Because the digital spreadsheet remediates older media so seamlessly, it feels familiar, inevitable, and even boring. Numbers, language, tables, and formulas combine seamlessly. Yet its scale, speed, and synchronization have no historical precedent.

To name the SACT stack is not to reject computation or coordination, but to recognize that the modern financial order is built on specific media choices, not universal truths. Understanding the SACT attack opens space for alternative epistemologies such as forms of accounting that re-embed context, slow synchronization, and the restoration of human judgment. Without such interventions, the grid will continue to expand, substituting the world with symbols until the symbols themselves are all that remain.

Notes

[1] Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 2000. Exploring spreadsheet technology through the lens of “remediation,” where older media forms are incorporated into newer ones gives us additional insights about their capabilities and effectiveness. Financial news, for instance, remediates print (textual data) and speech (anchors) within the televisual medium, which itself now incorporates web techniques and enables programming like Bloomberg TV and CNBC. Two key concepts from Bolter and Grusin’s theory of remediation are the logics of transparent immediacy and hypermediation. The first strategy aims to make the medium itself disappear, offering an immersive, seemingly unblemished, and “authentic” experience of the reality being presented. In financial news, this is partly achieved through the authoritative presence of news anchors who guide narratives and conduct live interviews, fostering a sense of direct access to information and expertise. “Breaking News” segments and live reports further enhance this feeling of immediacy. Conversely, the hypermediation strategy foregrounds the medium by integrating multiple data streams, windows, charts, graphs, and scrolling tickers – reminiscent of a computer interface or sports “datatainment.” This approach offers an augmented, quantitative view of the world, drawing on the perceived truth of numeracy and remediating tools like spreadsheets. Financial news relies heavily on this, displaying a constant flow of stock prices, commodity values, and economic indicators.
[1a] Unicode replaced ASCII, which itself remediated British banking penmanship trained for legible ledgers. British banking penmanship historically favored clear, legible cursive styles like Copperplate (English Roundhand) for formal business and financial documents.
[2] Tom Wolfe’s “Masters of the Universe” is from his Bonfire of the Vanities novel that was also turned into a motion picture. It was about a bond trader whose world comes crashing down after a hit and run accident.
[3] Giddens, Anthony. (1983) A Contemporary Critique of Historical Materialism. (Berkeley, CA: University of California Press). p 117.
Giddens, A. (1983) The Nation-State and Violence. (Berkeley, CA: University of California Press). These two volumes were particularly influential in my graduate work theorizing digital money. Also see my analysis with Patrick Rose of spreadsheets in organizations.
[4] I developed the SACT layers – substitution, abstraction, symbolic computing, and telecommunications synchronization – to construct an understanding of “spreadsheet capitalism” that now governs much of our economic reality. It was a term I heard from my mentor Majid Tehranian during the 1980s. Prof. Tehranian was an Iranian who fled before the theocratic regime took power in 1979 and was a professor at the University of Hawaii at the time.
Prompt: Construct a narrative about the “SACT attack” on the modern world. Describe how Substitution, Abstraction, Symbolic Computing, and Telecommunications Synchronization (SACT), the foundations of spreadsheet capitalism, have combined to introduce an insidious techno-epistemology shaping modern financial practices. Based on the remediation of historical media practices into the modern digital spreadsheet (Indo-Arabic numerals with the zero and base-10 decimal system within a positional place-value notation, Unicode language capability, lists, tables, cells, and formulas), this technology has empowered the “masters of the universe” to create a global system of USD-enabled trade and debt that has extended into hundreds of trillions of dollars.

© ALL RIGHTS RESERVED

Not to be considered financial advice.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012, he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.

Seven Phases of Global US Dollar Transformation: Past and Future

Posted on | November 30, 2025 | No Comments

Recommended Citation APA (7th Edition)

Pennings, A.J. (2025, Nov 30) The Seven Phases of Global US Dollar Transformation: Past and Future. apennings.com https://apennings.com/crisis-communications/the-seven-global-phases-of-past-and-future-us-dollar-transformations/

Introduction

Below is a brief historical overview and future preview of the US dollar’s transformation from a gold-backed sovereign currency to a debt-backed global standard for central bank reserves and trade invoicing. President Franklin Delano Roosevelt (FDR) built vaults at Fort Knox in Kentucky during the Great Depression and moved gold bars there from New York. He wanted to protect the gold from a potential foreign invasion of Manhattan, and for extra safety, the site was a US Army tank training ground. As gold poured into the US during World War II, the stage was set for the US dollar to take the reins from the declining British pound as the world’s predominant currency and source of liquidity.

Fort Knox, Kentucky

This post traces the US Dollar (USD) from a global gold-backed currency to a fiat currency to a programmed, algorithmic instrument of global power, with stablecoins on blockchain and in multicurrency collaboration with several AI blocs. It outlines the transition through seven distinct phases, highlighting how the US has repeatedly adapted its monetary infrastructure to supply the world with a liquidity solution while maintaining economic hegemony.

The Bretton Woods system pegged the global economy to the USD, which was backed by gold. However, the Triffin Dilemma revealed a fatal flaw: to provide international liquidity, the US had to run deficits, undermining trust in the gold peg. This tension led France and others to drain US gold reserves, labeling the system an “exorbitant privilege.”

A shadow banking system emerged in London to hold dollars outside US jurisdiction (driven by Soviet and Chinese fear of seizure). This Eurodollar market operated via telegraph, telex, and later SWIFT, creating a proto-digital “ledger money” that bypassed physical gold constraints to supply USD worldwide as bank credit.

The “Nixon Shock” (See video) ended gold convertibility. The dollar floated, anchored only by state power, the Group of 5 (G-5) and the Group of 10 (G-10), and the technological innovation of Reuters called Monitor that provided a virtual market for FX.[1] The petrodollar deal with Saudi Arabia cemented the USD’s dominance and increased demand for US debt. Citibank’s Walter Wriston argued that gold had been replaced by an “Information Standard,” where currency value was determined by market judgment and financial news processed through global telecommunications.

Under President Ronald Reagan, the Euromarkets anchored the dollar to US government debt. Massive US deficits produced US Treasuries, which became the “prime collateral” for the Eurodollar market. The world’s central banks shifted reserves from gold to US debt, effectively turning the US deficit into the global savings account.[2]

The debt-backed system was insufficient for the growth in global trade. It caused volatility and shortages, antagonizing the world, especially the Global South. The BRICS nations (Brazil, Russia, India, China, South Africa) sought to capitalize on this unease and pushed to de-dollarize finance and trade using alternative payment rails (CIPS) but failed to displace the USD due to a lack of deep, liquid collateral.

The US has responded to de-dollarization with a technological offensive. Riding on the back of cryptocurrencies, the Genius Act mandates stablecoins must be backed 1:1 by US Treasuries. This policy creates a “sovereign digital dollar” that moves instantly on blockchains but is anchored by US debt. Penetrating globally, it challenges local currencies with a programmable, US-controlled asset that relieves the US debt pressure.

Artificial Intelligence (AI) fundamentally alters the friction of global trade. AI agents automate compliance, hedging, and routing, making a multicurrency world computationally feasible. While the USD remains the backbone, AI enables an “algorithmic polycentrism” in which payment paths are optimized dynamically, potentially reducing the dollar’s exclusivity while maintaining its structural power through private stablecoin networks.[3]

Phase 1: The Golden Anchor and the Triffin Dilemma (1944–1971)

The modern story of the global dollar begins at the Bretton Woods Conference in 1944. The Allied powers established a system where the US dollar was the world’s reserve currency, pegged to gold at $35 per ounce, while other currencies were pegged to the dollar. This connection created a system of fixed exchange rates that provided post-war stability. Japan, for example, was brought in at 360 yen to a US dollar. Why that number? 360 degrees in a circle, the rising Sun.

However, this system was doomed by a structural contradiction identified by Belgian-American economist Robert Triffin. Triffin recognized that to facilitate global trade and post-war reconstruction, the world needed US dollars to provide “liquidity.” It needed a currency to facilitate buying and selling. This required the US to “export” dollars to Europe and Asia. They needed to run constant trade deficits (buy stuff from other countries), increase foreign aid, and build military bases to avoid dollar shortages.[3]

However, the more dollars the US printed and exported, the less trust the world had that the US actually held enough gold to back them. Bretton Woods contained the promise that US dollars could be traded for gold at that price. In December 1958, Europeans made their currencies convertible, including into the US dollar, for international transactions. This move dramatically increased the demands on the US Treasury’s gold.

The “Triffin Dilemma” exposed the following contradiction:

– If the US stopped printing and exporting dollars, the global economy would suffocate (a liquidity shortage).

– If the US kept printing dollars, the gold peg could collapse if other countries tried to redeem them for gold (a confidence crisis).

On February 4th, 1965, President Charles de Gaulle of France made a historic statement about the US dollar during a press conference at the Élysée Palace. He criticized the US dollar’s special status as the world’s dominant currency, which his Finance Minister, Valéry Giscard d’Estaing, had called an “exorbitant privilege.” France began sending ships to the US to redeem its dollar holdings for gold (many of those dollars had been siphoned from the growing US military involvement in Vietnam, previously a French colony). Germany and Spain did the same, slowly draining the gold from Fort Knox over the next few years.

Phase 2: The Shadow Rise of the Eurodollar

While Bretton Woods governed official channels, a shadow system emerged in the 1950s, the Eurodollar. These were simply US dollar deposits held in banks outside the United States (primarily London), and thus initially outside the jurisdiction of the Federal Reserve. It began mainly with the Soviet Union and China, who wanted to hold dollars for trade but feared US seizure.[4]

European banks began lending these deposits in USD, pricing loans with reference to LIBOR, a benchmark derived from a survey of London banks’ interbank lending rates. This market exploded in the 1960s, fueled by multinational corporations. It was a network of “ledger money” transacted not by physical transport, but by telegraph and telex. The telex machine allowed a bank in London to transfer ownership of dollars to a bank in Tokyo instantly via coded messages.

It expanded with the “petro-dollar” during the oil crisis, when oil prices quadrupled and surpluses needed to be invested worldwide. The spread of Eurodollars helped create the “Third World Debt Crisis,” which the Reagan administration used to mobilize the IMF to pressure debtors to open up their economies, and which Clinton/Gore later used to internationalize the Internet.

This technology produced the globalized proto-digital dollar. It became a system of electronic spreadsheets and pure information transfer that bypassed the physical constraints of the dollar and its connection to the Bretton Woods gold peg. The market has continued to grow as demand for USD increased, but it has slowly been brought back under the oversight of the US Federal Reserve and benchmarks like SOFR (the Secured Overnight Financing Rate). The big question: Will it survive the USD stablecoin?

Phase 3: The Nixon Shock and the “Information Standard” (1971–1980)

By the late 1960s, the Triffin Dilemma had reached its breaking point. Foreign nations were demanding gold for their surplus dollars, and the US gold vaults were bleeding dry.

On August 15, 1971, President Richard Nixon went on national TV and announced he was temporarily suspending the convertibility of the dollar into gold (the official gold price was later revalued to $38 and then $42 per ounce). The “Nixon Shock” ended the Bretton Woods system. The dollar became fiat. Nixon’s Treasury Secretary, John Connally, famously told a room of G-10 European finance ministers, “The dollar is our currency, but it’s your problem.”

To stabilize this unmoored system, the US relied on the coordination of the G-5 nations (US, UK, France, Germany, Japan) to manage exchange rates, but ultimately allowed them to “float.” At the same time, Reuters’ Monitor Money Rates service went online and became the de facto virtual market for global currencies. Banks paid a subscription to view currency prices, and banks paid to have their prices listed. Traders then got on the phone to negotiate an FX transaction. It was a relatively simple setup at first, but it built on Reuters’ long history of providing financial news over telegraph and other media.

It was aided by the volatility of the Arab-Israeli War, the depreciation of the USD, and rising oil prices that were now priced in US dollars. Going off gold meant a devaluation of gold, and the oil-producing countries (OPEC) were not happy. Prices quickly quadrupled from $3 to $12 a barrel. The Ford administration worked out a deal with Saudi Arabia. The Saudis would price all oil sales in USD, and the US would provide military protection in the volatile Middle East. They also agreed to hold large amounts of US Treasuries with the proceeds from their oil sales.

Reuters Monitor Money Rates

Chairman of Citibank and financial visionary Walter Wriston (a pioneer of ATMs and negotiable CDs) famously declared that the gold standard had been replaced by the “information standard.” He argued that in a world of instant communication by telex and satellite, the value of a currency was no longer determined by bullion, but by the information available about the issuing country’s economic health. The market’s judgment, processed through telecommunications, became the new economic discipline. Capital would go where it was well treated.[5]

Phase 4: Reagan, Regan, and the Collateralization of Debt (1981–2008)

In the 1980s, the Eurodollar found a new anchor: US government debt. Under President Ronald Reagan and his Treasury Secretary (and former Merrill Lynch CEO) Donald Regan, the US drove massive budget deficits through tax cuts and military spending. In conventional economics, this was reckless. In the logic of the global dollar, it was structural brilliance. By running deficits, the US pumped trillions of dollars of US Treasury bonds into the global financial system, facilitating the liquidity needed for the global trade that filled the coffers of China and Russia.

The ever-expanding Eurodollar market needed high-quality “prime collateral” to secure loans and derivatives. US Treasuries provided this. Trust was no longer based on gold; it was based on the depth and liquidity of the US Treasury market. US debt was the cheapest collateral for borrowing Eurodollars. You could use other collateral, like corporate bonds, but you would face a larger “haircut,” a discount on the collateral’s value that protects the lender against price volatility.
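Haircut arithmetic is simple enough to sketch. The haircut levels below are assumptions chosen only to illustrate why “pristine” Treasury collateral supports more borrowing per dollar pledged than riskier paper.

# Illustrative haircut arithmetic; haircut levels are assumptions, not market quotes.
def borrowing_capacity(collateral_value, haircut):
    """A 2% haircut means $100 of collateral supports $98 of borrowing."""
    return collateral_value * (1 - haircut)
print(borrowing_capacity(100_000_000, 0.02))   # US Treasuries, assumed 2% haircut
print(borrowing_capacity(100_000_000, 0.15))   # corporate bonds, assumed 15% haircut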

The Reagan administration began a process to computerize US Treasury operations, eliminating bearer bonds. This helped the US export debt, and the world’s central banks eagerly bought it to use as the “safe asset” backing their own banking systems. Bank reserves shifted from gold to dollars and Treasuries. The US deficit became the global economy’s savings account.

Phase 5: Shortages, Tension, and BRICS De-dollarization (2008–2024)

This debt-backed system maintained US dollar global hegemony. It provided a liquidity solution for the world, but it also created problems. Because the global economy relies on the dollar for trade (oil, commodities) and debt servicing, any tightening by the Federal Reserve creates increased global dollar demand and eventually shortages. Spreading dollars meant increased trade deficits, foreign direct investment (FDI), military bases, foreign aid, and Eurodollars.

Crises cause additional demand for US dollars. Global elites protect their wealth by buying USD and USD-denominated assets, the so-called “Milk Road.” This weaponized volatility led to pushback from the Global South and the rise of the BRICS nations (Brazil, Russia, India, China, South Africa). They embraced the term coined by Goldman Sachs economist Jim O’Neill in 2001 to describe the four emerging economies (South Africa joined later) that had grown tremendously with the spread of neoliberal-based trade, fiat dollars, and US Navy protection of international shipping. They began pushing for de-dollarization.

China and Russia sought to escape from the sanction capabilities of the West, where the US could confiscate their reserves, cut them off from SWIFT (the successor to telex messaging), or set price levels for their oil sales.

They attempted to build alternative payment rails (CIPS, SPFS) and trade in local currencies (Yuan/Ruble) or gold-backed tokens. However, they lacked the deep, liquid collateral markets that the US Treasury provided.

Phase 6: The Genius Act and the Supercharged Dollar (2026–Future)

This brings us to the present transition. The Genius Act represents the US response to de-dollarization: not a retreat, but a technological offensive. By mandating that stablecoins be backed 1:1 by US Treasuries, the US creates a non-CBDC “sovereign digital dollar” that combines the characteristics of all previous eras:

– Like the Eurodollar, it moves instantly on digital rails (blockchains), bypassing clunky correspondent banking.

– Like the Reagan Era, it is backed by US debt, creating structural demand for Treasuries.

– Unlike the BRICS alternatives, it offers “pristine” collateral and liquidity that the Yuan and any other BRICS currency cannot match.

The Genius Act weaponizes stablecoins to penetrate markets the traditional dollar couldn’t reach. It provides the Global South with digital dollars that are easier to use than local currency and safer than unregulated crypto. Instead of losing the world to de-dollarization, the Genius Act ensures the dollar’s “Information Standard” is upgraded to Spreadsheet Capitalism’s “Programmable Standard,” re-anchoring the global economy to the US Treasury.
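A minimal sketch of the 1:1 reserve test such a mandate implies, with invented issuer figures: reserves held in short-dated Treasuries are compared against tokens outstanding.

# Minimal sketch of a 1:1 reserve test implied by Treasury-backed stablecoin rules.
# Issuer figures are invented for illustration.
tokens_outstanding = 52_000_000_000   # stablecoins in circulation, USD face value
reserve_treasuries = 52_400_000_000   # short-dated US Treasuries held, USD
coverage_ratio = reserve_treasuries / tokens_outstanding
print(f"coverage: {coverage_ratio:.4f} ({'compliant' if coverage_ratio >= 1.0 else 'under-reserved'})")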

From 2027 onward, every time the Federal Reserve’s FOMC changes the fed funds rate, hundreds of millions of people outside the United States who hold USDC, USDT (a compliant version), or similar stablecoin tokens will feel it immediately in their phone wallets. The Fed will have the same tools it always had, but now those tools govern a much larger, truly global dollar system that lives on blockchains and reaches places traditional banking never could. The GENIUS Act doesn’t give the Fed new powers; it magnifies its old powers with planetary, real-time reach. The Fed once again becomes the global lender of last resort, this time for global digital dollars.

Phase 7: AI, USD, and Emerging Multicurrency Regimes (2028-Future)

Historically, alternative currency blocs struggled due to liquidity fragmentation, complex FX risk management, costly compliance overhead, and information asymmetry between participants. AI dramatically reduces these frictions by automating FX clearing and pricing, cross-border AML (Anti-Money Laundering) and KYC (Know Your Customer) checks, trade documentation analysis, and risk assessment for invoicing in local currencies. It lowers the operational barriers to non-USD settlement networks (e.g., CNY, INR, AED, EUR multicurrency corridors).

AI-driven “autonomous financial agents” dynamically choose settlement currencies based on fees, liquidity, and regulatory risk, and execute multicurrency arbitrage or hedging in real time. They can also route payments across emerging Central Bank Digital Currency (CBDC) networks.

A world of AI-mediated multicurrency routing resembles packet-switched networks in telecom, where the “best path” may not always run through USD but routes around it. Currency choice becomes computational rather than political. This fluency pushes the system toward algorithmic polycentrism.[6]
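
To make the packet-switching analogy concrete, the sketch below shows how an autonomous settlement agent might score candidate currency corridors on fees, liquidity, and regulatory risk. The corridor figures, weights, and scoring function are illustrative assumptions, not a description of any deployed system.

```python
from dataclasses import dataclass

@dataclass
class Corridor:
    currency: str          # e.g., "USD", "CNY", "AED"
    fee_bps: float         # expected transaction cost, in basis points
    liquidity_score: float # 0.0 (thin) to 1.0 (deep) market depth
    regulatory_risk: float # 0.0 (low) to 1.0 (high) sanctions/compliance exposure

def route_settlement(corridors, fee_w=0.4, liq_w=0.4, risk_w=0.2):
    """Pick the lowest-cost corridor, treating currency choice as a routing metric."""
    def cost(c):
        return (fee_w * c.fee_bps / 100.0           # cheaper corridors score lower
                + liq_w * (1.0 - c.liquidity_score) # deeper liquidity scores lower
                + risk_w * c.regulatory_risk)       # lower compliance risk scores lower
    return min(corridors, key=cost)

corridors = [
    Corridor("USD", fee_bps=12, liquidity_score=0.98, regulatory_risk=0.30),
    Corridor("CNY", fee_bps=18, liquidity_score=0.70, regulatory_risk=0.25),
    Corridor("AED", fee_bps=15, liquidity_score=0.55, regulatory_risk=0.15),
]
print(route_settlement(corridors).currency)  # "USD" with these illustrative numbers
```

In this toy run the USD corridor still wins on depth and cost, which is consistent with the argument below that AI erodes the dollar’s exclusivity rather than its underlying advantages.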

Countries developing national AI information systems (China, the UAE, Singapore, India) can use AI to improve risk underwriting and enable more trade in local currencies. They can also deploy AI-enhanced capital controls and supervise financial institutions more efficiently, creating alternative compliance regimes that are not necessarily reliant on Western standards. This technological transition reduces dependence on USD-centric regulatory infrastructures.

For countries trading energy, minerals, and food, AI enables predictive logistics and supply chain optimization, automated escrow and settlement systems, smart-contract invoicing in multiple currencies, and automated FX hedging. These innovations make it easier for commodity exporters (Saudi Arabia, Brazil, Russia) to price and settle trades in non-USD currencies without significant operational risk.

AI acts as a risk-mitigation technology, making de-dollarization more feasible. AI does not eliminate the dollar’s strengths, such as liquidity, deep bond markets, the rule of law, and global payment rails tied to US infrastructure. But AI does erode the dollar’s exclusivity.

Global finance is shifting from monocentric (USD) to a multiplex system with multiple currencies routed algorithmically. The USD remains the backbone, but not the only spine. Currencies are increasingly driven by algorithmic trading models, sentiment classification, real-time monitoring of commodity flows, and predictive macro models. This makes FX potentially more volatile but also more dynamically optimized.

Multicurrency systems become computationally manageable, reducing the bias toward a single dominant currency. AI fosters a struggle between state-based monetary sovereignty (central banks, CBDCs) and platform-based private monetary infrastructures (stablecoins, platform credit systems, AI liquidity brokers). The USD system may face challenges from private networks (such as USDC and PayPal’s PYUSD) operating atop AI-based payment routing. Yet these mostly reinforce US monetary power because they remain dollar-denominated.

AI turns currency competition into a real-time, highly automated strategic contest. Instead of slow diplomatic or macroeconomic adjustments, currency pressures can be amplified and executed by autonomous agents that route payments, reprice collateral, trigger liquidations, and exploit latency arbitrage across networks and tokenized markets. The result: faster, more profound, and more systemic shocks to liquidity and prices.

Conclusion

The history of the US Dollar is not a story of static dominance, but of ruthless technological adaptation. The US successfully maintained its hegemony of liquidity by shifting the substrate of its currency from Gold to Information and to Code (Stablecoins/AI).

The Genius Act represents the culmination of this USD policy and technological trajectory. By fusing the liquidity of the Eurodollar with the solidity of US Treasuries on a blockchain, the US is attempting to resolve the Triffin Dilemma through stablecoin technology. It allows the US to export liquidity without losing control (via programmable compliance).

However, the impending AI era introduces a wild card. As money becomes purely computational, the “exorbitant privilege” of the US may be challenged not by rival states, but by rival algorithms that route value around the US SACT stack. The future battle for the global economy will not be fought over interest rates, but over the architecture of the network itself.

Notes

[1] The G-10 consisted of the governments of eight International Monetary Fund (IMF) members—Belgium, Canada, France, Italy, Japan, the Netherlands, the United Kingdom, and the United States—and the central banks of Germany and Sweden. This group was, by default, in charge of maintaining the economic growth and stability of international currencies. Although its powers were limited in practice, it still presented an important image of national sovereignty.
[2] This research project started with my MA thesis on the deregulation of finance and telecommunications in the post-Bretton Woods era. Thesis: The Message is Money: Deregulation and the Telecommunications Structure of Transnational Financial Industries (1986). PhD Dissertation: Symbolic Economies and the Politics of Global Cyberspaces (1993).
[3] Moffit, M. (1983) The World’s Money: International Banking from Bretton Woods to the Brink of Insolvency. NY: Simon & Schuster. This is a classic on monetary events leading up to the Nixon devaluation.
[4] Moffit, M. (1983) The World’s Money: International Banking from Bretton Woods to the Brink of Insolvency. NY: Simon & Schuster.
[5] Wriston, W. (1992) The Twilight of Sovereignty: How the Information Revolution is Changing our World. (New York: Charles Scribner’s Sons).
[6] Tyson, K. (2023) Multicurrency Mercantilism: The New International Monetary Order. Independently published. ISBN 9798864645031.
[7] Initial Gemini Prompt: Provide a historical overview of the globalized US dollar. Please start with the USD-gold standard established at the Bretton Woods conference and how the Triffin Dilemma challenged it. Explain how Nixon stopped the convertibility of the dollar and gold. Be sure to mention the shadow USD, which came to be known as the Eurodollar, and how it was transacted by telegraph and telex. With Nixon’s decision, the USD became fiat, anchored only by the G-5 nations and by what Walter Wriston called the “Information Standard.” Soon, Reagan and Regan organized the dollar and Eurodollar around national debt. They grew the US deficit to produce US Treasuries that would serve as prime collateral for eurodollar borrowing, providing trust among transacting financial institutions worldwide. The USD continued as the world’s primary transacting and reserve currency. Still, shortages heightened international tensions, leading to debates about “de-dollarization” among the BRICS countries, dominated by China and Russia. Finally, explain how stablecoins, weaponized by the Genius Act, will supercharge the global USD and AI will bring into effect new multi-currency regimes.

References

Dordick, H.S., and Neubauer, D. (1985) “Information as Currency: Organizational Restructuring under the Impact of the Information Revolution.” Keio Review, No. 25.
Eichengreen, B. (1996) Globalizing Capital: A History of the International Monetary System. Princeton, NJ: Princeton University Press. I am indebted to his fifth chapter, “From Floating to Monetary Unification,” which contains a good overview of this process. Pages 136-141 were particularly useful.
Giddens, A. (1983) The Nation-State and Violence. Berkeley, CA: University of California Press.
Goux, J. (1990) Symbolic Economies. Ithaca: Cornell University Press.
Gowan, P. (1999) Global Gamble: Washington’s Faustian Bid for World Dominance. London: Verso.
Greider, W. (1987) Secrets of the Temple: How the Federal Reserve Runs the Country. New York: Simon and Schuster. pp. 337-338. A very extensive treatise on the economic conditions of the 1970s.
Harvey, D. (1989) The Condition of Postmodernity. Oxford: Basil Blackwell.
Levy, S. (1989) “A Spreadsheet Way of Knowledge,” in Computers in the Human Context: Information Technology, Productivity, and People. Tom Forester (ed.) Oxford: Basil Blackwell.
Roberts, J. (1995) $1000 Billion a Day: Inside Foreign Exchange Markets. London: HarperCollins.
Shapiro, M. J. (1989) “Textualizing Global Politics,” in Der Derian, J. and Shapiro, M. J. (eds.) International/Intertextual Relations: Postmodern Readings of World Politics. Issues in World Politics Series. Lexington, MA: Lexington Books/D.C. Heath and Company.
Smith, R. (1989) The Global Bankers. New York: Truman Talley Books/Plume.
Strange, S. (1997) Mad Money: When Markets Outgrow Governments. Ann Arbor: University of Michigan Press.
Tehranian, M. (1990) Technologies of Power: Information Machines and Democratic Prospects. NJ: Ablex Publishing Company.
Tyson, K. (2023) Multicurrency Mercantilism: The New International Monetary Order. Independently published. ISBN 9798864645031.
Wachtel, H. (1986) The Money Mandarins. New York: Pantheon Books.
Wriston, W. (1992) The Twilight of Sovereignty: How the Information Revolution is Changing our World. New York: Charles Scribner’s Sons.

© ALL RIGHTS RESERVED

Not to be considered financial advice.



Anthony J. Pennings, PhD, is a Professor at the Department of Technology and Society, State University of New York, Korea, and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002 to 2012, he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA program at St. Edward’s University in Austin, Texas, where he lives when not in Korea.

The Genius Act, USD-backed Stablecoins, and the Death of the Eurodollar

Posted on | November 23, 2025 | No Comments

Recommended Citation APA (7th Edition)

Pennings, A.J. (2025, Nov 23) The Genius Act, USD-backed Stablecoins, and the Death of the Eurodollar. apennings.com https://apennings.com/data-analytics-and-meaning/the-genius-act-usd-backed-stablecoins-and-the-death-of-the-eurodollar/

Introduction

With the passage of the US Genius Act of 2025, the era of “wildcat” crypto experimentation is over.[1] In its place rises a state-sanctioned, gridmatic architecture that fundamentally alters the plumbing of the global economy. By mandating that all US-compliant stablecoins be backed 1-to-1 by short-term US Treasury securities, the Act has transformed the humble stablecoin from a casino chip for crypto-speculation into the primary vector of US monetary sovereignty and hegemony.

This innovation is the ultimate realization of the Semiotic-Abstraction-Computation-Telecom (SACT) stack analysis of spreadsheet capitalism that I have been developing.[2] If the Excel spreadsheet privatizes corporate and government value, the “Genius-compliant” stablecoin nationalizes global liquidity. It is a technology that does not just represent the dollar but weaponizes it, challenging and ultimately replacing the Eurodollar system that has governed global finance since the 1950s.

Introduced on May 1, 2025, by Sen. Bill Hagerty [R-TN], a former hedge fund co-founder and Ambassador to Japan during the first Trump administration, the US Genius Act envisages stablecoins as a technological upgrade to the USD and its shadow variant, the Eurodollar. The law, whose name stands for the Guiding and Establishing National Innovation for US Stablecoins Act, requires stablecoin issuers to back their digital assets with corresponding assets and comply with standards for capital, liquidity, transparency, and risk management. Instead of viewing stablecoins as a threat to the dollar, the Act allows the US to export its currency more efficiently than ever before, while locking the issuers and purchasers into financing the US national debt.[3]

In the context of my SACT framework, these stablecoins function as a pure semiotic Substitution (S-Layer) vehicle, as this USD example illustrates (see the sketch after the list below).

Signifier – The digital token (e.g., USDC, USDT).
Signified – The US Dollar held in a bank vault (or Treasury bill).
The Promise – The substitution is 1:1 and reversible at any time.
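
The promise of a 1:1, reversible substitution can be stated as a simple invariant. The issuer class below is a minimal, hypothetical sketch rather than any actual issuer’s ledger: tokens can only be minted against, and redeemed for, matching reserves.

```python
class StablecoinIssuer:
    """Toy full-reserve issuer: token supply may never exceed Treasury reserves."""

    def __init__(self):
        self.treasury_reserves = 0.0  # face value of T-bills held, in USD
        self.token_supply = 0.0       # tokens outstanding; 1 token = 1 USD claim

    def mint(self, usd_in: float) -> float:
        # Issuance only against incoming reserves (the S-layer substitution).
        self.treasury_reserves += usd_in
        self.token_supply += usd_in
        assert self.token_supply <= self.treasury_reserves
        return usd_in

    def redeem(self, tokens: float) -> float:
        # Redemption reverses the substitution 1:1.
        if tokens > self.token_supply:
            raise ValueError("cannot redeem more than the outstanding supply")
        self.token_supply -= tokens
        self.treasury_reserves -= tokens
        return tokens

issuer = StablecoinIssuer()
issuer.mint(1_000_000)
issuer.redeem(250_000)
print(issuer.token_supply, issuer.treasury_reserves)  # 750000.0 750000.0
```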

The Semiotic Shift: From Bank Promises to Sovereign Tokens

To understand the magnitude of this shift, we must look at the Substitution (S) layer of our stack.

For 70 years, the world ran on Eurodollars. A Eurodollar was a semiotic signifier that stood for “a US dollar held in a bank outside the US.” Crucially, this was credit money. It was a liability of a private bank (like HSBC or Barclays) backed by deposits but with no reserve ratio. When the world needed liquidity, these offshore banks “printed” it by extending credit on their spreadsheets.

The Genius Act destroys this logic. A Genius-compliant stablecoin (like the new “Clean” USDC) is not a bank liability. It is a tokenized claim on US Sovereign Debt.

By forcing stablecoin issuers to hold US Treasuries, the Act creates a massive liquidity vacuum. Capital is now fleeing the risky, fractional-reserve balance sheets of offshore Eurodollar banks and flowing into the full-reserve, “risk-free” vaults of US stablecoin issuers.

Old Semiosis: Dollar = Private Bank Promise (Credit)

New Semiosis: Dollar = US Treasury Bill (Equity of the State)

The result is the defunding of the global banking system to finance the US government. The “offshore” world is no longer a place where money is created (credit); it is a place where US government debt is distributed (collateral).

The Abstraction Layer: The Bifurcation of Value and the “Pristine” Dollar

In the Abstraction (A) layer, the Genius Act creates a new, ruthless hierarchy of value. We are witnessing a bifurcation of the global currency supply into “Clean” and “Dirty” dollars.

“Clean” dollars are Genius-compliant tokens. These are programmable, KYC-compliant, and backed by the full faith and credit of the US Treasury. Because they act as “pristine collateral” in the new Cloud-Spreadsheet Capitalism stack, they trade at a structural premium.

“Dirty” dollars are legacy offshore deposits and non-compliant algorithmic stablecoins. These are viewed as “sub-prime” money, trading at a discount due to regulatory risk and lack of direct Treasury access.

This distinction will likely drive the price of the US dollar upward. As the Global South and international corporations scramble to access “Clean” dollars for trade settlement, demand for the underlying asset (US Treasuries) skyrockets. The Genius Act effectively turns the US dollar into a global membership fee for the digital economy.

The Telecom Layer: Weaponized Diffusion in the Global South

It is in the Telecommunications (T) layer where the geopolitical consequences are most acute. The Genius Act does not just export currency; it exports a new monetary operating system. In the Global South, the diffusion of these Genius-compliant stablecoins is replacing local currencies not just for transactions, but as the primary store of wealth.

Consider the following transaction: A merchant in Nigeria no longer needs SWIFT or a correspondent bank to pay a supplier in Vietnam. They use a Genius-coin rail that settles instantly. The “T” layer friction is gone, but so is the local central bank’s visibility into the trade.

Regarding wealth, by holding a Genius-coin on a smartphone, a citizen in Argentina or Turkey is effectively holding a US Treasury bond. They are bypassing their domestic banking system entirely to lend directly to the US government.

This change creates a form of “Hyper-Dollarization” that challenges national banking systems. It strips developing nations of their monetary sovereignty. Their central banks can no longer manage their money supply because their citizens operate on a parallel, superior grid governed by US code and backed by US debt.

Conclusion: The Spreadsheet Becomes the Sovereign

The Genius Act proves that stablecoins are not a deviation from spreadsheet capitalism; they are its apotheosis. They complete the transition of the dollar from a passive store of value to an active, computational object. This object is:

Written (Substitution) as a tokenized Treasury bill.

Abstracted (Abstraction) as a “Clean,” premium asset.

Computed (Computation) via smart contracts that enforce US sanctions and tax compliance (see the sketch after this list).

Synchronized (Telecommunications) across a global cloud that renders local borders irrelevant.
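
A toy example can make the “Computed” layer concrete. The denylist, account names, and ledger below are hypothetical placeholders; the point is architectural, namely that the compliance rule executes inside the payment rail itself rather than in a correspondent bank’s back office.

```python
# Hypothetical denylist; a real system would consume an official sanctions feed.
SANCTIONED_ADDRESSES = {"0xBLOCKED_WALLET"}

class ComplianceError(Exception):
    pass

def compliant_transfer(ledger: dict, sender: str, receiver: str, amount: float) -> None:
    """Toy 'programmable compliance' check that runs before a token moves."""
    if sender in SANCTIONED_ADDRESSES or receiver in SANCTIONED_ADDRESSES:
        raise ComplianceError("transfer blocked by embedded sanctions screen")
    if ledger.get(sender, 0.0) < amount:
        raise ComplianceError("insufficient balance")
    ledger[sender] -= amount
    ledger[receiver] = ledger.get(receiver, 0.0) + amount

# Echoes the merchant example above: a Lagos merchant pays a Hanoi supplier directly.
ledger = {"merchant_lagos": 500.0, "supplier_hanoi": 0.0}
compliant_transfer(ledger, "merchant_lagos", "supplier_hanoi", 200.0)
print(ledger)  # {'merchant_lagos': 300.0, 'supplier_hanoi': 200.0}
```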

We have moved from the “gridmatic” governance of the corporation to the “chainmatic” governance of the planet. The US government has effectively “IPO’d” the dollar on the blockchain, turning every stablecoin wallet in the world into a brokerage account for American power. The Eurodollar is dead; long live the Sovereign Spreadsheet.

Notes

[1] Hockett, Robert C., “Money’s Past is Fintech’s Future: Wildcat Crypto, the Digital Dollar, and Citizen Central Banking,” 2 Stanford Journal of Blockchain Law & Policy (2019).
[2] Pennings, A.J. (2025, July 24) Stablecoins, Blockchains, and the Semiotic-Telecom-Computational Stack of Spreadsheet Capitalism. apennings.com https://apennings.com/artificial-intelligence/stablecoins-blockchains-and-the-semiotic-telecom-computational-stack-of-spreadsheet-capitalism/
[3] Issuers of stablecoins are required to buy US Treasuries. They can hold these securities and collect the fixed interest payments, as well as some fees. Additional information on Genius Act regulations is available.


Remediating the Blurred Lines of Human-AI Collaboration in Disaster Management and Public Safety Communications

Posted on | November 14, 2025 | No Comments

This is a follow-up to my prepared presentation for the Asia-Pacific Economic Cooperation (APEC) meeting on July 31, 2025, on Disaster Leadership with Saebom Jin from the National AI Research Lab at KAIST. We used media theory to talk about the possibility of “healing” the Common Operating Pictures (COP) used in disaster-oriented situation rooms and command centers with Artificial Intelligence (AI) and Application Programming Interfaces (APIs). By healing, we drew meaning from the theories of remediation by Bolter and Grusin (2000), which propose that new media forms integrate older forms to create a more “authentic” version of reality.[1]

Operating Systems (OS) coordinate the flow of applications within a RAM-limited digital environment.[2] They facilitate the flow of software data much as traffic cops manage the movement of automobiles at a busy intersection. Artificial Intelligence (AI) can function as a sophisticated “operating system” to coordinate APIs, enabling the seamless gathering of multiple streams of data and video to achieve both hypermediation and transparent immediacy in critical information displays like Common Operating Pictures (COPs). This involves AI acting as an orchestration layer, intelligently managing data flow like a maestro and dynamically shaping the user experience.

This post uses remediation theory to offer ideas about the multimediated experience of connecting different data streams and windows on an individual device or a common screen like the COP. Remediation is the process by which new media refashion old media to create a more “authentic” experience. In this case, it “heals” COPs by intelligently merging legacy media (TV, maps, dashboards, spreadsheets) with digital innovations (AI, APIs, streaming video). AI becomes the coordinator and translator of these media, enhancing their functionality and intelligibility.[2]

Drawing on Bolter and Grusin’s theory of remediation, with its two logics of transparent immediacy and hypermediation, we can piece together how AI can function as a next-generation operating system for Common Operating Pictures (COPs) in disaster management and public safety command centers, dashboards, and mobile Personal Operating Pictures (POPs). More importantly, we can see how media can stake a claim on mediated reality. The two logics work together. Transparent immediacy creates live experiences, making the medium “disappear” and enabling a direct, real-time experience. AI auto-selects, filters, and narrates live feeds or alerts for immediate situational awareness (e.g., a live drone feed of a flooded area). Point-of-view (POV) perspective in visual art forms contributes to this experience. This combination enables fast, intuitive decision-making in the control room and the field.

Hypermediation uses multiple windows of statistical and indexical representation (e.g., temperature, wind speed, traffic data), allowing users to see and interact with the complexity of an incident. The AI OS organizes and synchronizes diverse sources (e.g., tweets, GIS layers, 911 calls, camera feeds) into COPs and visual dashboards to help leaders and analysts see patterns, anomalies, and priorities in multi-faceted crises like a hurricane with flooding.

The Critical Role of Common Operating Pictures (COPs) and Dashboards in Disaster Management

In the demanding landscape of disaster management and risk reduction, Common Operating Pictures (COPs) and dashboards stand as hopeful pillars for effective surveillance and response. These sophisticated information systems are central to command centers, providing real-time monitoring and management of complex situations, for incident management, emergency response, and the protection of critical infrastructure. Their fundamental utility lies in their ability to aggregate vast amounts of surveillance data and diverse information sources, synthesizing them into a unified, real-time, immediate view of ongoing activities and unfolding situations. This comprehensive display is expected to significantly enhance situational awareness for all parties involved, from field responders to strategic leaders.  

The strategic value of COPs extends beyond mere data display; they are instrumental in fostering collaborative planning and reducing confusion among multiple agencies operating in a crisis. By providing a consistent, up-to-date view of critical information, COPs enable faster, more informed decision-making across the entire response structure. A prime example of this is the Department of Homeland Security (DHS) Common Operating Picture program, which delivers strategic-level situational awareness for a wide array of events, including natural disasters, public safety incidents, and transportation disruptions. This program facilitates collaborative planning and information sharing across federal, state, local, tribal, and territorial communities, underscoring the vital role of integrated information platforms in national security and public safety.  

AI as an Operating System for API-fed Media Management Layers

AI orchestration refers to the coordination and management of data models, systems, and integration across an entire workflow or application. In this context, AI acts as the “maestro” of a technological symphony, integrating and coordinating various AI tools and components to create efficient, automated informational and media workflows.  

The role of the AI-OS is to ingest data via API from sources such as satellite feeds, IoT sensors, weather models, traffic systems, and social media chatter. AI uses algorithms and techniques, often from machine learning and neural nets, to process visual data, recognize objects, and analyze scenes. It tags and contextualizes data using NLP, computer vision, and geospatial AI to collect, label, and align data across multiple users. It can also manage narrative flows by creating text summaries, video captions, or alerts that guide public broadcasts or Personal Operating Picture (POP) user interpretation. AI adapts interfaces and adjusts the dashboard to user roles (e.g., responder vs. planner vs. public).

AI-first API management integrates machine learning (ML), natural language processing (NLP), and predictive analytics to gain deeper insights into performance and usage trends to forecast weather patterns, detect fire and water anomalies in real-time, and automate response governance. This means AI can intelligently manage the flow of information and tasks between different media components, ensuring the right data reaches the right COPs and models at the right time, preventing data bottlenecks and optimizing predictive resource utilization.  

AI-coordinated data streams draw in various API-enabled data and video sources to “heal” the COP experience. For instance, real-time streaming (RTSP/Drones) provides live visual feeds and real-time object detection, as well as thermal imagery for immediacy. GIS/Map APIs gather hyperreal terrain, zoning, and infrastructure information and then overlay evacuation zones and hazard proximity models that may involve chemical leakages, flooding, traffic, and other factors. Social media APIs draw on the writing of public posts and conduct sentiment analysis, while geo-tagging locations for search engine optimization (SEO). They also have to factor in panic signals and filter out misinformation from mischievous posts. IoT Sensors (MQTT) provide (infra)structural and environmental data that can trigger alerts based on thresholds and can be used in predictive modeling. EMS/911 feeds draw on voice and text emergency dispatches and may require transcription and triage classifications for harmful accidents. Additionally, Weather/NOAA APIs collect storm forecast data and generate path predictions and risk zone maps.
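
A minimal orchestration sketch, with hypothetical source names, thresholds, and handler functions standing in for the actual feed APIs listed above, illustrates the basic pattern: heterogeneous events are normalized to a common schema, scored, and routed either to an immediacy (alert) stream or to hypermediated context panels.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CopEvent:
    source: str      # e.g., "iot_river_gauge", "social_api", "noaa_weather"
    kind: str        # e.g., "sensor_reading", "public_post", "forecast"
    severity: float  # 0.0 (routine) to 1.0 (critical)
    summary: str     # short text for the narrative layer

def ingest_iot(reading: dict) -> CopEvent:
    # 4.5 m is an illustrative flood-stage threshold, not an official value.
    level = reading["river_level_m"]
    return CopEvent("iot_river_gauge", "sensor_reading",
                    severity=min(level / 4.5, 1.0),
                    summary=f"River gauge at {level} m")

def ingest_social(post: dict) -> CopEvent:
    # Stand-in for NLP sentiment and panic-signal classification.
    panic = 1.0 if "flood" in post["text"].lower() else 0.2
    return CopEvent("social_api", "public_post", panic, post["text"][:60])

def route_to_cop(events: List[CopEvent], alert_threshold: float = 0.8) -> dict:
    """Split events into an 'immediacy' alert stream and a 'hypermediated' context grid."""
    return {
        "alerts": [e for e in events if e.severity >= alert_threshold],
        "context_panels": [e for e in events if e.severity < alert_threshold],
    }

events = [ingest_iot({"river_level_m": 4.9}),
          ingest_social({"text": "Flood water rising near Sector 3"})]
picture = route_to_cop(events)
print(len(picture["alerts"]), len(picture["context_panels"]))  # 2 0
```

In a real command center, the ingest functions would wrap the RTSP, MQTT, GIS, EMS, and NOAA feeds described above, and the severity scores would come from trained models rather than fixed thresholds.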

AI-powered API integration solutions automate and streamline connections between disparate software platforms, enabling seamless communication and real-time data flow. This eliminates manual data entry, reduces human error, and provides unified access to business-critical data, allowing systems to scale faster and adapt to market demands. AI systems include computational resources, data stores, and data flows/pipelines that transmit data. Data engineers design these pipelines for efficient transfer, reliable quality, and easy access for integration and analysis. AI orchestration platforms automate these workflows, track progress, manage resources, and monitor data flow.  

Gathering Multiple Streams of Data and Video – Transparent Immediacy in Practice

In the high-stakes domain of crisis management, transparent immediacy is an indispensable principle for designing intuitive COP and dashboards that facilitate rapid decision-making by minimizing cognitive friction. Real-time data visualization tools are specifically engineered to present complex information in a highly usable manner, enabling decision-makers to quickly grasp the unfolding situation without being distracted by the interface itself. Achieving true immediacy in data delivery is critically dependent on low latency within the underlying network infrastructure, particularly in mission-critical environments where split-second decisions can have profound consequences.[Diffserv]  

The integrative interface of AI-assisted media “vanishes” into the COP to provide a multidimensional hyperrealized and contextualized display. It blends multiple inputs (e.g., real-time GPS + bodycam) while maintaining the im-mediated telepresence “touchpoint” to events in the field. It synthesizes spoken alerts from text messages and narrates the changing situation (“A levee breach has been detected 3 miles south of Sector 3”). The result is that decision-makers read the mediated displays as if they are seeing the world with a healed blend of immediate and hyperreal perspectives — without delayed video feeds or digging through massive amounts of raw data.

The principles of transparency and trust are fundamental to effective crisis communication. Providing clear, accurate, and timely updates through dedicated platforms and channels helps to build confidence and establish credibility with affected populations and stakeholders. This approach aligns directly with the human desire for direct, unmediated information. Proactive communication, a commitment to telling the truth, and consistently adhering to factual information are essential strategies for maintaining transparency and regaining control of the narrative during a crisis. These practices mirror the pursuit of immediacy by delivering information that feels direct, honest, and unadulterated, thereby reinforcing public trust.  

In the high-stress, information-rich environments of crisis management, operators frequently encounter information overload. The core objective of transparent immediacy is to make the mediating technology disappear from the user’s conscious awareness, thereby allowing direct engagement with the critical information. By meticulously designing COPs and dashboards to include an “interfaceless” quality, the cognitive burden associated with navigating complex interfaces is substantially reduced. This reduction in cognitive friction enables faster assimilation of critical data, expedites the identification of patterns or anomalies, and ultimately leads to more rapid and effective decision-making, which is of paramount importance in emergency response scenarios. The less mental effort expended on understanding the tool, the more attention can be dedicated to understanding the crisis.  

While transparent immediacy strives to erase the medium and present information as unmediated reality, the integration of AI introduces a new, inherently complex layer of algorithmic mediation. AI can indeed create the appearance of greater immediacy by providing real-time insights and indexical predictive analytics, seemingly cutting through complexity to deliver direct understanding. However, the internal workings of AI processes — how they learn, process vast datasets, and generate their outputs—are often opaque, frequently referred to as a “black box” problem.

This creates a fundamental paradox: the desire for an “interfaceless” and seemingly unmediated experience directly conflicts with the ethical imperative for transparency and explainability in AI systems. Disaster management leaders must carefully navigate this tension, balancing the undeniable benefits of AI-driven immediacy with the critical need to understand and trust the AI’s “judgments,” particularly when human lives and safety are at stake. This complex challenge may necessitate the development and integration of new forms of “explainable AI” (XAI) within COP interfaces to ensure that accountability and trust are maintained, even as the technology becomes more sophisticated.  

Gathering Multiple Streams and Windows of Data and Video – Hypermediation in Practice

When a new AI-assisted digital COP remediates older, perhaps less dynamic, informational sources, it carries an implicit promise of higher fidelity, greater accuracy, and real-time relevance. Fulfilling this promise can significantly enhance comfort and trust in the information presented. For instance, an interactive Geographic Information System (GIS) map that updates in real time is inherently perceived as more reliable and trustworthy than a static, outdated map.

However, the very process of mediation, by transforming data and introducing new digital layers, can also introduce new forms of hyperreality. If these indexical layers are not transparent, or if the transformation process itself is flawed or introduces biases, it could inadvertently undermine the very trust it seeks to build. Therefore, disaster management leaders must ensure that the “improvement” offered by new forms of remediation is genuinely beneficial and does not obscure the underlying data’s provenance, potential limitations, or inherent biases.

This transition demands a critical understanding of the transformative pitfalls and potentials of digital remediation. AI operating systems can call up layered and windowed displays that offer diverse media representations, each with its own source, credibility, and relevance. They stack weather maps, traffic flows, shelter capacity, and 911 calls. They show social sentiment spikes alongside physical sensor alerts. They also tag uncertainties, such as “unverified reports” or “possible false positives.”

The result is that it enables strategic coordination by exposing the full complexity of the crisis landscape. AI’s core strength lies in its ability to ingest, process, and fuse massive volumes of data from diverse sources in real-time. AI then infers from the large datasets that are collected to inform human-based guidance and decision-making.

Multimodal AI systems are designed to integrate and process multiple data types (modalities) such as text, images, audio, and video. By combining these various data modalities, the AI system interprets a more diverse and richer set of information, enabling it to make accurate, human-like predictions and produce contextually aware outputs. This is achieved through multimodal deep learning, neural networks, and fusion techniques that synthesize different data types.

Real-Time video stream analysis draws on AI-powered video intelligence APIs that can recognize over 20,000 objects, places, and actions in both stored and streaming video. They can extract rich metadata at the video, shot, or frame level, and provide near real-time insights with streaming video annotation and object-based event triggers. Advanced APIs like Google’s Live API enable low-latency bidirectional voice and video interactions, allowing for live streaming video input and real-time processing of multimodal input (text, audio, video) to generate text or audio.  

AI and knowledge graph capabilities automate data ingestion, preparation, analysis, and reporting, significantly reducing manual tasks. This allows for quickly connecting internal data sources with case-specific data like device dumps, license plate readers (LPRs), financial records, social media, and public records for a comprehensive view.  

Coordination for Hypermediation

Hypermediation foregrounds the mediating function, explicitly enhancing the multiplicity of information and exposing the limitations of direct, “unmediated” transparent representation. AI enhances this by intelligently managing and presenting diverse, fragmented data streams in coordinated spreadsheet grids and windows. AI enables hyper-personalization of content by analyzing user behavior and preferences, tailoring content to individual needs on a granular level. This extends to dynamic user interfaces that can adapt based on user theming, curation, and real-time feedback, moving beyond static software to more organic, rapidly changing displays.  

AI in OS mode synthesizes and presents complex disaster information in usable, interactive multimodal dashboards that feature multiple windows, dynamic visualizations, and drill-down options. This allows for enhanced interactivity and navigation, enabling users to explore data in depth and filter information based on specific criteria.  
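
A small sketch suggests how the role-adaptation step might work. The panel catalog, role names, and ordering rules are hypothetical design assumptions rather than a reference implementation.

```python
# Hypothetical panel catalog; real COP layers would come from GIS, sensor, and EMS feeds.
PANELS = {
    "live_drone_feed":   {"detail": "high",   "audiences": {"responder", "planner"}},
    "shelter_capacity":  {"detail": "medium", "audiences": {"planner", "public"}},
    "evacuation_routes": {"detail": "low",    "audiences": {"responder", "planner", "public"}},
    "911_triage_queue":  {"detail": "high",   "audiences": {"responder"}},
}

def compose_dashboard(role: str, max_panels: int = 3) -> list:
    """Return the panels a given role should see, most detailed first.

    A stand-in for the AI layer that adapts interfaces to user roles;
    the ordering and panel cap are illustrative choices.
    """
    order = {"high": 0, "medium": 1, "low": 2}
    visible = [name for name, meta in PANELS.items() if role in meta["audiences"]]
    visible.sort(key=lambda name: order[PANELS[name]["detail"]])
    return visible[:max_panels]

print(compose_dashboard("responder"))  # drill-down heavy view
print(compose_dashboard("public"))     # simplified view
```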

AI acts as the technological enabler for sophisticated hypermediation, allowing for the intelligent management of interconnected media at a scale and speed previously unattainable. It helps transform a potential deluge of data into a coherent and actionable common operating picture by connecting related information from fragmented sources and streamlining complex analyses.

Coordination for Transparent Immediacy

Transparent immediacy aims to make the user “forget the presence of the medium,” fostering a belief in direct, unmediated presence with the represented information. AI contributes to this by gathering and simplifying complex data into clear, actionable insights and enabling seamless, real-time interactions. [1]  

AI-powered data visualization transforms complex data into clear, dynamic visuals, identifying patterns and relationships that would take humans hours to uncover. It provides real-time insights, automatically updating visuals as new data flows in, allowing for faster, more informed decisions. This simplifies the complex, distilling mountains of raw data into actionable visuals that can be understood “at a glance”.  

AI-enhanced interfaces offer natural language processing, allowing users to ask questions and receive results in clear, interactive charts. By making the mediating technology disappear from conscious awareness, AI-driven COPs reduce the cognitive burden on operators, enabling faster assimilation of critical data and expediting decision-making.  

AI can power immersive VR and AR environments that extend traditional 2D displays into a third dimension, enabling novel operations and more intuitive, collaborative interactions. These environments aim to create a shared, real-time, and seemingly unmediated understanding of a crisis, akin to William Gibson’s “consensual hallucination” of cyberspace. AI-powered insights assist in selecting appropriate hardware solutions for these immersive technologies, streamlining their integration.  

AI Black Box’s Paradox and Ethical Considerations

While AI strives for transparent immediacy by simplifying complexity, it introduces a “black box” problem where the internal workings of AI processes are often opaque. This creates a paradox where the desire for an unmediated experience conflicts with the ethical imperative for transparency and explainability in AI systems. For critical applications like COPs, ensuring data quality, mitigating algorithmic bias, and maintaining human accountability are paramount. The effectiveness and reliability of AI models are directly contingent upon the quality, diversity, and cleanliness of the data on which they are trained, as substandard data can propagate and amplify flaws.  

AI acts as a central nervous system for a hypermediated and transparently immediate COP by orchestrating the complex interplay of data streams, video feeds, and user interfaces. It enables real-time data fusion, dynamic content adaptation, and intuitive visualization, but requires careful human oversight to ensure trust, accountability, and ethical deployment. AI as an operating system doesn’t just manage media; it interprets, structures, and presents it as meaning. In the process, it strives to enable leaders and users to move from data overwhelm to narrative clarity.

Concluding Thoughts

The strategic application of media theory concepts, primarily remediation and its logics of transparent immediacy and hypermediation, works in conjunction with advanced AI capabilities. This combination is paramount for optimizing the “authentic,” healed experience of Common Operating Pictures and dashboards in disaster management.

This post has demonstrated how these frameworks provide a critical lens for understanding, designing, and enhancing the information systems and COPs that underpin modern crisis response. From transforming chaotic and static data forms into dynamic visual flows (Remediation) to fostering seamless situational awareness (Transparent Immediacy) and orchestrating complex multi-source indexical and graphical information (Hypermediation), media theory offers a profound guide. AI, in turn, acts as the indispensable engine, the OS enabling these transformations through API real-time data fusion, formulas for predictive analytics, and automated communication displayed on COPs.

The future of crisis response lies in intelligently navigating the increasingly blurred lines of human-AI collaboration. This demands a nuanced understanding of AI as a powerful co-author, operating system, and assistant, one that enhances human capabilities but never fully replaces human accountability and intent. Fostering trust in increasingly mediated information requires unwavering commitment to transparency, data quality, and ethical AI deployment. By consciously integrating theoretical understanding with technological innovation, disaster management leaders can leverage these converged media landscapes to create COPs and dashboards that are not merely displays of data, but dynamic, intelligent platforms capable of shaping perception, informing decisive action, and ultimately building a more resilient and responsive global community in the face of escalating threats.

Citation APA (7th Edition)

Pennings, A.J. (2025, Nov 14) Remediating the Blurred Lines of Human-AI Collaboration in Disaster Management and Public Safety Communications. apennings.com https://apennings.com/crisis-communications/remediating-the-blurred-lines-of-human-ai-collabollaboration-in-disaster-management-and-public-safety-communications/

Notes

[1] Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 2000. They followed up on “probes” by Marshall McLuhan, including the idea that the content of any new medium is always an older medium. This means new technologies integrate and repurpose older media. McLuhan’s main message was to point to the fundamental change these new forms create in human scale, pace, or pattern. These ideas were primarily expressed in The Mechanical Bride (1951) and Understanding Media (1964).
[2] By RAM-limited digital environment I mean the workspace needed for multiple applications to be supported. “RAM” can also be used as an analogy for a human’s ability to deal with multiple streams of information coming into their perspective.
[3] I used two prompts to address the issues I was thinking about.
Prompt 1. How can AI be an operating system to coordinate APIs gathering multiple streams of data and video for hypermediated and transparent immediacy? Prompt 2. Drawing on the concepts of remediation and its two logics of hypermediation and transparent immediacy, how can AI be an operating system to coordinate APIs gathering multiple streams of data and video for Common Operating Pictures used in Disaster Management and Public Safety?


Taylor’s Fab Rescue: The A16 Chip

Posted on | November 2, 2025 | No Comments

My home in Austin, Texas, is about 25 miles from Samsung’s newly built microprocessor fabrication (“fab”) plant in Taylor. The fab has been running behind schedule since it switched its plans from 4-nanometer chips to 2-nanometer chips. But Tesla recently signed a deal with Samsung to build its A16 (and maybe A18) chips in Texas for future use in its data centers, EVs, and robots.

While the Taylor location has many advantages, the demands of producing such small devices are numerous. Although very few earthquakes occur in Texas, 2nm chip production operates at near-atomic scales that cannot tolerate vibrations from trains, traffic, or settling of the earth below and near the fab. Unlike TSMC at its new fab in Phoenix, Arizona, Samsung had to invent many solutions to problems with electricity supply, ultra-clean water and air, and a workforce not used to 2nm production.

These problems and Samsung’s heroic attempts to overcome them are discussed nicely in this video, including a crucial factor for the fab’s success—a customer to point its development towards. This is why Tesla’s decision to work with Samsung on its A16 chip was a welcome relief for both parties. Samsung locks in a much-needed customer while Tesla avoids competing with Apple, NVIDIA, and a myriad of other companies for TSMC’s chips.

What will Tesla do with these chips? The internally designed chips will be used for both inference and training in a variety of products, including self-driving systems for its EVs, the Optimus humanoid robot, and AI data centers.

Citation APA (7th Edition)

Pennings, A.J. (2025, Nov 02) Taylor’s Fab Rescue: The A16 Chip. apennings.com https://apennings.com/digital-geography/taylors-fab-rescue-the-a16-chip/


The AI Prompter as Auteur? Examining LLM Authorship Through the Lens of Film Theory

Posted on | November 1, 2025 | No Comments

I’ve been teaching film classes on and off since graduate school. It has never been my primary focus, but the historical and theoretical depth of film studies has intrigued me, as have the technologies and techniques that shape thinking and understanding. I incorporate much of it in my EST 240 – Visual Rhetoric and Information Technology class at Stony Brook University, which looks at the camera and editing techniques that persuade and signify in different media.

This year, we added generative AI as a topic to address logo design, photo-graphics, and video synthesis. Here is an example of the basics for a video prompting plan. In this text, I explore the concept of authorship in the age of generative AI, drawing parallels with established theories from film and media studies, particularly auteur theory.

One of the interesting questions in film studies and media studies is authorship, or “auteur,” from its French roots. Who is the “author” of a film? Is it the screenwriter? The producer? The director? How much do the actors contribute to the creative process? A similar query asked where the meaning in the film experience is produced. Is it in the author? In the content or “text”? How about in the audience or the viewer? A related perspective asks how much authorship or meaning is limited or organized by the “genre,” such as a comedy, drama, or science fiction?

The proliferation of Large Language Models (LLMs) capable of generating sophisticated text and imagery has introduced a novel figure into the creative landscape: the AI prompter. This individual, through the crafting of textual instructions, elicits responses from AI systems, thereby participating in the creation of content that often blurs the lines of traditional authorship and creative control. The question is whether a human AI prompter can be considered the “author” of text or imagery produced by an LLM. This discussion has become very relevant in fields such as education, law, and publishing. It invokes debates around creativity, control, and the very definition of authorship, portending friction between emergent AI tools and social institutions.

Or is it more akin to a “curator?” As understood in museum studies, this person is a figure who makes decisions about what to include, what to exclude, and how to narrate the story an exhibition tells. In the realm of AI-generated content, the “prompter” is the individual who crafts the audio, pictorial, or textual instructions (prompts) that guide the LLM to produce an output, be it an essay, a poem, or a digital image. The quality, specificity, and iterative refinement of these prompts significantly influence the final result. Alternative analogies that may capture different facets of the prompter’s interaction with LLMs include the “chef,” the “collaborator,” the “instrumentalist,” or “partner.” But these mostly fall outside the realm of film and media theory and will be mentioned only briefly.

This exploration is not merely an academic exercise; the increasing ubiquity of LLMs in diverse fields, from literary creation to legal analysis, necessitates new conceptual tools to understand this new form of human-machine interaction. The US Copyright Office ruled in May 2025 that the results of prompts can, for the most part, be copyrighted. The US Constitution recognized early the social value of such arrangements in Article I, Section 8, Clause 8 when it codified the following:

“To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”

The “Copyright Clause” is meant to promote innovation and economic development, and the developers of AI LLMs do not seem to have much incentive to oppose the Copyright Office’s “Copyright and Artificial Intelligence Part 3: Generative AI Training.” This is not to say that other content creators have agreed, including people whose likenesses have appeared in AI-produced content. The Copyright Office has stated its intention to “monitor developments in technology, case law, and markets, and to offer further assistance to Congress as it considers these issues.”

Understanding Auteur Theory and AI

Originating with early film critics in the 1950s, auteur theory posited that the director is the primary creative force behind a film. She or he is the “author” whose personal vision, recurring themes, and distinct stylistic choices are imprinted on their body of work. An auteur, like Alfred Hitchcock, Akira Kurosawa, or Steven Spielberg, is seen as exerting a high level of control over all aspects of production. Some film theorists thus argued that auteurs make their films recognizable and uniquely their own, regardless of the participation of other collaborators like screenwriters or actors.

American critic Andrew Sarris later popularized and systematized auteur theory, coining the term in his 1962 essay “Notes on the Auteur Theory in 1962” and elaborating it in his 1968 book The American Cinema. He proposed three concentric circles of criteria for evaluating directors: technical competence (the ability to make a well-crafted film), a distinguishable personality (a recurring stylistic signature evident across their films), and interior meaning (arising from the tension between the director’s personality and the material). Interior meaning refers to the often unconscious or ambiguous personal vision, themes, and preoccupations of the director that permeate their films, even when working with diverse material or within the constraints of a studio system. The theory celebrated directors like Alfred Hitchcock, who, despite working within the constraints of the Hollywood studio system, managed to produce distinctive and deeply personal works.[2]

French film critics associated with Cahiers du Cinéma and “la politique des auteurs” championed the film director as the primary artistic force whose personal influence, individual sensibility, and distinct artistic vision could be identified across their body of work. This approach shifted critical attention from studios or screenwriters to the director, viewing them as an artistic creator of the work. Auteur theory includes determining whether directors have a personal vision and style based on a consistent aesthetic and thematic worldview, demonstrated mastery of the medium, recurring themes and motifs, and a willingness to push the boundaries of the medium. Do they maintain significant control or influence over the final product?

A key aspect of the auteur was the identification of a particular film style encompassing visual elements, narrative structures, and thematic preoccupations that could be consistently associated with a director, serving as their “authorial signature.” Auteur theory drew a critical distinction between “true auteurs,” who infused their work with a personal vision and artistic depth, and “metteurs en scène,” who were seen as competent but workmanlike directors merely executing the details of a script without a discernible personal stamp. For instance, Michael Curtiz, who won Best Director for Casablanca (1942), was often placed in the latter category, while Nicholas Ray, known for Rebel Without a Cause (1955), starring James Dean, was celebrated as an auteur.

The “method actor” analogy reinforces the idea that the prompter is a director who guides the LLM, which can be treated as an actor. The LLM “acts out” a specific role or persona to solve problems or generate content. By “casting” the LLM in a role (e.g., “You are a historian…”) and providing a “script” (the detailed prompt), the prompter can elicit more structured, context-aware, and human-like responses. This framework emphasizes the prompter’s role in setting the scene, defining the character, and guiding the performance.
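
A minimal sketch of this “casting” pattern, with an illustrative role and task rather than a prescribed prompt-engineering standard, might look like the following:

```python
def build_role_prompt(role: str, task: str, constraints: list) -> str:
    """Assemble a 'casting' prompt: a persona, a scene direction, and script notes.

    A toy illustration of the director/method-actor analogy; the wording and
    structure are illustrative assumptions, not a standard prompt format.
    """
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_role_prompt(
    role="a film historian specializing in 1950s auteur criticism",
    task="Contrast the auteur and the metteur en scène in two short paragraphs.",
    constraints=["Use plain language", "Name one director for each category",
                 "End with a one-sentence takeaway"],
)
print(prompt)  # the 'script' that would be sent to the LLM (the 'actor')
```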

A skilled prompter often has a specific vision for the desired output. They use carefully chosen words, structures, and iterative refinements (prompt engineering) to steer the AI towards this vision. This can involve defining style, tone, subject matter, and even attempting to evoke specific emotions or ideas, much like a director outlines a scene.

Experienced prompters can develop recognizable patterns or approaches to prompting that yield particular types of results from specific AI models. They learn the nuances of the AI and how to elicit desired aesthetics or textual qualities. Prompting is rarely a one-shot command. It often involves a back-and-forth conversation, a process of trial, error, and refinement, akin to a director working through multiple takes or editing choices. Even if the AI generates multiple options, the prompter often makes the final selection, curating the output that best aligns with their initial intent. This act of selection can be seen as a creative choice.

However, the US Copyright Office notes that AI models operate with a degree of unpredictability, even as it has concluded that content produced with AI can, under certain conditions, be copyrighted.

Key concerns of auteur theory include determining who the primary creative force is, whether they have a personal vision and style based on a consistent aesthetic and thematic worldview, a demonstrated mastery of the medium, recurring themes and motifs, and a willingness to push the boundaries of the medium. Finally, do they maintain significant control or influence over the final product?

The US Copyright Office’s stance that AI-generated works can be copyrighted with substantial human input further complicates this relationship. If the generated work is not copyrightable in the prompter’s name alone, or if the AI’s contribution is deemed substantial enough to negate sole human authorship, the prompter’s status as an “auteur” in a legally recognized sense is undermined.

However, current legal and creative consensus, notably highlighted by bodies like the US Copyright Office, generally holds that AI-generated works are not copyrightable unless there is substantial human creative input beyond mere prompting. The reasoning is that even detailed prompts can lead to unpredictable outputs from the AI, meaning the prompter may not have the same level of direct, granular control over the final work as a traditional artist. The AI model itself, with its complex algorithms and vast training data, plays a significant, if not primary, role in the generation process. The LLM’s output is inherently shaped by the massive datasets it was trained on, a factor far beyond the prompter’s control and introducing a vast external influence on the “style” and content.

By “carefully crafting prompts,” the user provides the model with context, instructions, and examples that help it understand the prompter’s intent and respond meaningfully. This includes setting clear goals, defining the desired length and format, specifying the target audience, and using action verbs to specify the desired action. Such detailed instruction suggests a high level of intentionality on the part of the prompter, aiming to steer the AI towards a preconceived vision.  

Arguments supporting a significant authorial role for the prompter often emerge from discussions about writers’ creative practices with AI. Studies indicate that writers utilize AI to overcome creative blocks, generate starting points, and then actively shape the AI’s output into something they consider useful, thereby maintaining a sense of ownership and control over the creative process. This active shaping and refinement, driven by the writer’s “authenticity” and desire for “ownership,” can be seen as analogous to an auteur’s imposition of their vision onto the raw materials of filmmaking.  

The Auteur Analogy Falters

Despite these points of correspondence, applying the auteur analogy to the AI prompter faces significant challenges. LLMs’ construction complicates the notion of a singular, controlling vision. These models are trained on vast datasets of existing human-created text and images and function by “mimicking human writing.” This training design makes it difficult to disentangle the prompter’s “pure” vision from the inherent capabilities, biases, and stylistic tendencies embedded within the LLM’s architecture and training data. AI may not be a neutral tool but an active, albeit non-conscious, participant in the generation process.  

This trajectory leads to the “black box” problem. The prompter rarely has full transparency or control over the internal workings of the LLM. While a film director ideally orchestrates various human and technical elements (cast, crew, script, camera), the prompter interacts with a system whose decision-making processes are often opaque. The output can sometimes be unpredictable, with LLMs even known to “hallucinate” or generate unexpected results, challenging the idea of complete authorial control.  

Intellectual property law presents another major hurdle. Current legal frameworks, particularly in jurisdictions like the United States, generally require human authorship for copyright protection. AI, as it stands, blurs the lines between authorship, ownership, and originality. If the generated work is not copyrightable in the prompter’s name alone, or if the AI’s contribution is deemed substantial enough to negate sole human authorship, the prompter’s status as an “auteur” in a legally recognized sense may be undermined.

The debate over prompter-as-auteur is thus deeply intertwined with these evolving legal definitions. The lawsuits involving creators like Scarlett Johansson and organizations like Getty Images and The New York Times against AI companies for using copyrighted material in training data further complicate the picture, as the very foundation upon which the LLM generates content is itself a site of contested authorship.  

Moreover, many instances of AI prompting might more accurately align with the role of the metteur en scène than with the true auteur. A prompter might be highly skilled in eliciting specific outputs from an LLM, demonstrating technical competence. Without a consistent, distinguishable personal style, thematic depth, or “interior meaning” traceable across a body of AI-generated works, however, they may be seen as proficient technicians rather than visionary artists. The weaknesses often found in AI-generated writing, such as blandness, repetitiveness, or a lack of overarching logical structure, can also limit the perceived artistic merit of the output, challenging the prompter’s claim to full auteurship if they are simply instructing a tool with such limitations.

Furthermore, a significant aspect of the prompter’s role involves navigating the LLM’s potential for bias, inaccuracy, and “hallucinations.” This requires a curatorial-like responsibility to ensure that the information or content presented is sound, ethically considered, and appropriately contextualized. A museum curator has an ethical duty to research, authenticate, and provide accurate context for artifacts. Similarly, an AI prompter, especially in professional or public-facing applications, must critically vet and potentially correct or contextualize AI output to prevent the dissemination of falsehoods or biased information. This positions the prompter as a gatekeeper, quality controller, and interpreter of the AI’s output—all key curatorial functions.  

Despite these compelling parallels, the curator analogy also has its limitations when applied to AI prompting. Traditionally, curators work with existing, often tangible, artifacts or discrete pieces of information. LLMs, however, generate new content, albeit derived from their training data. The question then arises: is the prompter curating “generated” data, or are they more accurately co-creating it? This ambiguity blurs the line between curation and creation.

The “collection” from which an AI prompter “selects” is also fundamentally different from a museum’s holdings. An LLM’s latent space represents a near-infinite realm of potential outputs, not a predefined set of objects. This abundance makes the act of “selection” by prompting a more generative and less bounded process than choosing from a finite collection. The prompter is not merely selecting from a catalog but actively shaping what can be chosen or brought forth by the structure of their prompts. This suggests a more active, co-creative form of curation than traditional models imply, where the collection is dynamic and responsive to the prompter’s interaction.  

It is crucial to acknowledge that the AI prompter’s role is not monolithic; it varies significantly with the level of skill, labor, and “intellectual expression” invested, ranging from a simple user to a highly skilled co-creator. At one end of the spectrum, a user might issue simple, direct instructions to an LLM for a straightforward task, acting more as a client or basic operator. At the other end, a highly skilled individual might engage in complex, iterative dialogues with the AI, meticulously refining prompts and outputs in a process that resembles deep co-creation. This variance directly shapes how the prompter’s role is perceived and classified. A casual user asking an LLM to “write a poem about a cat” is performing a different function than an artist spending weeks crafting and refining prompts to achieve a specific aesthetic for a series of generated images, or a legal expert carefully structuring queries to extract nuanced information for a case.

Structural and Post-Structural Theoretical Challenges to Auteur Theory

Structuralism and its successor, post-structuralism, launched a powerful critique of the traditional notion of “authorship,” particularly challenging the idea of the author as the sole, authoritative source of a text’s meaning. Two of the most influential figures in this critique are Roland Barthes and Michel Foucault.

Roland Barthes, in “The Death of the Author” (1967), proclaimed the “death of the author” as a concept central to literary and textual analysis. He argued that focusing on the author’s biography, intentions, or psychological state to interpret a text is a misguided and limiting practice, and that the author’s life and experiences are irrelevant to the meaning of the work once it is produced. He famously claimed that a text is not a linear expression of an author’s singular thought but rather “a multi-dimensional space in which a variety of writings, none of them original, blend and clash.” A text is a “tissue of quotations drawn from innumerable centers of culture,” meaning it comprises pre-existing linguistic conventions, cultural references, and discourses.

For Barthes, the meaning of a text is not fixed by the author but is produced in the act of reading. The reader is the one who brings together the various strands of meaning in the text. “The birth of the reader must be at the cost of the death of the Author.” This move liberates the text from a single, imposed meaning and opens it up to multiple interpretations. Barthes viewed the author not as a “creator” in the traditional sense, but as a “scriptor” – someone who merely sets down words, drawing from an already existing linguistic and cultural archive.

Michel Foucault’s “What is an Author?” (1969) shared Barthes’ skepticism about traditional authorship but approached the issue from a different angle, focusing on the historical and institutional construction of the “author-function.” His main point was that the “author” is not a natural or timeless entity but a specific function that emerged within certain historical and discursive practices. The author-function is a way of classifying, circulating, authenticating, and assigning meaning to texts within a given society.

The author is not a person, but a discursive principle. The author-function is not equivalent to the individual writer. It’s a set of rules and constraints that govern how we understand and use an author’s name. For instance, the author’s name serves as a mark of ownership (property), a way to hold someone responsible for transgressive statements, and a means to unify a body of diverse works.

Foucault linked the emergence of the author-function to systems of control and power, particularly the rise of copyright law and the need to regulate speech. Attributing a text to an author became a way to police discourse and assign responsibility, especially for subversive or dangerous ideas. Foucault emphasized that the author-function has not always existed or applied equally to all types of texts. For example, scientific texts often gain authority from their content rather than solely from their author, whereas literary texts are more heavily tied to the author’s name.

In summary, Barthes and Foucault challenged the romanticized, humanist view of the author as a solitary genius from whom all meaning emanates. Instead, they argued that meaning is either generated by the reader (Barthes) or shaped by complex social, historical, and institutional forces (Foucault), effectively “decentering” the author from their traditional position of authority. This critique had a profound impact across the humanities, including film studies, by shifting focus from the individual creator to the structures of language, discourse, and reception.

By applying Barthes and Foucault, we can argue that the LLM’s output is not a direct, unmediated expression of the prompter’s “genius” or sole intent. It is a complex interplay of the prompt, the LLM’s massive training data (a vast, unauthored “library” of human discourse), and its internal algorithms.

The “meaning” of AI-generated content is fluid and negotiated between the prompt, the LLM’s “knowledge,” and the audience’s interpretation. The emphasis on the prompter as “author” is often driven by practical, legal, and social needs to assign responsibility and fit new technology into old frameworks, rather than a true reflection of the creative process. The prompter is more accurately a skilled orchestrator of existing information and patterns or a “curator” and “editor” of potential outputs rather than a solitary inventor. This critical lens helps to move beyond a simplistic “human-as-master-of-machine” narrative, acknowledging the distribution of creativity and meaning-making in the age of AI.

Summary and Final Thoughts

As a visual studies and AI professor, I now incorporate generative AI into my teaching, which prompts questions about who the “author” is in AI-generated content. Film studies has traditionally grappled with authorship (the “auteur”), considering whether the director, screenwriter, or others were the primary creative force. Similarly, the meaning of a film could be attributed to the author, the content, or the audience.

The emergence of the AI prompter, the human who crafts instructions for Large Language Models (LLMs), opens a new debate. Is the prompter the “author” of the AI’s output? The post considers applying auteur theory to the prompter, noting similarities such as the prompter’s vision, iterative refinement (prompt engineering), and selection process. It even uses the analogy of a “method actor” for the LLM, guided by the “director” (prompter).

However, the post argues that the auteur analogy could ultimately falter. LLMs, trained on vast human-created datasets, operate as “black boxes” with unpredictable outputs, making it difficult to attribute a singular, controlling vision to the prompter. Legal frameworks still largely require human authorship for copyright. The prompter might often function more like a “metteur en scène” (a competent technician) rather than a visionary auteur.

Instead, the post suggests the prompter’s role might be more akin to a curator, responsible for vetting, correcting, and contextualizing AI output due to the LLM’s potential for bias and “hallucinations.” However, this analogy also has limitations, as prompters generate new content rather than just selecting existing artifacts, making their role a blend of curation and co-creation.

Citation APA (7th Edition)

Pennings, A.J. (2025, Nov 1) The AI Prompter/Conversationalist as Auteur? Examining LLM Authorship Through the Lens of Film Theory. apennings.com https://apennings.com/books-and-films/the-ai-prompter-conversationalist-as-auteur-examining-llm-authorship-through-the-lens-of-film-theory/



© ALL RIGHTS RESERVED

Not to be considered financial advice.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches Visual Rhetoric, AI, and broadband policy. From 2002-2012 he taught digital economics and digital media management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.

Spreadsheet Knowledge and Production of the “Modern Fact”

Posted on | October 25, 2025 | No Comments

Mary Poovey’s A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society is one of my favorite books on the rise of the modern economy. A professor of English and director of the Institute for the History of the Production of Knowledge at New York University, she traced how numerical representation became the preferred vehicle for generating useful “facts” in accounting and other domains of business and financial knowledge.[1]

This post strives to apply Poovey’s insights to the rise of digital spreadsheets and the development of human-machine knowledge by examining how these technologies replicate, extend, and complicate the historical trends she identifies. The digital spreadsheet has been instrumental in shaping modern meaning-making practices by creating a seamless, interactive environment where humans and machines collaborate in the production and interpretation of data. Furthermore, this collaboration has been central to forming a global economy and social organization based on networked spreadsheet rationality.

“Spreadsheet capitalism,” a term mentioned to me by one of my graduate school mentors, refers to an economic system in which financial models, data analysis, and algorithmic decision-making, often facilitated by digital spreadsheets and related software, become the primary drivers of economic activity, social organization, and value creation.[2] Its origins lie in the increasing financialization of economies, the widespread adoption of personal computers and digital tools, and the belief in data-driven and gridmatic objectivity.[3]

Poovey’s book provides a critical background and framework for understanding how certain forms of knowledge became authoritative and seemingly “objective.” She argued that the rise of double-entry bookkeeping and statistical sciences in the early modern period was not merely a technical advancement but a profound epistemological shift. These systems created a new way of seeing and organizing the world through numerical representation, presenting complex realities as quantifiable and manageable facts.

Poovey emphasizes that this process involved a conflation of descriptive accuracy in emerging accounting practices with moral rectitude and truth, giving numerical data an impression of neutrality and unquestionable authority and, in the process, establishing the authenticity of merchant commerce.

Poovey’s analysis of double-entry bookkeeping highlights its role in standardizing economic information, fostering a sense of accuracy and accountability, and enabling the aggregation of individual transactions into a coherent, verifiable whole. The digital spreadsheet, in many ways, is the direct descendant of this legacy, but with exponentially increased power, interactivity, and reach. This power comes through automation, accessibility, the creation of objectivity, and the production of acceptable facts.

Just as double-entry bookkeeping streamlined manual accounting, digital spreadsheets automated formulaic calculations and data organization, making complex financial and statistical analysis accessible to a much broader audience beyond trained accountants. This accessibility has democratized data manipulation, allowing individuals and small businesses to generate reports and models that previously required specialized expertise.

Spreadsheets, like their analog predecessors, present data in a clean, tabular format that visually reinforces a framing of order, precision, and objectivity. The grid structure and instant calculation updates give the impression that the numbers are “just there,” reflecting reality without bias. However, the data entered, the formulas chosen, and the interpretations drawn are still products of human decision and are subject to potential error or bias.

Poovey argues that facts are not simply discovered but are produced through specific methodological and representational choices. Spreadsheets vividly demonstrate this, as users actively construct “facts” by inputting raw data in lists and categories, applying formulas, and structuring tabular information. The “fact” within a spreadsheet is a result of this human-machine collaboration where the spreadsheet is both the receptor and shaper of knowledge.
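
As a rough illustration of this point, the Python sketch below (with made-up figures, not data from the post) mimics the spreadsheet gesture: raw entries are listed, a category is selected, and a formula collapses them into a single headline number. Different category or formula choices would yield a different "fact" from the same raw material.

```python
# A sketch with hypothetical figures showing how a spreadsheet-style "fact" is constructed:
# raw entries are listed, categorized, and collapsed by a chosen formula into one number.

transactions = [
    {"item": "solar cell paste",  "category": "industrial", "amount": 1200.00},
    {"item": "bullion coin",      "category": "investment", "amount": 950.00},
    {"item": "medical dressing",  "category": "industrial", "amount": 310.50},
]

# Choice 1: which rows count as relevant (the list/category decision).
industrial = [t for t in transactions if t["category"] == "industrial"]

# Choice 2: which formula summarizes them (the cell-formula decision).
industrial_total = sum(t["amount"] for t in industrial)

# The resulting "fact" looks objective, but it is a product of both choices above.
print(f"Industrial demand: ${industrial_total:,.2f}")
```

The headline figure appears to be "just there" in the grid, yet it exists only because of the classification and formula the user chose, which is precisely the human-machine co-production of facts described above.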

Summary

This blog post uses Poovey’s (1998) A History of the Modern Fact as a framework for understanding the rise of spreadsheet capitalism. Poovey’s work shows how early modern representational practices such as double-entry bookkeeping created a new kind of knowledge: the quantifiable “fact.” This process gave structured numerical data (lists, tables, cells) an aura of objective truth and moral authority, laying the epistemological groundwork for the modern economy.

In the globalized, financialized economy of today, digital spreadsheets and their more advanced progeny (algorithmic trading platforms such as Bloomberg and Wind, AI-driven analytics) are the direct descendants of what Poovey calls the “modern fact.” They don’t just record economic activity; they actively constitute it. Spreadsheet capitalism operates by taking these quantifiable facts, often collected through networks and generated by highly abstract financial models, and using them as the basis for global economic decisions.

The post emphasizes that spreadsheets are not neutral tools. Instead, they are digital environments where humans and machines collaborate to actively construct facts through the selection of data, its placement, and the formulas applied to it. The post concludes that spreadsheet capitalism is the modern culmination of the historical shift Poovey described. Digital tools don’t just record economic activity; they actively constitute and shape it by generating abstract, quantifiable facts that are fed into formulas and functions to facilitate global economic decisions and capital accumulation.

Citation APA (7th Edition)

Pennings, A.J. (2025, Oct 25) Spreadsheet Knowledge and Production of the “Modern Fact”. apennings.com https://apennings.com/technologies-of-meaning/spreadsheet-knowledge-and-production-of-the-modern-fact/

Notes

[1] Poovey, M. (1998). A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society. University of Chicago Press.
[2] Majid Tehranian, a Harvard-trained political economist and author of Technologies of Power (1992), was a member of both my MA and PhD committees. He guided my MA direction by suggesting that I focus on the deregulation of finance and telecommunications. At some point he mentioned the term “spreadsheet capitalism” and it stuck with me.
[3] The post defines spreadsheet capitalism as an economic system where financial models and algorithmic decision-making, powered by spreadsheets, become the primary drivers of value and social organization. The digital spreadsheet is presented as the direct and far more powerful successor to the bookkeeping systems Poovey analyzed.

© ALL RIGHTS RESERVED

Not to be considered financial advice.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and holds a joint position as a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.

