Remediating the Blurred Lines of Human-AI Collaboration in Disaster Management and Public Safety Communications
Posted on | November 14, 2025 | No Comments
This is a follow-up to my prepared presentation for the Asia-Pacific Economic Cooperation (APEC) meeting on Disaster Leadership, held July 31, 2025, with Saebom Jin from the National AI Research Lab at KAIST. We used media theory to discuss the possibility of “healing” the Common Operating Pictures (COPs) used in disaster-oriented situation rooms and command centers with Artificial Intelligence (AI) and Application Programming Interfaces (APIs). By healing, we drew on the theory of remediation by Bolter and Grusin (2000), which proposes that new media forms integrate older forms to create a more “authentic” version of reality.[1]
Operating Systems (OS) coordinate the flow of applications within a RAM-limited digital environment.[2] They manage the flow of software data the way traffic cops manage the movement of automobiles at a busy intersection. Artificial Intelligence (AI) can function as a sophisticated “operating system” that coordinates APIs, enabling the seamless gathering of multiple streams of data and video to achieve both hypermediation and transparent immediacy in critical information displays like Common Operating Pictures (COPs). This involves AI acting as an orchestration layer, intelligently managing data flow like a maestro and dynamically shaping the user experience.
This post uses remediation theory to offer ideas about the multimediated experience of connecting different data streams and windows on an individual device or a common screen like the COP. Remediation is the process by which new media refashion old media to create a more “authentic” experience. In this case, it “heals” COPs by intelligently merging legacy media (TV, maps, dashboards, spreadsheets) with digital innovations (AI, APIs, streaming video). AI becomes the coordinator and translator of these media, enhancing their functionality and intelligibility.[2]
Drawing on Bolter and Grusin’s theory of remediation, with its two logics of transparent immediacy and hypermediation, we can piece together how AI can function as a next-generation operating system for Common Operating Pictures (COPs) in disaster management and public safety command centers, dashboards, and mobile Personal Operating Pictures (POPs). More importantly, we can see how media can stake a claim on mediated reality. The two logics work together. Transparent immediacy creates live experiences, making the medium “disappear” and enabling a direct, real-time experience. AI auto-selects, filters, and narrates live feeds or alerts for immediate situational awareness (e.g., a live drone feed of a flooded area). Point-of-view (POV) perspective in visual art forms contributes to this experience. This combination enables fast, intuitive decision-making in the control room and the field.
Hypermediation uses multiple windows of statistical and indexical representation (e.g., temperature, wind speed, traffic data), allowing users to see and interact with the complexity of an incident. The AI OS organizes and synchronizes diverse sources (e.g., tweets, GIS layers, 911 calls, camera feeds) into COPs and visual dashboards to help leaders and analysts see patterns, anomalies, and priorities in multi-faceted crises like a hurricane with flooding.
The Critical Role of Common Operating Pictures (COPs) and Dashboards in Disaster Management
In the demanding landscape of disaster management and risk reduction, Common Operating Pictures (COPs) and dashboards stand as hopeful pillars for effective surveillance and response. These sophisticated information systems are central to command centers, providing real-time monitoring and management of complex situations for incident management, emergency response, and the protection of critical infrastructure. Their fundamental utility lies in their ability to aggregate vast amounts of surveillance data and diverse information sources, synthesizing them into a unified, real-time view of ongoing activities and unfolding situations. This comprehensive display is expected to significantly enhance situational awareness for all parties involved, from field responders to strategic leaders.
The strategic value of COPs extends beyond mere data display; they are instrumental in fostering collaborative planning and reducing confusion among multiple agencies operating in a crisis. By providing a consistent, up-to-date view of critical information, COPs enable faster, more informed decision-making across the entire response structure. A prime example of this is the Department of Homeland Security (DHS) Common Operating Picture program, which delivers strategic-level situational awareness for a wide array of events, including natural disasters, public safety incidents, and transportation disruptions. This program facilitates collaborative planning and information sharing across federal, state, local, tribal, and territorial communities, underscoring the vital role of integrated information platforms in national security and public safety.
AI as an Operating System for API-fed Media Management Layers
AI orchestration refers to the coordination and management of data models, systems, and integration across an entire workflow or application. In this context, AI acts as the “maestro” of a technological symphony, integrating and coordinating various AI tools and components to create efficient, automated informational and media workflows.
The role of the AI-OS is to ingest data via API from sources such as satellite feeds, IoT sensors, weather models, traffic systems, and social media chatter. AI uses algorithms and techniques, often from machine learning and neural networks, to process visual data, recognize objects, and analyze scenes. It tags and contextualizes data using NLP, computer vision, and geospatial AI to collect, label, and align data across multiple users. It can also manage narrative flows by creating text summaries, video captions, or alerts that guide public broadcasts or Personal Operating Picture (POP) user interpretation. AI adapts interfaces and adjusts the dashboard to user roles (e.g., responder vs. planner vs. public).
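To make the orchestration idea concrete, below is a minimal sketch, in Python, of an AI layer that ingests API records, tags them, and routes them to role-specific views. The feed names, tags, and roles are hypothetical placeholders, not any particular vendor’s implementation.

```python
# Minimal, illustrative sketch of an AI "orchestration layer": ingest API
# records, tag them, and route them to role-specific views.
# All feed names, tags, and roles are hypothetical placeholders.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Record:
    source: str            # e.g., "weather_api", "drone_rtsp" (invented names)
    payload: dict          # raw data returned by the API
    tags: set = field(default_factory=set)


def tag_record(record: Record) -> Record:
    """Stand-in for NLP / computer-vision / geospatial tagging."""
    if "wind_speed" in record.payload:
        record.tags.add("weather")
    if record.payload.get("geo"):
        record.tags.add("geolocated")
    return record


# Role-based routing: responders see geolocated records, planners see everything.
ROLE_FILTERS: dict[str, Callable[[Record], bool]] = {
    "responder": lambda r: "geolocated" in r.tags,
    "planner": lambda r: True,
}


def route(records: list, role: str) -> list:
    """Tag every record, then keep only those relevant to the given role."""
    tagged = [tag_record(r) for r in records]
    return [r for r in tagged if ROLE_FILTERS[role](r)]


if __name__ == "__main__":
    feed = [Record("weather_api", {"wind_speed": 40, "geo": (27.9, -82.4)})]
    print([r.tags for r in route(feed, "responder")])  # tags: weather, geolocated
```

The point of the sketch is the pattern, ingest, tag, route by role, rather than any specific model or API.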
AI-first API management integrates machine learning (ML), natural language processing (NLP), and predictive analytics to gain deeper insights into performance and usage trends to forecast weather patterns, detect fire and water anomalies in real-time, and automate response governance. This means AI can intelligently manage the flow of information and tasks between different media components, ensuring the right data reaches the right COPs and models at the right time, preventing data bottlenecks and optimizing predictive resource utilization.
AI-coordinated data streams draw in various API-enabled data and video sources to “heal” the COP experience. For instance, real-time streaming (RTSP/drones) provides live visual feeds, real-time object detection, and thermal imagery for immediacy. GIS/map APIs gather hyperreal terrain, zoning, and infrastructure information and then overlay evacuation zones and hazard proximity models that may involve chemical leaks, flooding, traffic, and other factors. Social media APIs draw on the text of public posts and conduct sentiment analysis while geo-tagging locations for search engine optimization (SEO). They also have to factor in panic signals and filter out misinformation from mischievous posts. IoT sensors (MQTT) provide (infra)structural and environmental data that can trigger alerts based on thresholds and can be used in predictive modeling. EMS/911 feeds draw on voice and text emergency dispatches and may require transcription and triage classification for harmful accidents. Additionally, weather/NOAA APIs collect storm forecast data and generate path predictions and risk zone maps.
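To show how such feeds might plug into alerting logic, here is a minimal sketch that maps two hypothetical sources (an IoT river gauge and a weather feed) to simple threshold rules. Real deployments would use learned models and richer governance; the source names, fields, and thresholds below are invented for illustration.

```python
# Illustrative source registry: each API feed is paired with a simple
# threshold rule the AI layer could use to trigger COP alerts.
# Source names, fields, and thresholds are hypothetical.

SOURCES = {
    "iot_river_gauge": {"field": "water_level_m", "threshold": 4.5},
    "noaa_forecast":   {"field": "wind_gust_kmh", "threshold": 120},
}


def check_alerts(readings: dict) -> list:
    """Return alert strings for any reading that crosses its threshold."""
    alerts = []
    for source, rule in SOURCES.items():
        value = readings.get(source, {}).get(rule["field"])
        if value is not None and value >= rule["threshold"]:
            alerts.append(f"{source}: {rule['field']}={value} exceeds {rule['threshold']}")
    return alerts


print(check_alerts({"iot_river_gauge": {"water_level_m": 5.1}}))
```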
AI-powered API integration solutions automate and streamline connections between disparate software platforms, enabling seamless communication and real-time data flow. This eliminates manual data entry, reduces human error, and provides unified access to business-critical data, allowing systems to scale faster and adapt to market demands. AI systems include computational resources, data stores, and data flows/pipelines that transmit data. Data engineers design these pipelines for efficient transfer, reliable quality, and easy access for integration and analysis. AI orchestration platforms automate these workflows, track progress, manage resources, and monitor data flow.
Gathering Multiple Streams of Data and Video – Transparent Immediacy in Practice
In the high-stakes domain of crisis management, transparent immediacy is an indispensable principle for designing intuitive COP and dashboards that facilitate rapid decision-making by minimizing cognitive friction. Real-time data visualization tools are specifically engineered to present complex information in a highly usable manner, enabling decision-makers to quickly grasp the unfolding situation without being distracted by the interface itself. Achieving true immediacy in data delivery is critically dependent on low latency within the underlying network infrastructure, particularly in mission-critical environments where split-second decisions can have profound consequences.[Diffserv]
The integrative interface of AI-assisted media “vanishes” into the COP to provide a multidimensional hyperrealized and contextualized display. It blends multiple inputs (e.g., real-time GPS + bodycam) while maintaining the im-mediated telepresence “touchpoint” to events in the field. It synthesizes spoken alerts from text messages and narrates the changing situation (“A levee breach has been detected 3 miles south of Sector 3”). The result is that decision-makers read the mediated displays as if they are seeing the world with a healed blend of immediate and hyperreal perspectives — without delayed video feeds or digging through massive amounts of raw data.
The principles of transparency and trust are fundamental to effective crisis communication. Providing clear, accurate, and timely updates through dedicated platforms and channels helps to build confidence and establish credibility with affected populations and stakeholders. This approach aligns directly with the human desire for direct, unmediated information. Proactive communication, a commitment to telling the truth, and consistently adhering to factual information are essential strategies for maintaining transparency and regaining control of the narrative during a crisis. These practices mirror the pursuit of immediacy by delivering information that feels direct, honest, and unadulterated, thereby reinforcing public trust.
In the high-stress, information-rich environments of crisis management, operators frequently encounter information overload. The core objective of transparent immediacy is to make the mediating technology disappear from the user’s conscious awareness, thereby allowing direct engagement with the critical information. By meticulously designing COPs and dashboards to include an “interfaceless” quality, the cognitive burden associated with navigating complex interfaces is substantially reduced. This reduction in cognitive friction enables faster assimilation of critical data, expedites the identification of patterns or anomalies, and ultimately leads to more rapid and effective decision-making, which is of paramount importance in emergency response scenarios. The less mental effort expended on understanding the tool, the more attention can be dedicated to understanding the crisis.
While transparent immediacy strives to erase the medium and present information as unmediated reality, the integration of AI introduces a new, inherently complex layer of algorithmic mediation. AI can indeed create the appearance of greater immediacy by providing real-time insights and indexical predictive analytics, seemingly cutting through complexity to deliver direct understanding. However, the internal workings of AI processes — how they learn, process vast datasets, and generate their outputs—are often opaque, frequently referred to as a “black box” problem.
This creates a fundamental paradox: the desire for an “interfaceless” and seemingly unmediated experience directly conflicts with the ethical imperative for transparency and explainability in AI systems. Disaster management leaders must carefully navigate this tension, balancing the undeniable benefits of AI-driven immediacy with the critical need to understand and trust the AI’s “judgments,” particularly when human lives and safety are at stake. This complex challenge may necessitate the development and integration of new forms of “explainable AI” (XAI) within COP interfaces to ensure that accountability and trust are maintained, even as the technology becomes more sophisticated.
Gathering Multiple Streams and Windows of Data and Video – Hypermediation in Practice
When a new AI-assisted digital COP remediates older, perhaps less dynamic, informational sources, it carries an implicit promise of higher fidelity, greater accuracy, and real-time relevance. Fulfilling this promise can significantly enhance comfort and trust in the information presented. For instance, an interactive Geographic Information System (GIS) map that updates in real time is inherently perceived as more reliable and trustworthy than a static, outdated map.
However, the very process of mediation, by transforming data and introducing new digital layers, can also introduce new forms of hyperreality. If these indexical layers are not transparent, or if the transformation process itself is flawed or introduces biases, it could inadvertently undermine the very trust it seeks to build. Therefore, disaster management leaders must ensure that the “improvement” offered by new forms of remediation is genuinely beneficial and does not obscure the underlying data’s provenance, potential limitations, or inherent biases.
This transition demands a critical understanding of the transformative pitfalls and potentials of digital remediation. AI operating systems can call up layered and windowed displays that offer diverse media representations, each with its own source, credibility, and relevance. The system stacks weather maps, traffic flows, shelter capacity, and 911 calls. It shows social sentiment spikes alongside physical sensor alerts. It also tags uncertainties, such as “unverified reports” or “possible false positives.”
The result is that it enables strategic coordination by exposing the full complexity of the crisis landscape. AI’s core strength lies in its ability to ingest, process, and fuse massive volumes of data from diverse sources in real-time. AI then infers from the large datasets that are collected to inform human-based guidance and decision-making.
Multimodal AI systems are designed to integrate and process multiple data types (modalities) such as text, images, audio, and video. By combining these various data modalities, the AI system interprets a more diverse and richer set of information, enabling it to make accurate, human-like predictions and produce contextually aware outputs. This is achieved through multimodal deep learning, neural networks, and fusion techniques that synthesize different data types.
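As a toy illustration of the fusion step, the sketch below combines per-modality severity estimates with fixed weights. Production multimodal systems learn this fusion with neural networks; the scores and weights here are invented.

```python
# Toy "late fusion": each modality produces its own severity score in [0, 1],
# and a weighted combination yields a fused estimate. Weights are made up.

def fuse_severity(text_score: float, image_score: float, sensor_score: float) -> float:
    """Weighted average of per-modality severity estimates."""
    weights = {"text": 0.2, "image": 0.4, "sensor": 0.4}   # hypothetical weights
    fused = (weights["text"] * text_score
             + weights["image"] * image_score
             + weights["sensor"] * sensor_score)
    return round(fused, 3)


# e.g., mild social-media chatter, strong visual evidence, strong sensor signal
print(fuse_severity(text_score=0.3, image_score=0.8, sensor_score=0.9))  # -> 0.74
```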
Real-time video stream analysis draws on AI-powered video intelligence APIs that can recognize over 20,000 objects, places, and actions in both stored and streaming video. They can extract rich metadata at the video, shot, or frame level and provide near real-time insights with streaming video annotation and object-based event triggers. Advanced APIs like Google’s Live API enable low-latency, bidirectional voice and video interactions, allowing for live streaming video input and real-time processing of multimodal input (text, audio, video) to generate text or audio.
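Below is a hedged sketch of the ingestion side of such analysis: sampling frames from a live RTSP stream and passing them to a detector. The cv2.VideoCapture calls are standard OpenCV functions, but detect_objects() and the stream URL are placeholders for whatever vision model or cloud video-intelligence API an agency actually uses.

```python
# Sketch of frame sampling from a live RTSP stream for analysis.
# cv2.VideoCapture is a real OpenCV call; detect_objects() is a placeholder.

import cv2  # pip install opencv-python


def detect_objects(frame):
    """Placeholder: a real system would call a vision model or cloud API here."""
    return []  # e.g., [("vehicle", 0.92), ("flood_water", 0.81)]


def sample_stream(url: str, every_n_frames: int = 30):
    cap = cv2.VideoCapture(url)          # RTSP URL of a drone or fixed camera
    count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if count % every_n_frames == 0:  # roughly one frame per second at 30 fps
            detections = detect_objects(frame)
            if detections:
                print(f"frame {count}: {detections}")
        count += 1
    cap.release()


# sample_stream("rtsp://example.invalid/drone-feed")  # hypothetical URL
```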
AI and knowledge graph capabilities automate data ingestion, preparation, analysis, and reporting, significantly reducing manual tasks. This allows for quickly connecting internal data sources with case-specific data like device dumps, license plate readers (LPRs), financial records, social media, and public records for a comprehensive view.
Coordination for Hypermediation
Hypermediation foregrounds the mediating function, explicitly enhancing the multiplicity of information and exposing the limitations of direct, “unmediated” transparent representation. AI enhances this by intelligently managing and presenting diverse, fragmented data streams in coordinated spreadsheet grids and windows. AI enables hyper-personalization of content by analyzing user behavior and preferences, tailoring content to individual needs on a granular level. This extends to dynamic user interfaces that can adapt based on user theming, curation, and real-time feedback, moving beyond static software to more organic, rapidly changing displays.
AI in OS mode synthesizes and presents complex disaster information in usable, interactive multimodal dashboards that feature multiple windows, dynamic visualizations, and drill-down options. This allows for enhanced interactivity and navigation, enabling users to explore data in depth and filter information based on specific criteria.
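As a small illustration of drill-down interaction, the sketch below filters invented, layered COP records by criteria such as sector or layer type, the kind of narrowing a multi-window dashboard would expose through clicks rather than code.

```python
# Drill-down filtering over layered COP data. The records are invented.

LAYERS = [
    {"layer": "shelters", "sector": 3, "name": "Eastside High", "capacity": 420},
    {"layer": "shelters", "sector": 5, "name": "Civic Center",  "capacity": 800},
    {"layer": "911",      "sector": 3, "type": "water rescue",  "priority": 1},
]


def drill_down(records, **criteria):
    """Return only records matching every keyword criterion, e.g. sector=3."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]


print(drill_down(LAYERS, sector=3))                    # everything in sector 3
print(drill_down(LAYERS, sector=3, layer="shelters"))  # shelters in sector 3
```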
AI acts as the technological enabler for sophisticated hypermediation, allowing for the intelligent management of interconnected media at a scale and speed previously unattainable. It helps transform a potential deluge of data into a coherent and actionable common operating picture by connecting related information from fragmented sources and streamlining complex analyses.
Coordination for Transparent Immediacy
Transparent immediacy aims to make the user “forget the presence of the medium,” fostering a belief in direct, unmediated presence with the represented information. AI contributes to this by gathering and simplifying complex data into clear, actionable insights and enabling seamless, real-time interactions. [1]
AI-powered data visualization transforms complex data into clear, dynamic visuals, identifying patterns and relationships that would take humans hours to uncover. It provides real-time insights, automatically updating visuals as new data flows in, allowing for faster, more informed decisions. This simplifies the complex, distilling mountains of raw data into actionable visuals that can be understood “at a glance”.
AI-enhanced interfaces offer natural language processing, allowing users to ask questions and receive results in clear, interactive charts. By making the mediating technology disappear from conscious awareness, AI-driven COPs reduce the cognitive burden on operators, enabling faster assimilation of critical data and expediting decision-making.
AI can power immersive VR and AR environments that extend traditional 2D displays into a third dimension, enabling novel operations and more intuitive, collaborative interactions. These environments aim to create a shared, real-time, and seemingly unmediated understanding of a crisis, akin to William Gibson’s “consensual hallucination” of cyberspace. AI-powered insights assist in selecting appropriate hardware solutions for these immersive technologies, streamlining their integration.
The AI Black Box Paradox and Ethical Considerations
While AI strives for transparent immediacy by simplifying complexity, it introduces a “black box” problem where the internal workings of AI processes are often opaque. This creates a paradox where the desire for an unmediated experience conflicts with the ethical imperative for transparency and explainability in AI systems. For critical applications like COPs, ensuring data quality, mitigating algorithmic bias, and maintaining human accountability are paramount. The effectiveness and reliability of AI models are directly contingent upon the quality, diversity, and cleanliness of the data on which they are trained, as substandard data can propagate and amplify flaws.
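One widely used explainability technique is permutation importance, which measures how much a model’s predictions degrade when each input feature is shuffled. The sketch below applies it to synthetic “flood risk” data with invented feature names; it illustrates the kind of XAI check a COP interface might surface, not a production pipeline.

```python
# Minimal explainability sketch using permutation importance on synthetic data.
# Feature names and the "flood risk" label are invented for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # rainfall, river_level, tweet_volume
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)    # risk depends on the first two only

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["rainfall", "river_level", "tweet_volume"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")  # tweet_volume should score near zero
```

Surfacing this kind of attribution next to an AI-generated alert is one way to keep the “black box” accountable without abandoning the benefits of immediacy.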
AI acts as a central nervous system for a hypermediated and transparently immediate COP by orchestrating the complex interplay of data streams, video feeds, and user interfaces. It enables real-time data fusion, dynamic content adaptation, and intuitive visualization, but requires careful human oversight to ensure trust, accountability, and ethical deployment. AI as an operating system doesn’t just manage media; it interprets, structures, and presents it as meaning. In the process, it strives to enable leaders and users to move from data overwhelm to narrative clarity.
Concluding Thoughts
The strategic application of media theory concepts, primarily remediation and its logics of transparent immediacy and hypermediation, works in conjunction with advanced AI capabilities. This combination is paramount for optimizing the “authentic,” healed experience of Common Operating Pictures and dashboards in disaster management.
This post has demonstrated how these frameworks provide a critical lens for understanding, designing, and enhancing the information systems and COPs that underpin modern crisis response. From transforming chaotic and static data forms into dynamic visual flows (Remediation) to fostering seamless situational awareness (Transparent Immediacy) and orchestrating complex multi-source indexical and graphical information (Hypermediation), media theory offers a profound guide. AI, in turn, acts as the indispensable engine, the OS enabling these transformations through API real-time data fusion, formulas for predictive analytics, and automated communication displayed on COPs.
The future of crisis response lies in intelligently navigating the increasingly blurred lines of human-AI collaboration. This demands a nuanced understanding of AI as a powerful co-author, operating system, and assistant, one that enhances human capabilities but never fully replaces human accountability and intent. Fostering trust in increasingly mediated information requires unwavering commitment to transparency, data quality, and ethical AI deployment. By consciously integrating theoretical understanding with technological innovation, disaster management leaders can leverage these converged media landscapes to create COPs and dashboards that are not merely displays of data, but dynamic, intelligent platforms capable of shaping perception, informing decisive action, and ultimately building a more resilient and responsive global community in the face of escalating threats.
Citation APA (7th Edition)
Pennings, A.J. (2025, Nov 14) Remediating the Blurred Lines of Human-AI Collaboration in Disaster Management and Public Safety Communications. apennings.com https://apennings.com/crisis-communications/remediating-the-blurred-lines-of-human-ai-collabollaboration-in-disaster-management-and-public-safety-communications/
Notes
[1] Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 2000. They followed up on Marshall McLuhan’s “probe” that the content of any new medium is always an older medium. This means new technologies integrate, repurpose, and refashion older media. McLuhan’s main message was to point to the fundamental change these new forms create in human scale, pace, or pattern. These ideas were primarily expressed in The Mechanical Bride (1951) and Understanding Media (1964).
[2] By RAM-limited digital environment I mean the workspace needed for multiple applications to be supported. “RAM” can also be used as an analogy for a human’s ability to deal with multiple streams of information coming into their perspective.
[3] I used two prompts to address the issues I was thinking about.
Prompt 1. How can AI be an operating system to coordinate APIs gathering multiple streams of data and video for hypermediated and transparent immediacy? Prompt 2. Drawing on the concepts of remediation and its two logics of hypermediation and transparent immediacy, how can AI be an operating system to coordinate APIs gathering multiple streams of data and video for Common Operating Pictures used in Disaster Management and Public Safety?
© ALL RIGHTS RESERVED
Not to be considered financial advice.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: double logic of remediation > Hypermediation > remediation theory > Transparent Immediacy
Taylor’s Fab Rescue: The A16 Chip
Posted on | November 2, 2025 | No Comments
My home in Austin, Texas, is about 25 miles from Samsung’s newly built microprocessor fabrication plant, or “fab.” The fab has been running behind schedule since it switched its plans from 4-nanometer chips to 2-nanometer chips. But Tesla recently signed a deal with Samsung to build its A16 (and maybe A18) chips in Texas for future use in its data centers, EVs, and robots.
While the Taylor location has many advantages, the demands of producing such small devices are numerous. While very few earthquakes occur in Texas, 2nm chip production operates at atomic levels that cannot tolerate vibrations from trains, traffic, or settling of the earth below and near the fab. Unlike the new TSMC fab in Phoenix, Arizona, Samsung had to invent many solutions to problems with electricity supply, ultra-clean water and air, and a workforce not used to 2nm production.
These problems and Samsung’s heroic attempts to overcome them are discussed nicely in this video, including a crucial factor for the fab’s success—a customer to point its development towards. This is why Tesla’s decision to work with Samsung on its A16 chip was a welcome relief for both parties. Samsung locks in a much-needed customer while Tesla avoids competing with Apple, NVIDIA, and a myriad of other companies for TSMC’s chips.
This video explains the situation in more detail.
What will Tesla do with these chips? The internally designed chips will be used for both inference and training in a variety of products, including self-driving systems for its EVs, the Optimus humanoid robot, and AI data centers.
Citation APA (7th Edition)
Pennings, A.J. (2025, Nov 02) Taylor’s Fab Rescue: The A16 Chip. apennings.com https://apennings.com/digital-geography/taylors-fab-rescue-the-a16-chip/
© ALL RIGHTS RESERVED
Not to be considered financial advice.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: 2 nm chips > A16 Chip > fabs > Taylor > Taylor fab > TX
The AI Prompter as Auteur? Examining LLM Authorship Through the Lens of Film Theory
Posted on | November 1, 2025 | No Comments
I’ve been teaching film classes on and off since graduate school. It has never been my primary focus, but the historical and theoretical depth of film studies intrigued me as well as the technologies and techniques that shape thinking and understanding. I incorporate much of it in my EST 240 – Visual Rhetoric and Information Technology class from Stony Brook University, which looks at the camera and editing techniques that persuade and signify in different media.
This year, we added generative AI as a topic to address logo design, photo-graphics, and video synthesis. Here is an example of the basics for a video prompting plan. In this text, I explore the concept of authorship in the age of generative AI, drawing parallels with established theories from film and media studies, particularly auteur theory.
One of the interesting questions in film studies and media studies is authorship, or “auteur,” from its French roots. Who is the “author” of a film? Is it the screenwriter? The producer? The director? How much do the actors contribute to the creative process? A similar query asked where the meaning in the film experience is produced. Is it in the author? In the content or “text”? How about in the audience or the viewer? A related perspective asks how much authorship or meaning is limited or organized by the “genre,” such as a comedy, drama, or science fiction?
The proliferation of Large Language Models (LLMs) capable of generating sophisticated text and imagery has introduced a novel figure into the creative landscape: the AI prompter. This individual, through the crafting of textual instructions, elicits responses from AI systems, thereby participating in the creation of content that often blurs the lines of traditional authorship and creative control. The question is whether a human AI prompter can be considered the “author” of text or imagery produced by an LLM. This discussion has become very relevant in fields such as education, law, and publishing. It invokes debates around creativity, control, and the very definition of authorship, portending friction between emergent AI tools and social institutions.
Or is it more akin to a “curator,” as understood in museum studies —a figure who makes decisions about what to include, what to exclude, and how to narrate the story an exhibition tells? In the realm of AI-generated content, the “prompter” is the individual who crafts the audio, pictorial, or textual instructions (prompts) that guide the LLM to produce an output, be it an essay, a poem, or a digital image. The quality, specificity, and iterative refinement of these prompts significantly influence the final result. Alternative analogies that may capture different facets of the prompter’s interaction with LLMs include the “chef,” the “collaborator,” the “instrumentalist,” or “partner.” But these mostly fall outside the realm of film and media theory and will be mentioned only briefly.
This exploration is not merely an academic exercise; the increasing ubiquity of LLMs in diverse fields, from literary creation to legal analysis, necessitates new conceptual tools to understand this new form of human-machine interaction. In its 2025 reports on copyright and artificial intelligence, the US Copyright Office took the position that the outputs of prompts can be copyrighted only to the extent they reflect sufficient human creative input. The US Constitution recognized early the social value of such arrangements in Article I, Section 8, Clause 8 when it codified the following:
“To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”
The “Copyright Clause” is meant to promote innovation and economic development, and the developers of AI LLMs do not seem to have much incentive to oppose the Copyright Office’s “Copyright and Artificial Intelligence, Part 3: Generative AI Training.” This is not to say that other content creators have agreed, including people whose likenesses have appeared in AI-produced content. The Copyright Office has stated its intention to “monitor developments in technology, case law, and markets, and to offer further assistance to Congress as it considers these issues.”
Understanding Auteur Theory and AI
Originating with film critics in the 1950s, auteur theory posited that the director is the primary creative force behind a film. She or he is the “author” whose personal vision, recurring themes, and distinct stylistic choices are imprinted on their body of work. An auteur like Alfred Hitchcock, Akira Kurosawa, or Steven Spielberg is seen as exerting a high level of control over all aspects of production. Some film theorists argued that this control makes their films recognizable and uniquely their own, regardless of the participation of other collaborators like screenwriters or actors.
American critic Andrew Sarris later popularized and systematized auteur theory, coining the term in his 1962 essay “Notes on the Auteur Theory in 1962” and later elaborating it in his book The American Cinema (1968). He proposed three concentric circles of criteria for evaluating directors: technical competence (the ability to make a well-crafted film), a distinguishable personality (a recurring stylistic signature evident across their films), and interior meaning (arising from the tension between the director’s personality and the material). Interior meaning refers to the often unconscious or ambiguous personal vision, themes, and preoccupations of the director that permeate their films, even when working with diverse material or within the constraints of a studio system. The theory celebrated directors like Alfred Hitchcock, who, despite working within the constraints of the Hollywood studio system, managed to produce distinctive and deeply personal works.[2]
French film critics associated with Cahiers du Cinéma and “la politique des auteurs” championed the film director as the primary artistic force whose personal influence, individual sensibility, and distinct artistic vision could be identified across their body of work. This approach shifted critical attention from studios or screenwriters to the director, viewing them as an artistic creator of the work. Auteur theory includes determining whether directors have a personal vision and style based on a consistent aesthetic and thematic worldview, demonstrated mastery of the medium, recurring themes and motifs, and a willingness to push the boundaries of the medium. Do they maintain significant control or influence over the final product?
A key aspect of the auteur approach was the identification of a particular film style encompassing visual elements, narrative structures, and thematic preoccupations that could be consistently associated with a director, serving as their “authorial signature.” Auteur theory drew a critical distinction between “true auteurs,” who infused their work with a personal vision and artistic depth, and “metteurs en scène,” who were seen as competent but workmanlike directors merely executing the details of a script without a discernible personal stamp. For instance, Michael Curtiz, who won Best Director for Casablanca (1942), was often placed in the latter category, while Nicholas Ray, known for Rebel Without a Cause (1955), starring James Dean, was celebrated as an auteur.
The “method actor” analogy reinforces the idea that the prompter is a director who guides the LLM, which can be treated as an actor. The LLM “acts out” a specific role or persona to solve problems or generate content. By “casting” the LLM in a role (e.g., “You are a historian…”) and providing a “script” (the detailed prompt), the prompter can elicit more structured, context-aware, and human-like responses. This framework emphasizes the prompter’s role in setting the scene, defining the character, and guiding the performance.
A skilled prompter often has a specific vision for the desired output. They use carefully chosen words, structures, and iterative refinements (prompt engineering) to steer the AI towards this vision. This can involve defining style, tone, subject matter, and even attempting to evoke specific emotions or ideas, much like a director outlines a scene.
Experienced prompters can develop recognizable patterns or approaches to prompting that yield particular types of results from specific AI models. They learn the nuances of the AI and how to elicit desired aesthetics or textual qualities. Prompting is rarely a one-shot command. It often involves a back-and-forth conversation, a process of trial, error, and refinement, akin to a director working through multiple takes or editing choices. Even if the AI generates multiple options, the prompter often makes the final selection, curating the output that best aligns with their initial intent. This act of selection can be seen as a creative choice.
However, the US Copyright Office notes that AI models operate with a degree of unpredictability, and it has concluded that content produced with AI can be copyrighted only when it reflects sufficient human authorship.
Key concerns of an auteur theory include determining who is the primary creative force, whether they have a personal vision and style based on a consistent aesthetic and thematic worldview, demonstrated mastery of the medium, recurring themes and motifs, and a willingness to push the boundaries of the medium. Finally, do they maintain significant control or influence over the final product?
The US Copyright Office’s stance that AI-generated works can be copyrighted only with substantial human input further complicates this relationship.
However, current legal and creative consensus, notably highlighted by bodies like the US Copyright Office, generally holds that AI-generated works are not copyrightable unless there is substantial human creative input beyond mere prompting. The reasoning is that even detailed prompts can lead to unpredictable outputs from the AI, meaning the prompter may not have the same level of direct, granular control over the final work as a traditional artist. The AI model itself, with its complex algorithms and vast training data, plays a significant, if not primary, role in the generation process. The LLM’s output is inherently shaped by the massive datasets it was trained on, a factor far beyond the prompter’s control and introducing a vast external influence on the “style” and content.
By “carefully crafting prompts,” the user provides the model with context, instructions, and examples that help it understand the prompter’s intent and respond meaningfully. This includes setting clear goals, defining the desired length and format, specifying the target audience, and using action verbs to specify the desired action. Such detailed instruction suggests a high level of intentionality on the part of the prompter, aiming to steer the AI towards a preconceived vision.
Arguments supporting a significant authorial role for the prompter often emerge from discussions about writers’ creative practices with AI. Studies indicate that writers utilize AI to overcome creative blocks, generate starting points, and then actively shape the AI’s output into something they consider useful, thereby maintaining a sense of ownership and control over the creative process. This active shaping and refinement, driven by the writer’s “authenticity” and desire for “ownership,” can be seen as analogous to an auteur’s imposition of their vision onto the raw materials of filmmaking.
The Auteur Analogy Falters
Despite these points of correspondence, applying the auteur analogy to the AI prompter faces significant challenges. LLMs’ construction complicates the notion of a singular, controlling vision. These models are trained on vast datasets of existing human-created text and images and function by “mimicking human writing.” This training design makes it difficult to disentangle the prompter’s “pure” vision from the inherent capabilities, biases, and stylistic tendencies embedded within the LLM’s architecture and training data. AI may not be a neutral tool but an active, albeit non-conscious, participant in the generation process.
This trajectory leads to the “black box” problem. The prompter rarely has full transparency or control over the internal workings of the LLM. While a film director ideally orchestrates various human and technical elements (cast, crew, script, camera), the prompter interacts with a system whose decision-making processes are often opaque. The output can sometimes be unpredictable, with LLMs even known to “hallucinate” or generate unexpected results, challenging the idea of complete authorial control.
Intellectual property law presents another major hurdle. Current legal frameworks, particularly in jurisdictions like the United States, generally require human authorship for copyright protection. AI, as it stands, blurs the lines between authorship, ownership, and originality. If the generated work is not copyrightable in the prompter’s name alone, or if the AI’s contribution is deemed substantial enough to negate sole human authorship, the prompter’s status as an “auteur” in a legally recognized sense may be undermined.
The debate over prompter-as-auteur is thus deeply intertwined with these evolving legal definitions. The lawsuits involving creators like Scarlett Johansson and organizations like Getty Images and The New York Times against AI companies for using copyrighted material in training data further complicate the picture, as the very foundation upon which the LLM generates content is itself a site of contested authorship.
Moreover, many instances of AI prompting might more accurately align with the role of the metteur en scène rather than the true auteur. A prompter might be highly skilled in eliciting specific outputs from an LLM, demonstrating technical competence. However, they may be seen as proficient technicians rather than visionary artists without a consistent, distinguishable personal style, thematic depth, or “interior meaning” traceable across a body of their AI-generated works. The inherent weaknesses often found in AI-generated writing—such as blandness, repetitiveness, or a lack of overarching logical structure — can also limit the perceived artistic merit of the output, thereby challenging the prompter’s claim to full auteurship if they are simply instructing a tool with such limitations.
Furthermore, a significant aspect of the prompter’s role involves navigating the LLM’s potential for bias, inaccuracy, and “hallucinations.” This requires a curatorial-like responsibility to ensure that the information or content presented is sound, ethically considered, and appropriately contextualized. A museum curator has an ethical duty to research, authenticate, and provide accurate context for artifacts. Similarly, an AI prompter, especially in professional or public-facing applications, must critically vet and potentially correct or contextualize AI output to prevent the dissemination of falsehoods or biased information. This positions the prompter as a gatekeeper, quality controller, and interpreter of the AI’s output—all key curatorial functions.
Despite these compelling parallels, the curator analogy also has its limitations when applied to AI prompting. Traditionally, curators work with existing, often tangible, artifacts or discrete pieces of information. LLMs, however, generate new content, albeit derived from their training data. The question then arises: is the prompter curating “generated” data, or are they more accurately co-creating it? This ambiguity blurs the line between curation and creation.
The “collection” from which an AI prompter “selects” is also fundamentally different from a museum’s holdings. An LLM’s latent space represents a near-infinite realm of potential outputs, not a predefined set of objects. This abundance makes the act of “selection” by prompting a more generative and less bounded process than choosing from a finite collection. The prompter is not merely selecting from a catalog but actively shaping what can be chosen or brought forth by the structure of their prompts. This suggests a more active, co-creative form of curation than traditional models imply, where the collection is dynamic and responsive to the prompter’s interaction.
Acknowledging that the AI prompter’s role is not monolithic is crucial; it varies significantly based on the level of skill, labor, and “intellectual expression” invested, ranging from a simple user to a highly skilled co-creator. At one end of the spectrum, a user might issue simple, direct instructions to an LLM for a straightforward task, acting more as a client or basic operator. On the other hand, a highly skilled individual might engage in complex, iterative dialogues with AI, meticulously refining prompts and outputs in a process that resembles deep co-creation. The level of skill and labor or “intellectual expression” invested by the prompter can differ dramatically, and this variance directly impacts how their role is perceived and classified. A casual user asking an LLM to “write a poem about a cat” is performing a different function than an artist spending weeks crafting and refining prompts to achieve a specific aesthetic for a series of generated images or a legal expert carefully structuring queries to extract nuanced information for a case.
Structural and Post-Structural Theoretical Challenges to Auteur Theory
Structuralism and its successor, post-structuralism, launched a powerful critique of the traditional notion of “authorship,” particularly challenging the idea of the author as the sole, authoritative source of a text’s meaning. Two of the most influential figures in this critique are Roland Barthes and Michel Foucault. Roland Barthes, in “The Death of the Author” (1967), argued for the “death of the author” as a key concept relevant to literary and textual analysis. His key arguments included that focusing on the author’s biography, intentions, or psychological state to interpret a text is a misguided and limiting practice. He believed that the author’s life and experiences are irrelevant to the meaning of the work once it is produced. He famously claimed that a text is not a linear expression of an author’s singular thought but rather “a multi-dimensional space in which a variety of writings, none of them original, blend and clash.” A text is a “tissue of quotations drawn from innumerable centers of culture,” meaning it comprises pre-existing linguistic conventions, cultural references, and discourses.
For Barthes, the meaning of a text is not fixed by the author but is produced in the act of reading. The reader is the one who brings together the various strands of meaning in the text. “The birth of the reader must be at the cost of the death of the Author.” This move liberates the text from a single, imposed meaning and opens it up to multiple interpretations. Barthes viewed the author not as a “creator” in the traditional sense, but as a “scriptor” – someone who merely sets down words, drawing from an already existing linguistic and cultural archive.
Michel Foucault’s “What is an Author?” (1969) shared Barthes’ skepticism about traditional authorship but approached the issue from a different angle, focusing on the historical and institutional construction of the “author-function.” His main point was that the “author” is not a natural or timeless entity but a specific function that emerged within certain historical and discursive practices. The author-function is a way of classifying, circulating, authenticating, and assigning meaning to texts within a given society.
The author is not a person, but a discursive principle. The author-function is not equivalent to the individual writer. It’s a set of rules and constraints that govern how we understand and use an author’s name. For instance, the author’s name serves as a mark of ownership (property), a way to hold someone responsible for transgressive statements, and a means to unify a body of diverse works.
Foucault linked the emergence of the author-function to systems of control and power, particularly the rise of copyright law and the need to regulate speech. Attributing a text to an author became a way to police discourse and assign responsibility, especially for subversive or dangerous ideas. Foucault emphasized that the author-function has not always existed or applied equally to all types of texts. For example, scientific texts often gain authority from their content rather than solely from their author, whereas literary texts are more heavily tied to the author’s name.
In summary, Barthes and Foucault challenged the romanticized, humanist view of the author as a solitary genius from whom all meaning emanates. Instead, they argued that meaning is either generated by the reader (Barthes) or shaped by complex social, historical, and institutional forces (Foucault), effectively “decentering” the author from their traditional position of authority. This critique had a profound impact across the humanities, including film studies, by shifting focus from the individual creator to the structures of language, discourse, and reception.
By applying Barthes and Foucault, we can argue that the LLM’s output is not a direct, unmediated expression of the prompter’s “genius” or sole intent. It is a complex interplay of the prompt, the LLM’s massive training data (a vast, unauthored “library” of human discourse), and its internal algorithms.
The “meaning” of AI-generated content is fluid and negotiated between the prompt, the LLM’s “knowledge,” and the audience’s interpretation. The emphasis on the prompter as “author” is often driven by practical, legal, and social needs to assign responsibility and fit new technology into old frameworks, rather than a true reflection of the creative process. The prompter is more accurately a skilled orchestrator of existing information and patterns or a “curator” and “editor” of potential outputs rather than a solitary inventor. This critical lens helps to move beyond a simplistic “human-as-master-of-machine” narrative, acknowledging the distribution of creativity and meaning-making in the age of AI.
Summary and Final Thoughts
As a visual studies and AI professor, I now incorporate generative AI into my teaching, which prompts questions about who the “author” is in AI-generated content. Traditionally, film studies grappled with authorship (the “auteur”), considering whether the director, screenwriter, or others were the primary creative force. Similarly, the meaning of a film could be attributed to the author, the content, or the audience.
The emergence of the AI prompter—the human who crafts instructions for Large Language Models (LLMs)—creates a new debate. Is the prompter the “author” of the AI’s output? The text considers applying auteur theory to the prompter, noting similarities like the prompter’s vision, iterative refinement (prompt engineering), and selection process. It even uses the analogy of a “method actor” for the LLM, guided by the “director” (prompter).
However, the post argues that the auteur analogy could ultimately falter. LLMs, trained on vast human-created datasets, operate as “black boxes” with unpredictable outputs, making it difficult to attribute a singular, controlling vision to the prompter. Legal frameworks still largely require human authorship for copyright. The prompter might often function more like a “metteur en scène” (a competent technician) rather than a visionary auteur.
Instead, the post suggests the prompter’s role might be more akin to a curator, responsible for vetting, correcting, and contextualizing AI output due to the LLM’s potential for bias and “hallucinations.” However, this analogy also has limitations, as prompters generate new content rather than just selecting existing artifacts, making their role a blend of curation and co-creation.
Citation APA (7th Edition)
Pennings, A.J. (2025, Nov 1) The AI Prompter/Conversationalist as Auteur? Examining LLM Authorship Through the Lens of Film Theory. apennings.com https://apennings.com/books-and-films/the-ai-prompter-conversationalist-as-auteur-examining-llm-authorship-through-the-lens-of-film-theory/
Notes
[1] The notion of the prompter is being challenged because LLM systems can store conversations and responses, making the interaction more conversational than a one-off instruction.
[2] Okwuowulu, Charles. (2016, May). Auteur Theory and Mise-en-scène Construction: A Study of Selected Nollywood Directors.
[3] U.S. Copyright Office. (2025, May). Copyright and Artificial Intelligence, Part 3: Generative AI Training (pre-publication version). A Report of the Register of Copyrights.
© ALL RIGHTS RESERVED
Not to be considered financial advice.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches Visual Rhetoric, AI, and broadband policy. From 2002-2012 he taught digital economics and digital media management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: Auteur > Auteur Theory > author-function > Barthes > Cahiers du Cinéma > LLMs
Spreadsheet Knowledge and Production of the “Modern Fact”
Posted on | October 25, 2025 | No Comments
Mary Poovey’s A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society is one of my favorite books on the rise of the modern economy. A professor of English and director of the Institute for the History of the Production of Knowledge at New York University, she pursued how numerical representation became the preferred vehicle for generating useful “facts” in accounting and other business and financial knowledge.[1]
This post strives to apply Poovey’s insights to the rise of digital spreadsheets and the development of human-machine knowledge by examining how these technologies replicate, extend, and complicate the historical trends she identifies. The digital spreadsheet has been instrumental in shaping modern meaning-making practices by creating a seamless, interactive environment where humans and machines collaborate in the production and interpretation of data. Furthermore, this collaboration has been instrumental in forming a global economy and social organization based on networked spreadsheet rationality.
“Spreadsheet capitalism” is a term mentioned to me by one of my mentors in graduate school and refers to an economic system in which financial models, data analysis, and algorithmic decision-making, often facilitated by digital spreadsheets and related software, become the primary drivers of economic activity, social organization, and value creation.[2] Its origins lie in the increasing financialization of economies, the widespread adoption of personal computers and digital tools, and the belief in data-driven and gridmatic objectivity.[3]
Poovey’s book provides a critical background and framework for understanding how certain forms of knowledge became authoritative and seemingly “objective.” She argued that the rise of double-entry bookkeeping and statistical sciences in the early modern period was not merely a technical advancement but a profound epistemological shift. These systems created a new way of seeing and organizing the world through numerical representation, presenting complex realities as quantifiable and manageable facts.
Poovey emphasizes that this process involved a conflation of descriptive accuracy in emerging accounting practices with moral rectitude and truth, giving numerical data an impression of neutrality and unquestionable authority, and in the process establishing the authenticity of merchant commerce.
Poovey’s analysis of double-entry bookkeeping highlights its role in standardizing economic information, fostering a sense of accuracy and accountability, and enabling the aggregation of individual transactions into a coherent, verifiable whole. The digital spreadsheet, in many ways, is the direct descendant of this legacy, but with exponentially increased power, interactivity, and reach. This power comes through automation, accessibility, the creation of objectivity, and the production of acceptable facts.
Just as double-entry bookkeeping streamlined manual accounting, digital spreadsheets automated formulaic calculations and data organization, making complex financial and statistical analysis accessible to a much broader audience beyond trained accountants. This accessibility has democratized data manipulation, allowing individuals and small businesses to generate reports and models that previously required specialized expertise.
Spreadsheets, like their analog predecessors, present data in a clean, tabular format that visually reinforces a framing of order, precision, and objectivity. The grid structure and instant calculation updates give the impression that the numbers are “just there,” reflecting reality without bias. However, the data entered, the formulas chosen, and the interpretations drawn are still products of human decision and are subject to potential error or bias.
Poovey argues that facts are not simply discovered but are produced through specific methodological and representational choices. Spreadsheets vividly demonstrate this, as users actively construct “facts” by inputting raw data in lists and categories, applying formulas, and structuring tabular information. The “fact” within a spreadsheet is a result of this human-machine collaboration where the spreadsheet is both the receptor and shaper of knowledge.
Summary
This blog post uses Poovey’s (1998) A History of the Modern Fact as a framework for understanding the rise of spreadsheet capitalism. It explains that Poovey’s work shows how early modern representational practices such as double-entry bookkeeping created a new kind of knowledge: the quantifiable “fact.” This process gave numerical data structured as lists, tables, and cells an aura of objective truth and moral authority, laying the epistemological groundwork for the modern economy.
In the globalized, financialized economy of today, digital spreadsheets and their more advanced progeny (trading and analytics platforms such as Bloomberg and Wind, AI-driven analytics) are the direct descendants of what Poovey calls the “modern fact.” They don’t just record economic activity; they actively constitute it. Spreadsheet capitalism operates by taking these quantifiable facts, often collected through networks and generated by highly abstract financial models, and using them as the basis for global economic decisions.
The post emphasizes that spreadsheets are not neutral tools. Instead, they are digital environments where humans and machines collaborate to actively construct facts through the selection and use of data, placement, and formulas. The post concludes that spreadsheet capitalism is the modern culmination of the historical shift Poovey described. Digital tools don’t just record economic activity; they actively constitute and shape it by generating abstract, quantifiable facts that are used in formulas and functions to facilitate global economic decisions and capital accumulation.
Citation APA (7th Edition)
Pennings, A.J. (2025, Oct 25) Spreadsheet Knowledge and Production of the “Modern Fact”. apennings.com https://apennings.com/technologies-of-meaning/spreadsheet-knowledge-and-production-of-the-modern-fact/
Notes
[1] Poovey, M. (1998). A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society. University of Chicago Press.
[2] Majid Tehranian, a Harvard-trained political economist and author of Technologies of Power (1992), was a member of both my MA and PhD committees. He guided my MA direction by suggesting that I focus on the deregulation of finance and telecommunications. At some point he mentioned the term “spreadsheet capitalism,” and it stuck with me.
[3] The post defines spreadsheet capitalism as an economic system where financial models and algorithmic decision-making, powered by spreadsheets, become the primary drivers of value and social organization. The digital spreadsheet is presented as the direct and far more powerful successor to the bookkeeping systems Poovey analyzed.
© ALL RIGHTS RESERVED
Not to be considered financial advice.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and holds a joint position as a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: A History of the Modern Fact: Problems of Knowledge in the Sciences of Wealth and Society > Double-entry bookkeeping > lists > Mary Poovey > Spreadsheets
Telecom Synchronization in Spreadsheet Capitalism
Posted on | October 12, 2025 | No Comments
For the last four years, I have been teaching a broadband course that explores the Internet layers (Application, Transport, Network, Link, and Physical). This deep understanding helped shape a conceptualization of the telecom synchronization logic in the context of global spreadsheet capitalism.[1]
This post investigates the telecom synchronization logic by connecting infrastructural power with the actual stack of global financial telecommunications. It shows how spreadsheet capitalism’s logic operates through three stratified but interdependent layers: physical, network, and value-added — the telecommunicative architecture of global monetary coordination.
Telecom synchronization is the third and most infrastructural logic of spreadsheet capitalism. If semiotic substitution abstracts value into symbols, and symbolic computation turns those symbols into executable models, then telecom synchronization ensures that those models and values move together across the globe — instantly, verifiably, and continuously. Telecom synchronization binds operations together across space and time.
In spreadsheet capitalism, telecom synchronization refers to the global grids of simultaneity that allow markets, models, and machines to operate as one coordinated system. It is the infrastructural condition that makes symbolic computation real-time. These grids have a physical layer, extending from undersea cables, microwave transmission towers, orbiting satellites, and fiber networks through a network layer of digital protocols such as TCP/IP, DNS, and HTTP to a value-added network of financial terminals and now distributed ledgers.
Layers of Coordination and Power
In practice, this logic is stratified across three interconnected layers that together form the operating system of global finance.
The Physical Layer — The Substrate of Connection
This is the material infrastructure through which synchronization becomes possible: fiber-optic cables, satellites, microwave towers, undersea conduits, and connected data centers. These are owned or maintained by global telecommunications firms such as AT&T, Verizon, Orange, NTT, and China Telecom. These systems form the chronometric skeleton of the financial world by enabling the split-second transmission of trades, clearing signals, and pricing updates.
In Foucauldian terms, this layer provides the “conditions of possibility” for simultaneity — the way capital escapes locality by inhabiting an infrastructural present tense. At this layer, power is infrastructural: whoever controls the bandwidth, the latency, and the data sovereignty controls the temporal regime of capital. In spreadsheet capitalism, this physical layer is the hidden foundation of the grid — the reason all the world’s ledgers can appear on one screen, showing the price of USD, updated in real time.[2]
The Network Layer — Routing and Synchronizing Data
Above the physical substrate sits the network layer, managed by Internet Service Providers (ISPs) regionally, and global backbone operators connected via Internet Exchange Points (IXPs) and Tier 1 ISPs like AT&T, Level 3, NTT, and Google.
This layer functions to packetize, route, and synchronize data flows among nodes of global finance. It connects trading hubs, cloud services, regulatory servers, and devices such as financial terminals, as well as applications on smartphones and other devices.
The network layer governs the movement of data rather than its meaning. It ensures that spreadsheets, ledgers, and blockchains remain temporally aligned across jurisdictions. Protocols such as TCP/IP, Domain Name System (DNS), Network Time Protocol (NTP), and Border Gateway Protocol (BGP) provide the universal syntactic grammar that enables different financial platforms (Bloomberg, Aladdin, Wind, LSEG Workspace) to interoperate worldwide, yet within the same time-space continuum.
This layer provides the governance of circulation, not content — the ability to manage systems by regulating flows rather than commanding actors. The spreadsheet here becomes a live, networked entity — each cell potentially referencing a live data feed through an API. Capital thus lives in continuous synchronization, a recursive loop between computation and transmission.
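To make the idea of a live, networked spreadsheet concrete, here is a minimal Python sketch of a “cell” whose value is pulled from a market-data API rather than typed in by hand. The endpoint URL, the response field, and the ticker are illustrative assumptions, not a reference to any particular vendor’s feed:

# A minimal sketch of a "live" spreadsheet cell: the value is not a static
# number but a function that pulls from a (hypothetical) market-data API and
# recomputes whenever the sheet refreshes.
import requests  # assumes the requests library is installed

FEED_URL = "https://api.example-marketdata.com/v1/quote"  # hypothetical endpoint

def live_cell(symbol: str) -> float:
    """Return the latest price for `symbol` from the feed (illustrative only)."""
    resp = requests.get(FEED_URL, params={"symbol": symbol}, timeout=5)
    resp.raise_for_status()
    return float(resp.json()["last_price"])  # assumed response field

def notional_exposure(symbol: str, quantity: float) -> float:
    """A dependent 'cell' that recomputes from the live value, like a formula."""
    return quantity * live_cell(symbol)

if __name__ == "__main__":
    print(notional_exposure("XAUUSD", 100))  # e.g., 100 oz of gold at the live price

The point of the sketch is the recursive loop described above: the cell is no longer a record of a past value but a standing query against the network.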
The Value-Added Layer — Financial Messaging and Settlement Systems
The uppermost layer translates telecom synchronization into monetary order. Here, specialized financial organizations provide the value-added services that transform raw data into authorized transactions.
Key institutions at the value-added layer include:
SWIFT (Society for Worldwide Interbank Financial Telecommunication) provides messaging and authentication of international payments.
CHIPS (Clearing House Interbank Payments System) is for large-value US dollar settlements.
Fedwire (Federal Reserve Wire Network) is a real-time gross settlement system run by the Federal Reserve for US institutions, including the US Treasury.
CIPS (China International Payment System) is Beijing’s cross-border yuan settlement alternative to SWIFT.
TARGET2, SEPA, and Euroclear: European equivalents for euro-denominated transfers and securities clearing.
These systems are not simply utilities — they define the protocols of trust, verification, and sequencing that make digital money “real.” They establish the symbolic grammar of payment — deciding what counts as a legitimate transaction, whose time counts as real time, and which currencies synchronize as reserve standards. In this sense, telecom synchronization culminates in governance. It is the control of the networked grid and thus becomes control of financial temporality, settlement order, and geopolitical hierarchy.[4]
Synchronizing the Rhythms of Global Valuation
Telecommunications thus performs three epistemic functions:
Coordination — it aligns calculations across nodes, ensuring that a pricing model in London, a trading algorithm in New York, and a blockchain validation in Shanghai share a synchronized temporal reference. This simultaneity creates the illusion of a single, continuous global market.
Verification — it secures the legitimacy of symbolic operations by time-stamping and broadcasting them. Every transaction, from currency swaps to smart contracts, becomes part of a synchronized ledger of truth — a techno-semiotic archive (a minimal sketch of such time-stamping follows this list).
Control — it enables governance at a distance, a cybernetic governmentality.[3] Power flows through feedback loops: dashboards, APIs, and spreadsheet terminals that monitor, compare, and optimize in real time.
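As a rough illustration of the verification function, the Python sketch below shows how time-stamping and hash-chaining make a sequence of transactions tamper-evident. It is purely hypothetical and not modeled on SWIFT, Fedwire, or any production ledger; the payload fields are invented for the example:

# Each entry is time-stamped and chained to the previous entry's hash, so any
# attempt to reorder or alter history breaks the chain.
import hashlib, json, time

def append_entry(ledger: list, payload: dict) -> dict:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "timestamp": time.time(),   # synchronized clocks (e.g., via NTP) assumed
        "payload": payload,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
append_entry(ledger, {"type": "fx_swap", "pair": "USD/KRW", "notional": 1_000_000})
append_entry(ledger, {"type": "settlement", "system": "CHIPS", "amount": 250_000})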
Telecom synchronization transforms the grid from a visual metaphor into a world-machine — a planetary spreadsheet where the economy operates as a continuously updated database. Under this condition, spreadsheet capitalism becomes chrono-political. It governs temporality through synchronization. Whoever controls the timing, bandwidth, and data flow — from Aladdin’s real-time risk dashboards to SWIFT and CIPS settlement protocols — controls the rhythm of global valuation itself.
Thus, telecom synchronization completes the logic of spreadsheet capitalism as the infrastructure of coordination and the medium of epistemic power. It is where information becomes circulation, and circulation becomes control.
Conclusion
Telecom synchronization is not just about speed or efficiency; it is about the production of a global temporal order. Through these three layers, spreadsheet capitalism achieves its totalizing coherence: a world where the financial grid and the communications grid are one and the same.
The physical layer makes connection possible.
The network layer maintains synchronization.
The value-added layer defines legitimacy and trust, and enables settlement and trading.
Together, they transform calculation into circulation and circulation into power — completing the operational unity of semiotic substitution, symbolic computation, and telecom synchronization.
In the next part of this analysis of telecom synchronization, I will explore the integration of blockchain and the crypto environment. It will show how telecom synchronization bifurcates into two interdependent temporalities. One is the institutionalized telecom grid that is centralized, high-speed, and hierarchically trusted. The other is the distributed grid — decentralized, cryptographically secured, and publicly auditable.
Citation APA (7th Edition)
Pennings, A.J. (2025, Oct 12) Telecom Synchronization in Spreadsheet Capitalism. apennings.com https://apennings.com/technologies-of-meaning/telecom-synchronization-in-spreadsheet-capitalism/
Notes
[1] As an undergraduate, I did an internship with the Pacific Telecommunications Council (PTC) in Honolulu that gave me a pretty good understanding of telecommunications companies. That experience, combined with self-study, graduate classes at the University of Hawaii, and a senior thesis on ISDN, allowed me to add a Telecommunications major to my degree.
[2] This past summer, I took Michel Foucault’s The Order of Things on a trip through Italy where we got engaged 25 years ago. Relaxing and reading on the beaches of Tropea and quick looks while hiking in the Dolomites helped me formulate an overall conception of the three logics of power in financial grids.
[3] Cybernetic governmentality is a term I used in my dissertation, Symbolic Economics and the Politics of Global Cyberspaces (1993), for one of my chapters. It refers to techniques of governance, primarily through information technology, that go back to the development of political arithmetic and “state-istics.”
[4] Financial institutions like banks and investment firms are considered Content Providers that use CDNs to deliver secure and fast content to customers. They use CDNs for their websites, mobile apps, and trading platforms, which require high security and low latency for services like account information, market data, and transaction processing. Providers like Cloudflare, Akamai, and Amazon CloudFront are often used by financial services due to their robust security and performance features.
© ALL RIGHTS RESERVED
Not to be considered financial advice.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: AT&T > China Telecom > CHIPS > CIPS > Fedwire > Internet layers > NTT > SWIFT > TARGET2
Will BRICS Effectively Tokenize Rare Earth Elements to Back a New Currency?
Posted on | October 8, 2025 | No Comments
Rose Mason, writing on Medium, makes a compelling plea for the tokenization of Rare Earth Elements (REEs), which are currently in high demand around the world. From another angle, Cyrus Janssen presents an intriguing but ultimately flawed argument that the BRICS+ countries will use REEs to back a new currency to challenge the US dollar.[1] This post examines the potential for financializing REEs and situates them in the Substitution – Symbolic Computing – Telecom Grid (SCT) Stack.
Several articles are examined in this post to gauge the potential to tokenize Rare Earth Elements (REEs), framing it as a key battleground in the “spreadsheet capitalism” framework and the BRICS bloc’s challenge to the US dollar. The post analyzes the promise of financial tokenization, the BRICS strategy, and the significant practical barriers that make REEs poor candidates for effective financialization compared to assets like gold.
The post first outlines the theoretical benefits of tokenizing REEs: making a strategically vital but illiquid asset class accessible to global investors, improving transparency through blockchain, and channeling capital towards sustainable mining. It then analyzes the BRICS strategy, suggesting the bloc aims to leverage its control over REE reserves to create a new, commodity-backed financial system. This system would use tokenized REEs as a semiotic and computational anchor, creating an alternative to the USD-based grid by using its own payment channels (like China’s CIPS) and exchanges.
However, the post’s core argument is that this vision will likely struggle due to fundamental aspects of REEs. Unlike standardized gold, rare earths are not “fungible”: a token for “neodymium” is meaningless without specifying its exact purity and form. Furthermore, the REE market is opaque, lacking the transparent, global price benchmarks needed for the computational formulas that drive modern finance.
The post concludes that while specific batches of REEs might be tokenized for niche industrial purposes, their physical and geopolitical complexity prevents them from being effectively abstracted into the simple, liquid symbols required to function as a major global asset. REEs remain stubbornly tied to the real world, resisting the clean logic of spreadsheet capitalism.
The post takes a critical look at tokenizing rare earth elements that promise to bring liquidity to a strategically vital but currently inaccessible asset class. A major problem is that REEs are physically and industrially bound to specific geographies (notably China, Brazil, and Africa) and often priced through opaque or bilateral contracts. Their physicality resists the liquidity and standardization demanded by global capital markets.
Historically, REEs have been coordinated and transacted by governments, state-owned enterprises (SOEs), large corporations, and very specialized traders. Tokenization hopes to break this barrier and give alternate investors exposure to these assets. For a manufacturer here in Korea, like Samsung or Hyundai, a token representing “neodymium” could be a powerful tool for hedging against price volatility. For investors, it could offer a new way to speculate on the green energy transition.
For BRICS, tokenization may provide non-dollar financing and avoid Western sanctions. REE tokens embody “material sovereignty,” representing tangible backing for new currencies. But what are rare earths, and what are the limitations to their financialization?
What are Rare Earths?
Rare Earths are a set of seventeen metallic elements integral to a wide range of modern mechanisms, from consumer electronics to advanced military hardware. Although called “rare,” they are relatively abundant but difficult to find in economically viable concentrations for mining.
Rare Earth Elements (REEs) are roughly divided into two categories based on their position in chemistry’s periodic table. The heavy rare earth elements (HREEs) are a subgroup of the rare earth family characterized by their higher atomic numbers and greater atomic weights. Known as the “Heavies,” Dysprosium (Dy), Terbium (Tb), and Yttrium (Y) are critical for modern defense applications and the green energy revolution.
The Heavier Rare Earths Dysprosium (Dy) and Terbium (Tb) serve several purposes. They are vital additives to neodymium magnets, as they enable them to retain their powerful magnetic properties at the extremely high temperatures found inside an electric vehicle motor or a wind turbine generator. Yttrium (Y) is used as a red phosphor in older CRT displays and some modern LEDs. It’s also a critical component in certain high-performance lasers and medical devices.
Other heavy metals include Gadolinium (Gd), which has unique magnetic properties that make it critical as a contrast agent in MRI scans, helping to produce clearer images of internal organs and tissues. Erbium (Er) is a key element in modern telecommunications. It is used to create optical amplifiers for fiber-optic cables, boosting the data signal as it travels over long distances without requiring conversion back into electricity.
The “Lights” are the more common rare earths, often used in magnets, catalysts, and glass. Neodymium (Nd) is the most important rare earth element, critical for creating the world’s strongest permanent magnets (NdFeB magnets). These are used in everything from the tiny motors that make your smartphone vibrate to the giant electric motors in EVs (like a Tesla Model 3) and the generators in wind turbines.
Other “Lights” include Praseodymium (Pr), which enhances high-power magnets by improving their heat resistance when combined with neodymium, while also creating a yellow hue in glass and ceramics. Lanthanum (La) enhances optical clarity in camera lenses and telescopes. It is also a crucial component in nickel-metal hydride batteries used in hybrid vehicles, such as the Toyota Prius. Cerium (Ce) serves major industrial roles, acting as a primary catalyst in automotive catalytic converters to reduce emissions and as a polishing agent for manufacturing glass screens and lenses.
The Promise of Tokenizing Rare Earth Elements
Returning to Ms. Mason, as a blockchain consultant, she argues that tokenizing rare earth minerals promises to transform them from a hidden, industrial ingredient into a dynamic and accessible financial asset. The narrative behind this push is one of modernization and democratization, built on five key benefits.
The first benefit is the promise to shatter the barriers of an exclusive market. Historically, investing in rare earths was a privilege reserved for governments, large corporations, and specialized investors. Tokenization aims to change this by offering everyday investors a chance to gain exposure to this critical asset class, making it globally accessible. This newfound accessibility is paired with a solution to the age-old problem of liquidity. Instead of the slow, cumbersome process of trading physical commodities, tokens enable a swift, 24/7 global market, allowing investors to move in and out of positions with ease.
Furthermore, this new market is to be built on a technological foundation of trust. By utilizing blockchain technology, every transaction is recorded on a transparent and immutable ledger. This move promises to drastically reduce the risk of fraud and create a more trustworthy supply chain. This newfound visibility makes rare earth tokens an attractive tool for portfolio diversification, offering an alternative asset that can perform differently from traditional stocks and other financial instruments during times of market volatility.
Perhaps most compellingly, the tokenization of rare earths is framed as a way to align profit with planetary goals. As demand for these minerals soars, driven by the green energy transition and the rise of electric vehicles, tokenization provides a direct channel for global capital to fund sustainable mining operations and renewable energy projects. This new ability allows investors to directly support and benefit from a more sustainable economy.
The BRICS Challenge
Cyrus Janssen’s argument that the BRICS plan to replace the US dollar with REE tokens is based on the 2025 Moscow Financial Forum held in mid-September. Participants announced plans for a dedicated precious metals exchange that would allow countries to settle payments in gold, diamonds, platinum, and rare earth minerals. It would bypass Western systems like the London Metal Exchange (LME), which currently establishes most prices for critical commodities. After the Ukraine war started, that system excluded Russia, even though Russia had recently become the fifth-largest gold holder worldwide.
The YouTube video argues that this new infrastructure is underpinned by the bloc’s dominance over strategic global resources. Janssen stresses that Brazil currently produces nearly all of the world’s niobium supply. BRICS countries also have significant amounts of gold. This level of resource control, he argues, is the foundation for a new system that moves away from the US’s fiat currency toward an REE-backed monetary standard.
The Watcher.guru article is more informative. It backs the claim that a viable tokenization market for rare earth elements (REEs) is planned as part of a BRICS+ strategy to create a commodity-backed, blockchain-based exchange system. Proponents of the political bloc cheer on its efforts to replace the US dollar in international trade by leveraging control over critical resources — gold, rare earths, energy, and food — to build a new pricing and settlement architecture.
The proposed mechanism would merge resource monetization (turning metals into tradable digital assets) with BRICS’ payment innovations, such as using China’s CIPS instead of SWIFT, together with blockchain infrastructure and gold-based exchange rates. The aim is to anchor a BRICS currency system in tokenized REE commodities rather than fiat trust.
This strategy is reportedly gaining attention amid a steep decline in US dollar usage, which the article notes has fallen to its lowest share of global reserves since 2000 (58%), with 68% of global trade now conducted without the dollar. The framework is said to be particularly appealing to emerging economies, especially in Africa, which see the new exchange as a way to leverage their own resource projects and escape the political influence tied to the Western financial system.
The article concludes that by combining direct control over critical commodities with an independent payment and exchange infrastructure, the BRICS bloc is creating a direct and increasingly plausible challenge aimed at systematically replacing the US dollar’s global hegemony with a new, resource-backed financial order.
However, will rare earth metals effectively tokenize on the SCT Stack of spreadsheet capitalism? The practical realities of the REE market clash with the clean abstractions required by the SCT stack. Will blockchains prove sufficient in creating a new infrastructure for REE tokenization that can back a new currency?
Major Barriers to Effective Tokenization in the SCT Stack
Rare earth metals will not tokenize as effectively as other assets like gold within the spreadsheet capitalism framework. The main reasons are their fundamental lack of fungibility and market transparency. Fungibility is the property of goods or assets whereby individual units are interchangeable and indistinguishable. Unlike gold, where one bar is a near-perfect substitute for another, each batch of rare earths is a unique industrial ingredient. This physical complexity resists the radical simplification needed to create a clean, tradable digital symbol. These weaknesses create a flawed semiotic substitution, making REEs difficult to represent as the simple, standardized symbols the system requires.
The first step of the stack, turning a real-world asset into a digital symbol, fails at a basic level for REEs. Gold is highly standardized. A token like PAXG can represent a claim on one fine troy ounce of a London Good Delivery gold bar, a globally accepted standard. Rare earths are not standardized. A token for “one kilogram of neodymium” is a meaningless symbol without specifying its purity (e.g., 99.5% vs. 99.99%), its form (oxide, metal, or alloy), and its origin.
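A short sketch can make the fungibility gap concrete. The Python data classes below are illustrative only (the class and field names are my own, not any exchange’s schema): a gold token needs little more than a weight field, while an honest REE token must carry purity, chemical form, and origin, and two such tokens are rarely interchangeable.

# Illustrative contrast between a standardizable gold claim and a
# non-standardizable rare-earth claim.
from dataclasses import dataclass

@dataclass(frozen=True)
class GoldTokenSpec:
    troy_ounces: float        # e.g., 1.0 -> a claim on one London Good Delivery ounce

@dataclass(frozen=True)
class REETokenSpec:
    element: str              # e.g., "Nd" (neodymium)
    kilograms: float
    purity_pct: float         # 99.5 vs 99.99 changes the industrial use entirely
    form: str                 # "oxide", "metal", or "alloy"
    origin: str               # provenance matters for sanctions and supply chains

def fungible(a: REETokenSpec, b: REETokenSpec) -> bool:
    """Two REE specs are interchangeable only if every attribute matches."""
    return a == b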
Spreadsheet capitalism thrives on a single, globally synchronized price that can be fed into computational formulas. The gold market has the XAUUSD, a real-time price stream from the COMEX and LBMA. The rare earths market has no such indicators. Prices are opaque and determined by private, bilateral contracts between a few major suppliers (primarily in China) and industrial buyers.
There is no reliable, liquid, global benchmark price for a REE token to peg itself to. This makes it incredibly difficult to use the token in the computational stack for risk modeling, derivatives pricing, or collateral valuation. A token is only as good as the asset backing it. The gold market has a mature, trusted, and highly audited network of vaults (e.g., in London, New York, and Zurich). The infrastructure for storing and verifying large quantities of REEs for the benefit of token holders simply does not exist on a similar scale. Establishing this trusted custodial layer would be a massive undertaking, especially given the geopolitical concentration of the supply chain.
Tokenization and Blockchain in the SCT Stack
The arguments for tokenizing REEs sit at the intersection of the spreadsheet capitalism framework and the possibilities of the emerging tokenization of resource value. Below is a detailed analysis of the articles’ claim for a viable tokenization market for rare earth elements, framed through the three logics of Substitution/Abstraction, Symbolic Computing, and Telecom Grid Synchronization — and their geopolitical-economic implications for de-dollarization.
This analysis challenges the claims that a viable tokenization market for rare earth elements (REEs) will successfully emerge as part of a BRICS+ strategy to create a commodity-backed, blockchain-based exchange system in the near term. The plan to replace the US dollar in international trade by leveraging control over critical resources such as gold, rare earths, energy, and food to establish a new pricing and settlement architecture will face numerous challenges.
The proposed mechanism to merge resource monetization (turning metals into tradable digital assets) with payment innovation, such as China’s CIPS instead of SWIFT, aims to anchor a BRICS currency system in tokenized commodities rather than fiat trust. Tokenization represents a shift from fiat-denominated computational pricing to resource-denominated signification.
In the current dollar-based system, metals are priced in USD units, substituting their material value through the symbolic power of the dollar grid (Bloomberg, LSEG, Aladdin). The proposed BRICS system seeks to invert this substitution with digital tokens that would directly represent fractions of physical metals or reserves (e.g., 1 REE token = 1 kg neodymium stored in Angola). This would convert matter into sign, but without passing through the dollar — a new layer of semiotic substitution detached from US spreadsheet infrastructures.
Hence, REE tokenization is not just about digitization — it’s a semiotic rebellion, replacing the dollar as the global unit of account with resource-tied symbols recorded on BRICS-led blockchain exchanges. The viability of this market depends on whether tokenized REEs can be abstracted into comparable, liquid financial instruments:
Blockchain tokens offer fungibility and fractionalization, allowing them to represent micro-ownership in rare earth deposits, making these assets tradable on digital exchanges. This transition allows integration into risk models and cross-asset portfolios, similar to how ETFs abstract physical gold into digital gold.
However, liquidity, verification, and pricing transparency remain key obstacles. Substitution and abstraction require standardization across jurisdictions, audits, and trading rules — the very functions that spreadsheet platforms (Bloomberg, LSEG) currently monopolize.
Thus, the challenge for BRICS+ is to build alternative abstraction infrastructures, such as transparent ledgers and decentralized oracles, which substitute for Western data terminals.
Symbolic computing is the performative layer of finance that involves modeling, pricing, and hedging. It is where tokenized REEs would gain traction or fail. Once tokenized, REE assets can enter computational environments like risk models, smart contracts, and DeFi-style derivative markets. These environments could compute prices via algorithmic market-making, automated yield, or resource-backed lending using smart contracts.
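To ground the phrase “algorithmic market-making,” here is a hedged Python sketch of a constant-product pool (the x * y = k rule popularized by DeFi exchanges) quoting a hypothetical neodymium token against a stablecoin. The reserves, fee, and token are assumptions for illustration, not a description of any existing BRICS or DeFi venue:

# Constant-product market maker: the pool quotes prices from its reserves.
ree_reserve = 10_000.0       # hypothetical Nd tokens in the pool
stable_reserve = 500_000.0   # stablecoin units in the pool

def buy_ree(stable_in: float, fee: float = 0.003) -> float:
    """Return REE tokens received for `stable_in` stablecoins under x*y=k."""
    global ree_reserve, stable_reserve
    k = ree_reserve * stable_reserve          # invariant before the trade
    effective_in = stable_in * (1 - fee)      # the fee stays in the pool
    new_ree = k / (stable_reserve + effective_in)
    out = ree_reserve - new_ree
    ree_reserve = new_ree
    stable_reserve += stable_in               # full amount (incl. fee) deposited
    return out

print("Spot price:", stable_reserve / ree_reserve)      # ~50 stablecoins per token
print("Tokens for 1,000 stablecoins:", buy_ree(1_000))  # slightly under 20, due to slippage and fee

The design choice worth noting is that no human market maker sets the price; the formula itself performs the pricing, which is exactly what “computing prices via algorithmic market-making” means here.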
Symbolic computation combines with substitution to transform natural resources into programmable collateral. It integrates geopolitics into code. In spreadsheet capitalism terms, the REE token becomes a computable symbol, allowing new forms of liquidity and leverage. The question is, will it be within the alternative BRICS grid, and not the USD-based symbolic regime?
The current USD regime faces challenges from the emerging BRICS+ token regime. The pricing infrastructure of the London Bullion Market and COMEX, displayed on USD-denominated tickers, is challenged by the Shanghai Gold Exchange and the proposed BRICS Precious Metals Exchange. At the computational layer, terminals like Bloomberg, LSEG, and Aladdin (USD risk models) could be supplemented or replaced by BRICS exchange APIs and smart contract oracles. Telecom synchronization provided by SWIFT, Fedwire, CLS, and T2 is challenged by China’s CIPS and new blockchain settlement layers. And the dollar’s role as unit of account would be challenged by resource- or gold-backed tokens.
In sum, the semiotic anchor of “dollar liquidity” is challenged by “resource transparency.” This transition amounts to a shift in the semiotic-computational-telecom stack — from dollar-based spreadsheets to distributed ledgers as the new computation and synchronization substrate.
Strategic Implications of REE Tokenization
The article implies that REEs could become prime candidates for early tokenization for several reasons. One is resource concentration: BRICS+ countries control some 72% of reserves, enabling monopoly-like coordination. Strategic demand for REEs from the EV, semiconductor, and defense industries ensures long-term value. The political incentive is that tokenization provides non-dollar financing and avoids Western sanctions. The symbolic appeal is that REE tokens embody “material sovereignty” — representing tangible backing for possible new currencies from domestic sources.
However, for tokenization to function at scale, pricing transparency, custodial assurance, and convertibility mechanisms must rival the computing and synchronization of Western spreadsheet terminals. Otherwise, the REE token risks being symbolic without liquidity — an “unsettled” sign in the global financial grammar.
Integration into Spreadsheet Capitalism
If realized, REE token markets would appear within Aladdin, Bloomberg, and Wind terminals as synthetic tickers such as:
Nd_TKN, Co_TKN, Nb_TKN — linked to blockchain APIs.
Real-time feeds would be pulled via formulas such as =BDP(“REE_TKN”,”LAST_PRICE”).
In cross-hedge models, REE tokens would be correlated with gold, oil, and Treasury yields, and in risk modules, tokenized REEs would enter global portfolios for diversification. Lastly, yield analytics would integrate them into DeFi-linked or BRICS exchange-backed smart contracts. Thus, even as tokenization claims autonomy from the USD, its representation and computation would still occur through spreadsheet logic — the universal language of substitution and abstraction.
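The Python sketch below suggests how such hypothetical tickers might be folded into a cross-hedge check. Everything here is simulated: the ticker names, the price paths, and the fetch function are stand-ins for a terminal formula like =BDP or a blockchain API call, not real data sources.

# Simulated cross-hedge check: correlation of a hypothetical REE token with gold.
import numpy as np

def fetch_daily_closes(ticker: str, days: int = 30) -> np.ndarray:
    """Stand-in for a terminal or blockchain-API lookup; generates a fake price path."""
    rng = np.random.default_rng(hash(ticker) % 2**32)
    return 100 * np.cumprod(1 + rng.normal(0, 0.01, days))

nd_tkn = fetch_daily_closes("Nd_TKN")   # hypothetical neodymium token
xau = fetch_daily_closes("XAUUSD")      # gold benchmark

nd_returns = np.diff(np.log(nd_tkn))
xau_returns = np.diff(np.log(xau))
print("Correlation with gold:", np.corrcoef(nd_returns, xau_returns)[0, 1])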
Synthesis and Conclusion
This post ultimately envisions a tokenized commodity standard where semiotic substitution (token), symbolic computation (smart contract), and telecom synchronization (blockchain ledger) merge and add a new spreadsheet layer, with and beyond the US dollar.
In this system, rare earth tokens become both symbols and settlement instruments. Blockchain ledgers enter the grids of computation and coordination while AI-driven analytics and terminals (like Wind or Aladdin) price, hedge, and optimize across these new semiotic surfaces.
For the near future, tokenization will provide a niche tool rather than a global asset class. While specific, standardized batches of rare earths could be tokenized for supply chain tracking or to collateralize a specific loan between industrial partners, they are unlikely to become a liquid, globally traded asset class like tokenized gold. The very physical and geopolitical complexities that make rare earths strategically critical are what prevent them from being effectively abstracted into the fungible, placeless symbols of spreadsheet capitalism. They remain stubbornly tied to the physical world.
The operational grammar of symbolic computing is how spreadsheet capitalism expresses financial meaning as formulas and returns. When this is applied to tokenized assets — i.e., digital representations of securities, commodities, or currencies living on blockchain or managed through APIs — the symbolic layer translates blockchain state into spreadsheet logic.
But the paradox remains. While tokenization decentralizes value representation, symbolic computation re-centralizes it in global grids. Whoever controls the grid of valuation models will control the next monetary order. The USD’s centrality equals control over pricing, modeling, and messaging. BRICS’ challenge is to create a parallel semiotic and computational infrastructure built on tokenization that global finance can compute with. Whoever defines the physical custodial layer (e.g., Chinese gold vaults in Saudi Arabia) and the symbolic grammar of tokenized commodities will be in a good place to define the new world order.
Notes
[1] This was written a few days before China announced new licensing restrictions on REEs that caused a significant drop in financial indexes and understandable furor in the White House.
Citation APA (7th Edition)
Pennings, A.J. (2025, Oct 8) Will BRICS Effectively Tokenize Rare Earth Elements to Back a New Currency? apennings.com https://apennings.com/dystopian-economies/will-brics-effectively-tokenize-rare-earth-elements-to-back-a-new-currency/
© ALL RIGHTS RESERVED
Not to be considered financial advice. LLMs were used in parts of this post.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI policy and broadband economics. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: BRICS > fungibility > Rare earth elements (REEs) > SCT Stack
Tokenization of Gold in Blockchained Spreadsheet Capitalism
Posted on | October 5, 2025 | No Comments
When I was teaching my Macroeconomics course at New York University (NYU), we would often go down to Wall Street and deep (80 feet below street level) into the vaults at the Federal Reserve Bank. Over 6,000 tons of gold bullion were stored there from countries around the world. If no one was looking you could put your finger through the mesh fence and touch a few gold bars.[1]
This post describes how the tokenization of gold on a blockchain, representing ownership rights, operates in global spreadsheet capitalism. Within its core logics of semiotic substitution, symbolic computation, and telecommunications grid synchronization (the SCT stack), gold bullion becomes a tradable token represented in a spreadsheet cell. It then becomes a computable variable (risk metrics, yield structures, and algorithmic trading inputs), and finally, synchronized data in the global financial grid. Real-time ledger entries of tokenized gold harmonize over time and distance through connected financial terminals.[2]
Tokenization is emerging as a major strategic trend (not just a niche experiment) that’s likely to reshape financial markets. It will reconfigure how many financial instruments are used, priced, and accessed. Below I lay out the mechanism, plausible pathways, and likely effects on gold.
Tokenization transforms ownership claims and cash-like instruments into programmable, fractionable, and 24/7 tradable tokens. It makes previously illiquid real-world assets (RWAs) indexable and fungible in ledger-based markets. This innovation is not merely new tech; it changes market structure (settlement, custody, market-making), product design (fractionalized real estate, tokenized treasuries, tokenized funds), and distribution (global retail access).
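As a minimal sketch of what “programmable and fractionable” means in code (illustrative only, with no real chain or custodian behind it), consider a token ledger where a vaulted holding can be split into arbitrarily small claims and moved by a rule enforced in software:

# A toy token ledger: decimal balances, transfers enforced by code.
from decimal import Decimal

balances = {"custodian": Decimal("1000")}   # e.g., 1,000 oz vaulted and tokenized

def transfer(sender: str, receiver: str, amount: Decimal) -> None:
    """Move a (possibly fractional) claim between holders, if funds suffice."""
    if balances.get(sender, Decimal(0)) < amount:
        raise ValueError("insufficient balance")
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, Decimal(0)) + amount

transfer("custodian", "retail_investor", Decimal("0.001"))  # 1/1000th of an ounce
print(balances)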
Market evidence and institutional surveys (2024–2025) show rapid growth in tokenized RWAs, stepped-up institutional pilots (tokenized treasuries, funds, tokenized cash/stablecoins), and consultancy roadmaps that treat 2025 as an inflection period when tokenization starts a major growth trend.
Semiotic Substitution
Gold has always lived between the material and the symbolic (a shiny metal, a bar in a vault, a futures contract, a line on a central bank’s balance sheet). Tokenization intensifies this dual life.
The SCT stack begins with semiotic substitution. The physical gold bar is abstracted into the universal ticker XAUUSD. This symbol represents abstract ideals — safety, enduring value, an inflation hedge, and a non-sovereign store of wealth. In tokenization, a bar of gold stored in a vault is represented on-chain as a digital token (e.g., “1 PAXG = 1 fine troy ounce of gold in custodian X’s vault”). The heavy, immobile metal is substituted by a portable, tradeable signifier that facilitates purchases and transactional exchanges. Gold, traditionally divisible only with difficulty, can now be fractioned into decimalized tokens (0.001 token = 0.001 oz), widening participation and enhancing its substitutability.
In Bloomberg, Aladdin, and Wind terminals, a “GoldToken” would appear as just another asset row with ticker, price, volatility, custody field. The token displaces bullion-as-object with a signifier that can circulate in the spreadsheet logic alongside equities, bonds, and derivatives.
To make the semiotic substitution especially vivid, here is a side-by-side comparison of how physical gold versus tokenized gold would appear and function within the fields of a modern financial spreadsheet. The table below illustrates how tokenization completes the abstraction of gold, transforming it from a tangible object with real-world constraints into a liquid, placeless symbol in the global grid.
[Table: physical gold vs. tokenized gold compared across spreadsheet fields such as ticker, price, volatility, and custody.]
Symbolic Computability of Gold-as-Token
Symbolic computing in spreadsheet capitalism transforms blockchain’s distributed ledgers into centrally abstracted calculation spaces. Even as tokens decentralize ownership, their meaning is reinscribed through formulas like Value-at-Risk (VaR), the Sharpe ratio, and discounted cash flow (DCF). This activity reintegrates them into the semiotic–computational telecom grid of the USD-dominated spreadsheet world.
Once gold is represented as blockchain tokens, it becomes programmable — available for smart contracts, algorithmic trading, collateralization, and automated settlement routines. Portfolio software in Aladdin and other terminals (MSCI risk engines, etc.) treats tokenized gold as a computationally tractable input, meaning standard algorithms can efficiently compute its volatility, correlation with the USD, VaR, and stress-test scenarios. The system doesn’t “see” gold bars; it only sees the token’s price feed and contract logic.
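A brief Python sketch shows what “computationally tractable” looks like in practice: once the token exists as a price series, textbook volatility and Value-at-Risk formulas apply to it directly. The price path below is simulated and the numbers are illustrative, not market data.

# Standard risk metrics applied to a simulated token price series.
import numpy as np

rng = np.random.default_rng(42)
token_prices = 2400 * np.cumprod(1 + rng.normal(0, 0.012, 252))  # one year of daily prices

log_returns = np.diff(np.log(token_prices))
annual_vol = log_returns.std(ddof=1) * np.sqrt(252)

# One-day parametric (Gaussian) Value-at-Risk at 99% for a $1m position
position = 1_000_000
var_99 = 2.326 * log_returns.std(ddof=1) * position

print(f"Annualized volatility: {annual_vol:.1%}")
print(f"1-day 99% VaR on $1m: ${var_99:,.0f}")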
Derivatives and structured computational products define “gold.” Symbolic computing layers tokenized gold into DeFi protocols (yield-bearing vaults, tokenized swaps) or institutional products (ETFs that wrap tokenized holdings). The computational layer abstracts gold from its material scarcity into formulas, models, and recursive instruments.
Telecommunications Grid Synchronization
Blockchains are synchronized ledgers where identical copies of the transaction record are stored on many computers (nodes). They are automatically updated to maintain a single, consistent version of the truth across the network. Tokenized gold trading relies on the blockchain’s global, time-stamped ledger. This distributed spreadsheet synchronizes ownership claims across jurisdictions. Every transfer is logged throughout the shared, machine-readable “grid” of nodes.
Gold tokenization is integrated with terminals leased to traders and researchers by Bloomberg, LSEG, and Wind. They ingest and display their feeds and synchronize them with other market data streams. Price ticks, custody updates, and compliance flags flow into the same tabular interfaces that already synchronize FX, equities, and bonds.
The telecom grid facilitates a 24/7 liquidity layer. Unlike futures markets that close, blockchain-based gold tokens synchronize continuously, aligning with the always-on rhythm of digital networks. This real-time synchronicity remediates gold’s status as a slow, heavy, “ancient” money into the tempo of spreadsheet capitalism’s high-frequency grid.
Putting It Together
In spreadsheet capitalism, tokenized gold serves as a substitute for the financialized metal. Vaulted bullion becomes a tradable token represented in a spreadsheet cell. Gold becomes a computable variable (risk metrics, yield structures, algorithmic trading inputs) and a synchronized signal in the global financial grid. Real-time ledger entries harmonize with financial terminals like the Bloomberg Box and portfolio dashboards provided by BlackRock’s Aladdin.
Thus, tokenization pulls gold fully into the abstract, programmable, and globally synchronized order of spreadsheet capitalism, where its ancient materiality (bars in vaults, central bank storage) is absorbed into the logic of substitution, computation, and synchronized grids.
Citation APA (7th Edition)
Pennings, A.J. (2025, Oct 05) Tokenization of Gold in Blockchained Spreadsheet Capitalism. apennings.com https://apennings.com/technologies-of-meaning/tokenization-of-gold-in-blockchained-spreadsheet-capitalism/
Notes
[1] I really wanted to see the Open Market Operations (OMO) traders who bought and sold US securities but the gold was interesting, and even more so a decade later when its value more than doubled.
[2] For a more explicit analysis of the political economy of gold, see Pennings, A.J. (2018, Nov 11). From Gold to G-20: Flexible Currency Rates and Global Power. apennings.com https://apennings.com/how-it-came-to-rule-the-world/digital-monetarism/from-gold-to-g-5-flexible-currency-rates-and-global-power/
© ALL RIGHTS RESERVED
Not to be considered financial advice. LLMs used in researching parts of this post.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: blockchain > gold > spreadsheet capitalism > Tokenization
The International Politics of Domain Name (DNS) Governance, Part 3: ICANN and AI
Posted on | September 22, 2025 | No Comments
As of September 2025, the international politics of the Domain Name System (DNS) have evolved into a high-stakes battle for control over the Internet’s foundational address book. The era of viewing the DNS as a purely technical utility is over. Today, it is a central arena where the competing interests of governments seeking control, corporations protecting valuable brands, and a lucrative domain registration market all collide.
In this post, I continue to examine DNS developments during the few short decades since the Internet was created, particularly primary debates over Internet governance, and even struggles over the balance of power in the digital world. I started the discussion with a post on the early development of the DNS system when it could be handled by one person, Jon Postel, during the early days of the ARPANET.
As Daniel W. Drezner pointed out in his All Politics Is Global: Explaining International Regulatory Regimes (2008), the Domain Name System (DNS) is a crucial technological resource that must be effectively managed worldwide. Drezner raised three concerns:
– Governments and corporations are acquiring the capability to control access to the Internet and specific services
– DNS management is essential for maintaining the trademarks of organizations such as samsung.com and tesla.com
– Registering Domain Names can generate a lot of money. Where does it go? [1]
A core conflict is the clash between two fundamentally different philosophies of Internet governance. Multi-stakeholderism, the original model championed by the US and its allies, holds that a consensus among technical experts, academics, corporations, and civil society should govern the Internet. The Internet Corporation for Assigned Names and Numbers (ICANN) embodies this model.
Digital sovereignty is the alternative model. This state-centric model, pushed by nations like China and Russia, argues that a country has the right to control the Internet, including the DNS, within its own borders, just as it controls physical territory.
The Internet Assigned Numbers Authority (IANA) emerged informally in the 1970s as a set of technical functions managed by Postel and others. It maintained control of the Internet’s core address book through a direct contract with the United States government. It wasn’t a formal organization but managed domain names, IP addresses, and protocol numbers that were crucial for the network to operate as a single, interoperable system. In the 1990s, the Clinton-Gore administration recognized the emerging problems and created ICANN, the Internet Corporation for Assigned Names and Numbers, although overall management still remained with IANA.
In March 2014, the Obama administration asked ICANN to convene the Internet’s global multistakeholder community and come up with a new system for managing the Domain Name System (DNS), explicitly transitioning the oversight of specific key Internet functions away from the US government. This process gathered academics, civil society, governments, individual users, and technical experts to come up with ideas to replace the NTIA’s historic stewardship role.[2]
In 2016, the US government officially transitioned oversight of the Internet Assigned Numbers Authority (IANA) functions to ICANN. The transition was a landmark event that marked the end of the US government’s direct, formal oversight role over the DNS root zone. However, it did not create a fully privatized system, nor did it eliminate government influence. The reassignment simply transformed and globalized it.
Before 2016, the US Department of Commerce’s National Telecommunications and Information Administration (NTIA) held the contract for the IANA functions. This agreement meant the US government had the final, formal sign-off on any changes to the DNS root zone file, the authoritative master list of all top-level domains. While evidence of abuse was never established, its existence was a central point of political contention, giving the US a unique position of ultimate authority.
The 2016 transition let this contract expire. This change officially ended the US government’s unilateral oversight. The direct, “keys to the kingdom” role was replaced by a system where accountability flows to a global, multi-stakeholder community. The US moved from being the system’s overseer to being one of its most influential participants.
The transition did not create a “fully privatized” structure. The goal was not to sell the DNS to the private sector but to cement the multi-stakeholder model of governance and ward off authoritarian control. ICANN is a non-profit public-benefit corporation, not a for-profit company.
This model represents a unique global governance structure that allows different groups to have a voice in the decision-making process. This structure included the technical community, such as engineers and academics, who built the Internet’s infrastructure. Corporations (like Samsung here in Korea) that rely on the DNS for their brand and operations have important input. Civil society, including non-commercial users and public interest groups also participates. Lastly, nation-states with an interest in public policy and security participate. The system is designed so that no single entity, whether a company or a government, can capture or control the DNS.
While direct US control has diminished, government influence is still a powerful force within ICANN through the Governmental Advisory Committee (GAC). The GAC is the formal channel through which over 170 nations, including the US, China, Russia, and South Korea, provide advice to the ICANN Board on public policy matters.
The GAC’s advice is technically non-binding, although ICANN’s bylaws require the board to formally address and justify any decision that goes against it. In practice, the GAC holds significant sway, ensuring that government perspectives on issues like security, sovereignty, and law enforcement are deeply integrated into the DNS management process.
Therefore, the transition did not remove governments from the equation; it shifted the dynamic from unilateral US oversight to formalized, multilateral government influence within the broader multi-stakeholder community.
This overarching conflict is playing out across several key political battlegrounds. One is the rise of National DNS Firewalls and “Splinternets.” This development is the most direct manifestation of government control. Increasingly, nations are mandating that Internet Service Providers (ISPs) within their borders use state-managed DNS resolvers. These resolvers act as a national firewall, allowing the government to block access to specific domain names associated with foreign news outlets, opposition movements, or social media platforms.
China’s Great Firewall is a notable example, but Russia’s efforts to create a “sovereign internet” (RuNet) that can be functionally disconnected from the global DNS root represent the ultimate goal of this movement. This trend is creating a fragmented Internet, or “splinternet,” where a user’s access to information is determined by their geographic location, directly challenging the idea of a single, global network.
The Geopolitics of ICANN and Root Zone Management
At the highest level, the political struggle centers on who controls ICANN and the DNS root zone—the master list from which the entire global DNS hierarchy is derived. Although ICANN is now an international non-profit, its historical ties to the US government remain a significant point of contention.
Nations advocating for digital sovereignty are deeply uncomfortable with this US-centric arrangement. They consistently campaign to transfer the authority for Internet governance from ICANN to a United Nations body, such as the International Telecommunication Union (ITU). This move would shift power from the multi-stakeholder community to nation-states, giving governments a direct vote on how the Internet is run. This potential divide is a fundamental geopolitical fault line that defines nearly every international discussion about Internet governance.
The DNS is also a critical piece of commercial infrastructure. For a global corporation like Samsung in South Korea, the integrity and exclusive control of samsung.com are non-negotiable for its brand identity, security, and global e-commerce.
Corporations exert significant political influence within ICANN to create and enforce strong trademark protection policies. This is a constant battle, as they fight against cybersquatting and seek to control how their brand names are used in new Top-Level Domains (TLDs).
From .com to .xyz, the TLD market has been lucrative since the commercialization of the Internet in 1995. Assigning domains has been like printing money. The creation of new gTLDs, such as .app, .shop, or .news, has transformed domain names into a multi-billion-dollar industry. The political process within ICANN for approving and auctioning these new domains is intense, pitting powerful corporate consortia against each other as they vie for control over valuable digital real estate.
Challenges from Alternative DNS and AI
A growing counter-movement seeks to bypass this entire political structure. Decentralized DNS systems, such as the Ethereum Name Service (ENS), built on blockchain technology, and privacy-focused public resolvers like Quad9, offer an alternative to the traditional, centralized, hierarchical model. These systems are inherently more resistant to censorship by a single government or corporation. While still niche, they represent a significant technical and political challenge to the established order, promising a return to a more distributed and less easily controlled Internet.
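As a small illustration of the resolver side of this shift, the Python sketch below sends a lookup directly to Quad9’s public resolver (9.9.9.9) instead of the ISP’s default. It assumes the third-party dnspython package is installed; resolving an ENS name such as a .eth address would require blockchain tooling and is beyond this sketch.

# Sketch: querying a privacy-focused public resolver directly.
# Assumes the third-party dnspython package is installed (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # Ignore the operating system's resolver settings.
resolver.nameservers = ["9.9.9.9"]                 # Quad9's public, security-filtering resolver.

answer = resolver.resolve("example.com", "A")
for record in answer:
    print("example.com A record:", record.to_text())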
AI is poised to fundamentally transform the DNS system by transitioning its management from a reactive, human-supervised process to a predictive and automated one. While this will bring significant technical benefits, it will also intensify the geopolitical tensions between the US and other nation-states by creating powerful new tools for both centralized control and decentralized resistance.
The core conflict over whether the DNS is governed by a US-centric, multi-stakeholder model (ICANN) or by sovereign nation-states will be amplified, with AI becoming a key weapon in this struggle. Operationally, AI will enhance the DNS, making it faster, more efficient, and vastly more secure. Instead of just reacting to DNS-based attacks like distributed denial-of-service (DDoS) floods, an AI system will analyze global traffic patterns to predict attacks before they happen. It can identify the anomalous buildup of a botnet and proactively block malicious queries or re-route traffic, neutralizing threats in real time.
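A toy version of that kind of traffic analysis can be sketched with a simple statistical baseline: flag an interval whose query count deviates sharply from recent history. The function and sample numbers below are hypothetical stand-ins for the far more sophisticated models a production system would use.

# Toy anomaly detector: flag a per-interval DNS query count that spikes
# far above the recent baseline (a crude stand-in for botnet-buildup detection).
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it sits more than `threshold` standard deviations
    above the mean of recent per-interval query counts."""
    if len(history) < 5:
        return False  # Not enough baseline data yet.
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > 2 * mu  # Flat baseline: flag a doubling.
    return (current - mu) / sigma > threshold

# Simulated per-minute query counts for one domain, ending in a sudden spike.
counts = [1200, 1150, 1300, 1250, 1180, 1220, 9800]
print("Anomaly detected:", is_anomalous(counts[:-1], counts[-1]))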
AI will automate the complex and sensitive process of managing the DNS root zone. It can validate requests for changes, check for errors, and implement updates with a speed and accuracy that surpasses human capability, reducing the risk of catastrophic configuration mistakes. AI-powered resolvers will be able to optimize DNS lookups based on real-time network conditions and user behavior, creating faster and more resilient connections.
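As a purely hypothetical illustration of what automated validation of a delegation change might check, the sketch below runs basic syntactic sanity tests on a proposed TLD and its nameserver glue records. The function name, fields, and rules are invented for this example and do not reflect IANA’s actual root zone change process.

# Hypothetical sanity checks on a proposed TLD delegation; illustrative only,
# not IANA's actual root zone change workflow.
import ipaddress
import re

LABEL_RE = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$")  # RFC 1035-style label check.

def validate_delegation(tld, nameservers):
    """Return a list of problems with a proposed delegation (empty list = passes)."""
    problems = []
    if not LABEL_RE.match(tld.lower()):
        problems.append(f"invalid TLD label: {tld!r}")
    if len(nameservers) < 2:
        problems.append("at least two nameservers are normally expected")
    for host, ip in nameservers.items():
        if not all(LABEL_RE.match(part.lower()) for part in host.rstrip(".").split(".")):
            problems.append(f"invalid nameserver hostname: {host!r}")
        try:
            ipaddress.ip_address(ip)  # Accepts IPv4 or IPv6 glue addresses.
        except ValueError:
            problems.append(f"invalid glue address for {host}: {ip!r}")
    return problems

print(validate_delegation("shop", {"ns1.example.net": "192.0.2.1", "ns2.example.net": "2001:db8::1"}))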
These technical advancements will become powerful tools in the ongoing political battle over who controls the Internet’s core infrastructure. For nations like China and Russia, which advocate for state-centric control, AI is a potential game-changer. It allows them to build vastly more sophisticated national DNS firewalls.
An AI-powered system can move beyond simply blocking a list of domain names. It can analyze traffic patterns in real-time to identify and block the behavior associated with VPNs and other censorship-evasion tools, making state control more dynamic and difficult to circumvent.
This change gives these nations a powerful new argument. They can frame their sovereign DNS as a matter of superior national security and efficiency, managed by an AI tuned to their country’s specific needs. Conversely, the US and its allies will argue that only the current global, multi-stakeholder model can provide proper Internet security.
They will argue that only a global system has access to the diverse data needed to train an unbiased AI capable of defending the entire Internet. A national AI, they will claim, would be inherently blinkered and less secure.
This innovation transforms the debate at ICANN. The political tensions will shift from who has formal oversight of the root zone to questions like: Whose AI is managing the system? What are its hidden biases? Can the algorithms be audited for neutrality? The battle for control of the DNS will become a battle for control over the AI that runs it.
Summary and Conclusion
This post outlines the transformation of the Domain Name System (DNS) from a simple technical ledger into a central battleground for international politics. The core conflict lies between the US-led multi-stakeholder model of governance, embodied by ICANN, and the push for digital sovereignty by nations such as China and Russia, which seek state-centric control. The 2016 transition of IANA oversight from the US government to the global multi-stakeholder community did not end government influence, but rather formalized it on a multilateral basis.
This tension now plays out in several arenas: the rise of national DNS firewalls creating “splinternets,” geopolitical struggles over who controls ICANN, intense corporate lobbying for trademark protection, and the lucrative market for new domain names. Emerging technologies, such as decentralized DNS and AI, are poised to intensify this conflict further, offering powerful new tools for both state control and censorship evasion.
The emergence of multilateral DNS governance reveals that no amount of technical or organizational change can erase the fundamental political struggle for control. The 2016 transition was not the end of this tension, but merely the beginning of a new, more complex chapter. The introduction of AI will not settle the debate between multi-stakeholderism and state-centric digital sovereignty. AI will become the next powerful weapon in that fight.
Ultimately, the battle for the Internet’s future will not be about who holds the management contract, but about who writes the code and controls the intelligent algorithms that will soon manage the world’s most critical address book.
Notes
[1] Drezner, Daniel W. All Politics Is Global: Explaining International Regulatory Regimes. Princeton, NJ: Princeton University Press, 2008. Chapter on “Global Governance of the Internet.” http://press.princeton.edu/titles/8422.html. See also Drezner, D. (2004). The Global Governance of the Internet: Bringing the State Back In. Political Science Quarterly, 119(3), 477-498. doi:10.2307/20202392
[2] The 2016 IANA stewardship transition transferred oversight of critical Internet functions from the US government to a global, decentralized, multistakeholder model coordinated by ICANN. The change reflected the Internet’s growth, the need for more inclusive governance, and ongoing efforts to address security, accessibility, and internationalization challenges. As the Internet continues to evolve, the management of the DNS will likely adapt to meet new demands and challenges in the digital landscape.
Note: ChatGPT was used for parts of this post. Multiple prompts were used and parsed.
Citation APA (7th Edition)
Pennings, A. J. (2025, September 22). The International Politics of Domain Name (DNS) Governance, Part 3. apennings.com. https://apennings.com/digital-geography/the-international-politics-of-domain-name-dns-governance-part-3/
© ALL RIGHTS RESERVED
Not to be considered financial advice.
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002-2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: Domain Name System (DNS) > Internet Assigned Numbers Authority (IANA) > Internet Corporation for Assigned Names and Numbers (ICANN) > Jon Postel > Paul Mockapetris > top-level domains (TLDs)



