Anthony J. Pennings, PhD

WRITINGS ON AI POLICY, DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL E-COMMERCE

Remediating the Blurred Lines of Human-AI Collaboration in Disaster Management and Public Safety Communications

Posted on | November 14, 2025 | No Comments

This is a follow-up to my prepared presentation for the Asia-Pacific Economic Cooperation (APEC) meeting on July 31, 2025, on Disaster Leadership with Saebom Jin from the National AI Research Lab at KAIST. We used media theory to talk about the possibility of “healing” the Common Operating Pictures (COP) used in disaster-oriented situation rooms and command centers with Artificial Intelligence (AI) and Application Programming Interfaces (APIs). By healing, we drew meaning from the theories of remediation by Bolter and Grusin (2000), which propose that new media forms integrate older forms to create a more “authentic” version of reality.[1]

Operating Systems (OS) coordinate the flow of applications within a RAM-limited digital environment.[2] They facilitate the flow of software data much as traffic cops manage the movement of automobiles through a busy intersection. Artificial Intelligence (AI) can function as a sophisticated “operating system” to coordinate APIs, enabling the seamless gathering of multiple streams of data and video to achieve both hypermediation and transparent immediacy in critical information displays like Common Operating Pictures (COPs). This involves AI acting as an orchestration layer, intelligently managing data flow like a maestro, and dynamically shaping the user experience.

This post uses remediation theory to offer ideas about the multimediated experience of connecting different data streams and windows on an individual device or a common screen like the COP. Remediation is the process by which new media refashion old media to create a more “authentic” experience. In this case, it “heals” COPs by intelligently merging legacy media (TV, maps, dashboards, spreadsheets) with digital innovations (AI, APIs, streaming video). AI becomes the coordinator and translator of these media, enhancing their functionality and intelligibility.[2]

Drawing on Bolter and Grusin’s theory of remediation, with its two logics of transparent immediacy and hypermediation, we can piece together how AI can function as a next-generation operating system for Common Operating Pictures (COPs) in disaster management and public safety command centers, dashboards, and mobile Personal Operating Pictures (POPs). More importantly, we can see how media can stake a claim on mediated reality. The two logics work together. Transparent immediacy creates live experiences, making the medium “disappear” and enabling a direct, real-time experience. AI auto-selects, filters, and narrates live feeds or alerts for immediate situational awareness (e.g., a live drone feed of a flooded area). Point-of-view (POV) perspective in visual art forms contributes to this experience. This combination enables fast, intuitive decision-making in the control room and the field.

Hypermediation uses multiple windows of statistical and indexical representation (e.g., temperature, wind speed, traffic data), allowing users to see and interact with the complexity of an incident. The AI OS organizes and synchronizes diverse sources (e.g., tweets, GIS layers, 911 calls, camera feeds) into COPs and visual dashboards to help leaders and analysts see patterns, anomalies, and priorities in multi-faceted crises like a hurricane with flooding.

The Critical Role of Common Operating Pictures (COPs) and Dashboards in Disaster Management

In the demanding landscape of disaster management and risk reduction, Common Operating Pictures (COPs) and dashboards stand as hopeful pillars for effective surveillance and response. These sophisticated information systems are central to command centers, providing real-time monitoring and management of complex situations, for incident management, emergency response, and the protection of critical infrastructure. Their fundamental utility lies in their ability to aggregate vast amounts of surveillance data and diverse information sources, synthesizing them into a unified, real-time, immediate view of ongoing activities and unfolding situations. This comprehensive display is expected to significantly enhance situational awareness for all parties involved, from field responders to strategic leaders.  

The strategic value of COPs extends beyond mere data display; they are instrumental in fostering collaborative planning and reducing confusion among multiple agencies operating in a crisis. By providing a consistent, up-to-date view of critical information, COPs enable faster, more informed decision-making across the entire response structure. A prime example of this is the Department of Homeland Security (DHS) Common Operating Picture program, which delivers strategic-level situational awareness for a wide array of events, including natural disasters, public safety incidents, and transportation disruptions. This program facilitates collaborative planning and information sharing across federal, state, local, tribal, and territorial communities, underscoring the vital role of integrated information platforms in national security and public safety.  

AI as an Operating System for API-fed Media Management Layers

AI orchestration refers to the coordination and management of data models, systems, and integration across an entire workflow or application. In this context, AI acts as the “maestro” of a technological symphony, integrating and coordinating various AI tools and components to create efficient, automated informational and media workflows.  

The role of the AI-OS is to ingest data via API from sources such as satellite feeds, IoT sensors, weather models, traffic systems, and social media chatter. AI uses algorithms and techniques, often from machine learning and neural nets, to process visual data, recognize objects, and analyze scenes. It tags and contextualizes data using NLP, computer vision, and geospatial AI to collect, label, and align data across multiple users. It can also manage narrative flows by creating text summaries, video captions, or alerts that guide interpretation in public broadcasts or Personal Operating Pictures (POPs). AI adapts interfaces and adjusts the dashboard to user roles (e.g., responder vs. planner vs. public).
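The ingest-tag-route loop described above can be sketched in miniature. This is an illustrative toy, not a real framework: the feed registry, the `tag_record` and `route` helpers, and the role names are all assumptions introduced for the example, with simple dictionary lookups standing in for the NLP and computer-vision tagging a real AI-OS would perform.

```python
# Toy sketch of an AI "operating system" loop: ingest API records,
# tag them with provenance and priority, and route them to role views.
# All names here (FEEDS, tag_record, route) are illustrative assumptions.

FEEDS = {
    "weather": {"kind": "model"},
    "iot_sensors": {"kind": "telemetry"},
    "social": {"kind": "text"},
}

def tag_record(source, record):
    """Attach provenance and a coarse priority label; a stand-in for
    NLP / computer-vision tagging in a real system."""
    return {
        "source": source,
        "kind": FEEDS[source]["kind"],
        "payload": record,
        "priority": "high" if record.get("alert") else "routine",
    }

def route(tagged, role):
    """Adapt what each role sees: responders get high-priority items
    only, while planners see everything."""
    if role == "responder":
        return tagged["priority"] == "high"
    return True

records = [
    tag_record("iot_sensors", {"water_level_m": 4.2, "alert": True}),
    tag_record("social", {"text": "road flooded near bridge"}),
]

responder_view = [r for r in records if route(r, "responder")]
planner_view = [r for r in records if route(r, "planner")]
```

The design point is the separation of concerns: tagging happens once at ingest, and role adaptation happens at display time, so the same record can feed a responder alert and a planner dashboard.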

AI-first API management integrates machine learning (ML), natural language processing (NLP), and predictive analytics to gain deeper insights into performance and usage trends to forecast weather patterns, detect fire and water anomalies in real-time, and automate response governance. This means AI can intelligently manage the flow of information and tasks between different media components, ensuring the right data reaches the right COPs and models at the right time, preventing data bottlenecks and optimizing predictive resource utilization.  

AI-coordinated data streams draw in various API-enabled data and video sources to “heal” the COP experience. For instance, real-time streaming (RTSP/Drones) provides live visual feeds and real-time object detection, as well as thermal imagery for immediacy. GIS/Map APIs gather hyperreal terrain, zoning, and infrastructure information and then overlay evacuation zones and hazard proximity models that may involve chemical leakages, flooding, traffic, and other factors. Social media APIs draw on the writing of public posts and conduct sentiment analysis, while geo-tagging locations for search engine optimization (SEO). They also have to factor in panic signals and filter out misinformation from mischievous posts. IoT Sensors (MQTT) provide (infra)structural and environmental data that can trigger alerts based on thresholds and can be used in predictive modeling. EMS/911 feeds draw on voice and text emergency dispatches and may require transcription and triage classifications for harmful accidents. Additionally, Weather/NOAA APIs collect storm forecast data and generate path predictions and risk zone maps.
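Before feeds like these can be fused on one screen, each source's native payload must be normalized into a shared event schema. The sketch below shows that step for three of the source types named above; the field names, thresholds, and the `normalize` function itself are assumptions for illustration, not any real API's schema.

```python
# Illustrative normalization of heterogeneous feed payloads into one
# COP event schema. Field names and thresholds are assumed values.

def normalize(source, raw):
    """Map each source's native payload onto a shared event record."""
    if source == "noaa":
        return {"type": "forecast", "risk": raw["storm_category"] >= 3,
                "where": raw["track"]}
    if source == "mqtt_sensor":
        return {"type": "sensor", "risk": raw["value"] > raw["threshold"],
                "where": raw["station"]}
    if source == "ems_911":
        # Dispatches are always treated as risk events.
        return {"type": "dispatch", "risk": True, "where": raw["location"]}
    raise ValueError(f"unknown source: {source}")

events = [
    normalize("noaa", {"storm_category": 4, "track": "coastal zone A"}),
    normalize("mqtt_sensor", {"value": 2.1, "threshold": 3.0,
                              "station": "levee-7"}),
    normalize("ems_911", {"location": "Sector 3"}),
]

# Only risk-flagged events are escalated to the COP's alert layer.
alerts = [e for e in events if e["risk"]]
```

Once every source speaks the same schema, the downstream layers (mapping, alerting, summarization) no longer need source-specific logic.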

AI-powered API integration solutions automate and streamline connections between disparate software platforms, enabling seamless communication and real-time data flow. This eliminates manual data entry, reduces human error, and provides unified access to business-critical data, allowing systems to scale faster and adapt to market demands. AI systems include computational resources, data stores, and data flows/pipelines that transmit data. Data engineers design these pipelines for efficient transfer, reliable quality, and easy access for integration and analysis. AI orchestration platforms automate these workflows, track progress, manage resources, and monitor data flow.  

Gathering Multiple Streams of Data and Video – Transparent Immediacy in Practice

In the high-stakes domain of crisis management, transparent immediacy is an indispensable principle for designing intuitive COP and dashboards that facilitate rapid decision-making by minimizing cognitive friction. Real-time data visualization tools are specifically engineered to present complex information in a highly usable manner, enabling decision-makers to quickly grasp the unfolding situation without being distracted by the interface itself. Achieving true immediacy in data delivery is critically dependent on low latency within the underlying network infrastructure, particularly in mission-critical environments where split-second decisions can have profound consequences.[Diffserv]  

The integrative interface of AI-assisted media “vanishes” into the COP to provide a multidimensional hyperrealized and contextualized display. It blends multiple inputs (e.g., real-time GPS + bodycam) while maintaining the im-mediated telepresence “touchpoint” to events in the field. It synthesizes spoken alerts from text messages and narrates the changing situation (“A levee breach has been detected 3 miles south of Sector 3”). The result is that decision-makers read the mediated displays as if they are seeing the world with a healed blend of immediate and hyperreal perspectives — without delayed video feeds or digging through massive amounts of raw data.

The principles of transparency and trust are fundamental to effective crisis communication. Providing clear, accurate, and timely updates through dedicated platforms and channels helps to build confidence and establish credibility with affected populations and stakeholders. This approach aligns directly with the human desire for direct, unmediated information. Proactive communication, a commitment to telling the truth, and consistently adhering to factual information are essential strategies for maintaining transparency and regaining control of the narrative during a crisis. These practices mirror the pursuit of immediacy by delivering information that feels direct, honest, and unadulterated, thereby reinforcing public trust.  

In the high-stress, information-rich environments of crisis management, operators frequently encounter information overload. The core objective of transparent immediacy is to make the mediating technology disappear from the user’s conscious awareness, thereby allowing direct engagement with the critical information. By meticulously designing COPs and dashboards to include an “interfaceless” quality, the cognitive burden associated with navigating complex interfaces is substantially reduced. This reduction in cognitive friction enables faster assimilation of critical data, expedites the identification of patterns or anomalies, and ultimately leads to more rapid and effective decision-making, which is of paramount importance in emergency response scenarios. The less mental effort expended on understanding the tool, the more attention can be dedicated to understanding the crisis.  

While transparent immediacy strives to erase the medium and present information as unmediated reality, the integration of AI introduces a new, inherently complex layer of algorithmic mediation. AI can indeed create the appearance of greater immediacy by providing real-time insights and indexical predictive analytics, seemingly cutting through complexity to deliver direct understanding. However, the internal workings of AI processes — how they learn, process vast datasets, and generate their outputs—are often opaque, frequently referred to as a “black box” problem.

This creates a fundamental paradox: the desire for an “interfaceless” and seemingly unmediated experience directly conflicts with the ethical imperative for transparency and explainability in AI systems. Disaster management leaders must carefully navigate this tension, balancing the undeniable benefits of AI-driven immediacy with the critical need to understand and trust the AI’s “judgments,” particularly when human lives and safety are at stake. This complex challenge may necessitate the development and integration of new forms of “explainable AI” (XAI) within COP interfaces to ensure that accountability and trust are maintained, even as the technology becomes more sophisticated.  

Gathering Multiple Streams and Windows of Data and Video – Hypermediation in Practice

When a new AI-assisted digital COP remediates older, perhaps less dynamic, informational sources, it carries an implicit promise of higher fidelity, greater accuracy, and real-time relevance. Fulfilling this promise can significantly enhance comfort and trust in the information presented. For instance, an interactive Geographic Information System (GIS) map that updates in real time is inherently perceived as more reliable and trustworthy than a static, outdated map.

However, the very process of mediation, by transforming data and introducing new digital layers, can also introduce new forms of hyperreality. If these indexical layers are not transparent, or if the transformation process itself is flawed or introduces biases, it could inadvertently undermine the very trust it seeks to build. Therefore, disaster management leaders must ensure that the “improvement” offered by new forms of remediation is genuinely beneficial and does not obscure the underlying data’s provenance, potential limitations, or inherent biases.

This transition demands a critical understanding of the transformative pitfalls and potentials of digital remediation. An AI operating system can compose layered, windowed displays that offer diverse media representations, each with its own source, credibility, and relevance. It stacks weather maps, traffic flows, shelter capacity, and 911 calls. It shows social sentiment spikes alongside physical sensor alerts. It also tags uncertainties, such as “unverified reports” or “possible false positives.”
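The uncertainty-tagging idea can be made concrete with a small sketch in which each display layer carries a provenance record and a confidence score, and low-confidence layers are explicitly labeled rather than hidden. The layer names, sources, confidence values, and the 0.6 cutoff are all illustrative assumptions.

```python
# Sketch of a hypermediated COP as stacked, provenance-tagged layers.
# Names, sources, confidences, and the cutoff are assumed values.

layers = [
    {"name": "weather_map", "source": "NOAA API", "confidence": 0.95},
    {"name": "traffic_flow", "source": "GIS API", "confidence": 0.90},
    {"name": "social_sentiment", "source": "social API", "confidence": 0.40},
]

def annotate(layer):
    """Surface uncertainty instead of hiding it: low-confidence
    layers are explicitly flagged on the display."""
    label = layer["name"]
    if layer["confidence"] < 0.6:
        label += " (unverified)"
    return label

display = [annotate(layer) for layer in layers]
```

Labeling uncertainty on the display itself, rather than suppressing low-confidence layers, is what lets hypermediation expose the full complexity of the crisis without pretending to a false precision.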

The result is that it enables strategic coordination by exposing the full complexity of the crisis landscape. AI’s core strength lies in its ability to ingest, process, and fuse massive volumes of data from diverse sources in real-time. AI then infers from the large datasets that are collected to inform human-based guidance and decision-making.

Multimodal AI systems are designed to integrate and process multiple data types (modalities) such as text, images, audio, and video. By combining these various data modalities, the AI system interprets a more diverse and richer set of information, enabling it to make accurate, human-like predictions and produce contextually aware outputs. This is achieved through multimodal deep learning, neural networks, and fusion techniques that synthesize different data types.
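One common fusion technique, late fusion, can be shown in a few lines: each modality produces its own risk score, and a weighted combination yields the fused estimate. The scores and weights here are illustrative numbers, not trained values, and real multimodal systems often fuse learned feature representations rather than final scores.

```python
# Minimal late-fusion sketch: per-modality risk scores in [0, 1] are
# combined with a weighted average. Scores/weights are assumptions.

def fuse(scores, weights):
    """Weighted average of per-modality risk scores."""
    total_w = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_w

modality_scores = {"video": 0.8, "audio": 0.3, "text": 0.6}
modality_weights = {"video": 0.5, "audio": 0.2, "text": 0.3}

fused_risk = fuse(modality_scores, modality_weights)
```

Weighting lets operators privilege the more reliable channel for a given incident type, e.g., trusting video over social-media text during a flood.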

Real-time video stream analysis draws on AI-powered video intelligence APIs that can recognize over 20,000 objects, places, and actions in both stored and streaming video. They can extract rich metadata at the video, shot, or frame level, and provide near real-time insights with streaming video annotation and object-based event triggers. Advanced APIs like Google’s Live API enable low-latency bidirectional voice and video interactions, allowing for live streaming video input and real-time processing of multimodal input (text, audio, video) to generate text or audio.  
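The object-based event triggers mentioned above amount to matching per-frame labels against watch rules. The sketch below is generic and hypothetical; the labels, rule table, and actions are assumptions and do not represent the output format of any particular video intelligence API.

```python
# Illustrative object-based event triggers on streaming annotations:
# each annotated frame's labels are checked against a watch-rule table.
# Labels, rules, and actions are assumptions, not a real API's output.

WATCH_RULES = {
    "flood_water": "alert_ops",   # escalate to the operations channel
    "person": "log_only",         # record but do not alert
}

def triggers(frame_labels):
    """Return the actions fired by one annotated frame, in label order."""
    return [WATCH_RULES[label] for label in frame_labels
            if label in WATCH_RULES]

frame = ["vehicle", "flood_water", "person"]
actions = triggers(frame)
```

In practice the annotation stream would arrive continuously, and a debounce or cooldown step would prevent the same object from re-triggering on every frame.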

AI and knowledge graph capabilities automate data ingestion, preparation, analysis, and reporting, significantly reducing manual tasks. This allows for quickly connecting internal data sources with case-specific data like device dumps, license plate readers (LPRs), financial records, social media, and public records for a comprehensive view.  

Coordination for Hypermediation

Hypermediation foregrounds the mediating function, explicitly enhancing the multiplicity of information and exposing the limitations of direct, “unmediated” transparent representation. AI enhances this by intelligently managing and presenting diverse, fragmented data streams in coordinated spreadsheet grids and windows. AI enables hyper-personalization of content by analyzing user behavior and preferences, tailoring content to individual needs on a granular level. This extends to dynamic user interfaces that can adapt based on user theming, curation, and real-time feedback, moving beyond static software to more organic, rapidly changing displays.  

AI in OS mode synthesizes and presents complex disaster information in usable, interactive multimodal dashboards that feature multiple windows, dynamic visualizations, and drill-down options. This allows for enhanced interactivity and navigation, enabling users to explore data in depth and filter information based on specific criteria.  

AI acts as the technological enabler for sophisticated hypermediation, allowing for the intelligent management of interconnected media at a scale and speed previously unattainable. It helps transform a potential deluge of data into a coherent and actionable common operating picture by connecting related information from fragmented sources and streamlining complex analyses.

Coordination for Transparent Immediacy

Transparent immediacy aims to make the user “forget the presence of the medium,” fostering a belief in direct, unmediated presence with the represented information. AI contributes to this by gathering and simplifying complex data into clear, actionable insights and enabling seamless, real-time interactions. [1]  

AI-powered data visualization transforms complex data into clear, dynamic visuals, identifying patterns and relationships that would take humans hours to uncover. It provides real-time insights, automatically updating visuals as new data flows in, allowing for faster, more informed decisions. This simplifies the complex, distilling mountains of raw data into actionable visuals that can be understood “at a glance”.  

AI-enhanced interfaces offer natural language processing, allowing users to ask questions and receive results in clear, interactive charts. By making the mediating technology disappear from conscious awareness, AI-driven COPs reduce the cognitive burden on operators, enabling faster assimilation of critical data and expediting decision-making.  

AI can power immersive VR and AR environments that extend traditional 2D displays into a third dimension, enabling novel operations and more intuitive, collaborative interactions. These environments aim to create a shared, real-time, and seemingly unmediated understanding of a crisis, akin to William Gibson’s “consensual hallucination” of cyberspace. AI-powered insights assist in selecting appropriate hardware solutions for these immersive technologies, streamlining their integration.  

The AI “Black Box” Paradox and Ethical Considerations

While AI strives for transparent immediacy by simplifying complexity, it introduces a “black box” problem where the internal workings of AI processes are often opaque. This creates a paradox where the desire for an unmediated experience conflicts with the ethical imperative for transparency and explainability in AI systems. For critical applications like COPs, ensuring data quality, mitigating algorithmic bias, and maintaining human accountability are paramount. The effectiveness and reliability of AI models are directly contingent upon the quality, diversity, and cleanliness of the data on which they are trained, as substandard data can propagate and amplify flaws.  

AI acts as a central nervous system for a hypermediated and transparent immediated COP by orchestrating the complex interplay of data streams, video feeds, and user interfaces. It enables real-time data fusion, dynamic content adaptation, and intuitive visualization, but requires careful human oversight to ensure trust, accountability, and ethical deployment. AI as an operating system doesn’t just manage media, it interprets, structures, and presents it as meaning. In the process it strives to enable leaders and users to move from data overwhelm to narrative clarity.

Concluding Thoughts

The strategic application of media theory concepts, primarily Remediation and its logics of Transparent Immediacy and Hypermediation, works in conjunction with advanced AI capabilities. This process is paramount for optimizing the “authentic” healed experience in Common Operating Pictures and dashboards in disaster management.

This post has demonstrated how these frameworks provide a critical lens for understanding, designing, and enhancing the information systems and COPs that underpin modern crisis response. From transforming chaotic and static data forms into dynamic visual flows (Remediation) to fostering seamless situational awareness (Transparent Immediacy) and orchestrating complex multi-source indexical and graphical information (Hypermediation), media theory offers a profound guide. AI, in turn, acts as the indispensable engine, the OS enabling these transformations through API real-time data fusion, formulas for predictive analytics, and automated communication displayed on COPs.

The future of crisis response lies in intelligently navigating the increasingly blurred lines of human-AI collaboration. This demands a nuanced understanding of AI as a powerful co-author, operating system, and assistant, one that enhances human capabilities but never fully replaces human accountability and intent. Fostering trust in increasingly mediated information requires unwavering commitment to transparency, data quality, and ethical AI deployment. By consciously integrating theoretical understanding with technological innovation, disaster management leaders can leverage these converged media landscapes to create COPs and dashboards that are not merely displays of data, but dynamic, intelligent platforms capable of shaping perception, informing decisive action, and ultimately building a more resilient and responsive global community in the face of escalating threats.

Citation APA (7th Edition)

Pennings, A. J. (2025, November 14). Remediating the Blurred Lines of Human-AI Collaboration in Disaster Management and Public Safety Communications. apennings.com. https://apennings.com/crisis-communications/remediating-the-blurred-lines-of-human-ai-collabollaboration-in-disaster-management-and-public-safety-communications/

Notes

[1] Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 2000. They followed up on “probes” by Marshall McLuhan, including his idea that the content of any new medium is always an older medium. This means new technologies integrate, repurpose, and refashion older media. McLuhan’s main message was to point to the fundamental change these new forms create in human scale, pace, or pattern. These ideas were primarily expressed in The Mechanical Bride (1951) and Understanding Media (1964).
[2] By RAM-limited digital environment I mean the workspace needed for multiple applications to be supported. “RAM” can also be used as an analogy for a human’s ability to deal with multiple streams of information coming into their perspective.
[3] I used two prompts to address the issues I was thinking about.
Prompt 1. How can AI be an operating system to coordinate APIs gathering multiple streams of data and video for hypermediated and transparent immediacy? Prompt 2. Drawing on the concepts of remediation and its two logics of hypermediation and transparent immediacy, how can AI be an operating system to coordinate APIs gathering multiple streams of data and video for Common Operating Pictures used in Disaster Management and Public Safety?

© ALL RIGHTS RESERVED

Not to be considered financial advice.



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea and a Research Professor for Stony Brook University. He teaches AI and broadband policy. From 2002 to 2012 he taught digital economics and information systems management at New York University. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
