Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

AI and Remote Sensing for Monitoring Landslides and Flooding

Posted on | June 24, 2024 | No Comments

Remarks prepared for the 2024 United Nations Public Service Forum, ‘Fostering Innovation Amid Global Challenges: A Public Sector Perspective,’ Songdo Convensia, Republic of Korea, 24–26 June 2024. Organized by the Ministry of the Interior and Safety (MOIS), Republic of Korea.

Thank you very much for this opportunity to discuss how science and technology can address the important issues of landslides and urban flooding. A few days after I was invited to this conference, a very unfortunate landslide occurred in Papua New Guinea. Fatalities are still being tallied but are likely to be between 700 and 1,200 people.

Flooding is also a tragic sign of our times. As climate change significantly increases the amount of moisture the atmosphere can hold, rainfall increasingly resembles the turbulence of a child (or adult) playing in a filled bathtub. Some of the worst 2023 flooding occurred in Beijing, the Congo, Greece, Libya, Myanmar, and Pakistan. These floods took thousands of lives, displaced hundreds of thousands, and caused billions of dollars in property damage.

As requested, I will talk about the role of Artificial Intelligence (AI) and remote sensing in monitoring landslides and flooding. I will reference a model I use in my graduate course, EST 561 – Sensing Technologies for Disaster Risk Reduction, at the State University of New York, Korea, here in Songdo. The “Seven Processes of Remote Sensing” from the Canada Centre for Remote Sensing (CCRS) provides a useful framework for understanding how AI and remote sensing work together.[1] AI can be implemented at several stages of the sensing process. I list the seven processes in this slide and below at [2].

The “Seven Processes of Remote Sensing” from the Canada Centre for Remote Sensing (CCRS)

Remote sensing, the detection and monitoring of an area’s physical characteristics by using sensing technologies to measure the reflected and emitted radiation at a distance, generates vast amounts of data. This data needs to be accurately collected, categorized, and interpreted for information that can be used by first responders and other decision-makers, including policy-makers.

AI algorithms, particularly those involving machine learning (ML) and deep learning (DL), can be useful at several stages. They can compensate for atmospheric conditions and automate the extraction and use of remote sensing data from target areas. They help identify characteristics of water bodies, soil moisture levels, vegetation health, and ground deformations. This intelligence can speed up analysis and increase accuracy in crucial situations. Just as AI has proven extremely useful in detecting cancerous cells, it is increasingly able to interpret complex geographical and hydrological imagery.[3]

The primary sensing model involves an energy source, a platform for emitting or receiving the energy, the interaction of energy with the atmosphere, and the interaction of energy with the target. This information is then collected, processed, interpreted, and often applied in a resilience context. Let me explain.

The Energy Source (A)

Sensing technologies rely on data from an energy source that is either passive or active. AI can analyze data from passive sources like sunlight or moonlight reflected off the Earth’s surface. For example, it can use satellite imagery from reflected sunlight to detect changes in land and water surfaces that may indicate flooding or landslides. AI can also process data from active sources such as radar and LiDAR (Light Detection and Ranging). LiDAR, which uses light instead of radio waves, can measure variations in ground height with high precision, helping to identify terrain changes that may precede a landslide and measure the mass of land that may have shifted in the event.

Synthetic Aperture Radar (SAR) satellites such as Sentinel-1 emit microwaves in the C-band (4–8 GHz) or X-band (8–12 GHz) that can penetrate cloud cover and provide high-resolution images of the Earth’s surface. This makes it possible to detect and map flooded areas even during heavy rains or at night. Also, passive sensors such as NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) and the high-resolution optical satellites Landsat and Sentinel-2 capture visible and infrared imagery that AI can use to delineate flood boundaries by distinguishing water bodies and saturated soils.
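To make this concrete, here is a minimal sketch of how a flood mask might be derived from SAR data, assuming a calibrated, terrain-corrected Sentinel-1 scene has already been exported as a GeoTIFF of backscatter values in decibels. The filename, the -18 dB cutoff, and the rasterio workflow are illustrative assumptions, not a prescribed pipeline.

```python
# Minimal sketch: thresholding SAR backscatter to map open water.
# Assumes a pre-processed Sentinel-1 scene saved as "s1_vv_db.tif"
# with VV-polarized backscatter in decibels (dB). Filename and the
# -18 dB threshold are illustrative.
import numpy as np
import rasterio

with rasterio.open("s1_vv_db.tif") as src:
    vv_db = src.read(1).astype(float)                  # backscatter, dB
    pixel_area_m2 = abs(src.transform.a * src.transform.e)

# Smooth water surfaces scatter radar energy away from the sensor,
# so flooded pixels appear dark (low backscatter).
water_mask = vv_db < -18.0

flooded_km2 = water_mask.sum() * pixel_area_m2 / 1e6
print(f"Estimated inundated area: {flooded_km2:.1f} km^2")
```

In practice, the threshold would be tuned per scene (or learned by a classifier), but the basic logic of turning backscatter into a water mask stays the same.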

Interaction of Energy with the Atmosphere (B)

Sensing from satellites in Earth orbit (and other space-based platforms) is highly structured by what’s in the atmosphere.[4] When analyzing remote sensing data, AI can make adjustments for atmospheric conditions such as clouds, smoke, dust, rain, fog, snow, and steam. Machine learning algorithms are trained to recognize and compensate for these atmospheric factors, improving the accuracy of flood and landslide detection.

Machine learning models can also simulate how different atmospheric conditions affect radiation, helping to better understand and interpret the data received during various weather scenarios. This monitoring is crucial for accurate flood and landslide detection.

Interaction of Energy with the Target (C)

AI can analyze how different surfaces absorb, reflect, or scatter energy. For example, water bodies have distinct reflective properties compared to dry land, which AI can use to detect and identify flooding. As the saying goes, “water loves red,” meaning that it absorbs red wavelengths and reflects the blue, giving us our beautiful blue oceans. Often, particulate material absorbs the blue rays too, resulting in greenish waters. AI can identify subtle vegetation or soil moisture changes that might indicate a potential landslide. Researchers in Japan are acutely aware of these possibilities given the country’s often mountainous terrain and frequent heavy rains.[5]

Water and vegetation may reflect similarly in the visible wavelengths but are almost always separable in the infrared. You can see that the reflectance starts to vary considerably at wavelengths of about 0.7 micrometers (µm), or microns. (See image below.) The spectral response can be quite variable, even for the same target type, and can also vary with time (e.g., the “green-ness” of leaves) and location. These absorption characteristics allow for the identification and analysis of water bodies, moisture content in soil, and even snow and ice. This information can be used for monitoring lakes, rivers, and reservoirs, and for assessing soil moisture levels for irrigation management. See the more detailed explanation at [6].

Knowing where to “look” spectrally and understanding the factors which influence the spectral response of the features of interest are critical to correctly interpreting the interaction of electromagnetic radiation with the surface.
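As a simple illustration of “knowing where to look spectrally,” the sketch below computes two common indices from green, red, and near-infrared reflectance. The band arrays here are random placeholders standing in for, say, Sentinel-2 bands 3, 4, and 8, and the thresholds are illustrative assumptions.

```python
# Minimal sketch: separating water and vegetation with spectral indices.
# green, red, nir are assumed to be arrays of surface reflectance;
# here they are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
green, red, nir = (rng.uniform(0.0, 0.6, (100, 100)) for _ in range(3))

# Water absorbs strongly in the NIR, so NDWI = (Green - NIR)/(Green + NIR)
# is high over water; vegetation reflects strongly in the NIR, so
# NDVI = (NIR - Red)/(NIR + Red) is high over healthy vegetation.
ndwi = (green - nir) / (green + nir + 1e-9)
ndvi = (nir - red) / (nir + red + 1e-9)

water = ndwi > 0.2        # illustrative thresholds
vegetation = ndvi > 0.4
print(water.mean(), vegetation.mean())   # fraction of pixels in each class
```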

The Platform and Recording of Energy by the Sensor (D)

Platforms can be space-based, airborne, or mobile. Much of the early research was done with satellites, but drones and moving robots (and automobiles) use the same model. After the energy has been scattered by, or emitted from, the target, we require a sensor on a platform to collect and record the returned electromagnetic radiation. Remote sensing systems that measure naturally available energy are called passive sensors. Passive sensors can only detect energy when the naturally occurring energy is available and makes it through the atmosphere.

Active sensors, like the LiDAR mentioned before, provide their own energy source for illumination. These sensors emit radiation directed toward the target, and the sensing platform detects and measures the radiation reflected from that target.

AI can analyze data from platforms like satellites for large-scale monitoring of land and water events. Satellite technology like SAR provides extensive coverage and can track changes over time, making them ideal for detecting floods and landslides. Aircraft and drones equipped with sensors can collect detailed local data, allowing AI to process this data in real time and provide immediate insights. Ground-based sensors from cell towers, IoT and mobile units such as Boston Dynamics’ SPOT robots can provide continuous monitoring at locations that may not be accessible to other platforms.

AI can integrate data from these platforms for a comprehensive view of an area, such as identifying landslide-prone areas through soil and vegetation analysis. High-resolution digital elevation models (DEMs) created from LiDAR or photogrammetry help identify areas with steep slopes and other topographic features associated with landslide risk. Multispectral scanning systems, which collect data over a range of wavelengths, and hyperspectral imagers, which detect hundreds of very narrow spectral bands throughout the visible, near-infrared, and mid-infrared portions of the electromagnetic spectrum, can detect soil moisture levels and vegetation health, important indicators of landslide susceptibility.
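A minimal sketch of one of these derived products follows: computing slope from a DEM to flag steep, potentially landslide-prone cells. The elevation grid, cell size, and 30-degree threshold are assumptions for illustration; a real DEM would come from LiDAR or photogrammetry.

```python
# Minimal sketch: deriving slope from a digital elevation model (DEM)
# to flag steep, potentially landslide-prone terrain.
import numpy as np

dem = np.random.default_rng(1).uniform(200, 800, (500, 500))  # elevations (m), synthetic
cell_size = 30.0                                              # grid spacing in meters (assumed)

dz_dy, dz_dx = np.gradient(dem, cell_size)                    # elevation change per meter
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

steep = slope_deg > 30.0   # illustrative susceptibility threshold
print(f"{steep.mean():.1%} of cells exceed 30 degrees of slope")
```

In an actual susceptibility model, slope would be combined with soil moisture, vegetation, and geology layers rather than used alone.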

Transmission, Reception, and Processing (E)

Energy recorded by the sensor is transmitted as data to a receiving and processing station and processed into an image (hardcopy and/or digital). Spy satellites were early adopters of digital imaging technologies such as the charge-coupled device (CCD) and CMOS (complementary metal-oxide-semiconductor) image sensors that are now used in smartphones and other cameras. CCDs are an older technology that is still used because of their superior image quality and successful efforts to reduce their energy consumption. CMOS sensors, meanwhile, have seen major improvements in their image quality.

CCD, CMOS, and Digital Sensing Imagery

These technologies both receive electromagnetic energy immediately (unlike film, which has to be developed) and convert it into images. In both cases, a photograph is represented and displayed in a digital format by subdividing the image into small equal-sized and shaped areas, called picture elements or pixels. The brightness of each area is represented with a numeric value or digital number. Processed images are interpreted, visually and/or digitally, to extract information about the target.
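The sketch below shows this idea in miniature: an image as a small grid of digital numbers, normalized to brightness values for display. The 4x4 grid and 8-bit range are illustrative assumptions, not tied to any particular sensor.

```python
# Minimal sketch: a remote sensing image as a grid of digital numbers (DNs).
# Each pixel's brightness is stored as an integer; here we simply normalize
# the 8-bit DNs to a 0-1 brightness scale for display.
import numpy as np

dn = np.random.default_rng(2).integers(0, 255, (4, 4), dtype=np.uint8)
brightness = dn.astype(float) / 255.0

print("Digital numbers:\n", dn)
print("Normalized brightness:\n", np.round(brightness, 2))
```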

Interpretation and Analysis (F)

Making sense of information from these technologies to understand changes in land formations and flooding can benefit from several analytical approaches. One of the most important is monitoring geographical features over time and space, allowing AI to use techniques such as time-series analysis, river and stream gauging, and post-landslide assessment, especially after a catastrophic fire.

Remote sensing data over time allows for the monitoring of the temporal dynamics of floods, including the rise and fall of water levels and the progression of floodwaters across a landscape. The Landsat archives provide a rich library of imagery dating back to the 1970s that can be used. Having stored information is helpful in assessing the damage and impacts of a landslide after it has occurred. Post-event imagery helps assess the extent and impact of landslides on infrastructure, roads, and human settlements, aiding in disaster response and rehabilitation efforts.

Volume and area estimation after a fire, flood, or landslide can assess the geographic impact and support engineering and humanitarian responses. AI can help remote sensing quantify the volume of displaced material and the area affected by landslides, which is essential for understanding the scale of the event and planning recovery operations. Remote sensing supplements ground-based river and stream gauges by providing spatially extensive water surface elevation measurements and flow rates. This analysis often relies on structural geology and the study of faults, folds, synclines, anticlines, and contours. Understanding geological structures is often the key to mapping potential geohazards (e.g., landslides) [p. 198].
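A minimal sketch of post-event change detection and area/volume estimation follows. The pre- and post-event water masks are synthetic placeholders (they could come from the SAR or spectral-index steps above), and the 10 m pixel size and 0.5 m mean depth are illustrative assumptions.

```python
# Minimal sketch: post-event change detection and area estimation.
import numpy as np

rng = np.random.default_rng(3)
pre_water = rng.random((1000, 1000)) < 0.05                    # water extent before the event
post_water = pre_water | (rng.random((1000, 1000)) < 0.10)     # water extent after the event

newly_flooded = post_water & ~pre_water
pixel_area_m2 = 10.0 * 10.0                                    # assumed 10 m pixels
area_km2 = newly_flooded.sum() * pixel_area_m2 / 1e6

# A rough volume estimate multiplies flooded area by an assumed mean depth.
mean_depth_m = 0.5
volume_m3 = newly_flooded.sum() * pixel_area_m2 * mean_depth_m

print(f"Newly inundated area: {area_km2:.1f} km^2, ~{volume_m3:.0f} m^3 of water")
```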

AI can classify areas affected by floods or landslides, using deep learning to recognize patterns and changes in the landscape. Subsequently, AI can use predictive analytics to identify climate and geologic trends and, by analyzing historical and real-time data, provide forecasts of flood and landslide risks, giving early warnings and insights.
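As a hedged sketch of the classification step, the example below trains a small supervised model to label pixels as flooded or not from a few per-pixel features. The features, the synthetic "ground truth" rule, and the random forest choice are all illustrative assumptions; real labels would come from field surveys or manually interpreted imagery.

```python
# Minimal sketch: a supervised classifier labeling pixels as flooded or not
# from per-pixel features (e.g., NDWI, SAR backscatter, slope). Training
# data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
features = np.column_stack([
    rng.normal(0.0, 0.3, n),     # NDWI
    rng.normal(-15.0, 4.0, n),   # SAR backscatter (dB)
    rng.uniform(0.0, 45.0, n),   # slope (degrees)
])
# Synthetic rule standing in for ground truth: wet, low-backscatter, flat pixels.
labels = ((features[:, 0] > 0.1) & (features[:, 1] < -17) & (features[:, 2] < 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```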

AI Integration and Applications (G)

Techniques such as data fusion can combine remote sensing data from multiple sensors (e.g., optical, radar, LiDAR) with ground-based observations to enhance the overall quality and resolution of the information. This integration allows for more accurate mapping of topography, better detection of water bodies, and detailed monitoring of environmental changes.

AI applications can analyze real-time data from sensors to detect rising water levels and predict potential flooding areas. Machine learning algorithms can recognize patterns in historical data, improving the prediction models for future flood events. AI can also incorporate data from social media and crowdsourced reports, providing a more comprehensive view of ongoing events. This information can allow policy makers and first responders to use AI systems to automatically generate alerts and warnings for authorities and the public, allowing for timely evacuations and preparations.
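The alerting logic itself can be very simple once the analytics are in place. The sketch below compares hypothetical gauge readings against flood-stage thresholds; the station names, levels, and thresholds are invented for illustration, and a production system would pull readings from an agency feed and fuse them with model forecasts.

```python
# Minimal sketch: generating alerts from real-time gauge readings.
# Station names and thresholds are illustrative placeholders.
water_levels_m = {"Station A": 2.1, "Station B": 4.8, "Station C": 3.3}
flood_stage_m = {"Station A": 3.0, "Station B": 4.5, "Station C": 3.5}

for station, level in water_levels_m.items():
    margin = flood_stage_m[station] - level
    if margin <= 0:
        print(f"ALERT: {station} is {-margin:.1f} m above flood stage")
    elif margin < 0.5:
        print(f"WATCH: {station} is within 0.5 m of flood stage")
```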

AI can analyze topographical data from LiDAR sensing technologies to detect ground movement and changes in terrain that precede landslides. It can also process data from ground-based sensors to monitor soil moisture levels, a critical factor in landslide risk. By learning from past landslide events, AI can identify risk factors, predict areas at risk, and suggest mitigation measures, such as reinforcing vulnerable slopes or adjusting land use planning.

Conclusion

The integration of AI with remote sensing technologies and ground-based observations enhances the monitoring and management of landslide and flooding disasters. By combining data from multiple sources, analyzing real-time sensor data, and learning from past events, AI can provide accurate predictions, timely alerts, and effective risk mitigation strategies. This approach not only improves disaster response but also aids in long-term planning and resilience building.

By integrating AI into each of these processes, remote sensing can become more accurate, efficient, and insightful, providing valuable data for a wide range of applications supporting climate resilience. As a result, the detection, monitoring, and response to floods and landslides can be significantly improved, leading to better disaster risk management and mitigation strategies. Remote sensing technologies, when combined with ground-based river and stream gauges, provide a spatially extensive and temporally rich dataset for monitoring water surface elevation and flow rates. This combination enhances the accuracy of hydrological models, improves early warning systems, and supports effective water resource management and disaster risk reduction efforts.

Citation APA (7th Edition)

Pennings, A.J. (2024, Jun 24). AI and Remote Sensing for Monitoring Landslides and Flooding. apennings.com https://apennings.com/space-systems/ai-and-remote-sensing-for-monitoring-landslides-and-flooding/

Notes

[1] Canada Centre for Remote Sensing. (n.d.). Fundamentals of Remote Sensing. Retrieved from https://natural-resources.canada.ca/maps-tools-and-publications/satellite-imagery-elevation-data-and-air-photos/tutorial-fundamentals-remote-sensing/introduction/9363
[2] The Canada Centre for Remote Sensing (CCRS) Model:
1. Energy Source or Illumination (A)
2. Radiation and the Atmosphere (B)
3. Interaction with the Target (C)
4. Recording of Energy by the Sensor (D)
5. Transmission, Reception, and Processing (E)
6. Interpretation and Analysis (F)
7. Application (G) – Information extracted from the imagery about the target in order to better understand it, reveal some new information, or assist in solving a particular problem.
[3] Zhang B, Shi H, Wang H. Machine Learning and AI in Cancer Prognosis, Prediction, and Treatment Selection: A Critical Approach. J Multidiscip Healthc. 2023 Jun 26;16:1779-1791. doi: 10.2147/JMDH.S410301. PMID: 37398894; PMCID: PMC10312208.
[4] A good illustration of which atmospheric conditions influence which electromagnetic emissions can be found at: NASA Earthdata. (n.d.). Remote sensing. NASA. Retrieved from https://www.earthdata.nasa.gov/learn/backgrounders/remote-sensing
[5] Asada H, Minagawa T. Impact of Vegetation Differences on Shallow Landslides: A Case Study in Aso, Japan. Water. 2023; 15(18):3193. https://doi.org/10.3390/w15183193
[6] The Near-Infrared (NIR) and Short-Wave Infrared (SWIR) ranges of the infrared spectrum are highly effective for sensing water, while NIR (and to some extent the red edge) is better suited for sensing vegetation. Water strongly absorbs infrared radiation in these ranges, making it appear dark in NIR and SWIR imagery. This absorption characteristic allows for the identification and analysis of water bodies, moisture content in soil, and even snow and ice, and can be used for monitoring lakes, rivers, and reservoirs and for assessing soil moisture levels for irrigation management. Vegetation strongly reflects NIR light due to the structure of plant leaves. This high reflectance makes NIR ideal for monitoring vegetation health and biomass. Healthy, chlorophyll-rich vegetation reflects more NIR light than stressed or diseased plants. The transition zone between the red and NIR parts of the spectrum, known as the “red edge,” is particularly sensitive to changes in plant health and chlorophyll content. The Normalized Difference Vegetation Index (NDVI) is a commonly used index that combines red and NIR reflectance to assess vegetation health and coverage. NDVI is calculated as (NIR – Red) / (NIR + Red); higher NDVI values indicate healthier and denser vegetation.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching ICT for sustainable development and engineering economics. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

AI and the Rise of Networked Robotics

Posted on | June 22, 2024 | No Comments

The 2004 movie I, Robot was quite prescient. Directed by Alex Proyas and named after the short-story collection by science fiction legend Isaac Asimov, the cyberpunkish story, set in the year 2035, revolves around a policeman played by Will Smith. He is haunted by memories of being saved from drowning by a robot after a car crash sent him into a river. His angst comes from seeing a young girl from the other car drown as he is being saved: the robot calculated that the girl could not be saved, but the policeman could. Consequently, the policeman develops a prejudice against and hatred for robots, driving the movie’s narrative.

What was particularly striking about the movie was its relatively new vision of robots as networked, in this case as connected subjects of a cloud-based artificial intelligence (AI) named VIKI (Virtual Interactive Kinetic Intelligence). VIKI is the central computer for U.S. Robotics (USR), a major manufacturer of robots. One of its newest products is the humanoid-looking NS-5, equipped with advanced artificial intelligence and speech recognition capabilities that allow it to communicate fluently and naturally with humans and the AI. “She” has been communicating with the NS-5s and sending software updates via their persistent network connection, outside the oversight of USR management.

In this post, I examine the transition from autonomous robotics to networked, AI-enhanced robotics by revisiting Michio Kaku’s Physics of the Future (2011). I use the first two chapters, “Future of the Computer: Mind over Matter” and “Future of AI: Rise of the Machines,” as part of my Introduction to Science, Technology, and Society course. Both chapters address robotics and are insightful in many ways, but they lack a focus on networked intelligence. The book was published on the verge of the AI and robotics explosion driven by crowdsourcing, web scraping, and other networking techniques that gather information for machine learning (ML).

The book tends to see robotics and even AI as autonomous, stand-alone systems. A primary focus was on ASIMO (Advanced Step in Innovative Mobility), Honda’s humanoid robot, which was recently discontinued, though not without a storied history. ASIMO was animated to be very lifelike, but its actions were entirely prescribed by its programmers.

Beyond Turing

Kaku continues with concerns about AI’s common sense and consciousness issues, including discussions about reverse engineering animal and human brains to find ways to increase computerized intelligence. Below I recount some of Kaku’s important observations about AI and robotics, and go on to stress the importance of networked AI for robotics and the potential for the disruption of human labor practices in population-challenged societies.

One of the first distinctions Kaku made is the comparison between the traditional computing model based on Alan Turing’s conception of the general-purpose computer (input, central processor, output) and the learning models that characterize AI. NYU’s DARPA-funded LAGR project, for example, was guided by Hebb’s rule: whenever a correct decision is made, the network is reinforced.

Traditional computing is designed around developing a program to take data in, perform some function on the data, and output a result. LAGR’s (Learning Applied to Ground Robots) convolutional neural networks (CNNs) instead involved training the system to learn patterns and make decisions or predictions based on incoming data. Unlike the Turing computing model, which focuses on the theoretical aspects of computation, AI aims to develop practical systems that can exhibit intelligent behavior and adapt to new situations.
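A toy sketch of the Hebbian-style reinforcement described above follows: connections that contribute to a correct decision are strengthened, while incorrect decisions leave the weights alone. This is a pedagogical illustration of the learning-model contrast, not the actual LAGR vision system; the "hidden rule" and learning rate are invented.

```python
# Toy sketch of Hebb-style reinforcement: when a correct decision is made,
# the network's weights are reinforced. Not the actual LAGR system.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, 3)            # three input features -> one decision

def decide(features, w):
    return 1 if features @ w > 0 else 0    # simple thresholded decision

for _ in range(1000):
    features = rng.normal(0, 1, 3)
    target = 1 if features[0] > 0 else 0   # hidden rule the network should learn
    decision = decide(features, weights)
    if decision == target:                 # reinforce only correct decisions
        weights += 0.01 * features * (1 if decision == 1 else -1)

print("Learned weights:", np.round(weights, 2))
```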

Pattern Recognition and Machine Learning

Kaku pointed to two problems with AI and robotics: “common sense” and pattern recognition. Both are needed for automated tasks such as Full Self-Driving (FSD). He predicted common sense would be solved with the “brute force” of computing power and by the development of an “encyclopedia of thought” by endeavors such as CYC, a long-term AI project by Douglas B. Lenat, who founded Cycorp, Inc. CYC sought to capture common sense by assembling a comprehensive knowledge base covering basic ontological concepts and rules. The Austin-based company focused on implicit knowledge like how to walk or ride a bicycle. CYC eventually developed a powerful reasoning engine and natural language interfaces for enterprise applications like medical services.

Kaku went to MIT to explore the challenge of pattern recognition. Tomaso Poggio’s machine at MIT researched “Immediate Recognition,” where an AI must quickly recognize a branch falling or a cat crossing the street. The goal was to develop the ability to instantly recognize an object, even before registering it in our awareness. This ability was a great trait for humanity as it evolved through its hunter stage. Life-and-death decisions are often made in milliseconds, and any AI driving our cars or other life-critical technology needs to operate within that timeframe. With some trepidation, Kaku recounts how the machine consistently scored higher than a human (including him) on a specific vision recognition test.

AI made significant advancements in solving the pattern recognition problem by developing and applying machine learning techniques roughly categorized into supervised, unsupervised, and reinforcement learning. These are, briefly: learning from labeled data to make predictions, identifying patterns in unlabeled data, and learning to make decisions through rewards and penalties in an interactive environment. Labeled data “supervises” the machine to produce the desired output. Unsupervised learning is helpful when you need to discover patterns or groupings in unlabeled data. Reinforcement learning is similar to how humans learn: the algorithm interacts with its environment and receives positive or negative rewards.
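The sketch below illustrates the three paradigms side by side on synthetic data. The models, data, and reward rule are arbitrary illustrations, not a production pipeline.

```python
# Minimal sketches of the three learning paradigms, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 1. Supervised learning: labeled examples "supervise" the predictions.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # known labels
model = LogisticRegression().fit(X, y)
print("Supervised accuracy:", model.score(X, y))

# 2. Unsupervised learning: find structure in unlabeled data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))

# 3. Reinforcement learning: learn action values from rewards and penalties.
q = np.zeros(2)                                   # estimated value of actions 0 and 1
for _ in range(500):
    action = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(q))
    reward = 1.0 if action == 1 else -0.1         # action 1 is secretly better
    q[action] += 0.1 * (reward - q[action])       # incremental value update
print("Learned action values:", np.round(q, 2))
```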

The need for labeled data for training machine learning algorithms dates back to the early days of AI research. Researchers in pattern recognition, natural language processing, and computer vision have long relied on manually labeled datasets to develop and evaluate algorithms. Crowdsourcing platforms made obtaining labeled datasets for machine learning tasks easier at a relatively low cost and with quick turnaround times. Further improvements would improve the accuracy, efficiency, speed, and scalability of AI labeling.

Companies and startups emerged to provide AI developers and organizations with data labeling services. These companies employed teams of annotators who manually labeled or annotated data according to specific requirements and guidelines, ensuring high-quality labeled datasets for machine learning applications. Improvements included developing semi-automated labeling tools, active learning algorithms, and methods for handling ambiguous data.

Poggio’s machine at MIT represents an early example of machine learning and computer vision applied to driving tasks. Subsequently, Tesla’s Full Self-Driving (FSD) system embodied a modern approach based on machine learning and real-world, networked data collection. Unlike earlier systems that relied on handcrafted features and rule-based algorithms, Tesla’s FSD system utilizes a combination of neural networks, deep learning algorithms, and sensor data (primarily cameras, with radar in earlier vehicles) to enable autonomous driving capabilities, including automated lane-keeping, self-parking, and traffic-aware cruise control. One controversial move is that FSD relies mainly on labeling video pixels from cameras, as they have become the most cost-effective option.

Tesla’s approach to autonomous driving has emphasized real-world data collection and crowdsourcing by learning from millions of miles of driving data collected from its fleet of vehicles. This information is used to train and refine the FSD system’s algorithms, although the system still faces challenges related to safety, reliability, regulatory approval, and edge cases. Tesla continues to leverage machine learning to acquire driving knowledge directly from the data and improve performance over time through continuous training and updates.

Reverse Engineering the Brain

Reverse engineering became a popular concept after Compaq reverse engineered the IBM BIOS in the early 1980s to bypass IBM’s intellectual property protections on its Personal Computer (PC). The movie Paycheck (2003) explored a hypothetical scenario of reverse engineering. MIT’s James DiCarlo describes how reverse engineering the brain can be used to better understand vision, and how convolutional neural networks (CNNs) mimic the human brain with networks that excel at finding patterns in images to recognize objects.
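For readers unfamiliar with the architecture, here is a minimal sketch of a CNN of the kind being compared to the visual system: stacked convolution and pooling layers that extract increasingly abstract patterns, followed by a classifier layer. The layer sizes, image size, and 10-class output are arbitrary illustrations, not any specific published model.

```python
# Minimal sketch of a convolutional neural network for image recognition.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # detect local edges/textures
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine into larger motifs
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))   # one fake 32x32 RGB image
print(logits.shape)                             # torch.Size([1, 10])
```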

Kaku addresses reverse engineering by asking whether AI should proceed along the lines of mimicking biological brain development or would be more like James Martin’s Alien Intelligence. Kaku introduced IBM’s Blue Gene computer as a “quarter acre” of rows of jet-black steel cabinets, each rack about 8 feet tall and 15 feet long. Housed at Lawrence Livermore National Laboratory in California, it was capable of a combined speed of 500 trillion operations per second. Kaku visited the site because he was interested in Blue Gene’s ability to simulate thinking processes. A few years later, Blue Gene was operating at 428 teraflops.

Blue Gene worked on simulating the capability of a mouse brain, with its 2 million neurons, as compared to the roughly 100 billion neurons of the average human. It was a difficult challenge because every neuron is connected to many other neurons, and together they make up a dense, interconnected web that takes a lot of computing power to replicate. Blue Gene was designed to simulate the firing of the neurons found in a mouse, which it accomplished, but only for several seconds. It was Dawn, also based at Livermore, that in 2007 could simulate an entire rat’s brain (which contains about 55 million neurons, far more than the mouse brain). Blue Gene/L ran at a sustained speed of 36.01 teraflops, or trillions of calculations per second.

What is Robotic Consciousness?

Kaku suggests at least three issues be considered when analyzing AI robotic systems. One is self-awareness: does the system recognize itself? Second, can it sense and recognize the environment around it? Boston Dynamics’ robotic “dog” SPOT, for example, now uses SLAM (Simultaneous Localization and Mapping) to recognize its surroundings and uses algorithms to map its location.[3] SPOT uses 360-degree cameras and LiDAR to sense the surrounding environment in 3D. It is being used in industrial environments to sense chemical and fire hazards, and it relies on Nvidia chips and a built-in 5G modem for the network connections that stream data from the digital canine.

Another is simulating the future and plotting strategy. Can the system predict the dimensions of causal relationships? If it recognizes the cat, can it predict what its next actions might be, including crossing into the street? Finally, can it sense and ask, “What if?”

Kaku and the Singularity

Lastly, Kaku was intrigued with the concept of “singularity.” He traces this idea to his area of expertise, relativistic physics, where the singularity represents a point of extreme gravity, where nothing can escape, not even light. “Singularity” was popularized by the mathematician and computer scientist Vernor Vinge in his 1993 essay “The Coming Technological Singularity.” Vinge argued that the creation of superintelligent AI would surpass human intellectual capacity and mark the end of the human era. The term has since been used by enthusiasts such as Ray Kurzweil, who believes that the exponential growth of Moore’s Law will deliver the needed computing power for the singularity around 2045. He believes that humans will eventually merge with machines, leading to a profound transformation of society.

Kaku is cautious and conservative about the more extreme predictions of the singularity, particularly those that suggest a rapid and uncontrollable explosion of superintelligent machines. He acknowledges that computing power has been growing exponentially but doubts the trend will continue indefinitely. There are also significant challenges to achieving true artificial general intelligence (AGI). He argues that replicating or surpassing human intelligence involves more than just increasing computational power.

Kaku believes that advancements in AI and related technologies will occur in incremental improvements that will enhance human life but not necessarily lead to a runaway intelligence explosion. Instead of envisioning a future dominated by superintelligent machines, Kaku imagines a more symbiotic relationship between humans and technology. He foresees humans enhancing their own cognitive and physical abilities through biotechnology and AI, leading to a more integrated coexistence.

But once again, he ignores a networked singularity that would involve interconnected AI systems, distributed intelligence, enhanced human-AI integration, and advanced data networking infrastructure. Could the networked robot become the nexus of the singularity? Such an interconnected future holds immense potential for solving complex global problems and enhancing human capabilities, even as it raises issues of security, privacy, regulation, and social equity.

The Robotic Future

The proliferation of machine learning algorithms and cloud computing platforms in the 2000s accelerated the integration of AI and now robotics with networking technologies. Machine learning models, trained on large datasets, could be deployed and accessed over networked systems, enabling AI-powered applications in areas such as image recognition, natural language processing, and autonomous systems. Cloud computing allows these AI models and robotic machines to be updated, maintained, and scaled efficiently, ensuring widespread access and utilization across various sectors.

The rise of the Internet of Things (IoT) in recent years has further expanded the scope of AI and robot networking at the edges of the network. AI algorithms can now be deployed on networked devices at the edge, enabling real-time data processing, analytics, and decision-making in distributed environments. This real-time capability is crucial for applications such as autonomous vehicles, smart cities, and industrial automation, where immediate responses are necessary.

To advance AI-enhanced robots, robust data networking infrastructure is essential. High-speed, low-latency networks facilitate the rapid transmission of large datasets necessary for training and operating AI and robot data models. Networking infrastructure supports the integration of AI across various devices and platforms, allowing seamless communication and data sharing.

Cloud-based computing provides the computational power required for sophisticated AI algorithms. It offers scalable resources that can handle the intensive processing demands of AI, from training complex models to deploying them at scale. Cloud platforms also enable collaborative efforts in AI research and development by providing a centralized repository for data and models, fostering innovation and continuous improvement.

The development of AI is deeply intertwined with advancements in robotics in conjunction with data networking, networking infrastructure, and cloud-based computing capabilities. These technological advancements enable the deployment of robotics in real-time applications such as healthcare, finance, and manufacturing by supporting decision-making and enhancing operational efficiency across various sectors. The continued development of AI networking is essential for the ongoing integration and expansion of robotic technologies in our daily lives.

Kaku envisions a future where technology solves major challenges such as disease, poverty, and environmental degradation. He advocates for ongoing research and innovation while remaining vigilant about potential risks and unintended consequences, and he emphasizes the importance of a gradual, symbiotic relationship between humans and technology. Kaku also highlighted the significance of Isaac Asimov’s Three Laws of Robotics, which are central to the plot of I, Robot, and praised the film for exploring these laws and their potential limitations. The Three Laws are designed to ensure that robots act safely and ethically, but the movie illustrates how they can be overridden in unexpected ways and are not to be trusted by themselves.

Citation APA (7th Edition)

Pennings, A.J. (2024, Jun 22). AI and the Rise of Networked Robotics. apennings.com https://apennings.com/technologies-of-meaning/the-value-of-science-technology-and-society-studies-sts/

Notes

[1] Javaid, S. (2024, Jan 3). Generative AI Data in 2024: Importance & 7 Methods. AIMultiple: High Tech Use Cases & Tools to Grow Your Business. https://research.aimultiple.com/generative-ai-data/#the-importance-of-private-data-in-generative-ai
[2] Kaku, M. (2011) Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. New York: Doubleday.
[3] Lee, Y.-M., & Seo, Y.-D. (2009). Vision-based SLAM in augmented/mixed reality. Korea Multimedia Society, 13(3), 13–14.

Note: Chat GPT was used for parts of this post. Multiple prompts were used and parsed.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Ode to James Larson, SUNY Korea’s First Professor Emeritus

Posted on | June 4, 2024 | No Comments

Remarks at the Congratulatory Plaque Award Ceremony for Professor Emeritus June 5, 2024

Ladies and Gentlemen.

I’m pleased to say a few words as we celebrate SUNY Korea’s first Professor Emeritus.

I came to SUNY Korea eight years ago this past February as the Associate Chair of the Department of Technology and Society, while Professor Larson took on the position of Vice President of Academic Affairs.

So, I would visit his office every weekday morning at 10 am and get these extraordinary briefings. We talked about the history of SUNY Korea and the Songdo area, including international organizations like the United Nations office for Disaster Risk Reduction (UNDRR) that we have worked with to develop a number of courses for our master’s degree.

We talked a lot about the development of the DTS program, including SUNY Korea’s first graduates, the Master’s degree students in Technological Systems Management, or TSM as we call it. Later, the first undergraduates to get their degrees from SUNY Korea would earn a Bachelor of Science in Technological Systems Management.

We had a common background in Communications Technology and Economic Development, so much of our focus was on the creation of the undergraduate specialization called ICT4D, or Information and Communications Technologies for Sustainable Development, which was initially a hybrid program with the Computer Science Department and stresses four areas: Data Science, Networking, Mobility, and Entrepreneurship.

It was also created with the immense help of the late Dave Ferguson, the DTS chair and professor at Stony Brook, who was a regular visitor here in Songdo.

Probably the most important of our discussions was about the history of Korea and its ICT development, particularly Oh Myung’s role in Korea’s digital transformation. This collaboration with Dr. Oh, who got his PhD from Stony Brook in Electrical Engineering, led to many of Professor Larson’s publications, including Digital Development in Korea: Lessons for a Sustainable World that he co-authored with Dr. Oh in 2020.

So, to wrap up, let me just say that Professor Larson has a lot of knowledge about Korea; he has a strong passion for Korea and a strong passion for sharing its story with the world. That is why I’m particularly pleased that he has the platform of Professor Emeritus, so he can continue to research and share his knowledge about Korea’s ongoing digital and thus, social transformation. With the world, and with Korea.

Congratulations Professor Larson.

Citation APA (7th Edition)

Pennings, A.J. (2024, Jun 4) Ode to James Larson, SUNY Korea’s First Professor Emeritus. apennings.com https://apennings.com/sustainable-development/ode-to-james-larson-suny-koreas-first-professor-emeritus/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

Analyzing the Market Structure of a Product

Posted on | May 20, 2024 | No Comments

These are class notes for my Engineering Economics class for their final assignments. Use the citation below.

What is a monopoly? What is an oligopoly? Or even more confusing – what is an oligopsony? These are terms used to describe the state of competition among firms buying or selling similar or related products. Firms seek to find an advantage to distinguish themselves from the competition when offering a specific set of products. But as we saw in a previous post, economic products themselves have certain characteristics that influence their selling conditions.

Here are some examples:

Perfect competition – many buyers and sellers, with none able to influence the price of a product
Oligopoly – several large sellers that have considerable control over the price of a product
Monopoly – one seller with considerable control over the supply and price of a product
Monopsony – one buyer with considerable control over the demand and price of a product
Oligopsony – several large buyers have considerable control over the purchase price of a product.

Market structure has become a key focus of strategic thinking in modern firms. It refers to the environment for selling or buying a product or product series and influences key decisions about investments in production, people, and promotion. It is shaped by technological innovations, government regulations, customer behaviors, and costs. Market structure has an impact on the conduct of a firm and can influence its economic success.

Market structure is primarily about the state of competition for a product and how many rivals a company will have to deal with when introducing it. How easy is it to enter that market? Will the product be successful based on current designs and plans for it or will the product need to be changed? How will the product be priced?

How competitive are digital and tech environments? Due to technological innovation and globalization, competitive opportunities and restrictions are under scrutiny. The Internet and its World Wide Web (WWW) have introduced exciting new dynamics that have been the subject of major research studies. A surge of platforms into the digital environment with “Web 3.0” introduced disruptive features as e-commerce expanded beyond “dot-com” B2C and B2B connections to AI and blockchain.

Market Type and Number of Sellers

The concept of market structure has not only influenced microeconomics but also provided essential tools for managers.

This post examines different states of competition among firms supplying digital goods and services. It will look at the number of firms supplying a product and the importance of differentiation between products offered. An important factor is the barriers to entry (or competitive advantages) into the market for a particular product. Barriers to entry can help a digital media firm establish and hold market presence for its product and will be discussed at length in other posts.

Monopolies

Most people are familiar with the idea of a monopoly. It refers to one company with considerable control over the supply and price of a product. For a long time, AT&T had a monopoly over the telephone system in the US, supplying a black rotary phone that could connect to nearly every phone in the country. Some electric utility companies, like HECO in Hawaii, also have monopolies. Usually, some government involvement is needed to maintain a monopoly. The term “natural monopoly” refers to a firm that can serve the entire market demand for a product at a lower cost than a combination of two or more smaller, more specialized firms.

A topic that will be discussed in more detail below is the situation in which an organization has strong buying power. These firms are called monopsonies.

Many companies do not control 100 percent of their market. Google controls some 75% of the global web search market, with Bing serving some 8% and Yahoo! around 5%. Baidu has about 7%, although it is dominant in the Chinese-language market, where it also benefits from government protection. Facebook is dominant in social media, with a considerable lead over Google+, which has basically ceded the friend-to-friend (F2F) social media market to the search engine giant.
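One way to quantify these observations is with standard concentration measures, as in the sketch below. It uses the rough search-engine shares cited above (not current market data) to compute the four-firm concentration ratio and the Herfindahl-Hirschman Index; the measures themselves are textbook additions, not from the original discussion.

```python
# Sketch: quantifying market structure with two standard concentration
# measures, using the post's approximate search-engine shares.
shares = {"Google": 75.0, "Bing": 8.0, "Baidu": 7.0, "Yahoo!": 5.0}

cr4 = sum(sorted(shares.values(), reverse=True)[:4])   # four-firm concentration ratio (%)
hhi = sum(s ** 2 for s in shares.values())             # Herfindahl-Hirschman Index

print(f"CR4 = {cr4:.0f}%  (well above the ~40% often used to mark an oligopoly)")
print(f"HHI = {hhi:.0f}   (above 2,500 is generally considered highly concentrated)")
```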

Sometimes you will hear the term “duopoly” to refer to a situation where two companies dominate a market: Coke and Pepsi for cola drinks, Airbus and Boeing for commercial aircraft, Visa and Mastercard for credit card authorization, Apple’s iOS and Google’s Android for mobile operating systems, and Apple and Microsoft for personal computer operating systems. These pairings are more accurately described as a form of oligopoly.

Oligopolies

A more useful term is oligopoly. This is a condition in which several large sellers have considerable control over the price of a product. Mobile services are a good example: AT&T, T-Mobile, and Verizon provide almost all the wireless services in US markets. The bar is a little lower, as this type of market structure exists when a small number of firms control more than 40% of the market.

Media companies have generally structured themselves this way. BMG, EMI, Universal Music, and Warner have been the traditional powerhouses, although digital technologies continue to disrupt the music industry. Disney, CBS, Time Warner, NBC Universal, Viacom, and Fox News Corporation dominate the mediasphere and are considered an oligopoly. Over-the-top (OTT) media industries are a bit more competitive and are often considered monopolistic competition due to the many new entrants, product differentiation (different types of content and user interfaces), relatively low barriers to market entry, and market power derived from customer captivity and some leeway over pricing.

Monopolistic Competition

Despite its confusing name, this category has quite a bit of competition. In a monopolistically competitive market structure, firms achieve differentiation through various means, including product features, quality, branding, customer service, and marketing strategies. This differentiation allows companies to attract specific customer segments, build brand loyalty, and exert some control over pricing, despite the presence of many competitors. This environment encourages innovation and provides consumers with a diverse range of choices.

Think restaurants. Monopolistic competition is a form of imperfect competition in which many producers sell products that are differentiated from one another (e.g., by branding or quality) and hence are not perfect substitutes. Each firm attempts to make its product unique, and thus to hold a small monopoly for that unique product; other products are just not the same, and consequently different.

Perfect competition

Perfect competition is a theoretical ideal that provides a useful benchmark for understanding market dynamics. While no market perfectly fits all criteria, agricultural products, commodities, and certain financial markets come closest.

These markets feature numerous small producers, homogeneous products, and prices determined by overall supply and demand rather than individual firms’ actions. Understanding the characteristics of perfect competition helps in analyzing how real-world markets function and where they diverge from the ideal.

Perfect competition can emerge when a very large number of firms produce and distribute a homogeneous product. When I lived in New York City, I enjoyed going to the farmers’ market at Union Square. It was a pretty good example of perfect competition.

Market structure analysis can give us insights into profitability, consumer price levels, innovation and research spending, as well as productivity levels. The key factors discussed in this type of analysis are the number of firms supplying a product, the levels of differentiation between products, and the competitive advantages a company has to set up barriers to entry for other companies coming into the market.

Citation APA (7th Edition)

Pennings, A.J. (2024, May 21). Analyzing the Market Structure of a Product. apennings.com https://apennings.com/dystopian-economies/analyzing-the-market-structure-of-a-product/


Note: Chat GPT was used for parts of this post but most came from my writings for the manuscript Digital Economies and Sustainable Strategies.

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching engineering and financial economics as well as ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and comparative political economy. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Determining Competitive Advantages for Tech Firms, Part 2

Posted on | May 15, 2024 | No Comments

In a previous post on competitive advantages, I discussed some structural characteristics for digital media firms. Using the framework laid out in Curse of the Mogul: What’s Wrong with the World’s Leading Media Companies as a point of departure, I was able to extend their analysis of traditional media companies to the more dynamic realms of digital tech firms.

For digital tech companies to thrive, it’s crucial to grasp the strategic significance of fortifying barriers to entry. This understanding not only solidifies their positions but also paves the way for profitability. In the competitive landscape, it’s vital to comprehend how companies can fend off potential threats from others eyeing their market share. In this post, I delve into the analysis of competitive advantages, broadening the scope to encompass the dynamic world of “tech” companies.

The authors critiqued media moguls for not paying adequate attention to four general categories of competitive advantages: economies of scale, customer captivity, cost, and government protection. Previously, I covered economies of scale and customer captivity. I paid particular attention to network effects, one of the tech firms’ most critical determiners of success. Customer captivity in terms of habits, search costs, and switching costs are also important determinants of success for companies dealing with digital applications, media programming, and physical products.

In this post, I focus on innovation, cost, and government protection. Tech companies need to proactively develop and protect new technologies as well as instill a culture of rapid learning and implementation. They also need access to vital resources, whether raw minerals or refined human knowledge and skills. Lastly, government support can help a firm develop a competitive advantage.

Innovation involves developing, utilizing, and protecting technologies, implementing a climate of learning, and applying new knowledge to fundamental production and work processes. While the book puts these under the category of cost, I thought it might be more beneficial to examine these processes through the lens of innovation. This rationale is partially due to the changes in GDP measurement that now include many aspects of research and development – as well as media production – as capital expenditures and not expenses.

Tech and digital media firms need to develop key proprietary technologies that they can use and protect. This process increasingly involves software enhancements to core production techniques and digital innovations such as recommendation engines and other “big data” solutions, including new developments in AI.

Guarding the firm against cyber-espionage and techniques like reverse engineering has also become a high priority. By disassembling and studying competitors’ hardware or software products, companies can uncover design secrets, algorithms, and proprietary technologies. When the startup Compaq reverse-engineered IBM’s BIOS, it destroyed Big Blue’s major advantages in the personal computer (PC) industry, allowing many companies to run software designed for the IBM PC on other PCs with Microsoft’s operating system.

Utilizing intellectual property protections such as copyrights, trademarks, and patents, including business method patents, can provide legal protection for a product and guard against encroaching companies. Patents, for example, give the owner the exclusive use of a technology for 14–20 years.

Tech firms should strive for constant improvements in production and efficiencies to separate themselves from the “pack” through organizational learning. They should also be cognizant of the opportunities inherent in disruptive innovations that may initially offer poorer performance, but that may improve or reach new audiences over time.[2] Disruptive innovations can redefine market leadership, create new value propositions, alter industry standards, impact business models, encourage agile strategies, and increase competitive pressure. Companies that can anticipate, adapt to, and leverage these innovations are better positioned to maintain and enhance their competitive advantages.

As digital media and tech companies traffic in various types of communication and content, it is crucial that they find new ways to produce, package and monetize media. The authors are wary of business models based on content “hits” and stress instead the importance of producing continuous media and a “long tail” of legacy content. The long tail refers to unique items that may individually have low demand but can generate significant cumulative market interest or web traffic. This may require innovations in digital media production, programming, and ways to utilize user-generated content. By acquiring and offering a vast library of legacy media content, streaming platforms like Amazon Prime, Hulu, and Netflix can attract a wide range of subscribers, including niche audiences who are fans of older or less mainstream content that might not be available on competing platforms.

Cost issues involve ensuring access to essential resources or what economists call “factors of production” (land, labor, capital, entrepreneurship). These might be cheap energy and other natural resources, talented labor, sources of investment as well as expertise in startups. Google’s Finland data center and the Green Mountain Data Center in Norway are good examples of attempts to use the cold waters in those areas to cool thousands of servers and reduce energy costs.

Raw materials are critical for the high-tech sectors and are threatened by geopolitical factors. Rare earth elements (REEs) are especially critical in the manufacture of various high-tech products, renewable energy technologies, and defense systems. Products like EVs, headphones, smartphones, and windmills rely on a number of raw minerals, including indium, niobium, platinum, and titanium. Indium, for instance, is used in touchscreens and liquid crystal displays and in the manufacture of microprocessors. Africa and China have been major suppliers of critical raw materials for the high-tech sector, but Australia, the US, and places like Greenland are increasing production. Ukraine and Russia used to collaborate on the production of neon, a major factor in lasers and semiconductor photolithography, but lately South Korea has successfully sourced locally produced neon.

Access to skilled labor and a climate of intellectual discussion are also important factors to consider. Richard Florida’s thesis that working talent congregates around creative clusters is instructive: “To develop economically, Florida encourages nations and regions to support their universities, particularly faculties that do science and technology; cultivate new industries that capitalize on creativity; prepare people for a creative global economy, and foster openness and tolerance to attract the creative class.”[3]

Government protection can also impart benefits to a tech business or be a deterrent to its competitors.[4] From the perspective of an individual firm, it can benefit from outright subsidies, grants, or guaranteed loans. The National Telecommunications and Information Administration (NTIA) is among the most supportive US agencies for digital enterprises, while the Small Business Administration (SBA) provides investment capital and loans.

Preferential purchase policies can give companies an edge. Governments often list specific advantages they are willing to provide smaller to medium-sized enterprises (SMEs), especially those related to specific sustainability, or gender/minority diversification programs. Often, these are advertised as support for specific products or services.

Exclusive licenses have been a historical reality in the media business, primarily due to the importance of a scarce resource – the electromagnetic spectrum. This key media resource has gone primarily to television and radio operators, but the interest in mobile services and Wi-Fi has opened up new frequencies for use. When we created PenBC (Pennings Broadcasting Corp. – seriously), the prime asset was the FCC license for microwave transmission from the satellite dishes to high rise buildings throughout Honolulu.

The 2015 FCC auction of low-frequency spectrum was interesting to watch as incumbents AT&T and Verizon fought off other mobile carriers such as T-Mobile and satellite TV provider Dish Network, which had garnered US Justice Department support to achieve a more level playing field. Verizon was the only wireless operator to win a nationwide license in the 700MHz auction in 2008. The new spectrum it won with US$20 billion in the 2015 auction allowed it to offer faster speeds on its 4G LTE network, so customers could do more bandwidth-intensive activities like watching video on their smartphones and tablets.

A government may also erect barriers to entry in favor of domestic industries to support local media content and tech industries. It may utilize import tariffs and/or quotas, such as President Biden’s extension of Trump’s tariffs on China and the more recent ones on EVs and semiconductors.

Regulations, whether environmental, safety-related, procedural, or otherwise, can significantly impact organizations. They often impose stricter burdens on some companies than others. These regulations are typically drafted by specific companies or related trade associations, often with the assistance of former government agency employees. They may advocate for government administrative support or legislation, and their authors often recommend the use of effective lobbying strategies.

In “Determining Competitive Advantages for Digital Media Firms, Part 1,” I discussed barriers to entry related to economies of scale, such as fixed and marginal costs, as well as network effects. I also discussed how different forms of customer captivity can be beneficial for tech firms. Above, I looked at innovation, cost, and government regulation. It is also important to understand that two or more competitive advantages may be operating at the same time. Recognizing the potential of reinforcing multiple barriers to entry and planning strategies that involve several competitive advantages will increase a company’s odds of success.

Citation APA (7th Edition)

Pennings, A.J. (2024, May 15). Determining Competitive Advantages for Tech Companies, Part 2. apennings.com https://apennings.com/digital-media-economics/determining-competitive-advantages-for-tech-firms-part-2/

Notes

[1] Jonathan A. Knee, Bruce C. Greenwald, and Ava Seave, The Curse of the Mogul: What’s Wrong with the World’s Leading Media Companies. 2014.
[2] Christensen, Clayton M. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School, 1997.
[3] Pennings, A.J. (2011, April 30). Florida’s Creative Class Thesis and the Global Economy. apennings.com https://apennings.com/meaningful_play/floridas-creative-class-thesis-and-the-global-economy/
[4] The history of early digital innovation and development is a case study in government involvement. IBM got its start with the national census and social security tabulation. The microprocessor and the PC industry emerged through the Space Race and MAD (Mutually Assured Destruction) and the Internet can be said to have taken off after the Strategic Defense Initiative or “Star Wars” required supercomputers at different universities to use the NSFNET. National defense/security spending and other policies can help a company shore up its own defenses against competition.




Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edward’s University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

The Division of Labor in Democratic Political Economies

Posted on | April 12, 2024 | No Comments

In this post, I examine some of the structural characteristics that make the success of the economy a priority for government leadership in democratic political economies (DPEs). DPEs vary, but they are generally republics in which intermediating politicians represent the populace in managing governments and administering public responsibilities. The post expands on the notion that a division of labor has emerged in DPEs and examines the structural pressures that drive both the public and private sectors towards a common objective – economic success – despite differing approaches and competencies.[1]

Dividing the Labor to Ensure a Strong Economy

Neither the private nor the public sector can ensure successful economic growth alone, but by recognizing this division of labor, DPEs can channel government and corporations toward mutually reinforcing successes. Attention to this division of labor and the structural properties that guide each sector can help achieve significant economic gains. Governments can work to create enabling political economy frameworks (like the Global Information Infrastructure/Internet) that are beyond the scope of private enterprises, yet significantly enhance economic opportunities.[2]

Companies drive economic activity by investing in potential profit-making activities, while governments strive to provide enabling frameworks for economic prosperity. The corporation has emerged in modern times with a legally shaped fiduciary duty to maximize shareholder value through return on investment (ROI). This legal stance tends to marginalize “ESG” (Environmental, Social, and Governance) concerns, including labor concerns such as fair wages, equal opportunity, sufficient benefits, and adherence to labor laws. The influence of ESG on investor decision-making continues to grow, including pressure to reduce environmental “externalities,” the costs paid by third parties when a product or service destroys or pollutes air, land, or water.


Democratically elected governments want to organize infrastructure, legal systems, and services to create economic value for voters and maintain political power for themselves and their party. Failure to enable and entice investment and produce economic success within a political boundary can raise significant difficulties for a government and its internal populace. Unstable economies can experience rapid de-investments due to the mobility of capital.

Globalization of commerce and finance since the 1970s has created new forms of competition and mobility for capital. This trend has challenged the economic base of national and local governments as they compete with each other to attract fluid multinational capital. Tax cuts facilitated US capital flows into China and other low-cost producers, reducing inflation but also jobs and infrastructure investments. At stake are jobs and investment returns.

While capitalists are often quite capable of success at the microeconomic level, they are not in a position to manage the economy as a whole. Towards procuring that success, corporations lobby governments and conduct other activities to influence government actions that will help their companies and industry.

Entrepreneurs and other people in business and professional services tend to be highly focused on their own profitability while spending only limited resources on community and civic affairs. Market activities are competitive, and barriers to entry are transitory. Private activity alone is insufficient to maintain parks, libraries, roads, and other public goods that enhance the quality of life. And yet, these public goods are often responsible for attracting the capital and talent needed for innovation and competitiveness.

As a result, democratic political economies tend to divide the responsibilities for modern economic life. Corporations focus on commercial and financial success. Governments provide, among other things, a judicial system to protect contracts, educational support to train workers, and administrative support to protect the populace from pollution and other dangers. Each shares an interest in robust commercial activities, albeit for differing reasons.

Perhaps most important is a monetary system that facilitates transactions and maintains price stability. DPEs primarily use a fractional reserve banking system that creates money through debt. This is capitalism’s “pedal to the metal” engine, producing what economists like Joseph Schumpeter and Werner Sombart called “creative destruction.” Modern Monetary Theory (MMT) has effectively argued that currency issuers like national governments play a crucial role in wealth production by supplying much-needed money and debt instruments. Governments spend money into the economy so companies and consumers have the liquidity to produce and consume.
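As a rough illustration of how lending under fractional reserves multiplies deposits, here is a textbook-style sketch in Python. It is a simplification that MMT and endogenous-money economists would qualify, and the figures are hypothetical.

    # Textbook money-multiplier illustration (hypothetical figures).
    initial_deposit = 1_000.0
    reserve_ratio = 0.10              # banks hold 10% of each deposit as reserves

    total_deposits, deposit = 0.0, initial_deposit
    for _ in range(200):              # iterate the deposit -> loan -> redeposit cycle
        total_deposits += deposit
        deposit *= (1 - reserve_ratio)    # the lent fraction is assumed to be redeposited

    print(round(total_deposits))      # approaches initial_deposit / reserve_ratio = 10,000

In this stylized view, a 10 percent reserve ratio lets an initial $1,000 deposit support roughly $10,000 in total deposits. MMT describes the causality differently, but the sketch shows why debt issuance and money creation are so tightly linked.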

When it comes to ensuring a successful and prosperous political economy, democratic societies have certain structural conditions that guide the emergence of their particular form of capitalism. Within limits, the political economy can take a variety of forms, from highly exploitative and accumulation-oriented oligarchies to, at the other end of the scale, highly redistributive societies. Effective development strives for high-integration strategies that balance accumulation and distribution.[3]

Neither the public nor the private sector in modern democratic societies has sufficient managerial or policy competencies to ensure a thriving economy. Yet both rely on a vigorous economy for their success, and each needs that success to satisfy its respective electoral or fiduciary constituencies. Despite the division and differing reasons, the goal is the same: a vibrant economy that will ensure both private profits and political triumph.

Governments look to the fruits of a growing economy to offset spending for debt interest, defense, and other services, including welfare. They aim to maintain a happy populace that will keep them in office. They want a prosperous economy to keep people employed, keep share prices high, and keep investment flowing into productive activities that will keep people feeling economically secure and provide tax revenues.

The private sector, in general, is unable to ensure overall capitalistic growth on its own. It lacks sufficient organizational capacity to ensure success at the macroeconomic level. That does not mean the private sector cannot infiltrate governance and the policy sphere. Donald Regan, the former CEO of Merrill Lynch, played a significant role in shaping the economic policies of the Reagan administration. As Secretary of the Treasury and Chief of Staff, he helped define and implement “Reaganomics,” emphasizing tax cuts, deregulation, and tight monetary policy. Along with Citicorp CEO Walter Wriston and others, they shaped a global framework based on capital mobility, fiat money, and credit markets. Still, it was not their roles as heads of major financial institutions but their participation in the US political administration that shaped a high accumulation, low distribution DPE with national and global implications.

As noted above, corporations are often quite capable of success at the microeconomic level, but they are not in a position to manage the economy as a whole. The private sector wants growth and profits as well, and corporations strive to fulfill their primary fiduciary responsibility – maintaining high profits for owners and shareholders. Towards procuring that success, they lobby governments and conduct other activities to influence government actions that will help their companies and industries. However, while these attempts may help individual companies or industries, they are insufficient to ensure the success of capitalism as a whole.

The Republic’s Interest in the Economy

The first of the major structural mechanisms that Fred Block proposed explains why government officials pursue policies that are in the general interest of capitalism. In his view, government officials are, to some extent, dependent on a level of economic activity that 1) allows the state to finance itself through taxation or borrowing and 2) maintains popular support among the voting citizenry. Significant business investment, high employment levels, and minimal government competition for surplus capital are the most common strategies for ensuring high tax receipts while keeping the voting public relatively content.[4]

Governments require a monetary base to help fund their activities, whether meeting the bureaucracy’s payroll, building infrastructure, or funding defense activities, munitions, and personnel. According to MMT, governments also provide a monetary system to standardize the currency used in the collection of taxes. MMT argues that national governments are currency issuers that create wealth when they legislate money into existence. In this view, taxes do not provide revenues for government spending; they act as a regulatory mechanism to limit inflation from consumer and investment spending and to motivate official economic activities that use the prescribed currency.

In the US, both Democrats and Republicans have spent liberally. The “Double Santa Claus” argument was put forward by Wall Street Journal editorial writer Jude Wanniski in 1976. In “Taxes and the Two Santa Claus Theory,” he argued that the Democrats should be the spending “Santa Claus” and redistribute wealth while the Republicans should be the tax reduction “Santa Claus” and help spur income growth. The Reagan administration institutionalized this approach with increased spending on social programs such as Medicare, Social Security, and food assistance programs like the Supplemental Nutrition Assistance Program (SNAP). Military spending increased dramatically, including investments in new weapon systems, most notably the Strategic Defense Initiative (SDI), commonly known as “Star Wars.” SDI proposed a space-based missile defense system designed to protect the United States from potential nuclear missile attacks and inadvertently laid the foundation for the Internet. Meanwhile, the administration drastically cut taxes with the Economic Recovery Tax Act of 1981 and the Tax Reform Act of 1986.[5]

Tax policies affect people and groups differently, advantaging some and disadvantaging others. In the process, they make specific governmental trajectories possible. DPEs generally tax a combination of capital gains, income, and sales of goods and services. Inheritance taxes, for example, are meant not only to collect revenues but also to impose a cost on the transfer of wealth and limit familial privilege and class divisions. The makeup of these tax policy decisions helps dictate an economic direction, so taxation policies should focus on what policymakers want to diminish or limit.

Administrations also produce debt instruments that help offset government spending. In the global digital financial economy, government borrowing increasingly funds a significant share of education, healthcare, military, and research spending.

Taxation and borrowing offset these spending activities and programs and help ensure a robust commercial sphere. Excess spending in the US is limited legislatively, in part a legacy of the Reagan administration’s major changes to the financial sphere.

These instruments also provide safe collateral and an important hedge for the financial sectors. The US dollar is also produced offshore as a global currency called the “Eurodollar.” International banks create this version of the US dollar through lending, and it is not regulated by the US administration. Since over 80% of global trade is facilitated by the US dollar, Eurodollars bring important liquidity to international trade. But these banks often require high-quality collateral, like US Treasury bonds or blue-chip corporate debt, to ease any hesitancy to lend.

The global trading environment is complex and requires constant trading in various financial instruments. Government debt allows traders to increase their trading activities by allowing them to hold government securities in their portfolios as a hedge against other speculative losses. Government bonds are also traded constantly in high-frequency markets for arbitrage opportunities, debt rollover, income opportunities, and as a store of potential liquidity.

Common economic doctrine argues that governments compete with the private sector for capital. In reality, however, government spending expands the commercial and financial spheres by enlarging the trading environment, facilitating transactions, and providing instruments for risk reduction. These expenditures are why the US dollar has become the dominant global reserve and transaction currency. The volumes needed are huge, and the US has been willing to run fiscal and trade deficits to provide the currency to the world.

Elected officials also need to keep the voting populace materially happy to stay in office. Economic indicators play a vital role in the public’s perception of the economy. These indexes provide numerical representations of various states of the economy, from consumer confidence to price levels and the latest unemployment rates. In an age when pensions and retirement accounts are invested in the financial markets, the public also follows such indicators as the Dow Jones Industrial Average (DJIA) and NASDAQ to gauge their personal wealth. Many older voters see policies that increase corporate wealth, such as tax cuts, as more valuable than government expenditures on food stamps or other forms of personal welfare as they increase stock prices for mutual funds and retirement accounts.

Significant structural relationships make the business of the economy the business of government. For one, modern democratic governments have significant fiscal determinants that compel them to establish a major stake in the economy. Voters expect sufficient government services from the military and regulatory agencies, as well as some degree of welfare support for the disadvantaged. These desires are tempered by the “taxpayer’s money” myth, which holds that government must tax voters before public money can be spent. But governments are “currency issuers” that tax and borrow for reasons beyond simply obtaining the financing needed to run the government, provide for the national defense, monitor the economy, and conduct special programs.

Influence Channels and Cultural Constraints

The business class is acutely aware of the effect government has on its interests and works toward shaping that influence, whether by depressing the minimum wage, alleviating environmental restrictions, or shaping tax policy. Many critics of democratic political economies argue that such influence gives capital sufficient control over the state. For Block, however, this direct influence is only the “icing on the cake.” Other structural factors are at work and need to be considered.

Two “subsidiary structural mechanisms,” according to Fred Block, are also important in shaping the actions of public administrators towards enhancing economic growth: influence channels and cultural hegemony.

The first of the subsidiary structural mechanisms is the set of influence channels. The private sector can exert significant pressure on the state through its ability to influence politicians, especially in a media age requiring significant expenditures on TV and other media for advertising. The aims of this influence have generally been oriented towards the procurement of government contracts, favorable economic legislation, tax cuts, regulatory relief, labor control, and specific spending in certain areas. The channels themselves are most often campaign contributions, lobbying activities, and other favors.

Undoubtedly, bribery, coercion, and the revolving door into higher-paying jobs may be factors that influence policy actions. However, this does not discount the larger structural factors at work, particularly the high costs of elections and of procuring media buys for competitive campaigns and public relations. These have tied government officials to the influence of economic concerns.

Cultural hegemony is the second subsidiary structural mechanism. Unwritten rules infiltrate democratic political economies and tend to indicate what is, and what is not, acceptable state activity. “While these rules change over time, a government that violates the unwritten rules of a particular period would stand to lose a great deal of its popular support. This acts as a powerful constraint in discouraging certain types of state action that might conflict with the interests of capital.”[6]

A contemporary example is the cultural divide over immigration. Issues related to race, including systemic racism, police brutality, racial inequality, immigration policy, and affirmative action, continue to be sources of contention and polarization in American society. Several major cultural divide issues have become prominent in political discourse, such as fundamental values, beliefs, and identities. “Culture wars” over social and cultural issues such as abortion, LGBTQ+ rights, same-sex marriage, religious freedom, and gender identity are particularly important in the age of social media and shape public opinion, electoral dynamics, policy debates, and social movements.

One potent issue is climate change. President Trump withdrew the US from the Paris Climate Accords because of a growing cultural backlash against concerns about climate pollution influencing weather effects worldwide. Many of his “Make America Great Again” (MAGA) members were convinced that such actions would be too expensive, hurt economic progress, and threaten a lifestyle centered on oil-based products, technologies, and transportation. Others refused to believe the scientific discourse and labeled it “elite” science. But mostly, vital interests in petrochemical-related industries drive the discussion on climate change through media practices such as astroturfing to avoid a significant “carbon bubble” collapse. For the most part, liberal progressive movements have embraced sustainable technologies and renewable energies such as hybrid cars, solar panels, and low-carbon food systems.

Summary

While sharing broad common objectives for a robust political economy, the government and the private corporate sectors have differing motivations and strategies for reaching these aims. Despite the division and differing reasons, the goal is the same: a robust economy that will ensure both profits and political success. Neither can, by itself, ensure successful economic growth, but by recognizing this division of labor and the structural properties that guide each sector, democratic political economies can guide government policies and corporations toward mutually reinforcing successes.[7]

Citation APA (7th Edition)

Pennings, A.J. (2024, Apr 12). The Division of Labor in Democratic Political Economies. apennings.com https://apennings.com/democratic-political-economies/the-division-of-labor-in-democratic-political-economies//

Notes

[1] When I was in graduate school studying public administration and political economy, one of the authors who interested me was the sociologist Fred Block. In debates with instrumentalists about “ruling classes,” he delineated the set of structural mechanisms that I primarily use here to determine the relationship between governments and the private sector in modern political economies. In this Jacobin article he provides a 2020 epilogue on his classic work.

[2] An interesting situation about enabling frameworks emerged with President Obama’s “You didn’t build that” statement during the 2012 presidential election campaign.

    “If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business — you didn’t build that. Somebody else made that happen. The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet.”

The statement quickly received criticism from Governor Romney, a successful businessman, and others as an example of government encroachment on the private sector. The criticism echoed a similar critique of Vice-President Al Gore’s “I took the initiative to create the Internet.” Certainly, the Internet has progressed to be a major medium of global commerce due to entrepreneurial initiatives and accomplishments. However, much of the initial research and development, as well as the policy framework, was created by a wide range of government actions that transformed what was essentially military technology into commercial products and services.

[3] Tehranian, Majid. (1990). Technologies of Power: Information Machines and Democratic Prospects. Foreword by Johan Galtung. Norwood, NJ: Ablex Publishing, p. 184.

[4] This is basically a rewrite of my 2018 post, written after Trump was elected president. It started with a discussion of whether a president with business experience is more important than one with a good understanding of administration and politics. Fred Block’s work was particularly useful, and much of the idea of a structural division of labor here is based on his work, including the quote on p. 14.

[5] In the US, the “double Santa Claus” argument was set forward in “Taxes and the Two Santa Claus Theory” by Wall Street Journal editorial writer Jude Wanniski. He argued that the Democrats should be the spending “Santa Claus” and redistribute wealth while the Republicans should be the tax reduction “Santa Claus” and help spur income growth.

[6] See note [4] above; the quoted passage is from Block, p. 14.

[7] This blog is dedicated to my brother, Richard Pennings, who died on April 12, far too young.




Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he was on the faculty of New York University, where he taught digital economics and comparative political economy. He also taught at St. Edward’s University in Austin, Texas, Marist College in New York, and Victoria University in Wellington, New Zealand. He has also been a Fellow at the East-West Center in Hawaii.

Digital Disruption in the Film Industry – Gains and Losses – Part 3: Digital FX Emerges

Posted on | March 17, 2024 | No Comments

“To succeed predictably, disruptors must be good theorists.” – Clayton Christensen

I had a chance to attend a special showing of The Wrath of Khan (1982), the second Star Trek movie, with my daughter a few years ago at the University of Texas in Austin. It included a live appearance by William Shatner, who starred as the famous Captain Kirk in the movie as well as the original series. Shatner told the story of how Paramount executives were jealous of the success of Star Wars (1977) and how that led to the resurgence of the Star Trek franchise and, incidentally, the first use of digital special effects in a movie.

This post discusses the beginning of the digital or computer-generated imagery (CGI) revolution. Previously, I wrote about the emergence of the digital camera and the digital disruption caused by non-linear digital editing. Incidentally, I happened to be one of the first academics to teach non-linear editing, when the University of Hawaii obtained the first Avid.

It seems appropriate that Star Trek would make both film and computer history. Its first attempt, Star Trek: The Motion Picture (1979), was moderately successful but very expensive due to its grandiose sets. The second movie was given over to Paramount’s television studios, which tightened the script and economized on the sets. They also hired George Lucas’ Industrial Light and Magic (ILM) to produce some of the effects. ILM created an entirely computer-generated sequence when it demonstrated the effects of the Genesis Device on a barren planet in what became The Wrath of Khan.

But was it the first? Or was it Westworld (1973)? Going back in history, another case emerges that might lay claim to the first digital scene.

But first some background on the move from analog film to digital visual media. Previously, most special effects in films were done by artists using various analog methods. Animation was mainly drawn by hand, frame by frame. Even another futuristic 1982 movie, Tron, displayed results that were stunning for the time, but they were painstakingly done frame by frame.

The origin story for digital FX goes back to 1964, when NASA was directing the first flyby of Mars. NASA was working with its Jet Propulsion Laboratory (JPL) to develop an imaging system for Mariner 4. They needed to code the shading of 40,000 dots to construct the first image of Mars. The numbers were sent back to Earth from the spacecraft, and the first image was actually colored in by hand, based on the digital values. In all, some 240,000 bits were transmitted as a series of numbers and assembled into the image of the Martian surface.
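Working only from the figures quoted above (40,000 dots and roughly 240,000 bits), a quick calculation shows what each dot could encode. The sketch below is my own arithmetic, not NASA documentation.

    # Inferring the per-dot encoding from the article's own numbers.
    dots = 40_000
    total_bits = 240_000
    bits_per_dot = total_bits // dots        # 6 bits per picture element
    shades = 2 ** bits_per_dot               # 64 possible brightness levels
    print(bits_per_dot, shades)

Six bits per picture element allows 64 brightness levels, which is what the “paint by hand” team was effectively shading in.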

John Whitney Jr. wrote in American Cinematographer (November 1973) that Brent Sellstrom struggled with the problem of representing a robot’s point of view (POV) on film. The script of Westworld called for a way to show how the evil robot cowboy, played by bald 70s icon Yul Brynner, saw the world. The post-production supervisor for Westworld had to find a way to put the audience’s viewpoint into the head and eyes of the evil robot – the way the mechanical device was seeing the world. The POV shot takes the audience into a character’s head to give them a first-person, or subjective, experience.[1]

Sellstrom suspected that JPL’s digital scanning methods might be used to construct the robot’s point of view in Westworld. JPL estimated that two minutes of animation would take nine months and cost $200,000. This price was way over budget, so the production hired another company, Information International, Inc., to scan footage of the robot’s POV and convert it to numerical data with techniques similar to the ones developed at JPL. It used a series of 3,600 rectangles. They had to make sure that the actors’ clothes contrasted with other items on the set. It took a minute for each frame and eight hours of processing for 10 seconds of film footage. The scene provided the needed POV shot that brought the audience into the robot’s experience, and the movie went on to be a major hit. In 1976, a sequel called Futureworld scanned and animated the head of its star, Peter Fonda, for the first appearance of 3D computer graphics in a movie. Obviously, a precursor to Max Headroom.[2]
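For readers who want a feel for the effect, the block-mosaic look described above can be approximated today in a few lines of NumPy. This is a modern sketch for illustration, not the Information International process; the frame here is synthetic, and the 60-by-60 grid simply matches the 3,600 rectangles mentioned in the article.

    import numpy as np

    def mosaic(frame: np.ndarray, blocks_x: int = 60, blocks_y: int = 60) -> np.ndarray:
        """Quantize a grayscale frame into coarse rectangles of averaged brightness."""
        h, w = frame.shape
        out = np.empty_like(frame)
        ys = np.linspace(0, h, blocks_y + 1, dtype=int)
        xs = np.linspace(0, w, blocks_x + 1, dtype=int)
        for y0, y1 in zip(ys[:-1], ys[1:]):
            for x0, x1 in zip(xs[:-1], xs[1:]):
                out[y0:y1, x0:x1] = frame[y0:y1, x0:x1].mean()
        return out

    # Example: a synthetic 480x640 frame reduced to a 60x60 grid (3,600 rectangles).
    frame = np.random.rand(480, 640)
    robot_pov = mosaic(frame)

What took hours of processing per shot in 1973 is now effectively instantaneous, which is part of the disruption story told in this series.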

Throughout the 1990s, advancements in computer hardware and software, particularly in rendering and animation technologies, enabled more realistic and sophisticated digital effects. Films like Jurassic Park (1993) and Terminator 2: Judgment Day (1991) showcased groundbreaking CGI that blurred the line between reality and computer-generated imagery. The rise of dedicated visual effects studios, such as Digital Domain, Industrial Light & Magic (ILM), Pixar, and Weta Digital, played a crucial role in driving innovation in digital FX. These studios employed teams of talented artists, technicians, and engineers to push the boundaries of what was possible with digital technology.

Filmmakers began integrating live-action footage with CGI elements seamlessly, allowing for the creation of fantastical worlds, creatures, and visual sequences. Films like The Matrix (1999) and The Lord of the Rings trilogy (2001-2003) pushed the boundaries of digital FX, setting new standards for realism and spectacle. The development of digital character animation techniques, exemplified by films like Toy Story (1995) and Shrek (2001), revolutionized the animation industry and paved the way for the creation of lifelike digital characters that display complex emotions and personalities.

Technologically, Pixar’s RenderMan, which grew out of the computer graphics group spun off from Lucasfilm, ILM’s parent company, has been particularly noteworthy. RenderMan was one of the first rendering software packages to enable the creation of photorealistic images in CGI. Its advanced rendering algorithms and shading techniques allowed filmmakers to achieve lifelike lighting, textures, and reflections, enhancing the realism of digital environments and characters. RenderMan’s impact on digital FX has been recognized with numerous awards, and by 2018 it had been used in 27 of the 30 films to win the Oscar for Best Visual Effects. Its contributions to the field of computer graphics have been instrumental in advancing the art and technology of filmmaking.

Finally, a note on digital disruption from Clayton M. Christensen, writing about the corresponding changes in the computing industry. Christensen argues that the tendency of good companies to always listen to their best customers and improve existing products leaves them open to disruptive innovations. The early digital cameras, for example, completely surprised film supplier Kodak. More recently, the digital camera has made possible DIY streaming services like YouTube.com.

In my next post in this series, I intend to explore the introduction of artificial intelligence (AI) tools such as SORA and VIDU to the digital televisual world.

Citation APA (7th Edition)

Pennings, A.J. (2024, Mar 17). Digital Disruption in the Film Industry – Gains and Losses – Part 3: Digital FX Emerges. apennings.com https://apennings.com/technologies-of-meaning/digital-disruption-in-the-film-industry-gains-and-losses-part-3-digital-fx-emerges/


Notes

[1] Background on the role of JPL on digital movie-making from American Cinematographer 54(11):1394–1397, 1420–1421, 1436–1437. November 1973.

[2] Frances Bonner, in Slusser, G. and Shippey, T. (Eds.), Fiction 2000: Cyberpunk and the Future of Narrative. Athens: University of Georgia Press, 1992.




Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edward’s University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Four Futures and the S-Curve

Posted on | March 13, 2024 | No Comments

One of my favorite professors in graduate school was Jim Dator, a professor at the University of Hawaii at Manoa and Director of the Hawaii Research Center for Futures Studies. One of Dator’s major strategies for thinking about the future was an exercise discussing four types of potential scenarios for the future of humanity: Continued Growth, Transformation, Limits and Discipline, and Decline and Collapse.

I include this approach in discussions about different futures strategies in my Introduction to Science, Technology, and Society Studies (STS) course to get students to think more about the trajectories of new technologies and social developments, and what they may mean for the world they are inheriting.

Dator’s Scenarios on an S-Curve

I also include a discussion of the S-curve, prompted by futurist John Smart’s interpretation of Dator’s four-scenario exercise, as illustrated above. S-curves, also known as sigmoid curves, are mathematical models often used to describe the adoption or growth rate of various phenomena over time. Examples would be the adoption of Artificial Intelligence (AI) or the growth rate of bacteria in a lab sample. This representation is based on James Grier Miller’s living systems theory but seems to fit well with the futures writing exercise. However, Dator saw the scenarios more as four generic, separate alternative futures than as naturalistic growth phases.

Scenarios are narratives or “stories” illustrating possible visions of a future. They provide a structured way to consider the components of alternative futures and their potential developments. The exercise presents four broad scenarios or perspectives on the future that can help individuals and organizations think about and plan for different possible outcomes. They are not strictly predictions but rather help generate ideas about some possible futures.

Combining an understanding of S-curve dynamics with futures scenarios can be useful in projecting a trajectory, isolating trends, and constructing a vision of likely outcomes. The curve also marks inflection points (IP), where the curvature changes, suggesting the beginning of a major change. Also important are tipping points (TP), critical thresholds at which a tiny perturbation can qualitatively alter the state or development of a system or society, indicating dramatic change. DP marks the decline or deceleration phase. GP (growth point), IP (inflection point), and SP (saturation point) are also key indicators of the curve’s dynamics.
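For those who want the formal model behind such curves, the logistic (sigmoid) function is the standard reference form. The notation below is my own addition rather than part of Dator’s or Smart’s frameworks:

    N(t) = \frac{K}{1 + e^{-r(t - t_0)}}

Here K is the saturation level the curve approaches (the plateau), r is the growth rate, and t_0 is the inflection point, where growth is fastest and N(t) reaches K/2. The saturation point corresponds to N(t) flattening out near K.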

S-curves are commonly used to predict the adoption and lifecycle of technologies or products. Innovations such as personal computers, smartphones, and social media platforms have been analyzed using S-curves to predict their growth and market saturation. As they move through stages of introduction, growth, maturity, and decline, S-curves can provide insights into when these stages are likely to occur and their duration. Researchers like Everett Rogers used S-curves to explain the “diffusion of innovations,” describing how new ideas or technologies are adopted by a population over time. For example, understanding the adoption patterns of electric vehicles can help policymakers develop incentives, infrastructure, and safety standards.

The categories below expand on the four scenarios mentioned above.

Continued Growth projects the current emphasis on economic development and its social and environmental implications into the near future. In this scenario, the future is seen as an extension of the present. It assumes existing trends, systems, and patterns will continue without significant disruption. This business-as-usual (BAU) trajectory is represented in the upward orange curve.

Limits and Discipline emphasizes the importance of rules, regulations, and control. In this perspective, the future is shaped by enforcing strict discipline and adhering to established norms and principles. It is a scenario that focuses on order, authority, and conformity. It suggests a society that highly values places, processes, or values threatened by the existing economic and social trajectory. In this scenario, it is often believed that society has “limits to growth” and should be “disciplined” around a set of fundamental cultural, ideological, scientific, or religious values. These will likely involve environmental concerns, including “green” solutions such as recycling, social distancing, and mask-wearing in pandemic times. This scenario is represented by the blue line that reaches a plateau after the tipping point. S-curves often reach such a plateau, indicating that the phenomenon is saturating or approaching its maximum potential, and understanding where this saturation point lies can help predict when growth will likely slow down or stabilize.

Decline and Collapse is represented by the descending green line on the right. This scenario envisions a future characterized by the breakdown of existing systems, institutions, or structures. It often involves a significant crisis or disruption that leads to a reevaluation of the way things are done. It is a scenario that encourages preparedness for unexpected challenges and the need for adaptability. It suggests a catastrophic turnaround or reversal of fortunes due to natural or human-made disasters. Will climate change create such a decline? Is nuclear war a possibility? Pollution and changes associated with massive carbon dioxide and methane releases are current concerns as they are linked with dramatic weather changes influencing droughts, floods, and wildfires. The challenge to US leadership in the world by China and Russia could lead to a dramatic escalation of war in the world as witnessed in Ukraine.

Finally, a Transformative society envisions a future marked by radical change, innovation, and the emergence of entirely new paradigms. It challenges individuals and organizations to think creatively, embrace innovation, and be open to transformative possibilities. It emphasizes the need to adapt and thrive in a rapidly changing world. It anticipates a radical makeover of society based on biological, spiritual, or technological revolutions. For example, the creation of new genetically reconfigured “posthuman” bodies is a possibility, perhaps due to the viral innovations of COVID-19 research or rapid adaptation to environmental changes. A “singularity” of network-connected humans and AI is another projected scenario. A global set of religious revivals is also considered by many to be a possibility. These scenarios posit entirely redesigned global cultural, economic, and political structures.

Dator emphasizes that the purpose of scenario visioning is to determine preferable futures and work towards them rather than prophesying a specific future. While S-curves add a temporal trajectory and can indicate future activities, they lack information about time frames. It is difficult to use them to suggest the number of months, years, decades, or even centuries before a scenario might take shape and play out.

This historical context can be useful for predicting future trends. By analyzing historical data and fitting an S-curve to the data points, it may be possible to gain an understanding of how a particular phenomenon has evolved over time. S-curves can then be used to extrapolate future growth. By extending the curve into the future, you can estimate when a particular phenomenon is likely to reach a certain level of adoption, maturity, or impact. Policymakers can use this information to anticipate future developments, allowing for better long-term planning and resource allocation.
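As a sketch of how such an extrapolation might be done in practice, the Python snippet below fits a logistic curve to a small, hypothetical adoption series and projects it forward. The data points, starting parameters, and time frame are invented for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """Logistic S-curve: K = saturation level, r = growth rate, t0 = inflection year."""
        return K / (1.0 + np.exp(-r * (t - t0)))

    # Hypothetical adoption data (e.g., percent of households using a technology).
    years = np.array([2010, 2012, 2014, 2016, 2018, 2020, 2022], dtype=float)
    adoption = np.array([2.0, 5.0, 12.0, 27.0, 48.0, 66.0, 78.0])

    params, _ = curve_fit(logistic, years, adoption, p0=[90.0, 0.4, 2018.0])
    K, r, t0 = params

    print(f"Estimated saturation: {K:.0f}%, inflection year: {t0:.1f}")
    print(f"Projected adoption in 2030: {logistic(2030, K, r, t0):.0f}%")

The fitted K suggests a saturation level and t0 an estimated inflection year. With real data, confidence intervals and alternative curve forms should be checked before drawing planning or policy conclusions.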

Citation APA (7th Edition)

Pennings, A.J. (2024, Mar 13). Four Futures and the S-Curve. apennings.com https://apennings.com/political-economies-in-sf/jim-dators-four-futures-and-the-s-curve/

Notes

[1] I was working on my PhD on cyberspace and electric money and found the futures approach interesting. Dator dissuaded his students from the idea of a one true future whose probability could be calculated with positivistic certainty and suggested we use a futures visioning process to envision and develop several alternative scenarios.

[2] The notion of ideal types comes primarily from Max Weber.

[3] Dator’s Four Futures is a framework developed by futurist and educator Jim Dator. It presents four broad scenarios or perspectives on the future that can help individuals and organizations think about and plan for different possible outcomes. These scenarios provide a structured way to consider alternative futures and potential developments. The four generic alternative futures are continuation, collapse, discipline, and transformation. Dator, Jim. (2009). Alternative Futures at the Manoa School. Journal of Futures Studies, 14.

These scenarios are not meant to predict specific outcomes but to provide a structured way to consider different possibilities and their implications. By exploring these scenarios, individuals and organizations can better prepare for a range of future developments and make informed decisions about their strategies, policies, and actions. Dator’s Four Futures framework is a valuable tool for futures thinking and scenario planning.

[4] Alvin Toffler’s “Future Shock” is a book that explores the concept of rapid change and the challenges it poses to individuals and societies. While Toffler introduced the idea of future shock, he did not specifically outline “four scenarios of the future” in that book. Instead, he discussed various scenarios and trends related to technological, social, and economic changes.




Anthony J. Pennings, Ph.D. is a Professor in the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Hannam University in South Korea, Marist College in New York, and Victoria University in New Zealand. He keeps his American home in Austin, Texas, and has taught there in the Digital Media MBA program at St. Edward’s University. He joyfully spent nine years at the East-West Center in Honolulu, Hawaii.
