ICT4D and the Global Network Transformation
Posted on | August 14, 2024 | No Comments
Edited remarks from my talk at the IGC Research Showcase, May 22, 2023 at the Incheon Global Campus, Songdo, South Korea.
This talk is a part of a more extensive discussion on ICT4D (Information and Communications Technology for Development) and Global Governance, but as I only have 8 minutes, I’m going to focus on changes in national network infrastructures worldwide and why we now have global data transfer and very cheap international voice and video calls. I want to discuss the transition in network architecture technology and telecommunications organizational models and how these led to the global Internet we have today.
In what was termed “liberalization,” “deregulation,” and “privatization” of national telecommunication systems, global pressures led to a radical transition in network architecture and organizational models. Often called “PTTs” for “Post, Telegraph, and Telephone,” these government entities underwent a transition to state-owned enterprises (SOEs) that were placed in competitive environments (liberalization), deregulated, and then sold off to private investors (privatization), in whole or part.[1] In this process, the global Internet emerged.
My analysis has been developed in the context of a larger set of ICT4D developments since 1945 that I examine elsewhere in detail, but because of limited time, I will focus primarily on networks. I got involved in ICT4D in the mid-1980s when I interned at the East-West Center (EWC) in Honolulu. The EWC was noted for its research on Communication for Development (C4D), including work on satellites for development such as India’s INSAT and Indonesia’s Palapa satellites.[2]
I arrived when they were putting together a project on National Computerization Policies, based on France’s Nora-Minc Report: The Computerization of Society (1982). We wanted to make the turn to computer technologies and their role in development (primarily agriculture, education, and health).
Because we had a good relationship with the Pacific Telecommunications Council (PTC), where I did my first internship, network infrastructure was a strong part of the East-West Center’s agenda on development. Also located in Honolulu, PTC brought a wide range of telecom professionals to Hawaii for their annual conference: government representatives, corporate executives, academic researchers, and others. I remember that my first presentation on the national computerization policy project had International Telecommunication Union (ITU) Secretary-General Dr. Pekka Tarjanne in the front row.
How did the transformation occur? Pressures started to build in the 1960s as global companies wanted better telecommunications, including more computer communications. Along with transportation, these were seen as “permissive” technologies, allowing expanded financial, manufacturing, and marketing capabilities. New undersea communications cables were laid at a rate of roughly one a year after 1956, and the Space Race put satellites into geosynchronous orbit to facilitate international connectivity and the usability of earth stations. By the time of the Moon landing, Arthur C. Clarke’s vision of “rocket stations” providing global radio coverage had been realized.
The dynamics of the global economy changed when President Nixon took the US off the Bretton Woods’ gold-dollar standard in 1971, ultimately leading to significant changes in the world’s telecommunications networks. Nixon ended the convertibility between the dollar and gold, figuratively “closing the gold window” and stopping the bleeding of gold from reserves at Fort Knox and the Federal Reserve Bank in New York. The dollar subsequently crashed in value, incentivizing OPEC to raise oil prices and creating havoc in global currency and debt markets.
At the same time, new technologies were emerging with the commercialization of the Cold War’s semiconductor and telecommunications technology. Intel released the first microprocessor in 1971. Reuters, historically a news service, created a new virtual marketplace for foreign exchange (FX) trading. The monetary volatility of the 1970s oil crises made Reuters Money Monitor Rates for FX quite profitable. SWIFT provided international messaging of money information between banks.[3]
Banks also used network technologies to create syndicated loans to recycle OPEC money, soon to be called “petrodollars.” Where did they recycle them? Primarily countries that needed the money to buy oil, but also development projects worldwide borrowed the dollars. Thus grew the “Third World Debt Crisis.”
The debt crisis became the lever to “structurally adjust” countries towards a more open and globalized system. The Reagan administration tasked the IMF with ensuring that countries looking for debt relief started to follow a specific agenda that included the liberalization and privatization of their PTTs. Liberalization meant encouraging new companies to compete against the national and international incumbents, while privatization meant the process of transitioning from public to private ownership.
Sometimes called “spreadsheet capitalism,” this process often involved inventorying and valuing assets such as maintenance vehicles, telephone poles, digital lines, etc., so the company could become a state-owned enterprise (SOE), valued by investment banks, and eventually sold off to private investors. These changes started to open up the PTT telecom structure to the introduction of new technologies, including the fiber optic lines and packet-switching routers needed for the emerging World Wide Web.
The 1980s was a decade of significant changes in the global political economy, particularly in the US and Great Britain and their relationship with the rest of the world. Both Ronald Reagan and Margaret Thatcher wanted to counter the growing criticisms from the South countries while moving their economies out of the “stagflation” that rocked the crisis-ridden 1970s. Both were influenced by “Austrian” economists Friedrich Hayek and Ludwig von Mises. Reagan was also influenced by George Gilder’s Wealth and Poverty, which promoted a potlatch “big man” theory that partly inspired his tax cuts. Both wanted to reduce the influence of unions.
Reagan was hesitant about breaking up the AT&T telecommunications company, but Thatcher was quite aggressive about privatizing the British PTT. The US “Ma Bell” telephone monopoly had been under various forms of antitrust attack in the previous decades, especially to make more spectrum available and allow outside terminal equipment. Spurred on by competitors like MCI and Sprint, AT&T was eventually broken up, divesting its local service to the “Baby Bells” while retaining Bell Labs, inventor of the transistor. The new AT&T got to hold on to a lot of cash and its long-distance business, and was finally allowed into the computer business.
I moved to New Zealand in 1992 to study the transition of the country’s PTT to a State-Owned Enterprise (SOE) and then the privately owned “Telco.” The government had started a process of organizing and valuing the “Post Office” into an SOE in the early 80s, following the Reagan-Thatcher preference for private ownership. Then, it sold off 49% of its shares to two Baby Bells, Ameritech and Bell Atlantic (later integrated into Verizon), partially to pay off one-third of the debt it acquired as part of the Third World Debt Crisis. The majority of shares were meant for domestic control.
The “financial revolution” of the 1980s was based on dramatic changes in the 1970s and continued into the 1990s with the formation of the World Trade Organization (WTO). Headed first by Renato Ruggiero from Italy, and later Michael Moore, the former PM of New Zealand, the WTO opened up the trade of all types of communications and information products and services with the Information Technology Agreement (ITA) in 1996 in Singapore and pushed for further privatization of the telecommunications networks the following year in Geneva.
A speech by Vice President Al Gore to the International Telecommunication Union (ITU) on March 21, 1994, signaled the importance of building a “Global Information Infrastructure” (GII). But even more important was Gore’s participation the next month in the final negotiations of the GATT’s Uruguay Round that led to the establishment of the World Trade Organization (WTO).
The General Agreement on Tariffs and Trade (GATT) traces back to the 1944 Bretton Woods Conference in New Hampshire, which created the International Monetary Fund and the World Bank. This event laid the foundation for the post-World War II financial system and established the US dollar–gold link that Nixon severed in 1971. Its planners had more trouble with the proposed International Trade Organization (ITO), which the US Senate rejected. Negotiations continued, and the GATT, signed in 1947, avoided the ITO’s fate and entered into force on January 1, 1948, but it lacked a solid institutional structure.
It did, however, sponsor eight rounds of multilateral trade negotiations between 1947 and 1994, including the Uruguay Round (1986–1994) that integrated services. International trade negotiations historically concentrated on physical goods; services were only seriously considered at the November 1982 GATT ministerial meeting. The Uruguay Round of trade negotiations led to the General Agreement on Trade in Services (GATS) as part of the World Trade Organization (WTO) mandate. The GATS extended the WTO into unprecedented areas never previously recognized as coming under the scrutiny of trade policy.
The WTO would shape the Internet and its World Wide Web. While the Clinton-Gore administration was initially hesitant about the “Multilateral Trade Organization,” it saw the World Trade Organization as a way of enforcing key trade priorities and policies on global communications and e-commerce.
For networks, it meant replacing telecom monopolies with a more liberalized environment that included outside equipment vendors and service providers, and replacing PTT public ownership with private enterprises that would be more competitive and friendly to outside investment. For e-commerce, it meant replacing detailed bureaucratic regulations with a legal environment for fairer and more effective competition. It also meant eliminating cross-subsidies between profitable and unprofitable services, which hindered corporate expansion with non-market pricing and subsidies based on social goals rather than market activities. The WTO would shape the Internet and explain, in large part, why it could globalize and become so cheap.
Summary and Conclusion
The 1970s and 1980s saw significant technological advancements (like undersea cables, satellites, and microprocessors) and economic changes (such as the end of the gold standard and the oil crises) that reshaped global telecommunications. These factors, combined with the financial revolution and the restructuring of global trade policies, pushed countries toward a more open and competitive telecommunication environment.
The global transition from government-controlled Post, Telegraph, and Telephone (PTT) systems to privatized and liberalized telecommunications led to the creation of the global Internet. This shift involved deregulation and the sale of state-owned enterprises, as well as opening the market to competition and new technologies. The network structure is important for the development of ICT4D.[4]
The establishment of the World Trade Organization (WTO) in the 1990s played a crucial role in furthering the liberalization of global telecommunications. This included the introduction of the General Agreement on Trade in Services (GATS) and the promotion of a legal environment that favored competition, which helped globalize the Internet and reduce costs for international communication.
The liberalization and privatization of national telecommunication systems, driven by technological advancements, economic shifts, and global trade policies, were key factors in the development of the global Internet. These changes not only facilitated the creation of a more interconnected world but also made international communication more accessible and affordable.
Citation APA (7th Edition)
Pennings, A.J. (2024, Aug 14). ICT4D and the Global Network Transformation. apennings.com https://apennings.com/telecom-policy/ict4d-and-the-global-network-transformation/
Notes
[1] Herb Dordick and Deane Neubauer, “Information as Currency: Organizational Restructuring under the Impact of the Information Revolution.” Keio Review. No. 25 (1985)
[2] Meheroo Jussawalla was our resident communications development economist at the East-West Center, who also worked closely with Marcellus Snow of the University of Hawaii. Norm Abramson, the creator of ALOHANET, would also walk across the street from the Engineering School at the University of Hawaii. Herb Dordick and Deane Neubauer made a substantial contribution with “Information as Currency: Organizational Restructuring under the Impact of the Information Revolution.” This paper became very influential in my graduate studies.
[3] For my graduate work I switched focus to financial technology, particularly the telecommunications regulatory framework for international banking. I was intrigued by the emergence of new networks such as CHIPS, SWIFT, Reuters, and the shadowy world of eurodollars. Reuters was very innovative with its Stockmaster and particularly its Money Monitor Rates. By the 1980s, SWIFT was pioneering the use of packet switching, building on the earlier X.25 network and X.75 gateway protocols developed by the ITU and adopted by most of the PTTs around the world at the time.
[4] Information and Communication Technologies for Development (ICT4D) is one of the specializations for the undergraduate B.Sci. degree in Technological Systems Management here at SUNY Korea and part of my research agenda.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: AT&T > Money Monitor Rates > PTTs > SWIFT > The Society for Worldwide Interbank Financial Telecommunications (SWIFT) > WTO agreement on basic telecommunication services
Digital Borders and Authoritarianism
Posted on | July 27, 2024 | No Comments
Despite cyberspace’s early promise of a world without digital borders, nationalistic concerns started to re-emerge in the new millennium. In an era where information is a powerful tool, authoritarian regimes have increasingly leveraged digital borders to enforce their control, limit dissent, and maintain power. In the contemporary international landscape, the intersection of digital borders and authoritarianism presents a complex and dangerous dynamic for modern nation-states and the global order. This essay explores the mechanisms, implications, and broader consequences of digital authoritarianism in the modern era.
In the post-colonial era, national powers were often keen to limit economic and financial information flows. Newly independent countries often saw these communications and data movements as a type of “Trojan horse,” bypassing national boundaries without administrative scrutiny. Before the leverage of petrodollar debt opened up the networked flows of data and capital that characterized neoliberal global financialization, nations around the world were known to police information borders, both technologically and politically. Technological innovations like Deep Packet Inspection (DPI) would continue to supply nation-states with tools to monitor populations, and even provide a “kill switch” to shut down a nation’s Internet access.
Post, Telegraph, and Telephone (PTT) monopolies operated as a type of electronic moat that restricted data communications. Other ministries also restricted capital and sometimes news flows. The push to deregulate and privatize the telecommunications environment initially liberalized transnational information flows. Those unrestricted flows would not go uncontested for long, though.
Digital Governance and Administrative Power
Anthony Giddens, in The Nation-State and Violence (1985), described nation-states as “power containers” whose effective functioning relies on the interplay between administrative power and surveillance within a “territorial delimitation.”[1] The nation-state is a quintessentially modern institution characterized by a centralized bureaucratic authority, a defined territory, and the ability to mobilize resources and populations. Nation-states collect power from two major sources:
Administrative (Allocative) Power I: Communication and information storage giving control over space and time and, with it, material resources;
Administrative (Authoritative) Power II: Internal pacification of populations through ideology, surveillance, and the monopoly over violence, incarceration, and physical force.
Administrative power provides the framework for governance and has grown increasingly sophisticated. This power involves overt monitoring, such as policing and public surveillance systems with CCTV and smart recognition systems. It increasingly uses what has been called “dataveillance,” the collection and analysis of data about individuals and groups. Targeted surveillance aims to gather intelligence on specific people, with methods that can include wiretapping, geolocational and GPS tracking, online monitoring, and physical observation.
Big data techniques that gather knowledge from sources such as census information, social security data, and digital footprints are also powerful tools for tracking individuals and groups. Both forms of administrative power enable the state to respond to internal and external challenges, and both can be used to ensure compliance and help construct narratives of legitimation.
The organization Freedom House produces an annual Freedom on the Net report that monitors countries for digital authoritarianism. The biggest contributing factors are repressive laws governing the digital civic space and surveillance. The report tracks Internet censorship, the deployment of spyware and video surveillance, and network control techniques such as Border Gateway Protocol (BGP) manipulation that undermine democracy. Freedom House strives to improve governance and laws so that they respect internationally recognized human rights and support a free, open, and safe Internet.
Giddens noted that administrative power is not inherently authoritarian. In democratic contexts, it can be used to manage society effectively and uphold the rule of law. Census information is often important for allocating political representation in democratic societies. However, authoritarian regimes can co-opt the same structures to consolidate power and suppress dissent.
Authoritarianism is mostly characterized by a concentration of power in a single authority or a small group of individuals who can exercise significant control over various aspects of life, including political, social, and economic spheres.
Authoritarian nation-states usually emerge when a crisis leads to a domestic group taking power and capturing the state apparatus. They then use the power of the state to perpetuate themselves through the pacification of the domestic population. This control is achieved through various combinations of ideological persuasion (usually grievance-based), economic dominance, and violence. Keeping a population in a state of crisis is important to such regimes, as it presents opportunities for the ruling groups to frame the crises in ways that support them. Xenophobic appeals are quite effective, such as fears about immigration and foreign religions. Sanctions by the global community are another tool to play the population against a foreign threat. Ultimately, oppressive control comes down to how many people can be fooled or manipulated, and the strength of the regime’s policing resources.
Digital borders refer to the territorial limitations and controls imposed on the flow of digital information across national boundaries. This digital control has often resulted in isolation from global information, suppression of free speech, separation from the global economy and supply chains, and the erosion of trust in the democratic potential of digital media as a valid information and news source. Globally, this digital isolation has led to human rights concerns, geopolitical tensions, and technological fragmentation.
Mechanisms of Digital Borders
Authoritarian regimes employ various strategies to create and enforce digital borders. These methods are sophisticated and evolve with technological advancements to ensure comprehensive control over the digital sphere. These include Internet censorship and filtering, surveillance and data collection, social media manipulation, and control of the digital infrastructure.
One of the most direct methods of enforcing digital borders is the use of firewalls and filtering technologies to block access to certain websites and online services. China’s Great Firewall is a prominent example, preventing access to selected foreign news websites, social media platforms, and content deemed subversive by the state. By controlling what information citizens can access, authoritarian regimes shape public perception and suppress dissenting views.
Mass surveillance is another key component of digital authoritarianism. By monitoring mass media and online activities, governments can track and intimidate academics, activists, dissidents, and journalists. Advanced algorithms and artificial intelligence facilitate real-time monitoring of social media and other digital communications, identifying and targeting individuals and media outlets who threaten the regime’s narrative.
As mentioned above, Deep Packet Inspection (DPI) technology allows for detailed monitoring and filtering of Internet traffic, enabling regimes to block specific content and identify users accessing prohibited material.
Social Credit Systems (SoCS) have also been conceived and implemented to build evaluative ratings for citizens, businesses, and other organizations. They use big data to monitor behaviors and assign scores based on compliance with government standards. Predictive policing technologies employ AI to analyze data and predict potential criminal activity, leading to pre-emptive actions against perceived threats.
Authoritarian governments also manipulate social media to spread propaganda and disinformation. This interference includes the use of bots, trolls, memes, and state-sponsored media to flood the digital space with content that supports the regime’s objectives while drowning out opposition voices.
By owning or heavily regulating Internet Service Providers (ISPs) and telecommunications companies, authoritarian regimes can ensure they have the ultimate say in who can access the Internet and how it can be used. This control extends to shutting down the Internet entirely during periods of unrest, as seen in countries like Bangladesh, Egypt, Iran, and Myanmar.
Implications of Digital Borders
The implementation and enforcement of digital borders have profound implications for the political, social, and economic landscapes of affected countries. Digital borders can greatly limit freedom of expression. Digital repression means citizens cannot freely share information, discuss political matters, or criticize the government without fear of reprisal. This control suppresses public discourse and hinders the development of a healthy, diverse society.
By limiting access to international news and perspectives, authoritarian regimes isolate their populations from the global flow of information. This isolation fosters controlled narratives and an insular worldview, which can be manipulated to maintain nationalistic or xenophobic sentiments.
Digital borders can also impede economic development. Flows of information are crucial for innovation and global business operations. Restrictions on Internet access and online services can discourage foreign investment, hinder technological progress, and reduce competitiveness in the global market.
Pervasive surveillance and control also erode public trust in digital technologies. People become wary of expressing themselves online or using digital services, knowing their activities are being monitored. This mistrust can stymie the adoption of new technologies and hinder digital literacy and public discourse.
Broader Consequences
The intersection of digital borders and authoritarianism extends beyond individual nations, affecting global politics and international relations.
The suppression of digital freedoms raises significant human rights concerns. International organizations and civil society face challenges in addressing these violations, as authoritarian regimes often justify their actions under the guise of national security and sovereignty. Civil society, consisting of dense and diverse networks of community groups, often stands between the individual and the authoritarian state. Citizen groups, cooperating with community-based groups and associations, strengthen civic freedoms and rights such as fair elections, freedom of association, freedom of speech, and free media.
Digital borders contribute to geopolitical tensions, particularly between authoritarian and democratic states (which also have to guard against the perils of digital control). Conflicts over cyber espionage, digital trade barriers, and information warfare are increasingly common. Democracies advocate for open Internet principles such as net neutrality. At the same time, authoritarian regimes push for cyber-sovereignty and centralized control over network management, including using “kill switches” that can immediately shut down Internet transmissions through the digital border.
The imposition of digital borders can lead to a fragmented global Internet, where different regions operate under vastly different rules and restrictions. This fragmentation threatens the foundational concept of a unified, open global Internet, complicating international collaboration and digital interoperability.
Conclusion
Authoritarian regimes’ enforcement of digital borders in the modern era represents a significant challenge to global norms of free expression, access to information, and human rights. As regimes continue to develop and refine their methods of control, the civil and international communities must navigate the delicate balance between respecting national sovereignty and advocating for digital freedoms. The future of the Internet as a space for the free exchange of ideas and information hinges on the global response to these authoritarian practices and the collective effort to preserve an open and inclusive digital world.
Citation APA (7th Edition)
Pennings, A.J. (2024, July 27). Digital Borders and Authoritarianism. apennings.com https://apennings.com/dystopian-economies/digital-borders-and-authoritarianism/
Notes
[1] Giddens, A. (1985). The Nation-State and Violence. University of California Press. p. 172.
[2] Shahbaz, A. (2018). The Rise of Digital Authoritarianism. https://freedomhouse.org/sites/default/files/2020-02/10192018_FOTN_2018_Final_Booklet.pdf
Note: ChatGPT was referenced for parts of this post. Several prompts were used and parsed.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and sustainable development. From 2002-2012 he was on the faculty of New York University and taught digital economics and information systems management. He also taught in the Digital Media MBA program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: Administrative power > nation-state > surveillance
AI and Remote Sensing for Monitoring Landslides and Flooding
Posted on | June 24, 2024 | No Comments
Invited remarks prepared for the 2024 United Nations Public Service Forum, ‘Fostering Innovation Amid Global Challenges: A Public Sector Perspective,’ Songdo Convensia, Republic of Korea, 24–26 June 2024. Organized by the Ministry of the Interior and Safety (MOIS), Republic of Korea.
Thank you very much for this opportunity to address how science and technology can address important issues of landslides and urban flooding. A few days after I was invited to this conference, a very unfortunate landslide occurred in Papua New Guinea. Fatalities are still being tallied but are likely to be between 700 and 1200 people.
Flooding is also a tragic sign of our times. As climate change has significantly increased the amount of moisture the atmosphere can hold, global weather increasingly resembles the turbulence of a child (or adult) playing in a filled bathtub. Some of the worst 2023 flooding occurred in Beijing, the Congo, Greece, Libya, Myanmar, and Pakistan. These floods took thousands of lives, displaced hundreds of thousands, and caused billions of dollars in property damage.
As requested, I will talk about the role of Artificial Intelligence (AI) and remote sensing of landslides and flooding. I will reference a model I use in my graduate course, EST 561 – Sensing Technologies for Disaster Risk Reduction, at the State University of New York, Korea, here in Songdo. The “Seven Processes of Remote Sensing” from the Canada Centre for Remote Sensing (CCRS) provides a useful framework for understanding how AI and remote sensing work together.[1] Additionally, AI can be implemented at several stages of the sensing process. I list the seven processes in this slide and below at [2].
Remote sensing, the detection and monitoring of an area’s physical characteristics by using sensing technologies to measure the reflected and emitted radiation at a distance, generates vast amounts of data. This data needs to be accurately collected, categorized, and interpreted for information that can be used by first responders and other decision-makers, including policy-makers.
AI algorithms, particularly those involving machine learning (ML) and deep learning (DL), can be useful at several stages. They can compensate for atmospheric conditions and automate the extraction and use of remote sensing data from target areas. They help identify characteristics of water bodies, soil moisture levels, vegetation health, and ground deformations. This intelligence can speed up analysis and increase accuracy in crucial situations. Just as AI has proven to be extremely useful in detecting cancerous cells, AI is increasingly able to interpret complex geographical and hydrological imagery.[3]
The primary sensing model involves an energy source, a platform for emitting or receiving the energy, the interaction of energy with the atmosphere, and the interaction of energy with the target. This information is then collected, processed, interpreted, and often applied in a resilience situation. Let me explain.
The Energy Source (A)
Sensing technologies rely on data from an energy source that is either passive or active. AI can analyze data from passive sources like sunlight or moonlight reflected off the Earth’s surface. For example, it can use satellite imagery from reflected sunlight to detect changes in land and water surfaces that may indicate flooding or landslides. AI can also process data from active sources such as radar and LiDAR (Light Detection and Ranging). LiDAR, which uses light instead of radio waves, can measure variations in ground height with high precision, helping to identify terrain changes that may precede a landslide and measure the mass of land that may have shifted in the event.
Synthetic Aperture Radar (SAR) sensors on satellites such as Sentinel-1 emit microwaves in the X-band (8–12 GHz) and C-band (4–8 GHz) that can penetrate cloud cover and provide high-resolution images of the Earth’s surface. This makes it possible to detect and map flooded areas even during heavy rains or at night. Also, passive instruments such as NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) and high-resolution optical satellites like Landsat and Sentinel-2 capture visible and infrared imagery that AI can use to delineate flood boundaries by distinguishing water bodies and saturated soils.
Interaction of Energy with the Atmosphere (B)
Sensing from satellites in Earth orbit (and other space-based platforms) is highly structured by what’s in the atmosphere.[4] When analyzing remote sensing data, AI can make adjustments for atmospheric conditions such as clouds, smoke, dust, rain, fog, snow, and steam. Machine learning algorithms are trained to recognize and compensate for these atmospheric factors, improving the accuracy of flood and landslide detection.
Machine learning models can also simulate how different atmospheric conditions affect radiation, helping to better understand and interpret the data received during various weather scenarios. This monitoring is crucial for accurate flood and landslide detection.
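As a crude stand-in for the ML-based atmospheric screening described above, the sketch below flags cloudy pixels with simple brightness and temperature thresholds (clouds are bright in visible bands and cold in thermal bands). The threshold values are illustrative, not taken from any real sensor's calibration:

```python
import numpy as np

# Toy top-of-atmosphere reflectance and thermal brightness temperature (K)
# for a 2x2 pixel grid; real inputs would be full calibrated satellite bands.
visible = np.array([[0.10, 0.80],
                    [0.15, 0.75]])
thermal_k = np.array([[295.0, 255.0],
                      [293.0, 250.0]])

# Bright AND cold pixels are treated as cloud; everything else is usable.
cloud_mask = (visible > 0.5) & (thermal_k < 270.0)
usable = ~cloud_mask
print(usable)   # only cloud-free pixels feed the flood classifier
```

A trained model would replace these hand-set thresholds with boundaries learned from labeled scenes, but the masking step it performs is the same.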
Interaction of Energy with the Target (C)
AI can analyze how different surfaces absorb, reflect, or scatter energy. For example, water bodies have distinct reflective properties compared to dry land, which AI can use to detect and identify flooding. Put simply, “water loves red,” meaning that it absorbs the red electromagnetic rays and reflects the blue, giving us our beautiful blue oceans. Often, particulate material absorbs the blue rays too, resulting in greenish waters. AI can also identify subtle vegetation or soil moisture changes that might indicate a potential landslide. Researchers in Japan are acutely aware of these possibilities given the often mountainous terrain and frequency of heavy rains.[5]
Water and vegetation may reflect similarly in the visible wavelengths but are almost always separable in the infrared. Reflectance starts to vary considerably at wavelengths of about 0.7 micrometers (µm), or microns. (See image below) The spectral response can be quite variable, even for the same target type, and can also vary with time (e.g., the “green-ness” of leaves) and location. These absorption characteristics allow for the identification and analysis of water bodies, moisture content in soil, and even snow and ice. This information can be used for monitoring lakes, rivers, and reservoirs, and for assessing soil moisture levels for irrigation management. See the more detailed explanation at [6].
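These spectral differences can be turned into a simple water detector. The sketch below computes the widely used Normalized Difference Water Index (NDWI), which exploits water's strong infrared absorption; the band values are invented reflectances, and the zero threshold is a common but simplistic choice:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): water scores high
    because it reflects green light but strongly absorbs near-infrared."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

# Toy reflectance grids: three water-like pixels (low NIR) and one
# vegetated pixel (high NIR) in the lower-right corner.
green_band = np.array([[0.10, 0.12],
                       [0.11, 0.30]])
nir_band   = np.array([[0.02, 0.03],
                       [0.02, 0.60]])

mask = ndwi(green_band, nir_band) > 0.0   # simple water/non-water threshold
print(mask)
```

Operational mapping would tune the threshold per scene and combine NDWI with SAR backscatter, but the index itself is this one line of arithmetic.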
Knowing where to “look” spectrally and understanding the factors which influence the spectral response of the features of interest are critical to correctly interpreting the interaction of electromagnetic radiation with the surface.
The Platform and Recording of Energy by the Sensor (D)
Platforms can be space-based, airborne, or mobile. Much of the early research was done with satellites, but drones and mobile robots (and automobiles) use the same model. After the energy has been scattered by, or emitted from, the target, we require a sensor on a platform to collect and record the returned electromagnetic radiation. Remote sensing systems that measure naturally available energy are called passive sensors. Passive sensors can only detect energy when the naturally occurring energy is available and makes it through the atmosphere.
Active sensors, like the LiDAR mentioned before, provide their own energy source for illumination. These sensors emit radiation directed toward the target. The sensing platform detects and measures the radiation reflected from that target.
AI can analyze data from platforms like satellites for large-scale monitoring of land and water events. Satellite technology like SAR provides extensive coverage and can track changes over time, making it ideal for detecting floods and landslides. Aircraft and drones equipped with sensors can collect detailed local data, allowing AI to process this data in real time and provide immediate insights. Ground-based sensors on cell towers, IoT devices, and mobile units such as Boston Dynamics’ SPOT robots can provide continuous monitoring at locations that may not be accessible to other platforms.
AI can integrate data from these platforms for a comprehensive view of an area, such as identifying landslide-prone areas through soil and vegetation analysis. High-resolution digital elevation models (DEMs) created from LiDAR or photogrammetry help identify areas with steep slopes and other topographic features associated with landslide risk. Multispectral scanning systems, which collect data over a variety of wavelengths, and hyperspectral imagers, which detect hundreds of very narrow spectral bands throughout the visible, near-infrared, and mid-infrared portions of the electromagnetic spectrum, can detect soil moisture levels and vegetation health, important indicators of landslide susceptibility.
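As a rough illustration of how a DEM feeds landslide screening, the following sketch derives slope from a small hypothetical elevation grid and flags steep cells. Real workflows use georeferenced rasters and site-specific thresholds; the 30° cutoff and elevations here are invented:

```python
import numpy as np

def slope_degrees(dem, cell_size=1.0):
    """Approximate terrain slope (degrees) from a gridded DEM using
    finite differences; steeper cells are more landslide-prone."""
    dz_dy, dz_dx = np.gradient(np.asarray(dem, dtype=float), cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Hypothetical 10 m-resolution DEM tile (elevations in meters)
dem = np.array([[100.0, 100.0, 100.0],
                [100.0, 105.0, 120.0],
                [100.0, 110.0, 140.0]])

steep = slope_degrees(dem, cell_size=10.0) > 30.0  # flag slopes above ~30°
print(steep)
```

The steep-cell mask would then be combined with soil moisture and vegetation layers to build a susceptibility map.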
Transmission, Reception, and Processing (E)
Energy recorded by the sensor is transmitted as data to a receiving and processing station and processed into an image (hardcopy and/or digital). Spy satellites were early adopters of digital imaging technologies such as charge-coupled devices (CCDs) and CMOS (complementary metal-oxide-semiconductor) image sensors, which are now used in smartphones and other cameras. CCDs are the older technology but remain in use because of their superior image quality and ongoing efforts to reduce their energy consumption. CMOS sensors, meanwhile, have seen major improvements in image quality.
These technologies both receive electromagnetic energy immediately (unlike film, which has to be developed) and convert it into images. In both cases, a photograph is represented and displayed in a digital format by subdividing the image into small equal-sized and shaped areas, called picture elements or pixels. The brightness of each area is represented with a numeric value or digital number. Processed images are interpreted, visually and/or digitally, to extract information about the target.
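The pixel-and-digital-number idea fits in a few lines of code. This sketch converts 8-bit digital numbers (DNs) to radiance with the usual linear gain/offset model; the calibration coefficients are made up for illustration and would normally come from the sensor's metadata:

```python
# Each pixel is stored as a digital number (DN); an 8-bit sensor quantizes
# brightness into 0..255. Converting DN to physical radiance typically uses
# a linear gain/offset from the sensor's calibration metadata.
GAIN, OFFSET = 0.05, -1.0   # hypothetical calibration coefficients

image_dns = [
    [0, 64, 128],
    [128, 192, 255],
]

radiance = [[GAIN * dn + OFFSET for dn in row] for row in image_dns]
print(radiance[1][2])   # brightest pixel, roughly 11.75
```

Everything downstream (indices, classification, change detection) operates on these calibrated numeric grids rather than on "pictures."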
Interpretation and Analysis (F)
Making sense of information from sensing technologies to understand changes in land formations and flooding can benefit from several analytical approaches. One of the most important is monitoring geographical features over time and space, allowing AI to use techniques such as time-series analysis, river and stream gauging, and post-landslide assessment, especially after a catastrophic fire.
Remote sensing data over time allows for the monitoring of the temporal dynamics of floods, including the rise and fall of water levels and the progression of floodwaters across a landscape. The Landsat archives provide a rich library of imagery dating back to the 1970s that can be used. Having stored information is helpful in assessing the damage and impacts of a landslide after it has occurred. Post-event imagery helps assess the extent and impact of landslides on infrastructure, roads, and human settlements, aiding in disaster response and rehabilitation efforts.
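A minimal time-series sketch, assuming the imagery has already been classified into binary water masks, shows how flooded area can be tracked across acquisition dates; the masks and 30 m pixel size are illustrative:

```python
import numpy as np

# Stack of binary water masks (1 = water) for the same pixel grid on
# successive dates; in practice these would come from classified
# Sentinel-1 or Landsat scenes.
masks = np.array([
    [[0, 0], [1, 0]],   # day 1: baseline river
    [[1, 0], [1, 1]],   # day 2: water rising
    [[1, 1], [1, 1]],   # day 3: peak flood
])

pixel_area_km2 = (30 * 30) / 1e6            # 30 m pixel = 0.0009 km^2
flooded_km2 = masks.sum(axis=(1, 2)) * pixel_area_km2
print(flooded_km2)   # area grows from day 1 to day 3
```

The same per-date aggregation, run over the decades-deep Landsat archive, is what allows the rise, peak, and recession of a flood to be reconstructed.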
Volume and area estimation after a fire, flood, or landslide can assess the geographic impact and support engineering and humanitarian responses. AI can help remote sensing quantify the volume of displaced material and the area affected by landslides, which is essential for understanding the scale of the event and planning recovery operations. Remote sensing supplements ground-based river and stream gauges by providing spatially extensive water surface elevation measurements and flow rates. This analysis often relies on structural geology and the study of faults, folds, synclines, anticlines, and contours. Understanding geological structures is often the key to mapping potential geohazards (e.g., landslides).[p 198].
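Volume estimation from elevation differencing can be sketched as follows, using a hypothetical pre- and post-event DEM pair; operational estimates would require co-registered, error-corrected surfaces:

```python
import numpy as np

# Pre- and post-event DEMs of the same slope (elevations in meters,
# hypothetical 5 m grid). Differencing them estimates displaced material.
pre  = np.array([[50.0, 52.0], [54.0, 56.0]])
post = np.array([[50.0, 49.0], [51.0, 56.0]])

cell_area = 5.0 * 5.0                     # m^2 per DEM cell
loss = np.clip(pre - post, 0, None)       # keep only cells that lost material
volume_m3 = float((loss * cell_area).sum())
print(volume_m3)   # 3 m of loss over two 25 m^2 cells = 150 m^3
```

Summing the positive differences gives the eroded volume; summing the negative ones would estimate deposition at the toe of the slide.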
AI can classify areas affected by floods or landslides, using deep learning to recognize patterns and changes in the landscape. Subsequently, AI can use predictive analytics to identify climate and geologic trends and, by analyzing historical and real-time data, forecast flood and landslide risks, giving early warnings and insights.
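The classification idea can be illustrated without a deep network. This sketch uses a nearest-centroid classifier on made-up per-pixel band values as a minimal stand-in for the supervised learning a deep model performs at scale:

```python
import numpy as np

# A deep network would learn decision boundaries from thousands of labeled
# scenes; a nearest-centroid classifier on per-pixel band values is a
# minimal stand-in that shows the supervised-classification idea.
train_pixels = np.array([   # [red, NIR] reflectance (invented values)
    [0.05, 0.02], [0.07, 0.03],   # water: dark in the NIR
    [0.10, 0.50], [0.12, 0.60],   # vegetation: bright in the NIR
])
train_labels = np.array([0, 0, 1, 1])     # 0 = water, 1 = vegetation

centroids = np.array([train_pixels[train_labels == c].mean(axis=0)
                      for c in (0, 1)])

def classify(pixels):
    """Assign each pixel to the class with the nearest spectral centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

print(classify(np.array([[0.06, 0.02], [0.11, 0.55]])))
```

Swapping the centroid rule for a convolutional network changes the accuracy and the feature learning, not the basic train-then-label workflow.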
AI Integration and Applications (G)
Techniques such as data fusion can combine remote sensing data from multiple sensors (e.g., optical, radar, LiDAR) with ground-based observations to enhance the overall quality and resolution of the information. This integration allows for more accurate mapping of topography, better detection of water bodies, and detailed monitoring of environmental changes.
AI applications can analyze real-time data from sensors to detect rising water levels and predict potential flooding areas. Machine learning algorithms can recognize patterns in historical data, improving the prediction models for future flood events. AI can also incorporate data from social media and crowdsourced reports, providing a more comprehensive view of ongoing events. This information can allow policy makers and first responders to use AI systems to automatically generate alerts and warnings for authorities and the public, allowing for timely evacuations and preparations.
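A toy version of such an alerting rule might look like the following; the stage thresholds are invented, and a real system would fuse satellite, gauge, and forecast inputs before issuing warnings:

```python
# A minimal sketch of threshold-based alerting from river gauge readings.
WATCH_STAGE, FLOOD_STAGE = 3.0, 4.5   # river stage in meters (hypothetical)

def alert_level(stage_m):
    """Map a river stage reading to a simple alert message."""
    if stage_m >= FLOOD_STAGE:
        return "FLOOD WARNING: evacuate low-lying areas"
    if stage_m >= WATCH_STAGE:
        return "FLOOD WATCH: conditions favorable for flooding"
    return "normal"

hourly_readings = [2.1, 2.8, 3.2, 4.0, 4.7]
for stage in hourly_readings:
    print(f"{stage:.1f} m -> {alert_level(stage)}")
```

An ML-driven system would replace the fixed thresholds with predicted stages hours ahead, but the automated alert generation at the end of the pipeline looks much like this.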
AI can analyze topographical data from LiDAR sensing technologies to detect ground movement and changes in terrain that precede landslides. It can also process data from ground-based sensors to monitor soil moisture levels, a critical factor in landslide risk. By analyzing data from past landslide events, AI can identify risk factors, predict areas at risk, and suggest mitigation measures, such as reinforcing vulnerable slopes or adjusting land-use planning.
Conclusion
The integration of AI with remote sensing technologies and ground-based observations enhances the monitoring and management of landslide and flooding disasters. By combining data from multiple sources, analyzing real-time sensor data, and learning from past events, AI can provide accurate predictions, timely alerts, and effective risk mitigation strategies. This approach not only improves disaster response but also aids in long-term planning and resilience building.
By integrating AI into each of these processes, remote sensing can become more accurate, efficient, and insightful, providing valuable data for a wide range of applications supporting climate resilience. AI can contribute at each stage of the remote sensing process. As a result, the detection, monitoring, and response to floods and landslides can be significantly improved, leading to better disaster risk management and mitigation strategies. Remote sensing technologies, when combined with ground-based river and stream gauges, provide a spatially extensive and temporally rich dataset for monitoring water surface elevation and flow rates. This combination enhances the accuracy of hydrological models, improves early warning systems, and supports effective water resource management and disaster risk reduction efforts.
Citation APA (7th Edition)
Pennings, A.J. (2024, Jun 24). AI and Remote Sensing for Monitoring Landslides and Flooding. apennings.com https://apennings.com/space-systems/ai-and-remote-sensing-for-monitoring-landslides-and-flooding/
Notes
[1] Canada Centre for Remote Sensing. (n.d.). Fundamentals of Remote Sensing. Retrieved from https://natural-resources.canada.ca/maps-tools-and-publications/satellite-imagery-elevation-data-and-air-photos/tutorial-fundamentals-remote-sensing/introduction/9363
[2] The Canada Centre for Remote Sensing (CCRS) Model:
1. Energy Source or Illumination (A)
2. Radiation and the Atmosphere (B)
3. Interaction with the Target (C)
4. Recording of Energy by the Sensor (D)
5. Transmission, Reception, and Processing (E)
6. Interpretation and Analysis (F)
7. Application (G) – Information extracted from the imagery about the target in order to better understand it, reveal some new information, or assist in solving a particular problem.
[3] Zhang B, Shi H, Wang H. Machine Learning and AI in Cancer Prognosis, Prediction, and Treatment Selection: A Critical Approach. J Multidiscip Healthc. 2023 Jun 26;16:1779-1791. doi: 10.2147/JMDH.S410301. PMID: 37398894; PMCID: PMC10312208.
[4] A good illustration of how atmospheric conditions influence different electromagnetic emissions can be found at: NASA Earthdata. (n.d.). Remote sensing. NASA. Retrieved from https://www.earthdata.nasa.gov/learn/backgrounders/remote-sensing
[5] Asada H, Minagawa T. Impact of Vegetation Differences on Shallow Landslides: A Case Study in Aso, Japan. Water. 2023; 15(18):3193. https://doi.org/10.3390/w15183193
[6] Near-Infrared (NIR) and Short-Wave Infrared (SWIR) ranges of the infrared spectrum are highly effective for sensing water, while NIR (and to some extent the red edge) is better suited for sensing vegetation. Water strongly absorbs infrared radiation in these ranges, making it appear dark in NIR and SWIR imagery. This absorption characteristic allows for the identification and analysis of water bodies, moisture content in soil, and even snow and ice. This can be used for monitoring lakes, rivers, and reservoirs, and for assessing soil moisture levels for irrigation management. Vegetation strongly reflects NIR light due to the structure of plant leaves. This high reflectance makes NIR ideal for monitoring vegetation health and biomass. Healthy, chlorophyll-rich vegetation reflects more NIR light than stressed or diseased plants. The transition zone between the red and NIR part of the spectrum, known as the “red edge,” is particularly sensitive to changes in plant health and chlorophyll content. The Normalized Difference Vegetation Index (NDVI) is a commonly used index that combines red and NIR reflectance to assess vegetation health and coverage. NDVI is calculated as (NIR – Red) / (NIR + Red). Higher NDVI values indicate healthier and denser vegetation.
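The NDVI formula is a one-liner in code; the reflectance values in this sketch are illustrative:

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); higher values = healthier vegetation."""
    return (nir - red) / (nir + red)

print(ndvi(0.50, 0.08))   # healthy leaf: high NIR, low red -> roughly 0.72
print(ndvi(0.30, 0.20))   # stressed vegetation -> roughly 0.2
print(ndvi(0.02, 0.10))   # water absorbs NIR, so NDVI goes negative
```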
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching ICT for sustainable development and engineering economics. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.
Tags: Boston Dynamics > deep learning > LiDAR > Machine Learning > remote sensing > SPOT robot
AI and the Rise of Networked Robotics
Posted on | June 22, 2024 | No Comments
The 2004 movie I, Robot was quite prescient. Directed by Alex Proyas and named after the collection of short stories by science fiction legend Isaac Asimov, the cyberpunkish tale set in the year 2035 revolves around a policeman, played by Will Smith. He is haunted by memories of being saved from drowning by a robot in a river after a car crash. His angst comes from seeing a young girl from the other car drown as he is being saved. The robot calculated that the girl could not be saved, but the policeman could. Consequently, the policeman develops a prejudice and hatred for robots, driving the movie’s narrative.
What was particularly striking about the movie was a relatively new vision of robots as networked, and in this case, connected subjects of a cloud-based artificial intelligence (AI) named VIKI (Virtual Interactive Kinetic Intelligence). VIKI is the central computer for U.S. Robotics (USR), a major manufacturer of robots. One of their newest models is the humanoid-looking NS-5 model, equipped with advanced artificial intelligence and speech recognition capabilities, allowing them to communicate fluently and naturally with humans and the AI. “She” has been communicating with the NS-5s and sending software updates via their persistent network connection outside the oversight of USR management.
In this post, I examine the transition from autonomous robotics to networked AI-enhanced robotics by revisiting Michio Kaku’s Physics of the Future (2012). We use the first two chapters on “Future of the Computer: Mind over Matter” and “Future of AI: Rise of the Machines” from Kaku’s book as part of my Introduction to Science, Technology, and Society course. Both chapters address robotics and are insightful in many ways, but they lacked focus on networked intelligence. The book was published on the verge of the AI and robotics explosion that is coming from crowdsourcing, webscraping, and other networked data collection techniques that can gather information for machine learning (ML).
The book tends to see robotics and even AI as autonomous, stand-alone systems. A primary focus was on ASIMO (Advanced Step in Innovative Mobility), Honda’s humanoid-shaped robot, which was recently discontinued, though not without a storied history. ASIMO was animated to be very lifelike, but its actions were entirely prescribed by its programmers.
Beyond Turing
Kaku continues with concerns about AI’s common sense and consciousness issues, including discussions about reverse engineering animal and human brains to find ways to increase computerized intelligence. Below I recount some of Kaku’s important observations about AI and robotics, and go on to stress the importance of networked AI for robotics and the potential for the disruption of human labor practices, especially in population-challenged societies such as Italy and South Korea.
One of the first distinctions Kaku made is the comparison between the traditional computing models based on Alan Turing’s conception of the general-purpose computer (input, central processor, output) and the learning models that characterize AI. NYU’s DARPA-funded LAGR project, for example, was guided by Hebb’s rule: whenever a correct decision is made, the network is reinforced.
Traditional computing is designed around developing a program to take data in, perform some function on the data, and output a result. LAGR’s (Learning Applied to Ground Robots) convolutional neural networks (CNNs) instead involved training the system to learn patterns and make decisions or predictions based on incoming data. Unlike the Turing computing model, which focuses on the theoretical aspects of computation, AI aimed to develop practical systems that can exhibit intelligent behavior and adapt to new situations.
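The core operation inside a CNN can be shown in miniature. This sketch implements a valid-mode 2D convolution (strictly, cross-correlation, as in most deep learning frameworks) and applies a hand-set vertical-edge filter to a toy image; a trained network would learn many such filters from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image and
    record the weighted sum at each position (the basic CNN operation)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (image[r:r + kh, c:c + kw] * kernel).sum()
    return out

# A vertical edge sits between the dark left half and bright right half;
# the [-1, 1] filter responds only where brightness jumps.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])

print(conv2d(image, edge_kernel))
```

Stacking many learned filters, nonlinearities, and pooling layers on top of this operation is what lets a CNN recognize obstacles in LAGR-style off-road driving imagery.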
Pattern Recognition and Machine Learning
Kaku pointed to two problems with AI and robotics: “common sense” and pattern recognition. Both are needed for autonomous tasks such as Full Self-Driving (FSD) and household tasks. He predicted common sense would be solved with the “brute force” of computing power and by the development of an “encyclopedia of thought” by endeavors such as CYC, a long-term AI project by Douglas B. Lenat, who founded Cycorp, Inc. CYC sought to capture common sense by assembling a comprehensive knowledge base covering basic ontological concepts and rules. The Austin-based company focused on implicit knowledge like how to walk and ride a bicycle. CYC eventually developed a powerful reasoning engine and natural language interfaces for enterprise applications like medical services.
Kaku went to MIT to explore the challenge of pattern recognition. Poggio’s Machine at MIT researched “Immediate Recognition,” where an AI must quickly recognize a branch falling or a cat crossing the street. It is important to develop the ability to instantly recognize an object, even before registering it in our awareness. This ability was a great trait for humanity as it was evolving through its hunter stage. Life and death decisions are often made in milliseconds, and any AI operation driving our cars or other life-critical technology needs to operate within that timeframe. With some trepidation, Kaku recounts how the robot consistently scored higher than a human (and him) on a specific vision recognition test.
AI made significant advancements in solving the pattern recognition problem by developing and applying machine learning techniques roughly categorized into supervised, unsupervised, and reinforcement learning. These are, briefly: learning from labeled data to make predictions, identifying patterns in unlabeled data, and learning to make decisions through rewards and penalties in an interactive environment. Labeled data “supervises” the machine to produce your desired information. Unsupervised learning is beneficial when you need to identify patterns from large amounts of scattered data and make decisions. Reinforcement learning is similar to human learning, where the algorithm interacts with its environment and gets a positive or negative reward.
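Hebb's rule and the reward-driven flavor of learning can be illustrated with a minimal perceptron, which adjusts its weights only when a decision is wrong, nudging the network toward correct decisions. The toy task (learning logical AND) and learning rate are invented for illustration:

```python
def predict(w, x):
    """Fire (output 1) when the weighted sum of inputs is positive."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Toy AND problem; the leading 1 in each input is a constant bias term.
data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
w = [0.0, 0.0, 0.0]
lr = 0.1

for _ in range(20):                       # a few passes over the data
    for x, target in data:
        error = target - predict(w, x)    # 0 when the decision was correct
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]

print([predict(w, x) for x, _ in data])   # learns AND: [0, 0, 0, 1]
```

Modern networks replace this single neuron with millions of weights trained by gradient descent, but the loop of predict, compare, and adjust is the same.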
The need for labeled data for training machine learning algorithms dates back to the early days of AI research. Researchers in pattern recognition, natural language processing, and computer vision have long relied on manually labeled datasets to develop and evaluate algorithms. Crowdsourcing platforms made obtaining labeled datasets for machine learning tasks easier at a relatively low cost and with quick turnaround times. Further improvements would improve the accuracy, efficiency, speed, and scalability of AI labeling.
Companies and startups emerged to provide AI developers and organizations with data labeling services. These companies employed teams of annotators who manually labeled or annotated data according to specific requirements and guidelines, ensuring high-quality labeled datasets for machine learning applications. Improvements included developing semi-automated labeling tools, active learning algorithms, and methods for handling ambiguous data.
Poggio’s machine at MIT represents an early example of machine learning and computer vision applied to autonomous driving. Subsequently, Tesla’s Full Self-Driving (FSD) system embodied a modern approach based on machine learning and real-world, networked data collection. Unlike Poggio’s earlier driving machine, which relied on handcrafted features and rule-based algorithms, Tesla’s FSD system utilizes a combination of neural networks, deep learning algorithms, and sensor data (e.g., cameras, radar, LiDAR) to enable and improve autonomous driving capabilities, including automated lane-keeping, self-parking, and traffic-aware cruise control. One controversial move is that FSD relies mainly on labeling video pixels from cameras, as they have become the most cost-effective option. LiDAR, infrared, and radar are currently considered too expensive.
Tesla’s approach to autonomous driving has emphasized real-world data collection and crowdsourcing by learning from millions of miles of driving data collected online from the fleet of Tesla vehicle owners. This information is used to train and refine the FSD system’s algorithms, although it still faces challenges related to safety, reliability, regulatory approval, and addressing edge cases. Tesla continues to leverage machine learning to acquire driving knowledge directly from the data and improve performance over time through continuous training and updates.
Reverse Engineering the Brain
Reverse engineering became a popular concept after Compaq reverse engineered the IBM BIOS in the early 1980s to bypass IBM’s intellectual property protections on its Personal Computer (PC). The movie Paycheck (2003) explored a similar but hypothetical scenario of reverse engineering. MIT’s James DiCarlo describes how reverse engineering the brain can be used to understand vision better. Professor DiCarlo describes how convolutional neural networks (CNNs) mimic the human brain with networks that excel at finding patterns in images to recognize objects.
Kaku addresses reverse engineering by asking whether AI should proceed along lines of mimicking biological brain development or whether it would be more like James Martin’s “Alien Intelligence.” To address this issue, Kaku introduced IBM’s Blue Gene computer: a “quarter acre” of rows of jet-black steel cabinets, each rack about 8 feet tall and 15 feet long. Housed at Lawrence Livermore National Laboratory in California, it was capable of a combined speed of 500 trillion operations per second. Kaku visited the site because he said he was interested in Blue Gene’s ability to simulate thinking processes. A few years later, Blue Gene was operating at 428 teraflops.
Blue Gene worked on the capability of a mouse brain, with its 2 million neurons, as compared to the 100 billion neurons of the average human. It was a difficult challenge because every neuron is connected to many other neurons; together they make up a dense, interconnected web that takes a lot of computing power to replicate. Blue Gene was designed to simulate the firing of neurons found in a mouse, which it accomplished, but only for several seconds. It was Dawn, also based at Livermore, that in 2007 could simulate an entire rat’s brain (which contains 55 million neurons, far more than the mouse brain). Blue Gene/L ran at a sustained speed of 36.01 teraflops, or trillions of calculations per second.
What is Robotic Consciousness?
Kaku suggests at least three issues be considered when analyzing AI robotic systems. One is self-awareness: does the system recognize itself? Second, can it sense and recognize the environment around it? Boston Dynamics’ robotic “dog,” for example, now uses SLAM (Simultaneous Localization and Mapping) to recognize its surroundings and uses algorithms to map its location.[3] SPOT uses 360-degree cameras and LiDAR to sense the surrounding environment in 3D. It is being used in industrial environments to sense chemical and fire hazards. It uses Nvidia-designed GPU chips and a built-in 5G modem for network connections to get data from the digital canine.
Another issue in determining consciousness is simulating the future and plotting strategy. Can the system predict the dimensions of causal relationships? If it recognizes the cat on the side of the road, can it predict what its next actions might be, including crossing into the street? Finally, can it sense and ask “What if?” From that, can it develop sufficient scenarios that extrapolate into the future and develop strategies for obtaining a desired outcome?
Kaku and the Singularity
Lastly, Kaku was intrigued with the concept of “singularity.” He traces this idea to his area of expertise, relativistic physics. Singularity represents a point of extreme gravity, where nothing can escape, not even light. “Singularity” was popularized by the mathematician and computer scientist Vernor Vinge in his 1993 essay “The Coming Technological Singularity.” Vinge argued that the creation of superintelligent AI would surpass human intellectual capacity and mark the end of the human era. The term has since been used by enthusiasts such as Ray Kurzweil, who believes that the exponential growth of Moore’s Law will deliver the needed computing power for the singularity around 2045. He believes that humans will eventually merge with machines, leading to a profound transformation of humans and society.
Kaku is cautious and conservative about the more extreme predictions of the singularity, particularly those that suggest a rapid and uncontrollable explosion of superintelligent machines. He acknowledges that computing power has been growing exponentially but doubts the trend will continue. There are also significant challenges to achieving true artificial general intelligence (AGI). He argues that replicating or surpassing human intelligence involves more than just increasing computational power.
Kaku believes that advancements in AI and related technologies will occur in incremental improvements that will enhance human life but not necessarily lead to a runaway intelligence explosion. Instead of envisioning a future dominated by superintelligent machines, Kaku imagines a more symbiotic relationship between humans and technology. He foresees humans enhancing their own cognitive and physical abilities through biotechnology and AI, leading to a more integrated coexistence.
But once again, he ignores a networked singularity that would involve interconnected AI systems, distributed intelligence, enhanced human-AI integration, and advanced data networking infrastructure. But could the networked robot become the nexus of singularity? Kaku believes this interconnected future holds immense potential for solving complex global problems and enhancing human capabilities, even though it raises issues of security, privacy, regulation, and social equity.
The Robotic Future
The proliferation of machine learning algorithms and cloud computing platforms since the 2000s accelerated the integration of AI and now robotics with networking technologies. Machine learning models, trained on large datasets, can be deployed and accessed over networked systems, enabling AI-powered applications in areas such as image recognition, natural language processing, and autonomous systems. Cloud computing allows these AI models and robotic machines to be updated, maintained, and scaled efficiently, ensuring widespread access and utilization across various sectors.
Cloud-based computing provides the computational power required for sophisticated AI algorithms. It offers scalable resources that can handle the intensive processing demands of AI, from training complex models to deploying them at scale. Cloud platforms also enable collaborative efforts in AI research and development by providing a centralized repository for data and models, fostering innovation and continuous improvement.
The development of AI is deeply intertwined with advancements in robotics in conjunction with data networking, networking infrastructure, and cloud-based computing capabilities. These technological advancements enable the deployment of robotics in real-time applications such as healthcare, finance, and manufacturing by supporting decision-making and enhancing operational efficiency across various sectors. The continued development of AI networking is essential for the ongoing integration and expansion of robotic technologies in our daily lives.
Kaku envisions a future where technology solves major challenges such as disease, poverty, and environmental degradation. He advocates for ongoing research and innovation while remaining vigilant about potential risks and unintended consequences. He emphasizes the importance of a gradual, symbiotic relationship between humans and technology. Kaku also highlighted the significance of Isaac Asimov’s Three Laws of Robotics, which are central to the plot of I, Robot. He praised the film for exploring these laws and their potential limitations. The Three Laws are designed to ensure that robots act safely and ethically, but the movie illustrates how these laws can be overridden in unexpected ways and are not to be trusted by themselves.
Citation APA (7th Edition)
Pennings, A.J. (2024, Jun 22). AI and the Rise of Networked Robotics. apennings.com https://apennings.com/technologies-of-meaning/the-value-of-science-technology-and-society-studies-sts/
Notes
[1] Javaid, S. (2024, Jan 3). Generative AI Data in 2024: Importance & 7 Methods. AIMultiple: High Tech Use Cases & Tools to Grow Your Business. https://research.aimultiple.com/generative-ai-data/#the-importance-of-private-data-in-generative-ai
[2] Kaku, M. (2011) Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. New York: Doubleday.
[3] Lee, Y.-M., & Seo, Y.-D. (2009). Vision-based SLAM in augmented/mixed reality. Korea Multimedia Society, 13(3), 13-14.
Note: ChatGPT was used for parts of this post. Multiple prompts were used and parsed.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: Alan Turing > artificial general intelligence (AGI) > CYC > full self-driving (FSD) > I, Robot > neural networks > Poggio’s Machine > Singularity
Ode to James Larson, SUNY Korea’s First Professor Emeritus
Posted on | June 4, 2024 | No Comments
Remarks at the Congratulatory Plaque Award Ceremony for Professor Emeritus June 5, 2024
Ladies and Gentlemen.
I’m pleased to say a few words as we celebrate SUNY Korea’s first Professor Emeritus.
I came to SUNY Korea eight years ago this past February as the Associate Chair of the Department of Technology and Society, while Professor Larson took on the position of Vice President of Academic Affairs.
So, I would visit his office every weekday morning at 10 am and get these extraordinary briefings. We talked about the history of SUNY Korea and the Songdo area, including international organizations like the United Nations office for Disaster Risk Reduction (UNDRR) that we have worked with to develop a number of courses for our master’s degree.
We talked a lot about the development of the DTS program, including SUNY Korea’s first graduates, the Master’s degree students in Technological Systems Management, or TSM as we call it. Later, the first undergraduates to get their degrees from SUNY Korea would have a Bachelor of Science in Technological Systems Management.
We had a common background in Communications Technology and Economic Development, so a lot of our focus was on the creation of the undergraduate specialization called ICT4D or Information and Communications Technologies for Sustainable Development, which was initially a hybrid program with the Computer Science Department and stresses 4 areas: Data Science, Networking, Mobility, and Entrepreneurship.
It was also created with the immense help of the late Dave Ferguson, the DTS chair and professor at Stony Brook, who was a regular visitor here in Songdo.
Probably the most important of our discussions was about the history of Korea and its ICT development, particularly Oh Myung’s role in Korea’s digital transformation. This collaboration with Dr. Oh, who got his PhD from Stony Brook in Electrical Engineering, led to many of Professor Larson’s publications, including Digital Development in Korea: Lessons for a Sustainable World that he co-authored with Dr. Oh in 2020.
So, to wrap up, let me just say that Professor Larson has a lot of knowledge about Korea; he has a strong passion for Korea and a strong passion for sharing its story with the world. That is why I’m particularly pleased that he has the platform of Professor Emeritus, so he can continue to research and share his knowledge of Korea’s ongoing digital, and thus social, transformation with the world, and with Korea.
Congratulations Professor Larson.
Citation APA (7th Edition)
Pennings, A.J. (2024, Jun 4) Ode to James Larson, SUNY Korea’s First Professor Emeritus. apennings.com https://apennings.com/sustainable-development/ode-to-james-larson-suny-koreas-first-professor-emeritus/
Analyzing the Market Structure of a Product
Posted on | May 20, 2024 | No Comments
These are class notes for my Engineering Economics class for their final assignments. Use the citation below.
What is a monopoly? What is an oligopoly? Or even more confusing – what is an oligopsony? These are terms used to describe the state of competition among firms buying or selling similar or related products. Firms seek to find an advantage to distinguish themselves from the competition when offering a specific set of products. But as we saw in a previous post, economic products themselves have certain characteristics that influence their selling conditions.
Perfect competition, for example, consists of many buyers and sellers, with none able to influence the price of a product. Here are some other common market structures:
Oligopoly – several large sellers that have considerable control over the price of a product
Monopoly – one seller with considerable control over the supply and price of a product
Monopsony – one buyer with considerable control over the demand and price of a product
Oligopsony – several large buyers have considerable control over the purchase price of a product.
Market structure has become a key focus of strategic thinking in modern firms. It refers to the environment for selling or buying a product or product series and influences key decisions about investments in production, people, and promotion. It is impacted by technological innovations, government regulations, customer behaviors, and costs. Market structure has an impact on the conduct of the firm and can influence their economic success.
Market structure is primarily about the state of competition for a product and how many rivals a company will have to deal with when introducing it. How easy is it to enter that market? Will the product be successful based on current designs and plans for it or will the product need to be changed? How will the product be priced?
How competitive are digital and tech environments? Due to technological innovation and globalization, competitive opportunities and restrictions are under scrutiny. The Internet and its World Wide Web (WWW) have introduced new dynamics that have been the subject of major research studies. A surge of platforms into the digital environment with “Web 3.0” introduced disruptive features as e-commerce expanded beyond “dot-com” B2C and B2B connections to AI and blockchain.
Market Type and Number of Sellers
The concept of market structure has not only influenced microeconomics but also provided essential tools for managers.
This post examines different states of competition among firms supplying digital goods and services. It will look at the number of firms supplying a product and the importance of differentiation between products offered. An important factor is the barriers to entry (or competitive advantages) into the market for a particular product. Barriers to entry can help a digital media firm establish and hold market presence for its product and will be discussed at length in other posts.
Monopolies
Most people are familiar with the idea of a monopoly. It refers to one company with considerable control over the supply and price of a product. For a long time AT&T had a monopoly over the telephone system in the US. They supplied a black rotary phone that could connect to nearly every phone in the country. Some electric utility companies have a monopoly like HECO in Hawaii. Usually some government involvement is needed to maintain a monopoly. The term “natural monopoly” emerged to refer to a firm that can serve the entire market demand of a product at a lower cost than a combination of two or more smaller, more specialized firms.
A topic that will be discussed in more detail below is the situation in which an organization has strong buying power. These firms are called monopsonies.
Many companies do not control 100 percent of a market. Google controls some 75% of the global web search market, with Bing serving some 8%, Baidu about 7%, and Yahoo! around 5%. Baidu is dominant in the Chinese-language market, where it also benefits from government protection. Facebook is dominant in social media, with a considerable lead over Google+, as the search engine giant basically ceded the friend-to-friend (F2F) social media market to Facebook.
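Market concentration like this can be quantified. Two standard measures (my addition, not from the class notes) are the N-firm concentration ratio and the Herfindahl-Hirschman Index (HHI), sketched here with the approximate search-engine shares cited above:

```python
# Approximate global search shares cited above (percent); illustrative only.
shares = {"Google": 75, "Bing": 8, "Baidu": 7, "Yahoo!": 5}

def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared percentage shares (0-10,000)."""
    return sum(s ** 2 for s in shares_pct)

def cr_n(shares_pct, n=4):
    """N-firm concentration ratio: combined share of the n largest firms."""
    return sum(sorted(shares_pct, reverse=True)[:n])

print(hhi(shares.values()))   # 5763 -> US merger guidelines treat >2500 as "highly concentrated"
print(cr_n(shares.values()))  # 95
```

An HHI near 5,800 and a four-firm concentration ratio of 95% both place search far beyond the rough 40% oligopoly threshold used later in these notes.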
Sometimes you will hear the term “duopoly” to refer to a situation where two companies dominate a market: Coke and Pepsi for cola drinks, Airbus and Boeing for commercial aircraft, Visa and Mastercard for credit card authorization, Apple’s iOS and Google’s Android for mobile operating systems, and Apple and Microsoft for personal computer operating systems. These companies are more accurately referred to as oligopolies.
Oligopolies
A more useful term is oligopoly, a condition where several large sellers have considerable control over the price of a product. Mobile services are a good example: AT&T, T-Mobile, and Verizon provide almost all the wireless services in US markets. The bar is a little lower here, as this type of market structure is usually said to exist when a small number of firms own more than 40% of the market.
Media companies have generally structured themselves this way. BMG, EMI, Universal Music, and Warner have been traditional powerhouses, although digital technologies continue to disrupt the music industry. Disney, CBS, Time Warner, NBC Universal, Viacom, and Fox News Corporation dominate the mediasphere and are considered oligopolies. Over-the-top (OTT) media industries are a bit more competitive and often considered monopolistic competition due to many new entrants, product differentiation (different types of content and user interfaces), relatively low barriers to market entry, and market power derived from customer captivity and some leeway over pricing.
Monopolistic Competition
Despite its confusing name, this category has quite a bit of competition. In a monopolistic competition market structure, firms achieve differentiation through various means, including product features, quality, branding, customer service, and marketing strategies. This differentiation allows companies to attract specific customer segments, build brand loyalty, and exert some control over pricing, despite the presence of many competitors. This environment encourages innovation and provides consumers with a diverse range of choices.
Think restaurants. Many producers sell products that are differentiated from one another (e.g., by branding or quality) and hence are not perfect substitutes. This is imperfect competition: each firm attempts to make its product unique, and thus to hold a monopoly for that unique product, since other products are just not the same.
Perfect competition
Perfect competition is a theoretical ideal that provides a useful benchmark for understanding market dynamics. While no market perfectly fits all criteria, agricultural products, commodities, and certain financial markets come closest.
These markets feature numerous small producers, homogeneous products, and prices determined by overall supply and demand rather than individual firms’ actions. Understanding the characteristics of perfect competition helps in analyzing how real-world markets function and where they diverge from the ideal.
Perfect competition can emerge when a very large number of firms produce and distribute a homogeneous product. When I lived in New York City, I enjoyed going to farmer’s market at Union Square. It was a pretty good example of perfect competition.
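The idea that price is set by overall supply and demand rather than by any single seller can be sketched with a toy linear model (my illustration, with made-up coefficients, not from the class notes):

```python
# Toy perfectly competitive market: linear demand Qd = a - b*p and linear
# supply Qs = c + d*p. The market clears where Qd == Qs, so no single
# farmer's stall at Union Square can move the price on its own.
def equilibrium(a, b, c, d):
    p = (a - c) / (b + d)   # market-clearing price: set a - b*p = c + d*p
    return p, a - b * p     # (price, quantity traded)

price, qty = equilibrium(a=100, b=2, c=10, d=1)
print(price, qty)  # 30.0 40.0
```

Any individual seller charging above the clearing price sells nothing, since buyers can always find an identical product at the market price.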
Market structure analysis can give us insights into profitability, consumer price levels, innovation and research spending, as well as productivity levels. The key factors discussed in this type of analysis are the number of firms supplying a product, the levels of differentiation between products, and the competitive advantages a company has to set up barriers to entry for other companies coming into the market.
Citation APA (7th Edition)
Pennings, A.J. (2024, May 21). Analyzing the Market Structure of a Product. apennings.com https://apennings.com/dystopian-economies/analyzing-the-market-structure-of-a-product/
Note: ChatGPT was used for parts of this post but most came from my writings for the manuscript Digital Economies and Sustainable Strategies.
Tags: Monopoly > Oligopsony
Determining Competitive Advantages for Tech Firms, Part 2
Posted on | May 15, 2024 | No Comments
In a previous post on competitive advantages, I discussed some structural characteristics for digital media firms. Using the framework laid out in Curse of the Mogul: What’s Wrong with the World’s Leading Media Companies as a point of departure, I was able to extend their analysis of traditional media companies to the more dynamic realms of digital tech firms.
For digital tech companies to thrive, it’s crucial to grasp the strategic significance of fortifying barriers to entry. This understanding not only solidifies their positions but also paves the way for profitability. In the competitive landscape, it’s vital to comprehend how companies can fend off potential threats from others eyeing their market share. In this post, I delve into the analysis of competitive advantages, broadening the scope to encompass the dynamic world of “tech” companies.
The authors critiqued media moguls for not paying adequate attention to four general categories of competitive advantages: economies of scale, customer captivity, cost, and government protection. Previously, I covered economies of scale and customer captivity. I paid particular attention to network effects, one of the tech firms’ most critical determiners of success. Customer captivity in terms of habits, search costs, and switching costs are also important determinants of success for companies dealing with digital applications, media programming, and physical products.
In this post, I focus on innovation, cost, and government protection. Tech companies need to proactively develop and protect new technologies as well as instill a culture of rapid learning and implementation. They also need access to vital resources, whether raw minerals or refined human knowledge and skills. Lastly, government support can help a firm develop a competitive advantage.
Innovation involves developing, utilizing, and protecting technologies, implementing a climate of learning, and applying new knowledge to fundamental production and work processes. While the book puts these under the category of cost, I thought it might be more beneficial to examine these processes through the lens of innovation. This rationale is partially due to the changes in GDP measurement that now include many aspects of research and development – as well as media production – as capital expenditures and not expenses.
Tech and digital media firms need to develop key proprietary technologies that they can use and protect. This process increasingly involves software enhancements to core production techniques and digital innovations such as recommendation engines and other “big data” solutions, including new developments in AI.
Guarding the firm against cyber-espionage and techniques like reverse engineering has also become a high priority. By disassembling and studying competitors’ hardware or software products, companies can uncover design secrets, algorithms, and proprietary technologies. When startup Compaq reverse-engineered IBM’s BIOS, it destroyed Big Blue’s major advantages in the personal computer (PC) industry, allowing many companies to run software designed for the IBM PC on other PCs with Microsoft’s operating system.
Utilizing intellectual property protections such as copyrights, trademarks, and patents, including the business method patent, can provide legal protection for a product and guard against encroaching companies. Patents, for example, give the owner the exclusive use of a technology for 14-20 years.
Tech firms should strive for constant improvements in production and efficiencies to separate themselves from the “pack” through organizational learning. They should also be cognizant of the opportunities inherent in disruptive innovations that may initially offer poorer performance, but that may improve or reach new audiences over time.[2] Disruptive innovations can redefine market leadership, create new value propositions, alter industry standards, impact business models, encourage agile strategies, and increase competitive pressure. Companies that can anticipate, adapt to, and leverage these innovations are better positioned to maintain and enhance their competitive advantages.
As digital media and tech companies traffic in various types of communication and content, it is crucial that they find new ways to produce, package and monetize media. The authors are wary of business models based on content “hits” and stress instead the importance of producing continuous media and a “long tail” of legacy content. The long tail refers to unique items that may individually have low demand but can generate significant cumulative market interest or web traffic. This may require innovations in digital media production, programming, and ways to utilize user-generated content. By acquiring and offering a vast library of legacy media content, streaming platforms like Amazon Prime, Hulu, and Netflix can attract a wide range of subscribers, including niche audiences who are fans of older or less mainstream content that might not be available on competing platforms.
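The long-tail logic can be illustrated numerically (a stylized sketch I am adding, with hypothetical numbers): if demand falls off with popularity rank in Zipf-like fashion, the combined demand for a large back catalog can rival that of the hits.

```python
# Stylized Zipf-like demand: the item at popularity rank r sells in
# proportion to 1/r. With a large enough catalog, the many low-demand
# "tail" items collectively outweigh the top-100 "hits".
def demand(rank):
    return 1.0 / rank

CATALOG = 100_000  # hypothetical size of a streaming back catalog

head = sum(demand(r) for r in range(1, 101))            # top-100 hits
tail = sum(demand(r) for r in range(101, CATALOG + 1))  # everything else

print(tail > head)  # True: the tail's cumulative demand exceeds the head's
```

This is why a deep legacy library can be worth as much to a streaming platform as a handful of blockbusters, even though no single catalog title is itself a hit.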
Cost issues involve ensuring access to essential resources or what economists call “factors of production” (land, labor, capital, entrepreneurship). These might be cheap energy and other natural resources, talented labor, sources of investment as well as expertise in startups. Google’s Finland data center and the Green Mountain Data Center in Norway are good examples of attempts to use the cold waters in those areas to cool thousands of servers and reduce energy costs.
Raw materials are critical for the high-tech sectors and are threatened by geopolitical factors. Rare earth elements (REEs) are especially critical in the manufacture of various high-tech products, renewable energy technologies, and defense systems. Products like EVs, headphones, smartphones, and windmills rely on a number of raw minerals, including indium, niobium, platinum, and titanium. Indium, for instance, is used in touchscreens, liquid crystal displays, and the manufacture of microprocessors. Africa and China have been major suppliers of critical raw materials for the high-tech sector, but Australia, the US, and places like Greenland are increasing production. Ukraine and Russia used to collaborate on the production of neon, a major factor in lasers and semiconductor photolithography, but lately South Korea has successfully sourced locally produced neon.
Access to skilled labor and a climate of intellectual discussion are also important factors to consider. Richard Florida’s thesis that working talent congregates around creative clusters is instructive. He encourages areas interested in developing their creative economies to follow this advice: “To develop economically, Florida encourages nations and regions to support their universities, particularly faculties that do science and technology; cultivate new industries that capitalize on creativity; prepare people for a creative global economy, and foster openness and tolerance to attract the creative class.”[3]
Government protection can also impart benefits to a tech business or be a deterrent to its competitors.[4] From the perspective of an individual firm, it can benefit from outright subsidies, grants, or guaranteed loans. The National Telecommunications and Information Administration (NTIA) is one of the most supportive US agencies for digital enterprises, and the Small Business Administration (SBA) provides investment capital and loans.
Preferential purchase policies can give companies an edge. Governments often list specific advantages they are willing to provide smaller to medium-sized enterprises (SMEs), especially those related to specific sustainability, or gender/minority diversification programs. Often, these are advertised as support for specific products or services.
Exclusive licenses have been a historical reality in the media business, primarily due to the importance of a scarce resource – the electromagnetic spectrum. This key media resource has gone primarily to television and radio operators, but the interest in mobile services and Wi-Fi has opened up new frequencies for use. When we created PenBC (Pennings Broadcasting Corp. – seriously), the prime asset was the FCC license for microwave transmission from the satellite dishes to high rise buildings throughout Honolulu.
The 2015 FCC auction of low-frequency spectrum was interesting to watch as incumbents AT&T and Verizon fought off other mobile carriers such as T-Mobile and satellite TV provider Dish Network, which had garnered US Justice Department support to achieve a more level playing field. Verizon was the only wireless operator to win a nationwide license in the 700MHz auction in 2008. The new spectrum it won with US$20 billion in the 2015 auction allowed it to offer faster speeds on its 4G LTE network, so customers could do more bandwidth-intensive activities like watching video on their smartphones and tablets.
A government may also erect barriers to entry in favor of domestic industries to support local media content and tech industries. It may utilize import tariffs and/or quotas, such as President Biden’s extension of Trump’s tariffs on China and the more recent ones on EVs and semiconductors.
Regulations, whether environmental, safety-related, procedural, or otherwise, can significantly impact organizations. They often impose stricter burdens on some companies than others. These regulations are typically drafted by specific companies or related trade associations, often with the assistance of former government agency employees. They may advocate for government administrative support or legislation, and their authors often recommend the use of effective lobbying strategies.
In “Determining Competitive Advantages for Digital Media Firms, Part 1,” I discussed barriers to entry related to economies of scale, such as fixed and marginal costs, as well as network effects. I also discussed how different forms of customer captivity can be beneficial for tech firms. Above, I looked at innovation, cost, and government regulation. It is also important to understand that two or more competitive advantages may be operating at the same time. Recognizing the potential of reinforcing multiple barriers to entry and planning strategies that involve several competitive advantages will increase a company’s odds of success.
Citation APA (7th Edition)
Pennings, A.J. (2024, May 15). Determining Competitive Advantages for Tech Companies, Part 2. apennings.com https://apennings.com/digital-media-economics/determining-competitive-advantages-for-tech-firms-part-2/
Notes
[1] Jonathan A. Knee, Bruce C. Greenwald, and Ava Seave, The Curse of the Mogul: What’s Wrong with the World’s Leading Media Companies. 2014.
[2] Christensen, Clayton M. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School, 1997.
[3] Pennings, A.J. (2011, April 30). Florida’s Creative Class Thesis and the Global Economy. apennings.com https://apennings.com/meaningful_play/floridas-creative-class-thesis-and-the-global-economy/
[4] The history of early digital innovation and development is a case study in government involvement. IBM got its start with the national census and social security tabulation. The microprocessor and the PC industry emerged through the Space Race and MAD (Mutually Assured Destruction) and the Internet can be said to have taken off after the Strategic Defense Initiative or “Star Wars” required supercomputers at different universities to use the NSFNET. National defense/security spending and other policies can help a company shore up its own defenses against competition.
Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.
Tags: barriers to entry > competitive advantages > data centers > neon lasers > Rare earth elements (REEs)
The Division of Labor in Democratic Political Economies
Posted on | April 12, 2024 | No Comments
In this post, I examine some of the structural characteristics that make the success of the economy a priority for government leadership in democratic political economies (DPEs). DPEs vary, but they are generally republics in which intermediating politicians represent the populace in managing governments and administering public responsibilities. The post expands on the notion that a division of labor has emerged in DPEs and examines the structural pressures that drive both the public and private sectors toward a common objective – economic success – despite differing approaches and competencies.[1]
Dividing the Labor to Ensure a Strong Economy
Neither the private nor the public sector can ensure successful economic growth alone, but by recognizing this division of labor, DPEs can channel government and corporations toward mutually reinforcing successes. Attention to this division of labor and the structural properties that guide each sector can help achieve significant economic gains. Governments can work to create enabling political economy frameworks (like the Global Information Infrastructure/Internet) that are beyond the scope of private enterprises, yet significantly enhance economic opportunities.[2]
Companies drive economic activity by investing in potential profit-making activities, while governments strive to provide enabling frameworks for economic prosperity. The corporation has emerged in modern times with a legally shaped fiduciary duty to maximize shareholder value through return on investment (ROI). This legal stance tends to marginalize “ESG” (Environmental, Social, and Governance) concerns, including labor concerns such as fair wages, equal opportunity, sufficient benefits, and adherence to labor laws. The influence of ESG on investor decision-making continues to grow, including pressure to reduce environmental “externalities,” the costs paid by third parties when a product or service destroys or pollutes air, land, or water.
Democratically elected governments want to organize infrastructure, legal systems, and services to create economic value for voters and maintain political power for themselves and their party. Failure to enable and entice investment and produce economic success within a political boundary can raise significant difficulties for a government and its internal populace. Unstable economies can experience rapid de-investments due to the mobility of capital.
Globalization of commerce and finance since the 1970s has created new forms of competition and mobility for capital. This trend has challenged the economic base of national and local governments as they compete with each other to attract fluid multinational capital. Tax cuts facilitated US capital flows into China and other low-cost producers, reducing inflation but also cutting jobs and infrastructure investments at home. At stake are jobs and investment returns.
While capitalists are often quite capable of success at the microeconomic level, they are not in a position to manage the economy as a whole. Towards procuring that success, corporations lobby governments and conduct other activities to influence government actions that will help their companies and industry.
Entrepreneurs and other people in business and professional services tend to be highly focused on their own profitability while spending only limited resources on community and civic affairs. Market activities are competitive and barriers to entry transitory. Private activities are insufficient and unable to maintain parks, libraries, roads, and other public goods that enhance the quality of life. And yet, these public goods are often responsible for attracting capital and talent needed for innovation and competitiveness.
As a result, democratic political economies tend to divide the responsibilities for modern economic life. Corporations focus on commercial and financial success. Governments provide, among other things, a judicial system to protect contracts, educational support to train workers, and administrative support to protect the populace from pollution and other dangers. Each shares an interest in robust commercial activities, albeit for differing reasons.
Perhaps most important is a monetary system that facilitates transactions and maintains price stability. DPEs primarily use a fractional reserve banking system that creates money through debt. This is capitalism’s “pedal to the metal” economic system, creating what economists like Joseph Schumpeter and Werner Sombart called “creative destruction.” Modern Monetary Theory (MMT) has effectively argued that currency issuers like national governments play a crucial role in wealth production by supplying much-needed money and debt instruments. Governments spend money into the economy so companies and consumers have the liquidity to produce and consume.
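The textbook mechanics of fractional-reserve money creation can be sketched as a geometric series (a standard simplification I am adding, not a claim from this post): each deposit is re-lent minus the required reserve, so an initial deposit expands the money supply toward initial / reserve_ratio.

```python
# Stylized deposit-expansion loop: banks hold a fraction of each deposit
# in reserve and re-lend the rest, which returns as a new deposit elsewhere.
def deposit_expansion(initial, reserve_ratio, rounds=200):
    money, deposit = 0.0, float(initial)
    for _ in range(rounds):
        money += deposit                 # each deposit adds to the money supply
        deposit *= (1 - reserve_ratio)   # only the re-lent portion recirculates
    return money

expanded = deposit_expansion(1000, 0.10)
print(round(expanded))  # converges on 1000 / 0.10 = 10000
```

With a hypothetical 10% reserve requirement, an initial 1,000 deposit supports roughly 10,000 in total deposits, which is the sense in which the banking system "creates money through debt."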
When it comes to ensuring a successful and prosperous political economy, democratic societies have certain structural conditions that guide the emergence of their particular form of capitalism. Within limits, the political economy can take a variety of forms, from highly exploitative, accumulation-oriented oligarchies to, at the other end of the scale, highly redistributive societies. Effective development strives for high-integration strategies that balance accumulation and distribution.[3]
Neither the public nor the private sector in modern democratic societies has sufficient managerial or policy competencies to ensure a thriving economy on its own. Yet both rely on a vigorous economy for their success, and each needs economic success to satisfy its respective electoral or fiduciary constituencies. Despite the division and differing reasons, the goal is the same: a vibrant economy that will ensure both private profits and political triumph.
Governments look to the fruits of a growing economy to offset spending for debt interest, defense, and other services, including welfare. They aim to maintain a happy populace that will keep them in office. They want a prosperous economy to keep people employed, keep share prices high, and keep investment flowing into productive activities that will keep people feeling economically secure and provide tax revenues.
The private sector, in general, is unable to ensure overall capitalistic growth on its own. It lacks sufficient organizational capacity to ensure success at the macroeconomic level. That does not mean the private sector cannot infiltrate governance and the policy sphere. Donald Regan, the former CEO of Merrill Lynch, played a significant role in shaping the economic policies of the Reagan administration. As Secretary of the Treasury and later Chief of Staff, he helped define and implement “Reaganomics,” emphasizing tax cuts, deregulation, and tight monetary policy. Along with Citicorp CEO Walter Wriston and others, he helped shape a global framework based on capital mobility, fiat money, and credit markets. Still, it was not their roles as heads of major financial institutions but their participation in the US political administration that shaped a high-accumulation, low-distribution DPE with national and global implications.
The private sector wants growth and profits as well. Corporations strive to fulfill their primary fiduciary responsibilities: maintaining high profits for owners and shareholders. Toward that end, they lobby governments and conduct other activities to influence government actions that will help their companies and industries. However, while these attempts may help individual companies or industries, they are insufficient to ensure the success of capitalism as a whole.
The Republic’s Interest in the Economy
The first of the major structural mechanisms that Fred Block proposed explains why government officials pursue policies that are in the general interest of capitalism. In his view, government officials are, to some extent, dependent on a level of economic activity that 1) allows the state to finance itself through taxation or borrowing and 2) maintains popular support among the voting citizenry. Significant business investment, high employment levels, and minimal government competition for surplus capital are the most common strategies for ensuring high tax receipts while keeping the voting public relatively content.[4]
Governments require a monetary base to help fund their activities, whether meeting the bureaucracy’s payroll, building infrastructure, or funding defense activities, munitions, and personnel. According to MMT, governments also provide a monetary system to standardize the currency used in the collection of taxes. MMT argues that national governments are currency issuers that create wealth when they legislate money into existence. Taxes do not provide revenues for government spending; rather, they serve as a regulatory mechanism to limit inflation arising from consumer and investment spending. Such taxation is often needed to restrain prices and to motivate official economic activities that use the prescribed currency.
In the US, both Democrats and Republicans have spent liberally. The “Double Santa Claus” argument was set forward by Wall Street Journal editorial writer Jude Wanniski in 1976. In “Taxes and the Two Santa Claus Theory,” he argued that the Democrats should be the spending “Santa Claus” and redistribute wealth while the Republicans should be the tax reduction “Santa Claus” and help spur income growth. The Reagan administration institutionalized this approach: spending continued on entitlement and anti-poverty programs such as Medicare, Social Security, and food assistance through the Supplemental Nutrition Assistance Program (SNAP), while military spending increased dramatically, including investments in new weapon systems, most notably the Strategic Defense Initiative (SDI), commonly known as “Star Wars.” SDI proposed a space-based missile defense system designed to protect the United States from potential nuclear missile attacks. Meanwhile, the administration drastically cut taxes with the Economic Recovery Tax Act of 1981 and the Tax Reform Act of 1986.[5]
Tax policies affect people and groups differently, advantaging some and disadvantaging others. In the process, they make specific governmental trajectories possible. DPEs generally tax a combination of capital gains, income, and sales of goods and services. Inheritance taxes, for example, are meant not only to collect revenues but also to impose a cost on the transfer of wealth and to limit familial privilege and class divisions. The makeup of these tax policy decisions helps dictate an economic direction, so taxation policies should focus on the activities a society wants to diminish or limit.
Administrations also produce debt instruments that help offset government spending. In the global digital financial economy, government expenditures increasingly fund a significant amount of education, healthcare, military, research, and other activities.
Taxation and borrowing offset these spending activities and programs and help ensure a robust commercial sphere. In the US, excess spending is limited legislatively, a constraint dating to the Reagan administration’s major changes to the financial sphere.
These instruments also provide safe collateral and an important hedge for the financial sectors. The US dollar is also produced offshore as a global currency called the “Eurodollar.” International banks create this version of the US dollar through lending, and it is not regulated by the US administration. Since over 80% of global trade is facilitated by the US dollar, Eurodollars bring important liquidity to international trade. But these banks often require high-quality collateral, such as US Treasury bonds or blue-chip corporate debt, to ease any hesitancy to lend.
The global trading environment is complex and requires constant trading in various financial instruments. Government debt lets traders expand their trading activities by holding government securities in their portfolios as a hedge against other speculative losses. Government bonds are also traded constantly in high-frequency markets for arbitrage opportunities, debt rollover, income opportunities, and as a store of potential liquidity.
Common economic doctrine argues that governments compete with the private sector for capital. In reality, however, government spending expands the commercial and financial spheres by enlarging the trading environment, facilitating transactions, and providing instruments for risk reduction. These expenditures are why the US dollar has become the dominant global reserve and transaction currency. The volumes needed are huge, and the US has been willing to run fiscal and trade deficits to provide the currency to the world.
Elected officials also need to keep the voting populace materially happy to stay in office. Economic indicators play a vital role in the public’s perception of the economy. These indexes provide numerical representations of various states of the economy, from consumer confidence to price levels and the latest unemployment rates. In an age when pensions and retirement accounts are invested in the financial markets, the public also follows such indicators as the Dow Jones Industrial Average (DJIA) and NASDAQ to gauge their personal wealth. Many older voters see policies that increase corporate wealth, such as tax cuts, as more valuable than government expenditures on food stamps or other forms of personal welfare as they increase stock prices for mutual funds and retirement accounts.
Significant structural relationships make the business of the economy the business of government. For one, modern democratic governments have significant fiscal determinants that compel them to establish a major stake in the economy. Voters expect sufficient government services from the military and regulatory agencies, along with some degree of welfare support for the disadvantaged. These desires are tempered by the “taxpayer’s money” myth, which holds that government must tax voters before public money can be spent. But governments are “currency issuers” that tax and borrow for other reasons, even as they obtain the financing needed to run the government, provide for the national defense, monitor the economy, and conduct special programs.
Influence Channels and Cultural Constraints
The business class is acutely aware of the effect government has on its interests and works toward shaping that influence, whether by depressing the minimum wage, alleviating environmental restrictions, or shaping tax policy. Many critics of democratic political economies argue that such influence gives capital sufficient control over the state. For Block, however, direct influence is only the “icing on the cake.” Other structural factors are at work and need to be considered.
Two “subsidiary structural mechanisms,” according to Fred Block, are also important in shaping the actions of public administrators toward enhancing economic growth: influence channels and cultural hegemony.
The first of the subsidiary structural mechanisms is the influence channels. The private sector can exert significant pressure on the state through its ability to influence politicians, especially in a media age requiring significant expenditures on TV and other media for advertising. The aims of this influence have generally been the procurement of government contracts, favorable economic legislation, tax cuts, regulatory relief, labor control, and spending in specific areas. The channels themselves are most often campaign contributions, lobbying activities, and other favors.
Undoubtedly, bribery, coercion, and the revolving door into higher-paying jobs may influence policy actions. However, this does not discount larger structural factors at work, particularly the high costs of elections and of procuring media buys for competitive campaigns and public relations. These have tied government officials to the influence of economic concerns.
Cultural hegemony is the second subsidiary structural mechanism. Unwritten rules infiltrate democratic political economies, indicating what is, and what is not, acceptable state activity. “While these rules change over time, a government that violates the unwritten rules of a particular period would stand to lose a great deal of its popular support. This acts as a powerful constraint in discouraging certain types of state action that might conflict with the interests of capital.”[6]
A contemporary example is the cultural divide over immigration. Issues related to race, including systemic racism, police brutality, racial inequality, immigration policy, and affirmative action, continue to be sources of contention and polarization in American society. Other divides over fundamental values, beliefs, and identities have also become prominent in political discourse. “Culture wars” over social and cultural issues such as abortion, LGBTQ+ rights, same-sex marriage, religious freedom, and gender identity are particularly important in the age of social media and shape public opinion, electoral dynamics, policy debates, and social movements.
One potent issue is climate change. President Trump withdrew the US from the Paris Climate Accords amid a growing cultural backlash against concerns about climate pollution and its weather effects worldwide. Many of his “Make America Great Again” (MAGA) supporters were convinced that such commitments would be too expensive, hurt economic progress, and threaten a lifestyle centered on oil-based products, technologies, and transportation. Others refused to believe the scientific discourse and labeled it “elite” science. But mostly, vested interests in petrochemical-related industries drive the discussion on climate change through media practices such as astroturfing to avoid a significant “carbon bubble” collapse. For the most part, liberal progressive movements have embraced sustainable technologies and renewable energies such as hybrid cars, solar panels, and low-carbon food systems.
Summary
While sharing broad common objectives for a robust political economy, the government and the private corporate sectors have differing motivations and strategies for reaching these aims. Despite the division and differing reasons, the goal is the same: a robust economy that will ensure both profits and political success. Neither can, by itself, ensure successful economic growth, but by recognizing this division of labor and the structural properties that guide each sector, democratic political economies can steer government policies and corporations toward mutually reinforcing successes.[7]
Citation APA (7th Edition)
Pennings, A.J. (2024, Apr 12). The Division of Labor in Democratic Political Economies. apennings.com https://apennings.com/democratic-political-economies/the-division-of-labor-in-democratic-political-economies/
Notes
[1] When I was in graduate school studying public administration and political economy, one of the authors that interested me was the sociologist Fred Block. In debates with instrumentalists about “ruling classes,” he delineated the set of structural mechanisms that I primarily use here to determine the relationship between governments and the private sector in modern political economies. In this Jacobin article, he provides a 2020 epilogue on his classic work.
[2] An interesting situation about enabling frameworks emerged with President Obama’s “You didn’t build that” statement during the 2012 presidential election campaign.
- “If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business — you didn’t build that. Somebody else made that happen. The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet.”
The statement quickly received criticism from Governor Romney, a successful businessman, and others as an example of government encroachment in the private sector. The criticism echoed a similar critique of Vice-President Al Gore’s “I took the initiative to create the Internet.” Certainly, the Internet has progressed to be a major medium of global commerce due to entrepreneurial initiatives and accomplishments. However, much of the initial research and development, as well as the policy framework, was created by a wide range of government actions that transformed what was essentially military technology into commercial products and services.
[3] Tehranian, M. (1990). Technologies of Power: Information Machines and Democratic Prospects (foreword by J. Galtung). Norwood, NJ: Ablex Publishing. p. 184.
[4] This is basically a rewrite of my 2018 post that I wrote after Trump was elected president. I started with a discussion of whether a president with business experience is more important than a president with a good understanding of administration and politics. Fred Block’s work was particularly useful, and many of the ideas of a structural division of labor are based on his work, including this quote on p. 14.
[5] In the US, the “double Santa Claus” argument was set forward in “Taxes and the Two Santa Claus Theory” by Wall Street Journal editorial writer Jude Wanniski. He argued that the Democrats should be the spending “Santa Claus” and redistribute wealth while the Republicans should be the tax reduction “Santa Claus” and help spur income growth.
[6] See note [4]; this quote is also from Block, p. 14.
[7] This blog is dedicated to my brother, Richard Pennings, who died on April 12, far too young.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he was on the faculty of New York University where he taught digital economics and comparative political economy. He also taught at St. Edward’s University in Austin, Texas, Marist College in New York, and Victoria University in Wellington, New Zealand. He has also been a Fellow at the East-West Center in Hawaii.
Tags: "taxpayer's money" myth > Donald Regan > Modern Monetary Theory (MMT) > Reaganomics