Determining Competitive Advantages for Tech Firms, Part 2
Posted on | May 15, 2024 | No Comments
In a previous post on competitive advantages, I discussed some structural characteristics for digital media firms. Using the framework laid out in Curse of the Mogul: What’s Wrong with the World’s Leading Media Companies as a point of departure, I was able to extend their analysis of traditional media companies to the more dynamic realms of digital tech firms.
For digital tech companies to thrive, it’s crucial to grasp the strategic significance of fortifying barriers to entry. This understanding not only solidifies their positions but also paves the way for profitability. In the competitive landscape, it’s vital to comprehend how companies can fend off potential threats from others eyeing their market share. In this post, I delve into the analysis of competitive advantages, broadening the scope to encompass the dynamic world of “tech” companies.
The authors critiqued media moguls for not paying adequate attention to four general categories of competitive advantages: economies of scale, customer captivity, cost, and government protection. Previously, I covered economies of scale and customer captivity, paying particular attention to network effects, one of the most critical determinants of success for tech firms. Customer captivity in terms of habits, search costs, and switching costs is also an important determinant of success for companies dealing with digital applications, media programming, and physical products.
In this post, I focus on innovation, cost, and government protection. Tech companies need to proactively develop and protect new technologies as well as instill a culture of rapid learning and implementation. They also need access to vital resources, whether raw minerals or refined human knowledge and skills. Lastly, government support can help a firm develop a competitive advantage.
Innovation involves developing, utilizing, and protecting technologies, implementing a climate of learning, and applying new knowledge to fundamental production and work processes. While the book puts these under the category of cost, I thought it might be more beneficial to examine these processes through the lens of innovation. This rationale is partially due to the changes in GDP measurement that now include many aspects of research and development – as well as media production – as capital expenditures and not expenses.
Tech and digital media firms need to develop key proprietary technologies that they can use and protect. This process increasingly involves software enhancements to core production techniques and digital innovations such as recommendation engines and other “big data” solutions, including new developments in AI.
Guarding the firm against cyber-espionage and techniques like reverse engineering has also become a high priority. By disassembling and studying competitors’ hardware or software products, companies can uncover design secrets, algorithms, and proprietary technologies. When startup Compaq reverse-engineered IBM’s BIOS, it destroyed Big Blue’s major advantages in the personal computer (PC) industry, allowing many companies to run software designed for the IBM PC on other PCs with Microsoft’s operating system.
Intellectual property protections such as copyrights, trademarks, and patents, including the business method patent, can provide legal protection for a product and guard against encroaching companies. Patents, for example, give the owner exclusive rights to a technology, typically for 15 to 20 years depending on the type of patent.
Tech firms should strive for constant improvements in production and efficiencies to separate themselves from the “pack” through organizational learning. They should also be cognizant of the opportunities inherent in disruptive innovations that may initially offer poorer performance, but that may improve or reach new audiences over time.[2] Disruptive innovations can redefine market leadership, create new value propositions, alter industry standards, impact business models, encourage agile strategies, and increase competitive pressure. Companies that can anticipate, adapt to, and leverage these innovations are better positioned to maintain and enhance their competitive advantages.
As digital media and tech companies traffic in various types of communication and content, it is crucial that they find new ways to produce, package and monetize media. The authors are wary of business models based on content “hits” and stress instead the importance of producing continuous media and a “long tail” of legacy content. The long tail refers to unique items that may individually have low demand but can generate significant cumulative market interest or web traffic. This may require innovations in digital media production, programming, and ways to utilize user-generated content. By acquiring and offering a vast library of legacy media content, streaming platforms like Amazon Prime, Hulu, and Netflix can attract a wide range of subscribers, including niche audiences who are fans of older or less mainstream content that might not be available on competing platforms.
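To make the long-tail logic concrete, here is a minimal sketch using a Zipf-like popularity distribution. The catalog size, exponent, and numbers are hypothetical assumptions for illustration only, not data about any actual streaming service.

```python
# A minimal sketch of the "long tail": in a Zipf-like catalog, demand for the
# i-th most popular title falls off as 1/i, yet the low-demand tail still
# accounts for a large cumulative share of total interest.
import numpy as np

catalog_size = 100_000                      # hypothetical number of titles
ranks = np.arange(1, catalog_size + 1)
demand = 1.0 / ranks                        # Zipf-style demand per title
share = demand / demand.sum()               # normalize to shares of total demand

head = share[:100].sum()                    # the top 100 "hits"
tail = share[1000:].sum()                   # everything outside the top 1,000
print(f"Top 100 titles capture {head:.0%} of demand")
print(f"Titles ranked below 1,000 still capture {tail:.0%} of demand")
```

Under these assumptions, the hits take roughly two-fifths of total demand, but the deep catalog outside the top 1,000 titles still accounts for over a third, which is why a large legacy library can be worth acquiring even when no single title is a hit.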
Cost issues involve ensuring access to essential resources or what economists call “factors of production” (land, labor, capital, entrepreneurship). These might be cheap energy and other natural resources, talented labor, sources of investment as well as expertise in startups. Google’s Finland data center and the Green Mountain Data Center in Norway are good examples of attempts to use the cold waters in those areas to cool thousands of servers and reduce energy costs.
Raw materials are critical for the high-tech sector and are threatened by geopolitical factors. Rare earth elements (REEs) are especially critical in the manufacture of various high-tech products, renewable energy technologies, and defense systems. Products like EVs, headphones, smartphones, and wind turbines rely on a number of raw minerals, including indium, niobium, platinum, and titanium. Indium, for instance, is used in touchscreens and liquid crystal displays and in the manufacture of microprocessors. Africa and China have been major suppliers of critical raw materials for the high-tech sector, but Australia, the US, and places like Greenland are increasing production. Ukraine and Russia used to collaborate on the production of neon, a major factor in lasers and semiconductor photolithography, but lately South Korea has successfully sourced locally produced neon.
Access to skilled labor and a climate of intellectual discussion are also important factors to consider. Richard Florida’s thesis that working talent congregates around creative clusters is instructive: “To develop economically, Florida encourages nations and regions to support their universities, particularly faculties that do science and technology; cultivate new industries that capitalize on creativity; prepare people for a creative global economy, and foster openness and tolerance to attract the creative class.”[3]
Government protection can also impart benefits to a tech business or be a deterrent to its competitors.[4] From the perspective of an individual firm, it can benefit from outright subsidies, grants, or guaranteed loans. The National Telecommunications and Information Administration (NTIA) is among the most supportive US agencies for digital enterprises, and the Small Business Administration (SBA) provides investment capital and loans.
Preferential purchase policies can give companies an edge. Governments often list specific advantages they are willing to provide small and medium-sized enterprises (SMEs), especially those related to sustainability or gender/minority diversification programs. Often, these are advertised as support for specific products or services.
Exclusive licenses have been a historical reality in the media business, primarily due to the importance of a scarce resource – the electromagnetic spectrum. This key media resource has gone primarily to television and radio operators, but the interest in mobile services and Wi-Fi has opened up new frequencies for use. When we created PenBC (Pennings Broadcasting Corp. – seriously), the prime asset was the FCC license for microwave transmission from the satellite dishes to high rise buildings throughout Honolulu.
The 2015 FCC auction of low-frequency spectrum was interesting to watch as incumbents AT&T and Verizon fought off other mobile carriers such as T-Mobile and satellite TV provider Dish Network, which had garnered US Justice Department support to achieve a more level playing field. Verizon was the only wireless operator to win a nationwide license in the 700MHz auction in 2008. The new spectrum it won with US$20 billion in the 2015 auction allowed it to offer faster speeds on its 4G LTE network, letting customers do more bandwidth-intensive activities, like watching video on their smartphones and tablets.
A government may also erect barriers to entry in favor of domestic industries to support local media content and tech industries. It may utilize import tariffs and/or quotas, such as President Biden’s extension of Trump’s tariffs on China and the more recent ones on EVs and semiconductors.
Regulations, whether environmental, safety-related, procedural, or otherwise, can significantly impact organizations. They often impose stricter burdens on some companies than others. These regulations are typically drafted by specific companies or related trade associations, often with the assistance of former government agency employees. They may advocate for government administrative support or legislation, and their authors often recommend the use of effective lobbying strategies.
In “Determining Competitive Advantages for Digital Media Firms, Part 1,” I discussed barriers to entry related to economies of scale, such as fixed and marginal costs, as well as network effects. I also discussed how different forms of customer captivity can be beneficial for tech firms. Above, I looked at innovation, cost, and government regulation. It is also important to understand that two or more competitive advantages may be operating at the same time. Recognizing the potential of reinforcing multiple barriers to entry and planning strategies that involve several competitive advantages will increase a company’s odds of success.
Citation APA (7th Edition)
Pennings, A.J. (2024, May 15). Determining Competitive Advantages for Tech Companies, Part 2. apennings.com https://apennings.com/digital-media-economics/determining-competitive-advantages-for-tech-firms-part-2/
Notes
[1] Jonathan A. Knee, Bruce C. Greenwald, and Ava Seave, The Curse of the Mogul: What’s Wrong with the World’s Leading Media Companies. 2014.
[2] Christensen, Clayton M. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School, 1997.
[3] Pennings, A.J. (2011, April 30). Florida’s Creative Class Thesis and the Global Economy. apennings.com https://apennings.com/meaningful_play/floridas-creative-class-thesis-and-the-global-economy/
[4] The history of early digital innovation and development is a case study in government involvement. IBM got its start with the national census and social security tabulation. The microprocessor and the PC industry emerged through the Space Race and MAD (Mutually Assured Destruction) and the Internet can be said to have taken off after the Strategic Defense Initiative or “Star Wars” required supercomputers at different universities to use the NSFNET. National defense/security spending and other policies can help a company shore up its own defenses against competition.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.
Tags: barriers to entry > competitive advantages > data centers > neon lasers > Rare earth elements (REEs)
The Division of Labor in Democratic Political Economies
Posted on | April 12, 2024 | No Comments
In this post, I examine some of the structural characteristics that make the success of the economy a priority for government leadership in democratic political economies (DPEs). DPEs vary, but they are generally republics in which intermediating politicians represent the populace in managing governments and the administration of public responsibilities. The post expands on the notion that a division of labor has emerged in DPEs and examines the structural pressures that drive both the public and private sectors towards a common objective – economic success – despite differing approaches and competencies.[1]
Dividing the Labor to Ensure a Strong Economy
Neither the private nor the public sector can ensure successful economic growth alone, but by recognizing this division of labor, DPEs can channel government and corporations toward mutually reinforcing successes. Attention to this division of labor and the structural properties that guide each sector can help achieve significant economic gains. Governments can work to create enabling political economy frameworks (like the Global Information Infrastructure/Internet) that are beyond the scope of private enterprises, yet significantly enhance economic opportunities.[2]
Companies drive economic activity by investing in potential profit-making activities while governments strive to provide enabling frameworks for economic prosperity. The corporation has emerged in modern times with a legally shaped fiduciary duty to maximize shareholder value through return on investment (ROI). This legal stance tends to marginalize “ESG” (Environmental, Social, and Governance) concerns, including labor concerns such as fair wages, equal opportunity, sufficient benefits, and adherence to labor laws. The influence of ESG on investor decision-making continues to grow, including pressure to reduce environmental “externalities,” the costs paid by third parties when a product or service destroys or pollutes air, land, or water.
Democratically elected governments want to organize infrastructure, legal systems, and services to create economic value for voters and maintain political power for themselves and their party. Failure to enable and entice investment and produce economic success within a political boundary can raise significant difficulties for a government and its internal populace. Unstable economies can experience rapid de-investments due to the mobility of capital.
Globalization of commerce and finance since the 1970s has created new forms of competition and mobility for capital. This trend has challenged the economic base of national and local governments as they compete with each other to attract fluid multinational capital. Tax cuts facilitated US capital flows into China and other low-cost producers, reducing inflation but also jobs and infrastructure investments. At stake are jobs and investment returns.
While capitalists are often quite capable of success at the microeconomic level, they are not in a position to manage the economy as a whole. Towards procuring that success, corporations lobby governments and conduct other activities to influence government actions that will help their companies and industry.
Entrepreneurs and other people in business and professional services tend to be highly focused on their own profitability while spending only limited resources on community and civic affairs. Market activities are competitive and barriers to entry transitory. Private activities are insufficient and unable to maintain parks, libraries, roads, and other public goods that enhance the quality of life. And yet, these public goods are often responsible for attracting capital and talent needed for innovation and competitiveness.
As a result, democratic political economies tend to divide the responsibilities for modern economic life. Corporations focus on commercial and financial success. Governments provide, among other things, a judicial system to protect contracts, educational support to train workers, and administrative support to protect the populace from pollution and other dangers. Each shares an interest in robust commercial activities, albeit for differing reasons.
Perhaps most important is a monetary system that facilitates transactions and maintains price stability. DPEs primarily use a fractional reserve banking system that creates money through debt. This is capitalism’s “pedal to the metal” economic system that creates what economists like Joseph Schumpeter and Werner Sombart called “creative destruction.” Modern Monetary Theory (MMT) has effectively argued that currency issuers like national governments play a crucial role in wealth production by supplying much-needed money and debt instruments. Governments spend money into the economy so companies and consumers have the liquidity to produce and consume.
When it comes to ensuring a successful and prosperous political economy, democratic societies have certain structural conditions that guide the emergence of their particular form of capitalism. Within limits, the political economy can take a variety of forms, such as highly exploitive and accumulation-oriented oligarchies or, on the other end of the scale, a highly redistributive society. Effective development strives for high integration strategies that balance accumulation and distribution strategies.[3]
Neither the public nor the private sector in modern democratic societies has sufficient managerial or policy competencies to ensure a thriving economy. Yet, both rely on a vigorous economy for their success. Each needs economic success to satisfy its respective electoral or fiduciary constituencies. Despite the division and differing reasons, the goal is the same: a vibrant economy that will ensure both private profits and political triumph.
Governments look to the fruits of a growing economy to offset spending for debt interest, defense, and other services, including welfare. They aim to maintain a happy populace that will keep them in office. They want a prosperous economy to keep people employed, keep share prices high, and keep investment flowing into productive activities that will keep people feeling economically secure and provide tax revenues.
The private sector, in general, is unable to ensure overall capitalistic growth on its own. It lacks sufficient organizational capacity to ensure success at the macroeconomic level. That does not mean the private sector cannot infiltrate governance and the policy sphere. Donald Regan, the former CEO of Merrill Lynch, played a significant role in shaping the economic policies of the Reagan administration. As Secretary of the Treasury and Chief of Staff, he helped define and implement “Reaganomics,” emphasizing tax cuts, deregulation, and tight monetary policy. Along with Citicorp CEO Walter Wriston and others, they shaped a global framework based on capital mobility, fiat money, and credit markets. Still, it was not their roles as heads of major financial institutions but their participation in the US political administration that shaped a high accumulation, low distribution DPE with national and global implications.
While corporations are often quite capable of success at the microeconomic level, they are not in a position to manage the economy as a whole. The private sector wants growth and profits as well. Corporations strive to fulfill their primary fiduciary responsibilities – maintaining high profits for owners and shareholders. Towards procuring that success, they lobby governments and conduct other activities to influence government actions that will help their companies and industry. However, while these attempts may help individual companies or industries, they are insufficient to ensure the success of capitalism as a whole.
The Republic’s Interest in the Economy
The first of the major structural mechanisms that Fred Block proposed explains why government officials pursue policies that are in the general interest of capitalism. According to his view, government officials are, to some extent, dependent on a level of economic activity that 1) allows the state to finance itself through taxation or borrowing and 2) maintains popular support among the voting citizenry. Significant business investment, high employment levels, and minimal government competition for surplus capital are the most common strategies for ensuring high tax receipts while keeping the voting public relatively content.[4]
Governments require a monetary base to help fund their activities, whether meeting the bureaucracy’s payroll, building infrastructure, or funding defense activities, munitions, and personnel. According to MMT, governments also provide a monetary system to standardize the currency used in the collection of taxes. MMT argues that national governments are currency issuers that create wealth when they legislate money into existence. In this view, taxes do not provide revenues for government spending but act as a regulatory mechanism to limit inflation driven by consumer and investment spending. These actions are often needed to reduce prices and motivate official economic activities that use the prescribed currency.
In the US, both Democrats and Republicans have spent liberally. The “Double Santa Claus” argument was set forward by Wall Street Journal editorial writer Jude Wanniski in 1976. In “Taxes and the Two Santa Claus Theory,” he argued that the Democrats should be the spending “Santa Claus” and redistribute wealth while the Republicans should be the tax reduction “Santa Claus” and help spur income growth. The Reagan administration institutionalized this approach with increased spending on anti-poverty programs such as Medicare, Social Security, and food assistance programs like the Supplemental Nutrition Assistance Program (SNAP). Military spending increased dramatically, including investments in new weapon systems, most notably, the Strategic Defense Initiative (SDI), commonly known as “Star Wars.” SDI proposed a space-based missile defense system designed to protect the United States from potential nuclear missile attacks and inadvertently laid the foundation for the Internet. Meanwhile, he drastically cut taxes with the Economic Recovery Tax Act of 1981 and the Tax Reform Act of 1986.[5]
Tax policies affect people and groups differently. They advantage some groups and disadvantage others. In the process, they make specific governmental trajectories possible. DPEs generally tax a combination of capital gains, income, sales of goods and services, etc. Inheritance taxes, for example, are meant not only to collect revenues but also to impose a cost on the transfer of wealth and limit familial privilege and class divisions. The makeup of these tax policy decisions helps dictate an economic direction, so taxation policies should focus on what policymakers want to diminish or limit.
Administrations also produce debt instruments that help offset government spending. In the global digital financial economy, these instruments increasingly fund a significant amount of education, healthcare, military, research, and other expenditures.
Taxation and borrowing offset their spending activities and programs, and help ensure a robust commercial sphere. Excess spending in the US is limited legislatively as part of the Reagan administration’s major changes to the financial sphere.
These instruments also provide safe collateral and an important hedge for the financial sectors. The US dollar is also produced as a global currency called the “Eurodollar.” International banks create this version of the US dollar through lending, and it is not regulated by the US administration. Since over 80% of global trade is facilitated by the US dollar, Eurodollars bring important liquidity to international trade. But these banks often require high-quality collateral, like US Treasury bonds or blue-chip corporate debt, to ease any hesitancy to lend.
The global trading environment is complex and requires constant trading in various financial instruments. Government debt allows traders to increase their trading activities by allowing them to hold government securities in their portfolios as a hedge against other speculative losses. Government bonds are also traded constantly in high-frequency markets for arbitrage opportunities, debt rollover, income opportunities, and as a store of potential liquidity.
Common economic doctrine argues that governments compete with the private sector for capital. Still, in reality, government spending increases the commercial and financial spheres by expanding the trading environment, facilitating transactions, and providing instruments for risk reduction. These expenditures are why the US dollar has become the dominant global reserve and transaction currency. The volumes needed are huge, and the US has been willing to go into fiscal and trade deficits to provide the currency to the world.
Elected officials also need to keep the voting populace materially happy to stay in office. Economic indicators play a vital role in the public’s perception of the economy. These indexes provide numerical representations of various states of the economy, from consumer confidence to price levels and the latest unemployment rates. In an age when pensions and retirement accounts are invested in the financial markets, the public also follows such indicators as the Dow Jones Industrial Average (DJIA) and NASDAQ to gauge their personal wealth. Many older voters see policies that increase corporate wealth, such as tax cuts, as more valuable than government expenditures on food stamps or other forms of personal welfare as they increase stock prices for mutual funds and retirement accounts.
Significant structural relationships make the business of the economy the business of government. For one, modern democratic governments have significant fiscal determinants that compel them to establish a major stake in the economy. Voters expect sufficient government services from the military and regulatory agencies, and some degree of welfare support for the disadvantaged. These desires are tempered by the “taxpayer’s money” myth, which says that government needs to tax voters before public money can be spent. But governments are “currency issuers” that tax and borrow for other reasons: to obtain the financing needed to run the government, provide for the national defense, monitor the economy, and conduct special programs.
Influence Channels and Cultural Constraints
The business class is acutely aware of the effect government has on its interests and works towards shaping that influence, whether it be depressing the minimum wage, alleviating environmental restrictions, or shaping tax policy. Many critics of democratic political economies argue that this influence gives capital concerns sufficient control over the state. For Block, however, it is only the first of several factors, the “icing on the cake.” Other structural factors are at work and need to be considered.
Two “subsidiary structural mechanisms,” according to Fred Block, are also important when it comes to shaping the actions of public administrators towards enhancing economic growth. These are influence channels and cultural hegemony.
The first of the subsidiary structural mechanisms is the influence channels. The private sector can exert significant pressure on the state through its ability to influence politicians, especially in a media age requiring significant expenditures on TV and other mediums for advertising. The aims of this influence have generally been oriented towards the procurement of government contracts, favorable economic legislation, tax cuts, regulatory relief, labor control, and specific spending in certain areas. The channels are most often campaign contributions, lobbying activities, and other favors.
Undoubtedly, issues related to bribery, coercion, and the revolving door into higher-paying jobs may influence policy actions. However, this does not discount larger structural factors at work, particularly the high costs of elections and of procuring media buys for competitive campaigns and public relations. These costs have tied government officials to the influence of economic concerns.
Cultural hegemony was cited as a second subsidiary structural mechanism. Unwritten rules infiltrate democratic political economies, indicating what is, and what is not, acceptable state activity. “While these rules change over time, a government that violates the unwritten rules of a particular period would stand to lose a great deal of its popular support. This acts as a powerful constraint in discouraging certain types of state action that might conflict with the interests of capital.”[6]
A contemporary example is the cultural divide over immigration. Issues related to race, including systemic racism, police brutality, racial inequality, immigration policy, and affirmative action, continue to be sources of contention and polarization in American society. Several major cultural divide issues have become prominent in political discourse, such as fundamental values, beliefs, and identities. “Culture wars” over social and cultural issues such as abortion, LGBTQ+ rights, same-sex marriage, religious freedom, and gender identity are particularly important in the age of social media and shape public opinion, electoral dynamics, policy debates, and social movements.
One potent issue is climate change. President Trump withdrew the US from the Paris Climate Accords because of a growing cultural backlash against concerns about climate pollution influencing weather effects worldwide. Many of his “Make America Great Again” (MAGA) members were convinced that such actions would be too expensive, hurt economic progress, and threaten a lifestyle centered on oil-based products, technologies, and transportation. Others refused to believe the scientific discourse and labeled it “elite” science. But mostly, vital interests in petrochemical-related industries drive the discussion on climate change through media practices such as astroturfing to avoid a significant “carbon bubble” collapse. For the most part, liberal progressive movements have embraced sustainable technologies and renewable energies such as hybrid cars, solar panels, and low-carbon food systems.
Summary
While sharing broad common objectives for a robust political economy, the government and the private corporate sectors have differing motivations and strategies for reaching these aims. Despite the division and differing reasons, the goal is the same: a robust economy that will ensure both profits and political success. Neither can, by itself, ensure successful economic growth, but by recognizing this division of labor and the structural properties that guide each sector, democratic political economies can guide government policies and corporations toward mutually reinforcing successes.[7]
Citation APA (7th Edition)
Pennings, A.J. (2024, Apr 12). The Division of Labor in Democratic Political Economies. apennings.com https://apennings.com/democratic-political-economies/the-division-of-labor-in-democratic-political-economies//
Notes
[1] When I was in graduate school studying public administration and political economy, one of the authors who interested me was the sociologist Fred Block. In debates with instrumentalists about “ruling classes,” he delineated the set of structural mechanisms that I primarily use here to determine the relationship between governments and the private sector in modern political economies. In this Jacobin article he provides a 2020 epilogue on his classic work.
[2] An interesting situation about enabling frameworks emerged with President Obama’s “You didn’t build that” statement during the 2012 presidential election campaign.
- “If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business — you didn’t build that. Somebody else made that happen. The Internet didn’t get invented on its own. Government research created the Internet so that all the companies could make money off the Internet.”
The statement quickly received criticism from Governor Romney, a successful businessman, and others as an example of government encroachment in the private sector. The criticism echoed a similar critique of Vice-President Al Gore’s “I took the initiative to create the Internet.” Certainly, the Internet has progressed to be a major medium of global commerce due to entrepreneurial initiatives and accomplishments. However, much of the initial research and development, as well as the policy framework, was created by a wide range of government actions that transformed what was essentially military technology into commercial products and services.
[3] Tehranian, Majid. (1990). Technologies of Power: Information Machines and Democratic Prospects. Foreword by Johan Galtung. Norwood, NJ: Ablex Publishing. p. 184.
[4] This is basically a rewrite of my 2018 post that I wrote after Trump was elected president. I started with a discussion of whether a president with business experience is more important than a president with a good understanding of administration and politics. Fred Block’s work was particularly useful, and many of the ideas about a structural division of labor are based on his work, including this quote on p. 14.
[5] In the US, the “double Santa Claus” argument was set forward in “Taxes and the Two Santa Claus Theory” by Wall Street Journal editorial writer Jude Wanniski. He argued that the Democrats should be the spending “Santa Claus” and redistribute wealth while the Republicans should be the tax reduction “Santa Claus” and help spur income growth.
[6] This is basically a rewrite of my 2018 post that I wrote after Trump was elected president. I started with a discussion of whether a president with business experience is more important than a president with a good understanding of administration and politics. Fred Block’s work was particularly useful, and many of the ideas about a structural division of labor are based on his work, including this quote on p. 14.
[7] This blog is dedicated to my brother, Richard Pennings, who died on April 12, far too young.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he was on the faculty of New York University where he taught digital economics and comparative political economy. He also taught at St. Edward’s University in Austin, Texas, Marist College in New York, and Victoria University in Wellington, New Zealand. He has also been a Fellow at the East-West Center in Hawaii.
Tags: "taxpayer's money" myth > Donald Regan > Modern Monetary Theory (MMT) > Reaganomics
Digital Disruption in the Film Industry – Gains and Losses – Part 3: Digital FX Emerges
Posted on | March 17, 2024 | No Comments
“To succeed predictably, disruptors must be good theorists.” – Clayton Christensen
I had a chance to attend a special showing of The Wrath of Khan (1982), the second Star Trek movie, with my daughter a few years ago at the University of Texas in Austin. It included a live appearance by William Shatner, who starred as the infamous Captain Kirk in the movie as well as the original series. Shatner told the story of how Paramount executives were jealous of the success of Star Wars (1977) and how that led to the resurgence of the Star Trek franchise and, incidentally, the first use of digital special effects in a movie.
This post discusses the beginning of the digital or computer-generated imagery (CGI) revolution. Previously, I wrote about the emergence of the digital camera and the digital disruption caused by non-linear digital editing. Incidentally, I happened to be one of the first academics to teach non-linear editing when the University of Hawaii obtained its first Avid system.
It seems appropriate that Star Trek would make both film and computer history. Its first attempt, Star Trek: The Motion Picture (1979), was moderately successful, but very expensive due to its grandiose sets. The second movie was given over to Paramount’s television studios, which tightened the script and economized on the sets. They also hired George Lucas’ Industrial Light and Magic (ILM) to produce some of the effects for the second movie. ILM created what is often cited as the first entirely computer-generated sequence in a feature film when it demonstrated the effects of the Genesis Device on a barren planet in The Wrath of Khan.
But was it the first? Or was it Westworld (1973)? Going back in history, another case emerges that might lay claim to the first digital scene.
But first some background on the move from analog film to digital visual media. Previously, most special effects in films were done by artists using various analog methods. Animation was mainly drawn by hand, frame by frame. Even another futuristic 1982 movie, Tron, displayed results that were stunning for the time, but they were painstakingly done frame by frame.
The origin story for digital FX goes back to 1964, when NASA was directing the first flyby of Mars. NASA was working with its Jet Propulsion Laboratory (JPL) to develop an imaging system for Mariner 4. They needed to code the shading of 40,000 dots to construct the first image of Mars. Numbers were sent back to Earth from the spacecraft, and the first images were actually colored in a “paint by hand” project based on the digital numbers. Some 240,000 bits – the 40,000 dots at six bits of shading each – were aggregated into a series of numbers describing the planet.
John Whitney Jr. wrote in American Cinematographer (November 1973) that Brent Sellstrom struggled with the problem of representing a robot’s point-of-view (POV) on film. The script of Westworld called for a way to show how the evil robot cowboy, played by bald ’70s icon Yul Brynner, saw the world. The post-production supervisor for Westworld had to find a way to put the audience’s viewpoint into the head and eyes of the evil robot – the way the mechanical device was seeing the world. The POV shot takes the audience into a character’s head to give them a first-person, or subjective, experience.[1]
Sellstrom suspected that JPL’s digital scanning methods might be used to construct the robot’s point-of-view in Westworld. JPL’s estimate to do the job for two minutes of animation was nine months and $200,000. This price was way over budget, so the production hired another company, Information International, Inc., to scan footage of the robot’s POV and convert it to numerical data with techniques similar to the ones developed at JPL. It used a series of 3,600 rectangles. They had to make sure that the actors’ clothes and other items were contrasted with other items on the set. It took a minute for each frame and eight hours of processing for 10 seconds of film footage. The scene provided the needed POV shot that brought the audience into the robot’s experience, and the movie went on to be a major hit. In 1976, a sequel called Futureworld scanned and animated the head of its star, Peter Fonda, for the first appearance of 3D computer graphics in a movie – an obvious precursor to Max Headroom.[2]
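The basic idea behind that robot-POV look is block averaging, or “pixelation.” The sketch below is a minimal illustration of that idea in Python, not a reconstruction of Information International’s actual process; the grid size, NumPy usage, and sample frame are assumptions chosen so that a 60 × 60 grid echoes the 3,600 rectangles mentioned above.

```python
# A minimal sketch of block-averaging "pixelation": divide each frame into a
# coarse grid of rectangles and fill every rectangle with the average color
# of the pixels it covers.
import numpy as np

def pixelate(frame: np.ndarray, blocks_x: int = 60, blocks_y: int = 60) -> np.ndarray:
    """Reduce a frame (H x W x 3 array) to a coarse mosaic of averaged rectangles."""
    h, w, _ = frame.shape
    out = frame.copy()
    ys = np.linspace(0, h, blocks_y + 1, dtype=int)   # row boundaries of the grid
    xs = np.linspace(0, w, blocks_x + 1, dtype=int)   # column boundaries of the grid
    for y0, y1 in zip(ys[:-1], ys[1:]):
        for x0, x1 in zip(xs[:-1], xs[1:]):
            # Replace each rectangle with its mean color
            out[y0:y1, x0:x1] = frame[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

# Example: a 60 x 60 grid gives 3,600 rectangles per frame.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in frame
mosaic = pixelate(frame, 60, 60)
```

What took a computer of the early 1970s roughly eight hours per ten seconds of footage is now a near-instant operation, which is part of the disruption story that follows.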
Throughout the 1990s, advancements in computer hardware and software, particularly in rendering and animation technologies, enabled more realistic and sophisticated digital effects. Films like Jurassic Park (1993) and Terminator 2: Judgment Day (1991) showcased groundbreaking CGI that blurred the line between reality and computer-generated imagery. The rise of dedicated visual effects studios, such as Digital Domain, Industrial Light & Magic (ILM), Pixar, and Weta Digital, played a crucial role in driving innovation in digital FX. These studios employed teams of talented artists, technicians, and engineers to push the boundaries of what was possible with digital technology.
Filmmakers began integrating live-action footage with CGI elements seamlessly, allowing for the creation of fantastical worlds, creatures, and visual sequences. Films like The Matrix (1999) and The Lord of the Rings trilogy (2001-2003) pushed the boundaries of digital FX, setting new standards for realism and spectacle. The development of digital character animation techniques, exemplified by films like Toy Story (1995) and Shrek (2001), revolutionized the animation industry and paved the way for the creation of lifelike digital characters that display complex emotions and personalities.
Technologically, RenderMan, developed at Pixar after Lucasfilm’s computer graphics group was spun off, has been particularly noteworthy. RenderMan was one of the first rendering software packages to enable the creation of photorealistic images in CGI. Its advanced rendering algorithms and shading techniques allowed filmmakers to achieve lifelike lighting, textures, and reflections, enhancing the realism of digital environments and characters. RenderMan’s impact on digital FX has been recognized with numerous awards; by 2018, it had been used in 27 of the 30 films that won the Academy Award for Best Visual Effects. Its contributions to the field of computer graphics have been instrumental in advancing the art and technology of filmmaking.
Finally, a note on digital disruption from Clayton M. Christensen, who wrote about the corresponding changes in the computing industry. Christensen argues that the tendency of good companies to always listen to their best customers and improve their existing products leaves them open to disruptive innovations. The early digital cameras, for example, completely surprised film supplier Kodak. More recently, the digital camera has made possible DIY streaming services like YouTube.com.
In my next post on this series, I intend to explore the introduction of artificial intelligence (AI) such as SORA and VIDU to the digital televisual world.
Citation APA (7th Edition)
Pennings, A.J. (2024, Mar 17). Digital Disruption in the Film Industry – Gains and Losses – Part 3: Digital FX Emerges. apennings.com https://apennings.com/technologies-of-meaning/digital-disruption-in-the-film-industry-gains-and-losses-part-3-digital-fx-emerges/
Notes
[1] Background on the role of JPL on digital movie-making from American Cinematographer 54(11):1394–1397, 1420–1421, 1436–1437. November 1973.
[2] Frances Bonner, in G. Slusser and T. Shippey, eds., Fiction 2000: Cyberpunk and the Future of the Narrative (Athens: University of Georgia Press, 1992).
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he was at New York University from 2002-2012 and taught film at Marist College in New York and at the University of Hawaii where he often participated in the Hawaii International Film Festival while at the East-West Center in Honolulu, Hawaii. He also taught digital media and metrics at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: Digital Domain > Futureworld > Genesis Device > Inc. > Industrial Light & Magic (ILM) > Industrial Light and Magic (ILM) > Information International > Kodak > Pixar > Star Trek: The Motion Picture (1979) > Westworld (1973) > Weta Digital
Four Futures and the S-Curve
Posted on | March 13, 2024 | No Comments
One of my favorite professors in graduate school was Jim Dator, a professor at the University of Hawaii and Director of the Hawaii Research Center for Futures Studies at Manoa. His favorite strategy for thinking about the future was an exercise discussing four types of potential scenarios for the future of humanity: Continued Growth, Transformation, Limits and Discipline, as well as Decline and Collapse.
I include this approach in discussions about different futures strategies in my Introduction to Science, Technology, and Society Studies (STS) course to get students to think more about the trajectories of new technologies and social developments and what they may mean for the world they are inheriting.
I also include a discussion of the S curve initiated by futurist John Smart’s interpretation of Dator’s four scenario exercise, as illustrated above. S-curves, also known as sigmoid curves, are mathematical models often used to describe the adoption or growth rate of various phenomena over time. Examples would be the adoption of Artificial Intelligence (AI) or the growth rate of bacteria in a lab sample. This representation is based on living systems theory by James Miller but seems to fit well with other examples, including Dator’s futures writing exercise. However, Dator saw the scenarios more as four generic, separate alternative futures rather than naturalistic growth phases that could be represented with the sigmoid curve.
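For reference, the sigmoid is commonly written as the logistic function; the notation below is my own shorthand rather than anything from Dator or Smart, with L as the saturation level, k the growth rate, and t0 the midpoint where growth is fastest.

```latex
% Logistic (sigmoid) function; notation is illustrative
% L = saturation level, k = growth rate, t_0 = midpoint (inflection point)
f(t) = \frac{L}{1 + e^{-k\,(t - t_0)}}
```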
Scenarios are narratives or ‘stories’ illustrating possible visions of a future. These scenarios provide a structured way to consider the components of alternative futures and their potential developments. Dator’s framework presents four broad scenarios or perspectives on the future that can help individuals and organizations think about and plan for different possible outcomes. They are not strictly predictions but rather help generate ideas of some possible futures.
Combining an understanding of S-curve dynamics with futures scenarios can be useful in projecting trajectories, isolating trends, and constructing visions of likely outcomes. The curve also marks inflection points (IP), where variations in curvature suggest the beginning of a significant change. Also important are tipping points (TP), critical thresholds when a tiny perturbation can qualitatively alter the state or development of a system or society, indicating dramatic change. DP marks the decline or deceleration phase, while GP (growth point) and SP (saturation point) are also critical indicators of a curve’s dynamics.
S-curves are commonly used to predict the adoption and lifecycle of technologies or products. Innovations such as personal computers, smartphones, and social media platforms have been analyzed using S-curves to predict their growth and market saturation. As they move through stages of introduction, growth, maturity, and decline, S-curves can provide insights into when these stages are likely to occur and their duration. Researchers like Everett Rogers used S-curves to explain the “diffusion of innovations,” describing how new ideas or technologies are adopted by a population over time. For example, understanding the adoption patterns of electric vehicles can help policymakers develop incentives, infrastructure, and safety standards.
The categories below expand on the four scenarios mentioned above.
Continued Growth projects the current emphasis on economic development and its social and environmental implications into the near future. In this scenario, the future is seen as an extension of the present. It assumes existing trends, systems, and patterns will continue without significant disruption. This business-as-usual (BAU) trajectory is represented in the upward orange curve.
Limits and Discipline emphasize the importance of rules, regulations, and control. In this perspective, the future is shaped by enforcing strict controls and adhering to established norms and principles. It is a scenario that focuses on order, authority, and conformity. It suggests a society that highly values places, people, processes, or values that are threatened by the existing economic and social trajectory. In this scenario, it is often believed that society has “limits to growth” and should be “disciplined” around a set of fundamental cultural, ideological, scientific, or religious values. These will likely involve environmental concerns, including “green” solutions such as recycling, social distancing, and mask-wearing in pandemic times.
It could also result from a backlash to accelerated technological developments such as AI and the increasing collection of personal data by cloud services. Robotics is another concern as the technology has a more obvious manifestation than AI. Understanding where this saturation point lies in the S-curve can help predict when growth will likely slow down or stabilize. It is represented by the blue line that reaches a plateau after the tipping point. S-curves often reach a plateau, indicating that the phenomenon is saturated in society or approaching its maximum potential.
Decline and Collapse is represented by the descending green line on the right. This scenario envisions a future characterized by the breakdown of existing systems, institutions, or structures. It suggests a catastrophic turnaround or reversal of fortunes due to natural or human-made disasters. It often involves a significant crisis or disruption that leads to a reevaluation of the way things are done. Will climate change create such a decline? Is nuclear war a possibility? Pollution and changes associated with massive carbon dioxide and methane releases are current concerns as they are linked with dramatic weather changes influencing droughts, floods, and wildfires. The challenge to US leadership in the world by China and Russia could lead to a dramatic escalation of war in the world as witnessed in Ukraine.
Finally, a Transformative society envisions a future marked by radical change, innovation, and the emergence of entirely new paradigms. It challenges individuals and organizations to think creatively, embrace innovation, and be open to transformative possibilities. It emphasizes the need to adapt and thrive in a rapidly changing world. It anticipates a radical makeover of society based on biological, spiritual, or technological revolutions. For example, the creation of new genetically reconfigured “posthuman” bodies is a possibility, perhaps due to the viral innovations of COVID-19 research or rapid adaptation to environmental changes. A “singularity” of network-connected humans and AI is another projected scenario. A global set of religious revivals is also considered by many to be a possibility. These scenarios posit entirely redesigned global cultural, economic, and political structures.
Dator emphasizes that the purpose of scenario visioning is to determine preferable futures and work towards them rather than prophesying a specific future. While S-curves add a temporal trajectory and can indicate future activities, they lack information about time-frames. It is difficult to use them to suggest the number of months, years, decades, or even centuries before they might take shape and play out.
These scenarios are not meant to predict specific outcomes but to provide a structured way to consider different possibilities and their implications. By exploring these scenarios, individuals and organizations can better prepare for a range of future developments and make informed decisions about their strategies, policies, and actions. Dator’s Four Futures framework is a valuable tool for futures thinking and scenario planning.
By analyzing historical data and fitting an S-curve to the data points, it may be possible to gain an understanding of how a particular phenomenon has emerged over time. S-curves can then be used to extrapolate future growth. By extending the curve into the future, you can estimate points when a particular phenomenon is likely to change or reach a certain level of adoption, maturity, or impact. Policymakers can use this information to predict future developments, allowing for better long-term planning and resource allocation.
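As a minimal sketch of this kind of extrapolation, the snippet below fits a logistic curve to a small set of invented adoption figures and projects it forward. The data points, starting guesses, and library choice (NumPy and SciPy) are illustrative assumptions, not part of any actual forecast.

```python
# Fit an S-curve (logistic function) to hypothetical adoption data and
# extrapolate it to estimate future adoption and the saturation level.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Logistic curve: L = saturation level, k = growth rate, t0 = midpoint."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# Invented yearly adoption figures (percent of households)
years = np.array([2010, 2012, 2014, 2016, 2018, 2020, 2022])
adoption = np.array([2.0, 6.0, 15.0, 33.0, 55.0, 70.0, 78.0])

# Fit the curve; p0 provides rough starting guesses for L, k, and t0
params, _ = curve_fit(logistic, years, adoption, p0=[90.0, 0.5, 2016.0])
L, k, t0 = params

# Extrapolate forward to see when the curve approaches its plateau
future = np.arange(2023, 2031)
projected = logistic(future, L, k, t0)
print(f"Estimated saturation level: {L:.1f}%  (inflection year ~ {t0:.0f})")
for year, pct in zip(future, projected):
    print(f"{year}: {pct:.1f}%")
```

The fitted saturation level corresponds to the plateau (SP) discussed above, and the midpoint parameter marks the inflection point (IP) where growth begins to slow; the caveat about time-frames still applies, since a poor fit early in the curve can badly misjudge when the plateau arrives.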
Citation APA (7th Edition)
Pennings, A.J. (2024, Mar 13). Four Futures and the S-Curve. apennings.com https://apennings.com/political-economies-in-sf/jim-dators-four-futures-and-the-s-curve/
Notes
[1] I was working on my PhD on cyberspace and electric money and found the four futures approach interesting. Dator dissuaded his students from the idea of one true future whose probability could be calculated with positivistic certainty, and suggested we use a futures visioning process to envision and develop several alternative scenarios.
[2] The notion of ideal types comes primarily from Max Weber.
[3] Dator’s Four Futures presents four broad scenarios or perspectives on the future that can help individuals and organizations think about and plan for different possible outcomes. These scenarios provide a structured way to consider alternative futures and potential developments. The four generic alternative futures are continuation, collapse, discipline, and transformation. Dator, Jim. (2009). Alternative Futures at the Manoa School. Journal of Futures Studies, 14.
[4] Alvin Toffler’s Future Shock (1970) is a book that explores the concept of rapid change and the challenges it poses to individuals and societies. While Toffler introduced the idea of future shock, he did not specifically outline “four scenarios of the future” in that book. Instead, he discussed various scenarios and trends related to technological, social, and economic changes.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor in the Department of Technology and Society, State University of New York, Korea. SUNY Korea offers degrees from Stony Brook University. From 2002-2012 he was on the faculty of New York University. Previously, he taught at Marist College in New York and Victoria University in Wellington, New Zealand. He lives in Austin, Texas when not in South Korea. He also spent 9 years at the East-West Center in Honolulu, Hawaii, including time working on his PhD in Political Science.
Tags: alternative futures > Futures scenarios > James Dator > Jim Dator > Pierre François Verhulst > S-Curves > scenario writing > Transformation scenario
The Future of US Democracy: Getting Excessive Money Out of Elections
Posted on | March 3, 2024 | No Comments
The 2010 Supreme Court (SCOTUS) decision in Citizens United v. FEC ruled that corporations and unions could spend unlimited amounts of money on political campaigns. The ruling still stands despite a Pew Research Center survey in late 2023 that found that both Republicans and Republican-leaning independents (83%) and Democrats and Democratic leaners (80%) agree that wealthy people contributing money to members of Congress can have too much influence on their policy decisions. In general, most Americans believe that excessive money in US politics can undermine the principles of accountability, equality, and fairness that are essential to a functioning democracy.
This post introduces the problem of money in US politics. This danger includes the disproportionate influence of wealthy donors, the erosion of democratic principles, the undermining of fair competition in elections, policy capture and the distortion of policy priorities, as well as the potential for corruption and scandal. Money also plays a significant role in US political advertising, influencing the reach, frequency, and effectiveness of political messages on broadcast channels and through social media. The post then looks at some ways Americans can address the issue through a combination of legislative reforms, legal challenges, grassroots activism, and civic engagement.
Problems Associated with Excessive Money in US Politics
When political campaigns are heavily funded by wealthy individuals, corporations, and special interest groups, there is a risk that these donors may wield undue influence over elected officials. This distortion can undermine the principle of political equality and lead to policies prioritizing donors’ interests over the general public’s needs. Political scientists are now talking about a “donor class,” a small group of wealthy urban and suburban residents who are able and willing to influence the outcome of political elections. Excessive money in politics can erode public trust in the democratic process by creating the perception that politicians are beholden to their wealthy donors rather than accountable to the electorate. This can lead to disillusionment with the political system and decreased voter interest and turnout.[1]
Large campaign war chests can create “barriers to entry” for candidates without access to significant financial resources. This competitive disadvantage can limit the diversity of candidates running for office and discourage individuals from underrepresented communities or with limited financial means from seeking elected positions. Incumbent politicians, in particular, generally have an easier time raising campaign funds compared to challengers. They can leverage their position in office to solicit contributions from political action committees (PACs), donors, and interest groups who have a vested interest in maintaining access and influence with elected officials.[2]
Excessive money in politics can also lead to “policy capture,” where wealthy donors, corporations, and special interest groups leverage their financial resources to gain access, influence decision-making, and shape policy outcomes in ways that benefit their interests, often at the expense of the broader public interest. These powerful interest groups can shape legislation and administrative regulations in their favor. A combination of campaign contributions, lobbying, and other forms of political influence can result in policies that benefit narrow interests at the expense of the broader public good. Policy capture predominantly occurs when regulatory agencies tasked with overseeing specific industries or economic sectors become influenced or controlled by the interests they are supposed to regulate.[3]
Political candidates and parties rely on campaign contributions to fund their campaigns. When wealthy donors, corporations, or special interest groups contribute significant amounts of money to political campaigns, they may gain access to elected officials and policymakers, who can feel indebted to their donors and more inclined to advance policies that align with their interests. Political campaigns that rely heavily on fundraising may prioritize issues of interest to wealthy donors over pressing societal concerns that affect a broader population segment. This skewed focus can lead to a misalignment between government priorities and the needs of ordinary citizens. A major concern is that Super PACs, unlike traditional Political Action Committees (PACs), can raise and spend unlimited money to support political candidates, parties, or causes and can exert significant influence over the political process through their financial resources.
The influx of large sums of money into political campaigns can create opportunities for corruption and unethical behavior, such as quid pro quo arrangements where politicians exchange favors for campaign contributions. This obligation can lead to bribery, influence peddling, and other forms of corruption that undermine the integrity of the electoral process and erode public confidence in elected officials. Even if such behavior is not illegal, it can undermine public confidence in the integrity of elected officials and the political process.
Money and Media
The 5-4 Citizens United v. FEC decision by SCOTUS unleashed extraordinary amounts of money for purchasing media airtime, producing advertisements, and targeting specific audiences.
Money can also be secretly used by foreign governments to pay social media platforms, fake news websites, bloggers, and other online channels. These channels can be hired to spread disinformation, misinformation, and propaganda to influence public opinion, sow discord, or undermine trust in democratic institutions. This influence can include spreading false information about candidates, parties, or electoral processes.
Candidates and campaigns purchase airtime and column space on television, radio, newspapers, and digital platforms to broadcast their messages to voters, including memes and other pernicious forms of messaging. The cost of advertising varies depending on factors such as the size of the media market, the popularity of the programming, and the timing of the ad placement.
Creating high-quality political advertisements requires financial resources to cover expenses such as production costs, talent fees, and ad agency fees. Candidates often invest in professional production teams to create polished and persuasive advertisements that resonate with voters.
Money allows political advertisers to target specific demographic groups, geographic regions, or voter segments with tailored messages. By using data analytics and targeting tools, advertisers can optimize their ad spending to reach the most relevant and receptive audiences.
Political candidates and campaigns with greater financial resources have a competitive advantage in advertising. They can outspend their opponents, saturate the airwaves with their messages, and respond quickly to attacks or developments in the campaign.
Money facilitates the production and dissemination of negative advertising. “Mudslinging” has been a particularly effective method in shaping public opinion and swaying undecided voters. Negative ads often require substantial financial resources to fund extensive research, testing, and distribution.
In addition to candidate campaigns, outside groups such as super PACs and advocacy organizations play a significant role in political advertising. These groups can raise and spend unlimited amounts of money independently of candidates, leading to a proliferation of political ads funded by wealthy donors and special interests.
Political advertising spending can also influence media and public relations coverage of political campaigns. Candidates who spend more on advertising may receive more favorable coverage or greater visibility in news stories and analyses, further amplifying the impact of their advertising efforts.
Getting Money Out of US Politics: Options
Efforts to reduce money’s influence probably require overturning the Supreme Court’s Citizens United decision. Many critics argue that the 2010 Supreme Court decision has exacerbated the problem of money in politics and that SCOTUS has become an instrument of the donor class. Although difficult, overturning or amending this decision through constitutional means could help restore balance to the political system.
But other methods should be used to build public pressure for this change. These include legislating campaign finance reform with stronger disclosure requirements, public financing of elections, empowering grassroots movements, electoral reforms at the local and state levels, and promoting civic education and engagement.
A top priority should be implementing strict campaign finance laws limiting how much money individuals, corporations, and interest groups can contribute to political campaigns. Such limits can help reduce the influence of wealthy donors and special interests. Measures such as public financing of elections, contribution limits, and increased transparency in campaign spending should also be pursued.
Strengthening disclosure requirements for campaign contributions and spending can increase transparency and accountability in the political process. Requiring timely and comprehensive reporting of political donations and disclosure of donors behind so-called “dark money” groups can help voters understand who is funding political campaigns.
Implementing public financing systems for political campaigns can also reduce the reliance on private donations and level the playing field for candidates who may not have access to wealthy donors. Public financing programs provide candidates with public funds to finance their campaigns, often with restrictions on private fundraising.
Another critical strategy is supporting movements organizing and promoting activism to help counterbalance the influence of big money in politics. Grassroots movements can mobilize public support for campaign finance reform, hold elected officials accountable, and advocate for policies that promote transparency and fairness in the political process.
Electoral reforms such as ranked-choice voting, proportional representation, or open primaries are also possibilities for the future. They can encourage greater competition and diversity in the political arena, reducing money’s influence in determining election outcomes.
Educating citizens about the importance of participating in the political process and empowering them to become informed voters can counteract the influence of money in politics. Encouraging civic engagement, voter registration, and turnout can amplify the voices of ordinary citizens and dilute the influence of wealthy donors.
Conclusion
Excessive money in political elections corrodes the democratic process by distorting representation, undermining public trust, and prioritizing the interests of wealthy donors over the common good. Efforts to reduce the influence of money in politics aim to promote greater transparency, accountability, and fairness in the political process. Addressing the issue of money in politics requires a combination of legal challenges, legislative reforms, grassroots activism, and civic engagement to create a more equitable and democratic political system.
Notes
[1] The “donor class” is discussed in “The Check Is in the Mail: Interdistrict Funding Flows in Congressional Elections” by James G. Gimpel, Frances E. Lee, and Shanna Pearson-Merkowitz, American Journal of Political Science, April 2008. See also “Democracy and the Donor Class” by Gara LaMarche, president of the Democracy Alliance, a speech delivered at the Haas Institute for a Fair and Inclusive Society at the University of California, Berkeley on March 7, 2013.
[2] See the report “Breaking Down Barriers: The Faces of Small Donor Public Financing” from the Brennan Center at the New York University School of Law.
[3] Policy capture is an international concern. See International Institute for Democracy and Electoral Assistance (2017), extract from The Global State of Democracy: Exploring Democracy’s Resilience.
Note: Chat GPT was used for parts of this post. Multiple prompts were used, parsed, and verified.
Citation APA (7th Edition)
Pennings, A.J. (2024, Mar 4). The Future of US Democracy: Getting Excessive Money Out of Elections. apennings.com https://apennings.com/political-economy-of-media/the-future-of-us-democracy-getting-money-out-of-elections/
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband and media policy for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He has a PhD in Political Science from the University of Hawaii. He lives in Austin, Texas, when not in the Republic of Korea.
Tags: Citizens United v. FEC > donor class > Political Action Committees (PACs)
How Do Artificial Intelligence and Big Data Use APIs and Web Scraping to Collect Data? Implications for Net Neutrality
Posted on | January 19, 2024 | No Comments
One of the books I use in a course called EST 202 – Introduction to Science, Technology, and Society Studies is Michio Kaku’s Physics of the Future (2011). Despite its age, it’s a great starting point for teaching topics like Computers, Robotics, Nanotechnology, Space Travel, and Energy. It also has a chapter on Artificial Intelligence (AI) that I use with the caveat that it doesn’t cover a major change in AI that occurred around the time it was published: the importance of data networking for AI data collection and learning. High-speed broadband networks have become fundamental to new AI and also “Big Data” because the success of these services now depends on their ability to scour the Internet and other networked data sources to find useful information.[1]
This post looks at how collecting information from various structured and “unstructured” data sources has become an essential process for procuring information resources for AI and Big Data.[2] In particular, it looks at two strategies that are used to search networked sources for relevant data. It then discusses some ramifications for net neutrality, a regulatory stance that seeks to prevent Internet Service Providers (ISPs) from discriminating against data content providers, including generative AI.
Broadband communications enable the transfer of data between different applications on sensors, smart devices and cloud locations, contributing to the overall effectiveness of AI models and Big Data analytics. AI encompasses various technologies and approaches, including machine learning (ML), neural networks, natural language processing, expert systems, and robotics.[See 3] Big Data technologies include tools and frameworks designed to process, store, and analyze large datasets.
Technologies like MapReduce and Hadoop at Google and Yahoo! created the programming framework that led to applications like Apache Spark, NoSQL databases, and various data warehousing solutions. These are general-purpose cluster computing systems with programs written in Scala, Java, and Python that make parallel jobs easy to write and manage. These engines direct workloads, perform queries, conduct analyses, and support computation graphs at a totally new scale. They work across a wide range of low-cost servers, collecting information from mobile devices, PCs, and IoT devices such as autos, cash registers, and building environmental systems. Information from these data sources becomes fodder for analysis and innovative value creation.
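As a small illustration of this cluster-computing style, here is a minimal PySpark sketch that counts events per device type; the file path and column names are hypothetical, and it assumes a local Spark installation.

```python
# Minimal PySpark sketch: a parallel aggregation over device-event logs.
# Assumes a local PySpark install; the path and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("DeviceEventCounts").getOrCreate()

# Read a (hypothetical) CSV of events collected from mobile devices, PCs, and IoT sources.
events = spark.read.csv("device_events.csv", header=True, inferSchema=True)

# The groupBy/count runs as parallel tasks across the cluster's executors.
counts = events.groupBy("device_type").count().orderBy("count", ascending=False)
counts.show()

spark.stop()
```

The same few lines run unchanged on a laptop or on a cluster of low-cost servers, which is the point of these engines.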
APIs (Application Programming Interfaces) and web scraping collect information from the data networks, including the Internet. APIs are instrumental in integrating data into AI applications and machine learning models. APIs are also crucial in facilitating Big Data collection by providing a relatively standardized way for different software applications to communicate and exchange data. Web scraping is important to both AI and Big Data as the process of extracting information from HTML and CSS-coded websites collects large volumes of usable data.
What are the Differences between Big Data and AI?
While AI and Big Data are distinct concepts, they often intersect as AI systems frequently rely on large datasets for training and learning. Big Data technologies play a crucial role in managing the data requirements of AI applications, providing the necessary infrastructure for processing and analyzing vast amounts of information needed to build and continually train AI models.
The purpose of AI is to enable digital machines to perform tasks that typically require human-like intelligence. This includes areas such as natural language processing, computer vision, machine learning, and robotics. AI systems can be designed to perform specific tasks, learn from experience, and adapt to changing situations.
AI applications are diverse and can be found in areas such as virtual assistants, image and speech recognition, recommendation engines, autonomous vehicles, and healthcare diagnostics. They strive to tackle tasks such as problem-solving, learning, reasoning, perception, and language understanding.
We are far from attributing human intelligence and consciousness to AI, but data networking appears to be key to ML. Kaku (2011) suggested three traits that would be a good start for theorizing consciousness in AI:
1. sensing and recognizing the environment
2. self-awareness
3. planning for the future by setting goals and plans, that is, simulating the future and plotting strategy
Accepting these characteristics, it would be useful to examine the role of online data collection on each of them and collectively in the context of AI.
The purpose of Big Data is to handle and analyze massive volumes of data to derive valuable insights and identify patterns or correlations within the data. It draws on the substantial amount of data that organizations generate, process, and store. Big Data technologies enable organizations to manage and extract value from the datasets to produce meaningful insights, identify patterns, and understand trends that can inform decision-making processes.
Big Data applications span various industries and use cases, including business analytics, financial analysis, healthcare informatics, scientific research, and predictive modeling. Big Data focuses on the efficient handling of large volumes of data that involves data storage, retrieval, processing, and analysis.
Why AI and Big Data Use APIs for Data Collection
An API is a set of rules and tools that allows developers to access the functionality or data of a web service. APIs facilitate Big Data collection and AI machine learning models by providing a communication interface for applications and data networks. APIs allow applications to interact with each other, access external services, and integrate seamlessly into broader systems. (Image from [4].)
For example, APIs provided by cloud platforms, such as Google Cloud AI, Microsoft Azure Cognitive Services, and Amazon AI, allow developers to access pre-trained AI models for image recognition, natural language processing, and speech recognition. APIs provided by these platforms enable AI applications to access real-time social media and video streams, including posts, comments, and user interactions.
Many online platforms, including social media, e-commerce, and financial services, offer APIs that enable developers to use machine learning capabilities without managing the underlying infrastructure. Services like Amazon SageMaker, Google Cloud AI, and Azure Machine Learning provide APIs for training, deploying, and operating machine learning models.
Big Data applications use APIs to collect and funnel large volumes of data into comprehensive datasets. Many governments and organizations release datasets publicly as part of open data initiatives. Big Data applications can access these datasets over the Internet to support tasks like urban planning, healthcare analytics, and environmental monitoring, and models trained on these inputs can produce classifications or make predictions about human behavior.
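To make API-based collection concrete, the sketch below pulls one page of records from a hypothetical open-data REST endpoint using Python’s requests library; the URL, query parameters, and field names are placeholders rather than any specific government API.

```python
# Minimal sketch of API-based data collection from an open-data portal.
# The endpoint URL, query parameters, and field names are hypothetical.
import requests

BASE_URL = "https://data.example.gov/api/air_quality"  # placeholder endpoint

def fetch_records(limit=100, offset=0):
    """Fetch one page of JSON records from the (hypothetical) open-data API."""
    response = requests.get(BASE_URL, params={"limit": limit, "offset": offset}, timeout=30)
    response.raise_for_status()   # fail loudly on HTTP errors
    return response.json()        # assume the API returns a JSON list of records

if __name__ == "__main__":
    records = fetch_records(limit=50)
    print(f"Fetched {len(records)} records")
```

Paging through such an endpoint and appending the results is the basic loop behind many larger ingestion pipelines.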
Likewise, APIs are instrumental in integrating machine learning (ML) models into AI applications. APIs and web scraping can be employed to gather relevant and diverse sets of data from the Internet. For example, web scraping collects images from various sources during image recognition tasks and processes them with Convolutional Neural Networks (CNNs), a type of deep learning architecture that uses algorithms specifically for processing pixel data. CNNs consist of layers with learnable filters (kernels) that detect image patterns like edges, textures, and more complex features. CNNs automatically learn and extract hierarchical features from images that help to identify and recognize objects.
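To show what these learnable filters look like in code, here is a minimal, untrained convolutional network sketched in PyTorch; it assumes 32x32 RGB inputs and ten output classes and is meant only to illustrate the architecture described above, not any particular production model.

```python
# Minimal CNN sketch in PyTorch: two conv/pool stages followed by a classifier.
# Assumes 32x32 RGB images and 10 classes; untrained, for illustration only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learnable 3x3 filters
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                               # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyCNN()
logits = model(torch.randn(1, 3, 32, 32))  # one dummy image
print(logits.shape)                        # torch.Size([1, 10])
```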
Many AI and ML platforms provide APIs that allow developers to access pre-trained AI models they can use without extensive training. These are deep learning models trained on large datasets that find patterns or make predictions based on data to accomplish specific tasks. They can be used as is or further fine-tuned to fit an application’s particular needs. These models, often made by Google, Meta, Microsoft, and NVIDIA, can perform specific tasks such as creative (art, games, media) workflows, cybersecurity, image recognition, natural language processing, and sentiment analysis.
APIs enable integrating data from diverse sources, allowing Big Data applications to pull data from multiple locations and create a comprehensive dataset. APIs are used for real-time data streaming from sources such as social media platforms, financial markets, or IoT devices. Real-time APIs enable continuous data ingestion, enabling Big Data systems to analyze and respond to events as they happen.
Big Data systems often interact with databases to collect structured data. Many databases use APIs to enable programmatic access for querying and retrieving data. This practice is common in scenarios where relational databases or NoSQL databases are part of the data collection process.
Cloud providers offer APIs to access their services and resources. Big Data applications can leverage APIs to collect and process data in cloud-based storage and analytics services. This capacity facilitates scalability and flexibility in handling large datasets.
The Internet of Things (IoT) relies on APIs to enable data collection and integration between multiple devices, sensors, and applications. IoT devices collectively generate vast amounts of data that APIs collect and manage. For example, MQTT is a messaging protocol designed for low-bandwidth, high-latency, or unreliable networks and is commonly used for real-time communication in IoT environments. RESTful APIs are used for building scalable and stateless web services and for communication between IoT devices and backend cloud servers. IoT applications requiring data retrieval, updates, and management commonly use APIs to provide a standardized way for AI and Big Data applications to collect data from connected devices, such as in home automation and smart city projects.
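As an illustration of the IoT collection path, here is a minimal subscriber sketch using the paho-mqtt library’s classic (v1) callback API; the broker address and topic are hypothetical placeholders.

```python
# Minimal IoT data-collection sketch using MQTT (paho-mqtt, classic v1 callback API).
# Broker address and topic are hypothetical placeholders.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # placeholder MQTT broker
TOPIC = "vehicles/+/telemetry"  # '+' matches any single vehicle id

def on_message(client, userdata, msg):
    # Each message carries one telemetry reading; here we just print it.
    print(f"{msg.topic}: {msg.payload.decode(errors='replace')}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.subscribe(TOPIC)
client.loop_forever()  # blocks; processes incoming telemetry as it arrives
```

A real deployment would write each reading to a queue or data store instead of printing it, but the subscribe-and-callback pattern is the same.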
Some companies and services that specialize in aggregating data from various sources offer APIs for accessing their aggregated datasets. Big Data applications can use these APIs to access pre-processed and curated data relevant to their analysis such as aggregated banking data.
AI both guides and uses ETL (Extract, Transform, Load) data aggregation processes. ETL pipelines often use APIs as part of the extraction phase but also for data transformation and enrichment. For example, data collected from one source may be enriched with additional information from another source using their respective APIs. ETL cleans and organizes raw data and prepares it for data analytics and machine learning in data warehouse environments.
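The sketch below shows the ETL pattern end to end under stated assumptions: it extracts JSON from a hypothetical API, transforms the records, and loads them into a local SQLite table standing in for a data warehouse.

```python
# Minimal ETL sketch: extract from a (hypothetical) API, transform, load into SQLite.
import sqlite3
import requests

API_URL = "https://api.example.com/orders"  # placeholder extraction source

def extract():
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assume a list of {"id": ..., "amount_cents": ...} records

def transform(rows):
    # Clean and enrich: drop malformed rows, convert cents to dollars.
    return [(r["id"], r["amount_cents"] / 100.0) for r in rows if "amount_cents" in r]

def load(rows, db_path="warehouse.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, amount_usd REAL)")
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load(transform(extract()))
```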
APIs often include mechanisms for authentication and authorization, ensuring that only authorized users or applications can access specific data. This is crucial for maintaining data security and privacy while collecting information for Big Data analysis.
In summary, APIs provide a standardized and efficient means for Big Data applications to collect data from many sources, ranging from online platforms and databases to IoT devices and cloud services. They enable interoperability between different systems and contribute to the integration of diverse datasets for analysis and decision-making.
How AI and Big Data Use Web Scraping
AI and machine learning (ML) can utilize web scraping as a method for collecting data from websites. They use web scraping for: training datasets and machine learning, text and content analysis, market research, resume parsing, price monitoring, social media monitoring and data aggregation, image and video collection, financial data extraction, healthcare data acquisition, and weather data retrieval.
Natural Language Processing (NLP) models, a subset of AI and ML, benefit from gathering text data for training. Web scraping is used to extract textual content from websites, enabling the creation of datasets for tasks such as sentiment analysis, named entity recognition, or language modeling.
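A minimal scraping sketch along these lines, using requests and BeautifulSoup to pull paragraph text into a small corpus, might look like the following; the target URL is a placeholder, and any real scraping should respect a site’s robots.txt and terms of service.

```python
# Minimal web-scraping sketch: collect paragraph text for an NLP training corpus.
# Uses requests + BeautifulSoup; the target URL is a placeholder.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/articles/sample"  # placeholder page

response = requests.get(URL, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
corpus_text = "\n".join(t for t in paragraphs if t)  # drop empty paragraphs

print(f"Collected {len(paragraphs)} paragraphs, {len(corpus_text)} characters")
```

Repeated over many pages, this is how text corpora for sentiment analysis or language modeling are often assembled.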
AI applications involved in market analysis or competitor tracking use web scraping to collect data from competitors’ websites. This data can be analyzed to gain insights into market trends, pricing strategies, and product features. AI applications use web scraping to monitor product prices, availability, and customer reviews from e-commerce websites. This data can inform marketing strategies and enhance recommendation algorithms.
AI-powered recruitment and job matching systems utilize web scraping to extract job postings from various websites. This acquired dataset provides a view of the job market, salary ranges, and in-demand skills. This information can be used to make informed decisions about talent acquisition, workforce planning, and skill development. Additionally, web scraping can be employed to parse resumes and extract relevant information for matching candidates with job opportunities.
AI models that analyze social media trends, sentiments, or user behavior can utilize web scraping to collect data from platforms like X, Facebook, or Instagram. This data is valuable for training models in social media analytics.
Web scraping can gather relevant and diverse image datasets from the web. AI applications, especially those dealing with computer vision tasks such as object detection, image classification, and facial recognition, often use web scraping to collect image and video data from various sources. Full self-driving (FSD) systems, for example, draw on camera imagery to label potential dangers and obstacles.
AI and ML models in finance leverage web scraping to collect financial data, news, or market updates from financial websites. This data can be used for predicting financial market trends or making investment decisions.
Some AI applications in healthcare use web scraping to collect medical literature, patient reviews, and information about healthcare providers. This data can be utilized for building models related to healthcare analytics or patient sentiment analysis.
AI models predicting weather patterns may use web scraping to collect real-time weather data from various sources, including weather websites. This data is crucial for training accurate and up-to-date weather prediction models. They are also economically efficient, allowing many news sources to gather weather information from all over the planet without having to collect it themselves.
Web scraping should be conducted responsibly and ethically, respecting the terms of service of websites and relevant legal regulations. Additionally, websites may have varying degrees of resistance to web scraping, and proper measures should be taken to ensure compliance and minimize any negative impact on the targeted websites.
Implications for Net Neutrality
I’m currently reviewing new technologies and devices to consider their implications for broadband policy. These include connected cars as part of my Automatrix series, Virtual Private Networks (VPNs), and Deep Packet Inspection (DPI). I intend to readdress broadband policy issues in light of the FCC’s new emphasis on net neutrality and take a more critical look at content providers. These platforms and websites collect huge amounts of data on human behavior to influence economic and political decisions.[5] It is too early to draw substantive conclusions about the amount of data traffic that AI will produce. Still, I wanted to explain the predominant collection processes and raise some issues.
Net neutrality principles have typically advocated equal treatment of data traffic and regulations restricting ISP discrimination against content providers operating at the Internet’s edge. The Internet and its World Wide Web (WWW) were designed to prioritize capability at the “host” level – the clouds, devices, and platforms at the network’s edges. AI also operates at the edges. Following historical and legal precedents that reach back to the telegraph and even railroads, the regulatory regime for telecommunications has been codified for the carrier to move information commodities and content with transparency and non-interference.
ISPs have pushed back in the computer age, looking to use the increasing intelligence in their telecommunications networks to extract additional value from informational exchanges. They argue that the capital-intensive nature of their service provision requires them to invest in the newest technologies. They further contend that their investments can also offer value-added services that would benefit their customers, such as IPTV and search engines. Content competitors have complained that this gives the ISPs a competitive and potentially dangerous advantage.
Although it’s early in the era of AI and Big Data collection, we can expect that they will have a major impact on network resources. Congestion is a major concern for ISPs, which risk losing customer confidence if traffic slows, videos buffer, and games lag. Will data collection seriously affect broadband usage? The use of APIs and large-scale web scraping, particularly when conducted by big entities, might disproportionately affect network speeds, and these collection practices should be mindful of their impact on the broader networked world.
Notes
[1] Pennings, A.J. (2013, Feb 15). Working Big Data – Hadoop and the Transformation of Data Processing. apennings.com https://apennings.com/data-analytics-and-meaning/working-big-data-hadoop-and-the-transformation-of-data-processing/ and Pennings, A.J. (2011, Dec 11). The New Frontier of Big Data. apennings.com https://apennings.com/technologies-of-meaning/the-new-frontier-of-big-data/ Image of web scraping from https://prowebscraping.com/web-scraping/ offering related services.
[2] Data retrieval has historically drawn from the records of structured databases. IBM has made the distinction between structured and unstructured data, where structured data is sourced from “GPS sensors, online forms, network logs, web server logs, OLTP systems, etc., whereas unstructured data sources include email messages, word-processing documents, PDF files, etc.” IBM’s Watson, for example, was heavily dependent on the structured information model in its early days. See Pennings, A.J. (2014, Nov 11). IBM’s Watson AI Targets Healthcare. apennings.com https://apennings.com/data-analytics-and-meaning/ibms-watson-ai-targets-healthcare/
[3] AI encompasses various technologies and approaches, including machine learning, neural networks, natural language processing, expert systems, and robotics. Machine learning (ML), a subset of AI, involves algorithms that allow systems to learn from data. Neural networks teach computers to process data with deep learning that uses interconnected nodes or neurons in a layered structure that was inspired by the human brain. Natural language processing is machine learning technology that teaches computers to comprehend, interpret, and manipulate human language. Expert systems use AI to simulate the expertise, judgment, and experience of a human or an organization in a particular field. Robotics is the field of creating intelligent machines that can assist humans in a variety of ways.
[4] Heus, Pascal (2023, Jun 23). AI, APIs, metadata, and data: the digital knowledge and machine intelligence ecosystem. https://blog.postman.com/ai-apis-metadata-data-digital-knowledge-and-machine-intelligence-ecosystem/
[5] Large-scale web scraping often involves the extraction of personal data from websites, and this can raise privacy concerns. If not done responsibly, scraping personal or sensitive information might violate privacy regulations. Net neutrality discussions often extend to privacy considerations, emphasizing the need for responsible and ethical data practices. ISPs might be tempted to intervene in web scraping activities by implementing measures such as blocking or throttling, especially if the scraping activity is seen as detrimental to their networks or if it violates terms of service. Such interventions could raise questions about net neutrality, as they involve discriminatory actions against specific types of traffic.
Note: Chat GPT was used for parts of this post. Multiple prompts were used and parsed.
Citation APA (7th Edition)
Pennings, A.J. (2024, Jan 19). How Do Artificial Intelligence and Big Data Use APIs and Web Scraping to Collect Data? Implications for Net Neutrality. apennings.com https://apennings.com/technologies-of-meaning/how-do-artificial-intelligence-and-big-data-use-apis-and-web-scraping-to-collect-data-implications-for-net-neutrality/
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband and cloud policy for sustainable development. From 2002 to 2012, he was on the faculty of New York University, teaching comparative political economy and digital economics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.
Tags: Apache Hadoop > Apache Spark > APIs > Application Programming Interfaces > Azure Machine Learning > Big Data analytics > NoSQL databases > web scraping
Networking Connected Vehicles in the Automatrix
Posted on | January 15, 2024 | No Comments
Networking of connected vehicles draws on a combination of public-switched wireless communications, GPS and other satellites, and Vehicular Ad hoc Networks (VANETs) that directly connect autos with each other and with roadside infrastructure.[1] Connecting to 4G LTE, 5G, and even 3G and 2.5G in some cases provides access to the wider world of web devices and resources. Satellites provide geo-location, emergency, and broadcast entertainment services. VANETs enable vehicles to communicate with each other and with roadside infrastructure to improve road safety and traffic efficiency and to provide various applications and services.
This image shows an early version of a connected Automatrix infrastructure, including a VANET.[2]
This post outlines the major ways connected cars and other vehicles use broadband data communications. It builds on earlier work I started on the idea of the Automatrix, beginning with “Google: Monetizing the Automatrix” and “Google You Can Drive My Car.” It is also written in anticipation of a continued discussion on net neutrality and connected vehicles, although that is beyond the scope of this post.
Public-Switched Wireless Communications
Wireless communications include radio connectivity, cellular network architecture, and “home” orientation. This infrastructure differs significantly from the fixed broadband Internet and World Wide Web model designed around stationary “edge” devices with single Internet Protocol (IP) addresses. Mobile devices have been able to utilize the wireless cellular topology for unprecedented connectivity by supplementing the IP address with a subscriber identifier, the IMSI, that identifies the device and maintains a link to a home network, usually a paid service plan with a cellular provider, e.g., Verizon, Orange, or Vodafone.
The digital signal transmission codes have changed over time, allowing for better signal quality, reduced interference, and improved capacity for handling voice and data services. These have included Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Code Division Multiple Access (CDMA), which support both voice and data services. GSM was a widely adopted standard for public-switched wireless communications but has been largely replaced by CDMA and Long-Term Evolution (LTE) fourth-generation (4G) networks and by more energy-hungry, shorter-range fifth-generation (5G) networks. With LTE, traditional voice calls became digital, and users could access a variety of data services, including text messaging, mobile internet, and multimedia content based on Internet Protocols (IP).
The public-switched wireless network divides a geographic coverage area into “cells” where each spatial division is served by a base station or cell tower that manages the electromagnetic spectrum transmissions and supports mobility as users move between cells. As a mobile device transitions from one cell to another, a “handoff” occurs that ensures uninterrupted connectivity as users move across different cells. Roaming agreements between different carriers enable users to maintain connectivity even when outside their home network coverage area. Digital switching systems are employed in the core network infrastructure to handle call routing, signaling, and management.
A key concept in the wireless public network is the notion of “home” with mobile devices typically using SIM cards with an international mobile subscriber identity (IMSI) number to authenticate and identify users on the network. SIM cards store subscriber information, including user credentials and network preferences.
Wireless communications incorporate security measures to protect user privacy and data. Encryption and authentication mechanisms help secure communication over the wireless networks.
Satellites
Satellites play a crucial role in enhancing the capabilities of connected cars by providing various services and functionalities. They extend connectivity to areas with limited or no terrestrial network coverage, allowing access for connected cars traveling through remote or rural locations where traditional cellular coverage may be sparse. GPS satellites provide accurate location information, enabling navigation systems in cars to determine the vehicle’s position, calculate routes, and provide turn-by-turn directions.
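Positioning ultimately reduces to geometry on the GPS fixes. The snippet below computes the great-circle (haversine) distance between two coordinates, a basic building block in route calculation; the coordinates are arbitrary examples.

```python
# Great-circle (haversine) distance between two GPS fixes, a basic building block
# for navigation and route calculation. Coordinates below are arbitrary examples.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Example: distance between two waypoints reported by a vehicle's GPS receiver.
print(round(haversine_km(37.7749, -122.4194, 34.0522, -118.2437), 1), "km")
```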
Satellites also support a range of location-based services providing real-time traffic information, points of interest, and location-based notifications, enhancing the overall navigation experience. Satellite connectivity facilitates remote diagnostics and maintenance monitoring for connected vehicles. Satellites have provided remote monitoring and management of vehicle fleets. Fleet operators can track vehicle locations, monitor driving behavior, manage fuel efficiency, and schedule maintenance using satellite-based telematics solutions.
Satellites contribute to enhanced safety features in connected cars by enabling automatic crash notification systems. In the event of a collision, the vehicle can send an automatic distress signal with its location to emergency services, facilitating a quicker response. In the case of theft or emergency, satellite communication can be used to remotely disable the vehicle, track its location, or provide assistance to drivers.
Satellites also play a role in delivering over-the-air (OTA) updates to connected cars, allowing manufacturers to use satellite communication to send software updates, firmware upgrades, and map updates directly to the vehicles, ensuring they remain up-to-date with the latest features and improvements. They can also remotely assess vehicle health, identify potential issues, and schedule maintenance, reducing the need for physical visits to service centers.
Lastly, satellites support the delivery of entertainment and infotainment services to connected cars. Satellite radio services, for example, provide a wide range of channels with music, news, and other content, accessible to drivers and passengers in areas with limited terrestrial radio coverage.
Satellites can contribute to Vehicle-to-Everything (V2X) communication by providing a reliable and wide-reaching communication infrastructure. V2X communication allows connected cars to exchange information with other vehicles, infrastructure (such as traffic signals), and even pedestrians, enhancing safety and traffic efficiency.
The integration of satellite technology enhances the overall connectivity, safety, and functionality of connected cars, contributing to a more advanced and intelligent automatrix.
Vehicular Ad hoc Networks (VANETs)
VANETs play a significant role in enhancing communication and connectivity among vehicles and with roadside infrastructure. VANETs have no base stations, and devices can only transmit to other devices in close proximity, such as other cars, emergency vehicles (ambulances, police, etc.), and roadside devices.
Here are some key characteristics of vehicular networks:
– A dynamic and rapidly changing network topology due to the constant movement of vehicles. Nodes (vehicles) enter and leave the network frequently, leading to a highly active environment.
– Direct communication between vehicles, allowing them to share information such as speed, position, and other relevant data. V2V communication plays a crucial role in enhancing road safety and traffic efficiency.
– Interactions between vehicles and roadside infrastructure, such as traffic lights, road signs, and sensors, enable vehicles to receive real-time information about traffic conditions and other relevant data.
– In the absence of a fixed infrastructure for communication, vehicles act as both nodes and routers, forming an ad hoc network where communication links are established based on proximity.
– Broadcast mode disseminates information about traffic warnings, road conditions, and emergency alerts to nearby vehicles (a minimal sketch of this broadcast pattern follows the list below).
– Low-latency communication supports real-time applications like collision avoidance systems and emergency alerts. Timely information exchange is crucial for the effectiveness of these applications.
– Security and privacy techniques for authentication, confidentiality, and data integrity.
– Connected vehicles support various traffic safety applications, including collision and lane-switching warnings, as well as collaborative cruise control. These applications aim to enhance overall road safety.
– Vehicular communication is influenced by signal fading and attenuation, especially in urban environments with obstacles. These factors need to be overcome for reliable communication.[3]
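To give a feel for the broadcast pattern mentioned in the list above, here is a toy sketch that periodically announces a vehicle’s speed and position over ordinary UDP broadcast on a local network; real VANETs use dedicated radio protocols such as DSRC or C-V2X, so this is only an illustration of the dissemination logic, not an implementation.

```python
# Toy sketch of broadcast-style dissemination, loosely modeled on the V2V idea
# of a vehicle periodically announcing speed and position to nearby nodes.
# Uses ordinary UDP broadcast on a LAN; real VANETs use dedicated radio protocols.
import json
import socket
import time

PORT = 50000  # arbitrary local port for the demo

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

for i in range(3):
    beacon = {"vehicle_id": "demo-car", "speed_kph": 62 + i, "lat": 37.77, "lon": -122.42}
    sock.sendto(json.dumps(beacon).encode(), ("<broadcast>", PORT))
    time.sleep(1)  # real safety beacons are sent several times per second

sock.close()
```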
VANETs play a crucial role in the development of Intelligent Transportation Systems (ITS) and contribute to creating safer, more efficient, and connected road networks. Due to the rapid mobility of vehicles, the Automatrix may experience frequent connectivity disruptions. Protocols and mechanisms are important to cope with intermittent connectivity.
One of the reasons I liked the category of the Automatrix was that the attention was on the context, not exclusively the individual vehicles. When it comes to connected cars, the implications of net neutrality are significant and can influence various aspects of their functionality and services.[4]
Connected cars contribute to the broader concept of the Internet of Things (IoT) by creating an interconnected network where vehicles, infrastructure, and users communicate and collaborate to enhance safety, efficiency, and overall driving experience. These connected vehicles leverage various sensors, embedded and internal Ethernet systems, and communication protocols to tether to Bluetooth and access mobile cellular and satellite services.
Notes
[1] Wahid I, Tanvir S, Ahmad M, Ullah F, AlGhamdi AS, Khan M, Alshamrani SS. (23 July 2022) Vehicular Ad Hoc Networks Routing Strategies for Intelligent Transportation System. Electronics 2022, 11(15), 2298; https://www.mdpi.com/2079-9292/11/15/2298
[2] Image from Hakim Badis, Abderrezak Rachedi, in Modeling and Simulation of Computer Networks and Systems, 2015 https://www.sciencedirect.com/topics/computer-science/vehicular-ad-hoc-network
[3] https://www.emqx.com/en/blog/connected-cars-and-automotive-connectivity-all-you-need-to-know
[4] https://edition.cnn.com/2023/09/26/tech/fcc-net-neutrality-internet-providers/index.html
Citation APA (7th Edition)
Pennings, A.J. (2024, Jan 15). Networking Connected Vehicles in the Automatrix. apennings.com https://apennings.com/telecom-policy/networking-in-the-automatrix/
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
Tags: Code Division Multiple Access (CDMA) > Frequency Division Multiple Access (FDMA) > international mobile subscriber identity (IMSI) number > Public-Switched Wireless Communications > SIM card > Time Division Multiple Access (TDMA) > VANETs
Net Neutrality and the Use of Virtual Private Networks (VPNs)
Posted on | November 26, 2023 | No Comments
Net neutrality regulations strive to treat VPNs (Virtual Private Networks) neutrally, meaning that Internet Service Providers (ISPs) should not discriminate against or block the use of VPN services. As a regulatory principle, net neutrality advocates for equal treatment of all data on the Internet, regardless of the type of content, application, or service. A VPN is a technology that establishes an encrypted connection over the Internet, allowing users to access a private network remotely. This connection provides anonymity, privacy, and security but may also be used for sensitive activities, including bypassing geographical restrictions imposed by licensing agreements, ISPs, or regional authorities.
In this post, I investigate the complexities of VPNs and their implications for both content providers and ISPs. First, I describe how VPNs work. Then I explore how content service providers like video streaming platforms treat VPNs. Next, I do a similar analysis of different strategies used by ISPs when they want to hamper VPN use. Lastly, I return to the VPNs’ relationship to net neutrality.
VPNs are widely used for personal and business purposes to protect sensitive data and enable secure remote access to private networks. In many cases, ISPs and other carriers, as well as OTT (Over-the-Top) content providers, may attempt to block or restrict the use of Virtual Private Networks (VPNs). However, the extent to which VPNs are blocked can vary depending on the region, the specific ISP, and local regulations.
How does a VPN work?
A VPN works by creating a secure and encrypted connection between the user’s device and a VPN server. When a user contacts a VPN, they are authenticated, typically by entering a username and password, often automatically through VPN client software. Some VPNs may also use additional authentication methods, such as multi-factor authentication, for enhanced security. When the connection is authenticated, the communication between the user’s device (computer, smartphone, etc.) and the VPN server is encrypted for security.
The encrypted data moving between user and server is encapsulated with a process known as tunneling. This creates a private and protected pathway for data to travel between the user’s device and the VPN server. Various tunneling protocols, such as OpenVPN, L2TP/IPsec, or IKEv2/IPsec, are used to establish this secure connection. The VPN server then assigns the user’s device a new IP address, replacing the device’s original IP address. This is often a virtual IP address within a range managed by the VPN server.
All Internet traffic to and from the user’s device is then routed through the VPN server. This means that websites, services, and online resources, such as a streaming service, perceive the user’s location as that of the VPN server rather than the user’s actual location. Users can access content that may be geo-restricted or censored in their physical location by connecting to a VPN server in a different geographic location. This allows them to appear as if they are accessing the Internet from the location of the VPN server.
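As a conceptual illustration of the encrypt-and-encapsulate step, the sketch below uses symmetric encryption from Python’s cryptography package to show why an on-path observer sees only ciphertext; real VPN protocols such as OpenVPN, IPsec, or WireGuard negotiate keys and encapsulate entire IP packets, which this toy example does not attempt.

```python
# Conceptual illustration of the encrypt-then-encapsulate step in VPN tunneling,
# using symmetric encryption from the 'cryptography' package. Real VPN protocols
# negotiate keys and encapsulate full IP packets; this only shows why an ISP in
# the middle sees ciphertext rather than the content.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in a real VPN, keys come from a handshake
tunnel = Fernet(key)

inner_packet = b"GET /video/stream HTTP/1.1\r\nHost: example.com\r\n\r\n"
encapsulated = tunnel.encrypt(inner_packet)   # what travels between user and VPN server

print(encapsulated[:40], b"...")              # opaque to any on-path observer
print(tunnel.decrypt(encapsulated) == inner_packet)  # VPN server recovers the payload
```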
Anti-VPN Technologies Used by Content Providers
VPNs become a net neutrality issue when they are targeted by either content providers or ISPs. Some content providers and streaming services may block access from known VPN IP addresses to enforce regional restrictions on their content. Streaming services negotiate licensing agreements with content providers to distribute content only in specific regions. Other concerns include copyright infringement by other content providers and the quality of service of traffic routed through multiple servers. Complicated data packet routes can cause latency or buffering issues, which degrade the streaming experience. Nevertheless, VPNs can circumvent this blocking by masking the user’s real IP address and making it appear as if they are connecting from a different location.
Content services employ various techniques to detect the use of VPNs and proxy servers. They maintain databases of IP addresses associated with VPNs and proxy servers and compare the user’s IP address against these databases to check for matches. If the detected IP address is on the list of known VPN servers, the streaming service may block access or display an error message.
Content providers such as video streaming services may also analyze user behavior to detect patterns indicative of VPN usage. For example, if a user rapidly connects from different geographical locations, it may raise suspicion and trigger additional checks to determine if a VPN is in use. VPN detection may also involve checking for DNS (Domain Name System) leaks that reveal DNS requests or for vulnerabilities in WebRTC (Web Real-Time Communication) protocols, which provide real-time guarantees but can reveal client credentials. These leaks can expose the user’s actual IP address, allowing content services to identify VPN usage.
Streaming services may decide to block entire IP ranges associated with data centers or hosting providers commonly used by VPN services. This approach helps prevent access from a broad range of VPN users sharing similar IP addresses. Streaming services regularly use geolocation services to determine the physical location of an IP address. If the detected location does not match the expected geographical area based on the user’s account information, it may trigger suspicion of VPN use.
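A simplified version of the IP-range check behind such blocklists can be sketched with Python’s ipaddress module; the CIDR ranges below are documentation-reserved placeholders, not a real VPN blocklist.

```python
# Minimal sketch of IP-range checks like those used to flag known VPN/data-center
# addresses. The CIDR ranges below are made-up placeholders, not a real blocklist.
import ipaddress

KNOWN_VPN_RANGES = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

def looks_like_vpn(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

print(looks_like_vpn("203.0.113.45"))  # True  -> would trigger a block or extra checks
print(looks_like_vpn("192.0.2.10"))    # False -> treated as a residential address
```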
VPN connections often exhibit different speed characteristics compared to regular links. Streaming services may analyze the connection speed and behavior to identify patterns associated with VPN usage. Lastly, some streaming services may employ captcha challenges or additional verification steps when they detect suspicious activity, such as rapid and frequent connection attempts from different locations. This targeting can inconvenience users but serves to identify and block VPN usage.
How ISPs treat VPNs
Net neutrality principles call for ISPs to treat all data packets on the Internet equally. It can prohibit ISPs from discriminating against specific online services, applications, or providers, including the data packets generated by VPN services. This norm means that ISPs should not block or throttle VPN traffic just because it is VPN traffic. VPN providers, like any other online service, should be able to reach users without facing unfair restrictions.
Nevertheless, ISPs may employ various techniques to block or throttle VPN traffic. These measures are often implemented for network management, compliance with regional regulations, or enforcing content restrictions. Deep Packet Inspection (DPI) is a technology that allows ISPs to inspect the content of data packets passing through their networks. By analyzing the characteristics of the traffic, including protocol headers and content payload, DPI can identify patterns associated with VPN traffic. ISPs may use DPI to detect and block specific VPN protocols or to throttle VPN traffic. Some advanced filtering technologies can detect and block VPN traffic. However, this approach is more common in regions with strict Internet censorship.
ISPs can block or restrict traffic on specific ports commonly associated with VPN protocols. For example, they might block traffic on ports used by OpenVPN (e.g., TCP port 1194 or UDP port 1194) or other well-known VPN protocols. By blocking these ports, ISPs aim to prevent establishing VPN connections. ISPs may also maintain lists of IP addresses associated with known VPN servers and block traffic to and from these addresses. This method targets specific VPN servers or services rather than attempting to identify VPN traffic based on its characteristics.
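In effect, an ISP that blocks well-known VPN ports is applying a rule like the toy sketch below to each flow; real enforcement happens in routers and middleboxes rather than in Python, and the port list here covers only common protocol defaults.

```python
# Toy sketch of port-based flagging: the logic behind blocking well-known VPN ports.
VPN_PORTS = {1194, 1701, 500, 4500}   # OpenVPN, L2TP, and IPsec/IKE defaults

def flow_action(dst_port: int) -> str:
    if dst_port in VPN_PORTS:
        return "drop"      # or "throttle", depending on policy
    return "forward"

print(flow_action(1194))   # 'drop'    -> OpenVPN default port
print(flow_action(443))    # 'forward' -> indistinguishable from ordinary HTTPS
```

The last line also hints at why obfuscated VPN protocols that ride on port 443 are hard to stop with port rules alone.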
Some VPN protocols obfuscate or disguise their traffic, making it more challenging for ISPs to detect and block them. This subterfuge includes techniques like adding a layer of encryption or using obfuscated protocols that resemble regular HTTPS traffic. ISPs may also analyze traffic patterns and behaviors to identify characteristics associated with VPN usage. For example, rapid and frequent connection attempts from different locations might trigger suspicion and lead to traffic restrictions. VPNs can circumvent this blocking by masking the user’s actual IP address and making it appear as if they are connecting from a different location.
DNS filtering blocks access to specific domain names associated with VPN services. This method aims to prevent users from resolving the domain names of VPN servers, making it more difficult for them to establish connections. ISPs may implement filtering at the application layer to identify and block VPN traffic based on the behavior and characteristics of specific VPN applications. Instead of outright blocking VPN traffic, some ISPs may employ bandwidth throttling to reduce the speed of VPN connections. This slowing can make VPN usage less practical or effective for users, especially when attempting to stream high-quality video or engage in other bandwidth-intensive activities.
The effectiveness of these methods can vary, and users often find workarounds to bypass VPN restrictions. VPN providers may also respond by developing new techniques to evade detection. The cat-and-mouse game between VPN providers and ISPs is ongoing, with each side adapting its strategies to stay ahead. Users who encounter VPN restrictions may explore alternative VPN protocols, use obfuscation features, or consider other means to maintain privacy and access unrestricted Internet content.
Net neutrality aims to prevent anti-competitive practices by ISPs. While some telecom entities block VPNs for legitimate reasons, such as maintaining network integrity or complying with local regulations, their actions can also violate user privacy and restrict the free flow of information. If ISPs were to block or throttle VPN traffic selectively, it could impact competition by favoring certain online services over others. This interference could be particularly concerning if ISPs were to prioritize their own VPN services over those provided by third-party VPN providers. Advocates for net neutrality argue that it is crucial for maintaining a level playing field on the Internet, fostering competition, innovation, and the free flow of information.
However, the specific regulations and enforcement mechanisms related to net neutrality can differ, and debates on this topic continue in various jurisdictions. In some countries, governments or ISPs may implement restrictions on the use of VPNs as part of broader Internet censorship efforts. These restrictions can be aimed at controlling access to certain websites, services, or content deemed inappropriate or against local laws. While net neutrality principles provide a foundation for treating VPNs fairly, the actual implementation and regulatory landscape can vary by country. Some regions have specific regulations that address net neutrality, while others may not. Additionally, the status of net neutrality can change based on regulatory decisions and legislative developments.
Citation APA (7th Edition)
Pennings, A.J. (2023, Nov 25). Net Neutrality and the Use of Virtual Private Networks (VPNs). apennings.com https://apennings.com/telecom-policy/net-neutrality-and-the-use-of-virtual-private-networks-vpns/
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea teaching broadband policy and ICT for sustainable development. From 2002-2012 he was on the faculty of New York University where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.
Tags: bandwidth throttling > Common carrier law > Deep Packet Inspection (DPI) > DNS > Domain Name System > Net Neutrality > VPNs Virtual Private Networks > WebRTC (Web Real-Time Communication)