Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Digital Disruption in the Film Industry – Gains and Losses – Part 3: Digital FX Emerges

Posted on | March 17, 2024 | No Comments

“To succeed predictably, disruptors must be good theorists.” – Clayton Christensen

I had a chance to attend a special showing of The Wrath of Khan (1982), the second Star Trek movie, with my daughter a few years ago at the University of Texas in Austin. It included a live appearance by William Shatner, who starred as the famous Captain Kirk in the movie as well as the original series. Shatner told the story of how Paramount executives were jealous of the success of Star Wars (1977), and how that jealousy led to the resurgence of the Star Trek franchise and, incidentally, to the first use of digital special effects in a movie.

This post discusses the beginning of the digital or computer-generated imagery (CGI) revolution. Previously, I wrote about the emergence of the digital camera and the digital disruption caused by non-linear digital editing. Incidentally, I happened to be one of the first academics to teach non-linear editing, when the University of Hawaii obtained one of the first Avid systems.

It seems appropriate that Star Trek would make both film and computer history. Its first attempt, Star Trek: The Motion Picture (1979), was moderately successful, but very expensive due to its grandiose sets. The second movie was given over to Paramount’s television studios, which tightened the script and economized on the sets. They also hired George Lucas’ Industrial Light and Magic (ILM) to produce some of the effects. ILM created the first entirely computer-generated sequence in a feature film when it demonstrated the effects of the Genesis Device on a barren planet in what became The Wrath of Khan.

But was it the first? Or was it Westworld (1973)? Going back in history, another case emerges that might lay claim to the first digital scene.

But first, some background on the move from analog film to digital visual media. Previously, most special effects in films were done by artists using various analog methods. Animation was mainly drawn by hand, frame by frame. Even Tron, another futuristic 1982 movie, displayed results that were stunning for the time, but they were painstakingly produced frame by frame.

The origin story for digital FX goes back to 1964, when NASA was preparing the first flyby of Mars. NASA was working with its Jet Propulsion Laboratory (JPL) to develop an imaging system for Mariner 4. They needed to code the shading of 40,000 dots, a 200 x 200 grid, to construct the first image of Mars. With six bits of brightness data per dot, some 240,000 bits were sent back to Earth from the spacecraft as a series of numbers, and the first image was actually hand-colored, paint-by-numbers style, from a printout of those digital values.

John Whitney Jr. wrote in American Cinematographer (November 1973) that Brent Sellstrom struggled with the problem of representing a robot’s point-of-view (POV) on film. The script of Westworld called for a way to show how the evil robot cowboy, played by bald 70s icon Yul Brynner, saw the world. As post-production supervisor, Sellstrom had to find a way to put the audience’s viewpoint into the head and eyes of the mechanical gunslinger. The POV shot takes the audience into a character’s head to give them a first-person, or subjective, experience. [1]

Sellstrom suspected that JPL’s digital scanning methods might be used to construct the robot’s point-of-view in Westworld. JPL’s estimate, however, was that two minutes of animation would take nine months and cost $200,000. This price was way over budget, so the production hired another company, Information International, Inc., to scan footage of the robot’s POV and convert it to numerical data with techniques similar to the ones developed at JPL. It divided each frame into a series of 3,600 rectangles. They had to make sure that the actors’ clothes contrasted with other items on the set. Scanning took a minute for each frame, and ten seconds of film footage required eight hours of processing. The scene provided the needed POV shot that brought the audience into the robot’s experience, and the movie went on to be a major hit. In 1976, a sequel called Futureworld scanned and animated the head of its star, Peter Fonda, for the first appearance of 3D computer graphics in a movie. An obvious precursor to Max Headroom.[2]

Throughout the 1990s, advancements in computer hardware and software, particularly in rendering and animation technologies, enabled more realistic and sophisticated digital effects. Films like Jurassic Park (1993) and Terminator 2: Judgment Day (1991) showcased groundbreaking CGI that blurred the line between reality and computer-generated imagery. The rise of dedicated visual effects studios, such as Digital Domain, Industrial Light & Magic (ILM), Pixar, and Weta Digital, played a crucial role in driving innovation in digital FX. These studios employed teams of talented artists, technicians, and engineers to push the boundaries of what was possible with digital technology.

Filmmakers began integrating live-action footage with CGI elements seamlessly, allowing for the creation of fantastical worlds, creatures, and visual sequences. Films like The Matrix (1999) and The Lord of the Rings trilogy (2001-2003) pushed the boundaries of digital FX, setting new standards for realism and spectacle. The development of digital character animation techniques, exemplified by films like Toy Story (1995) and Shrek (2001), revolutionized the animation industry and paved the way for the creation of lifelike digital characters that display complex emotions and personalities.

Technologically, RenderMan, developed in Lucasfilm’s computer division that was spun off as Pixar, has been particularly noteworthy. RenderMan was one of the first rendering software packages to enable the creation of photorealistic images in CGI. Its advanced rendering algorithms and shading techniques allowed filmmakers to achieve lifelike lighting, textures, and reflections, enhancing the realism of digital environments and characters. RenderMan’s impact on digital FX has been recognized with numerous awards; by 2018, it had been used in 27 of the 30 films that won the Academy Award for Best Visual Effects. Its contributions to the field of computer graphics have been instrumental in advancing the art and technology of filmmaking.

Finally, a note on digital disruption from Clayton M. Christensen, writing about the corresponding changes in the computing industry. Christensen argues that the tendency of good companies to always listen to their best customers and improve their products leaves them open to disruptive innovations. Early digital cameras, for example, completely surprised film supplier Kodak. More recently, the digital camera has made possible DIY streaming services like YouTube.

In my next post in this series, I intend to explore the introduction of artificial intelligence (AI) tools such as Sora and Vidu to the digital televisual world.

Citation APA (7th Edition)

Pennings, A.J. (2024, Mar 17). Digital Disruption in the Film Industry – Gains and Losses – Part 3: Digital FX Emerges. apennings.com https://apennings.com/technologies-of-meaning/digital-disruption-in-the-film-industry-gains-and-losses-part-3-digital-fx-emerges/


Notes

[1] Background on the role of JPL on digital movie-making from American Cinematographer 54(11):1394–1397, 1420–1421, 1436–1437. November 1973.

[2] Bonner, F. (1992). In Slusser, G., & Shippey, T. (Eds.), Fiction 2000: Cyberpunk and the Future of Narrative. Athens: University of Georgia Press.




Anthony J. Pennings, PhD is Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he was at New York University from 2002 to 2012 and taught film at Marist College in New York and at the University of Hawaii, where he often participated in the Hawaii International Film Festival while at the East-West Center in Honolulu, Hawaii. He also taught digital media and metrics at St. Edwards University in Austin, Texas, where he lives when not in Korea.

Four Futures and the S-Curve

Posted on | March 13, 2024 | No Comments

One of my favorite professors in graduate school was Jim Dator, a professor at the University of Hawaii and Director of the Hawaii Research Center for Futures Studies at Manoa. His favorite strategy for thinking about the future was an exercise discussing four types of potential scenarios for the future of humanity: Continued Growth, Transformation, Limits and Discipline, as well as Decline and Collapse.

I include this approach in discussions about different futures strategies in my Introduction to Science, Technology, and Society Studies (STS) course to get students to think more about the trajectories of new technologies and social developments and what they may mean for the world they are inheriting.

Dator’s Scenarios on an S-Curve

I also include a discussion of the S curve initiated by futurist John Smart’s interpretation of Dator’s four scenario exercise, as illustrated above. S-curves, also known as sigmoid curves, are mathematical models often used to describe the adoption or growth rate of various phenomena over time. Examples would be the adoption of Artificial Intelligence (AI) or the growth rate of bacteria in a lab sample. This representation is based on living systems theory by James Miller but seems to fit well with other examples, including Dator’s futures writing exercise. However, Dator saw the scenarios more as four generic, separate alternative futures rather than naturalistic growth phases that could be represented with the sigmoid curve.
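
For readers who want the underlying math, the S-curve is usually written as a logistic function. A standard form (a general formulation, not specific to Dator or Smart) is:

```latex
f(t) = \frac{L}{1 + e^{-k(t - t_{0})}}
```

where L is the saturation level the curve approaches, k sets the steepness of the growth phase, and t_0 marks the midpoint where acceleration turns into deceleration.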

Scenarios are narratives or ‘stories’ illustrating possible visions of a future. These scenarios provide a structured way to consider the components of alternative futures and their potential developments. Dator’s exercise presents four broad scenarios or perspectives on the future that can help individuals and organizations think about and plan for different possible outcomes. They are not strictly predictions but rather help generate ideas about possible futures.

Combining an understanding of S-curve dynamics with futures scenarios can be useful in projecting trajectories, isolating trends, and constructing visions of likely outcomes. The curve also marks inflection points (IP), where variations in the curvature suggest the beginning of a significant change. Also important are tipping points (TP), critical thresholds when a tiny perturbation can qualitatively alter the state or development of a system or society, indicating dramatic change. DP marks the decline or deceleration phase. GP (growth point) and SP (saturation point) are also critical indicators of a curve’s dynamics.

S-curves are commonly used to predict the adoption and lifecycle of technologies or products. Innovations such as personal computers, smartphones, and social media platforms have been analyzed using S-curves to predict their growth and market saturation. As they move through stages of introduction, growth, maturity, and decline, S-curves can provide insights into when these stages are likely to occur and their duration. Researchers like Everett Rogers used S-curves to explain the “diffusion of innovations,” describing how new ideas or technologies are adopted by a population over time. For example, understanding the adoption patterns of electric vehicles can help policymakers develop incentives, infrastructure, and safety standards.

The categories below expand on the four scenarios mentioned above.

Continued Growth projects the current emphasis on economic development and its social and environmental implications into the near future. In this scenario, the future is seen as an extension of the present. It assumes existing trends, systems, and patterns will continue without significant disruption. This business-as-usual (BAU) trajectory is represented in the upward orange curve.

Limits and Discipline emphasizes the importance of rules, regulations, and control. In this perspective, the future is shaped by enforcing strict controls and adhering to established norms and principles. It is a scenario that focuses on order, authority, and conformity. It suggests a society that highly values places, people, processes, or principles threatened by the existing economic and social trajectory. In this scenario, it is often believed that society has “limits to growth” and should be “disciplined” around a set of fundamental cultural, ideological, scientific, or religious values. These will likely involve environmental concerns, including “green” solutions such as recycling, or social distancing and mask-wearing in pandemic times.

It could also result from a backlash to accelerated technological developments such as AI and the increasing collection of personal data by cloud services. Robotics is another concern as the technology has a more obvious manifestation than AI. Understanding where this saturation point lies in the S-curve can help predict when growth will likely slow down or stabilize. It is represented by the blue line that reaches a plateau after the tipping point. S-curves often reach a plateau, indicating that the phenomenon is saturated in society or approaching its maximum potential.

Decline and Collapse is represented by the descending green line on the right. This scenario envisions a future characterized by the breakdown of existing systems, institutions, or structures. It suggests a catastrophic turnaround or reversal of fortunes due to natural or human-made disasters. It often involves a significant crisis or disruption that leads to a reevaluation of the way things are done. Will climate change create such a decline? Is nuclear war a possibility? Pollution and changes associated with massive carbon dioxide and methane releases are current concerns as they are linked with dramatic weather changes influencing droughts, floods, and wildfires. The challenge to US leadership in the world by China and Russia could lead to a dramatic escalation of war in the world as witnessed in Ukraine.

Finally, a Transformative society envisions a future marked by radical change, innovation, and the emergence of entirely new paradigms. It challenges individuals and organizations to think creatively, embrace innovation, and be open to transformative possibilities. It emphasizes the need to adapt and thrive in a rapidly changing world. It anticipates a radical makeover of society based on biological, spiritual, or technological revolutions. For example, the creation of new genetically reconfigured “posthuman” bodies is a possibility, perhaps due to the viral innovations of COVID-19 research or rapid adaptation to environmental changes. A “singularity” of network-connected humans and AI is another projected scenario. A global set of religious revivals is also considered by many to be a possibility. These scenarios posit entirely redesigned global cultural, economic, and political structures.

Dator emphasizes that the purpose of scenario visioning is to determine preferable futures and work towards them rather than prophesying a specific future. While S-curves add a temporal trajectory and can indicate future activities, they lack information about time-frames. It is difficult to use them to suggest the number of months, years, decades, or even centuries before they might take shape and play out.

These scenarios are not meant to predict specific outcomes but to provide a structured way to consider different possibilities and their implications. By exploring these scenarios, individuals and organizations can better prepare for a range of future developments and make informed decisions about their strategies, policies, and actions. Dator’s Four Futures framework is a valuable tool for futures thinking and scenario planning.

By analyzing historical data and fitting an S-curve to the data points, it may be possible to gain an understanding of how a particular phenomenon has emerged over time. S-curves can then be used to extrapolate future growth. By extending the curve into the future, you can estimate points when a particular phenomenon is likely to change or reach a certain level of adoption, maturity, or impact. Policymakers can use this information to predict future developments, allowing for better long-term planning and resource allocation.
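
As a rough sketch of what fitting and extrapolating an S-curve can look like in practice, here is a minimal Python example using SciPy’s curve_fit. The adoption figures are hypothetical placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Logistic S-curve: L = saturation level, k = growth rate, t0 = inflection point."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical adoption data: years since introduction vs. percent of population adopting
years = np.array([0, 2, 4, 6, 8, 10, 12])
adoption = np.array([1.0, 3.0, 9.0, 24.0, 48.0, 68.0, 78.0])

# Fit the curve to the historical data, with rough initial guesses for L, k, and t0
params, _ = curve_fit(logistic, years, adoption, p0=[100.0, 0.5, 8.0])
L, k, t0 = params
print(f"Estimated saturation: {L:.1f}%, inflection point at year {t0:.1f}")

# Extrapolate beyond the data to project future adoption
print(f"Projected adoption at year 17: {logistic(17, *params):.1f}%")
```

The fitted t0 corresponds to the inflection point (IP) discussed above and L to the saturation point (SP); as with any extrapolation, the projection is only as good as the assumption that the logistic pattern continues.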

Citation APA (7th Edition)

Pennings, A.J. (2024, Mar 13). Four Futures and the S-Curve. apennings.com https://apennings.com/political-economies-in-sf/jim-dators-four-futures-and-the-s-curve/

Notes

[1] I was working on my PhD on cyberspace and electric money and found the four futures approach interesting. Dator dissuaded his students from the idea of a one true future whose probability could be calculated with positivistic certainty, and suggested we use a futures visioning process to envision and develop several alternative scenarios.
[2] The notion of ideal types comes primarily from Max Weber.
[3] Dator’s Four Futures presents four broad scenarios or perspectives on the future that can help individuals and organizations think about and plan for different possible outcomes. These scenarios provide a structured way to consider alternative futures and potential developments. The four generic alternative futures are continuation, collapse, discipline, and transformation. Dator, Jim. (2009). Alternative Futures at the Manoa School. Journal of Futures Studies, 14.
[4] Alvin Toffler’s Future Shock (1970) is a book that explores the concept of rapid change and the challenges it poses to individuals and societies. While Toffler introduced the idea of future shock, he did not specifically outline “four scenarios of the future” in that book. Instead, he discussed various scenarios and trends related to technological, social, and economic changes.





Anthony J. Pennings, PhD is a professor in the Department of Technology and Society, State University of New York, Korea. SUNY Korea offers degrees from Stony Brook University. From 2002 to 2012, he was on the faculty of New York University. Previously, he taught at Marist College in New York and Victoria University in Wellington, New Zealand. He lives in Austin, Texas, when not in South Korea. He also spent nine years at the East-West Center in Honolulu, Hawaii, including time working on his PhD in Political Science.

The Future of US Democracy: Getting Excessive Money Out of Elections

Posted on | March 3, 2024 | No Comments

The 2010 Supreme Court (SCOTUS) decision in Citizens United v. FEC ruled that corporations and unions could spend unlimited amounts of money on political campaigns. That ruling stands in sharp contrast to public opinion: a Pew Research Center survey in late 2023 found that both Republicans and Republican-leaning independents (83%) and Democrats and Democratic leaners (80%) agree that wealthy people contributing money to members of Congress can have too much influence on their policy decisions. In general, most Americans believe that excessive money in US politics undermines the principles of accountability, equality, and fairness that are essential to a functioning democracy.

This post introduces the problem of money in US politics. This danger includes the disproportionate influence of wealthy donors, the erosion of democratic principles, the undermining of fair competition in elections, policy capture and the distortion of policy priorities, as well as the potential for corruption and scandal. Money also plays a significant role in US political advertising, influencing the reach, frequency, and effectiveness of political messages on broadcast channels and through social media. The post then looks at some ways Americans can address the issue through a combination of legislative reforms, legal challenges, grassroots activism, and civic engagement.

Problems Associated with Excessive Money in US Politics

When political campaigns are heavily funded by wealthy individuals, corporations, and special interest groups, there is a risk that these donors may wield undue influence over elected officials. This distortion can undermine the principle of political equality and lead to policies prioritizing donors’ interests over the general public’s needs. Political scientists are now talking about a “donor class,” a small group of wealthy urban and suburban residents who are able and willing to influence the outcome of political elections. Excessive money in politics can erode public trust in the democratic process by creating the perception that politicians are beholden to their wealthy donors rather than accountable to the electorate. This can lead to disillusionment with the political system and decreased voter interest and turnout.[1]

Large campaign war chests can create “barriers to entry” for candidates without access to significant financial resources. This competitive disadvantage can limit the diversity of candidates running for office and discourage individuals from underrepresented communities or with limited financial means from seeking elected positions. Incumbent politicians, in particular, generally have an easier time raising campaign funds compared to challengers. They can leverage their position in office to solicit contributions from political action committees (PACs), donors, and interest groups who have a vested interest in maintaining access and influence with elected officials.[2]

Excessive money in politics can also lead to “policy capture,” where wealthy donors, corporations, and special interest groups leverage their financial resources to gain access, influence decision-making, and shape policy outcomes in ways that benefit their interests, often at the expense of the broader public interest. These powerful interest groups can shape legislation and administrative regulations in their favor. A combination of campaign contributions, lobbying, and other forms of political influence can result in policies that benefit narrow interests at the expense of the broader public good. Policy capture predominantly occurs when regulatory agencies tasked with overseeing specific industries or economic sectors become influenced or controlled by the interests they are supposed to regulate.[3]

Political candidates and parties rely on campaign contributions to fund their campaigns. When wealthy donors, corporations, or special interest groups contribute significant amounts of money to political campaigns, they may gain access to elected officials and policymakers, who can feel indebted to their donors and more inclined to advance policies that align with their interests. Political campaigns that rely heavily on fundraising may prioritize issues of interest to wealthy donors over pressing societal concerns that affect a broader population segment. This skewed focus can lead to a misalignment between government priorities and the needs of ordinary citizens. A major concern is that Political Action Committees (PACs) and Super PACs can raise and spend unlimited money to support political candidates, parties, or causes and exert significant influence over the political process through their financial resources.

The influx of large sums of money into political campaigns can create opportunities for corruption and unethical behavior, such as quid pro quo arrangements where politicians exchange favors for campaign contributions. This obligation can lead to bribery, influence peddling, and other forms of corruption that undermine the integrity of the electoral process and erode public confidence in elected officials. Even if such behavior is not illegal, it can undermine public confidence in the integrity of elected officials and the political process.

Money and Media

The 5-4 Citizens United v. FEC decision by SCOTUS unleashed extraordinary amounts of money for purchasing media airtime, producing advertisements, and targeting specific audiences. Candidates who spend more on advertising tend to also receive more favorable coverage or greater visibility in news stories and analyses, further amplifying the impact of their advertising efforts.

Money can also be secretly used by foreign governments to pay social media platforms, fake news websites, bloggers, and other online channels. These channels can be hired to spread disinformation, misinformation, and propaganda to influence public opinion, sow discord, or undermine trust in democratic institutions. This influence can include spreading false information about candidates, parties, or electoral processes.

Airtime and column space can be purchased on television, radio, newspapers, and digital platforms to broadcast campaign messages to voters, including memes and other pernicious forms of messaging. The cost of advertising varies depending on factors such as the size of the media market, the popularity of the programming, and the timing of the ad placement.

Creating high-quality political advertisements requires financial resources to cover expenses such as production costs, talent fees, and ad agency fees. Candidates often invest in professional production teams to create polished and persuasive advertisements that resonate with voters.

Money allows political advertisers to target specific demographic groups, geographic regions, or voter segments with tailored messages. By using data analytics and targeting tools, advertisers can optimize their ad spending to reach the most relevant and receptive audiences.

Political candidates and campaigns with greater financial resources have a competitive advantage in advertising. They can outspend their opponents, saturate the airwaves with their messages, and respond quickly to attacks or developments in the campaign.

Money facilitates the production and dissemination of negative advertising. “Mudslinging” has been a particularly effective method in shaping public opinion and swaying undecided voters. Negative ads often require substantial financial resources to fund extensive research, testing, and distribution.

In addition to candidate campaigns, outside groups such as super PACs and advocacy organizations play a significant role in political advertising. These groups can raise and spend unlimited amounts of money independently of candidates, leading to a proliferation of political ads funded by wealthy donors and special interests.


Getting Money out of US Politics: Options

Efforts to reduce money’s influence probably require overturning the Supreme Court’s Citizens United decision. Many critics argue that the 2010 Supreme Court decision has exacerbated the problem of money in politics and that SCOTUS has become an instrument of the donor class. Although difficult, overturning or amending this decision through constitutional means could help restore balance to the political system.

But other methods should be used to create public pressure for this change. This endeavor would include legislating campaign finance reform, including stronger disclosure requirements, the public financing of elections, empowering grassroots movements, electoral reforms at the local and state levels, and promoting civic education and engagement.

A top priority should be implementing strict campaign finance laws limiting how much money individuals, corporations, and interest groups can contribute to political campaigns. This restriction can help reduce the influence of wealthy donors and special interests. Measures such as public financing of elections, contribution limits, and increased transparency in campaign spending should be pursued.

Strengthening disclosure requirements for campaign contributions and spending can increase transparency and accountability in the political process. Requiring timely and comprehensive reporting of political donations and disclosure of donors behind so-called “dark money” groups can help voters understand who is funding political campaigns.

Implementing public financing systems for political campaigns can also reduce the reliance on private donations and level the playing field for candidates who may not have access to wealthy donors. Public financing programs provide candidates with public funds to finance their campaigns, often with restrictions on private fundraising.

Another critical strategy is supporting movements organizing and promoting activism to help counterbalance the influence of big money in politics. Grassroots movements can mobilize public support for campaign finance reform, hold elected officials accountable, and advocate for policies that promote transparency and fairness in the political process.

Electoral reforms such as ranked-choice voting, proportional representation, or open primaries are also possibilities for the future. They can encourage greater competition and diversity in the political arena, reducing money’s influence in determining election outcomes.

Educating citizens about the importance of participating in the political process and empowering them to become informed voters can counteract the influence of money in politics. Encouraging civic engagement, voter registration, and turnout can amplify the voices of ordinary citizens and dilute the influence of wealthy donors.

Conclusion

Excessive money in political elections corrodes the democratic process by distorting representation, undermining public trust, and prioritizing the interests of wealthy donors over the common good. Efforts to reduce the influence of money in politics aim to promote greater transparency, accountability, and fairness in the political process. Addressing the issue of money in politics requires a combination of legal challenges, legislative reforms, grassroots activism, and civic engagement to create a more equitable and democratic political system.

Notes

[1] Donor class from “The Check is in the Mail: Interdistrict Funding Flows in Congressional Elections” by James G. Gimpel, Frances E. Lee, and Shanna Pearson-Merkowitz, in the American Journal of Political Science, April 2008. See also “Democracy and the Donor Class” by Gara LaMarche, the president of the Democracy Alliance, a speech delivered at the Haas Institute for a Fair and Inclusive Society at the University of California, Berkeley, on March 7, 2013.
[2] See the report “Breaking Down Barriers: The Faces of Small Donor Public Financing” from the Brennan Center at the New York University School of Law.
[3] Policy capture is an international concern. See International Institute for Democracy and Electoral Assistance (2017), extract from The Global State of Democracy: Exploring Democracy’s Resilience.
Note: Chat GPT was used for parts of this post. Multiple prompts were used, parsed, and verified.

Citation APA (7th Edition)

Pennings, A.J. (2024, Mar 3). The Future of US Democracy: Getting Excessive Money Out of Elections. apennings.com https://apennings.com/political-economy-of-media/the-future-of-us-democracy-getting-money-out-of-elections/




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, teaching broadband and media policy for sustainable development. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics and information systems management. He has a PhD in Political Science from the University of Hawaii. He lives in Austin, Texas, when not in the Republic of Korea.

How Do Artificial Intelligence and Big Data Use APIs and Web Scraping to Collect Data? Implications for Net Neutrality

Posted on | January 19, 2024 | No Comments

One of the books I use in a course called EST 202 – Introduction to Science, Technology, and Society Studies is Michio Kaku’s Physics of the Future (2011). Despite its age, it’s a great starting point for teaching topics like Computers, Robotics, Nanotechnology, Space Travel, and Energy. It also has a chapter on Artificial Intelligence (AI) that I use with the caveat that it doesn’t cover a major change in AI that occurred around the time it was published: the importance of data networking for AI data collection and learning. High-speed broadband networks have become fundamental to new AI and also “Big Data” because the success of these services now depends on their ability to scour the Internet and other networked data sources to find useful information.[1]

web scraping

This post looks at how collecting information from various structured and “unstructured” data sources has become an essential process for procuring information resources for AI and Big Data.[2] In particular, it looks at two strategies that are used to search networked sources for relevant data. It then discusses some ramifications for net neutrality, a regulatory stance that seeks to prevent Internet Service Providers (ISPs) from discriminating against data content providers, including generative AI.

Broadband communications enable the transfer of data between different applications on sensors, smart devices and cloud locations, contributing to the overall effectiveness of AI models and Big Data analytics. AI encompasses various technologies and approaches, including machine learning (ML), neural networks, natural language processing, expert systems, and robotics.[See 3] Big Data technologies include tools and frameworks designed to process, store, and analyze large datasets.

Technologies like MapReduce and Hadoop at Google and Yahoo! created the programming framework that led to applications like Apache Spark, NoSQL databases, and various data warehousing solutions. These are general-purpose cluster computing systems, with programs written in languages like Scala, Java, and Python, that make parallel jobs easy to write and manage. These engines direct workloads, perform queries, conduct analyses, and support computation graphs at a totally new scale. They work across a wide range of low-cost servers, collecting information from mobile devices, PCs, and IoT devices such as autos, cash registers, and building environmental systems. Information from these data sources becomes fodder for analysis and innovative value creation.
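
To make the cluster-computing idea concrete, here is a minimal sketch of a parallel job written against Spark’s Python API (PySpark). The file path and log format are hypothetical; the point is that these few lines are distributed automatically across many low-cost servers:

```python
from pyspark.sql import SparkSession  # assumes the pyspark package is installed

spark = SparkSession.builder.appName("DeviceLogCounts").getOrCreate()

# Read a (hypothetical) device log that may be spread across the cluster
logs = spark.read.text("hdfs:///logs/devices.txt")

# The parallel job: filter and count error lines across all worker nodes
errors = logs.filter(logs.value.contains("ERROR")).count()
print(f"Error events: {errors}")

spark.stop()
```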

APIs (Application Programming Interfaces) and web scraping collect information from the data networks, including the Internet. APIs are instrumental in integrating data into AI applications and machine learning models. APIs are also crucial in facilitating Big Data collection by providing a relatively standardized way for different software applications to communicate and exchange data. Web scraping is important to both AI and Big Data as the process of extracting information from HTML and CSS-coded websites collects large volumes of usable data.

What are the Differences between Big Data and AI?

While AI and Big Data are distinct concepts, they often intersect as AI systems frequently rely on large datasets for training and learning. Big Data technologies play a crucial role in managing the data requirements of AI applications, providing the necessary infrastructure for processing and analyzing vast amounts of information needed to build and continually train AI models.

The purpose of AI is to enable digital machines to perform tasks that typically require human-like intelligence, or to mimic and simulate such intelligence. This includes areas such as natural language processing, computer vision, machine learning, and robotics. AI systems can be designed to perform specific tasks, learn from experience, and adapt to changing situations.

AI applications are diverse and can be found in areas such as virtual assistants, image and speech recognition, recommendation engines, autonomous vehicles, and healthcare diagnostics. They strive to tackle tasks such as problem-solving, learning, reasoning, perception, and language understanding.

We are far from attributing human intelligence and consciousness to AI, but data networking appears to be key to ML. Kaku (2011) suggested three traits that would be a good start for theorizing consciousness in AI:

1. sensing and recognizing the environment
2. self-awareness
3. planning for the future by setting goals and plans, that is, simulating the future and plotting strategy

Accepting these characteristics, it would be useful to examine the role of online data collection on each of them and collectively in the context of AI.

The purpose of Big Data is to handle and analyze massive volumes of data to derive valuable insights and identify patterns or correlations within the data. It draws on the substantial amount of data that organizations generate, process, and store. Big Data technologies enable organizations to manage and extract value from the datasets to produce meaningful insights, identify patterns, and understand trends that can inform decision-making processes.

Big Data applications span various industries and use cases, including business analytics, financial analysis, healthcare informatics, scientific research, and predictive modeling. Big Data focuses on the efficient handling of large volumes of data that involves data storage, retrieval, processing, and analysis.

Why AI and Big Data Use APIs for Data Collection

An API is a set of rules and tools that allows developers to access the functionality or data of a web service. APIs facilitate Big Data collection and AI machine learning models by providing a communication interface for applications and data networks. APIs allow applications to interact with each other, access external services, and integrate seamlessly into broader systems. Image from [4]

For example, APIs provided by cloud platforms, such as Google Cloud AI, Microsoft Azure Cognitive Services, and Amazon AI, allow developers to access pre-trained AI models for image recognition, natural language processing, and speech recognition. APIs provided by these platforms enable AI applications to access real-time social media and video streams, including posts, comments, and user interactions.
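
As a rough illustration of the pattern, here is a minimal Python sketch of calling such a service over HTTP. The endpoint, key, and JSON format are hypothetical placeholders; each real platform defines its own URLs, authentication, and request schemas:

```python
import requests

API_URL = "https://api.example.com/v1/image-recognition"  # hypothetical endpoint
API_KEY = "your-api-key-here"                             # hypothetical credential

def classify_image(image_url: str) -> dict:
    """Send an image URL to a (hypothetical) recognition API and return its JSON labels."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"image_url": image_url},
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors instead of failing silently
    return response.json()

labels = classify_image("https://example.com/street-scene.jpg")
print(labels)
```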

Many online platforms, including social media, e-commerce, and financial services, offer APIs that enable developers to use machine learning capabilities without managing the underlying infrastructure. Services like Amazon SageMaker, Google Cloud AI, and Azure Machine Learning provide APIs for training, deploying, and serving machine learning models.

Big Data applications use APIs to collect and funnel large volumes of data into comprehensive datasets. Many governments and organizations release datasets publicly as part of open data initiatives, and models built on this input data can produce classifications or predictions about human behavior. Big Data applications can access these datasets over the Internet to support tasks like urban planning, healthcare analytics, and environmental monitoring.

Likewise, APIs are instrumental in integrating machine learning (ML) models into AI applications. APIs and web scraping can be employed to gather relevant and diverse sets of data from the Internet. For example, web scraping collects images from various sources during image recognition tasks and processes them with Convolutional Neural Networks (CNNs), a type of deep learning architecture that uses algorithms specifically for processing pixel data. CNNs consist of layers with learnable filters (kernels) that detect image patterns like edges, textures, and more complex features. CNNs automatically learn and extract hierarchical features from images that help to identify and recognize objects.
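
As a minimal sketch of the layered filter structure just described, here is a tiny CNN in PyTorch (one of several frameworks; the layer sizes are illustrative, not a production architecture):

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Two convolutional layers learn edge/texture filters; a linear layer
    maps the extracted features to class scores."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 16 learnable 3x3 filters over RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample: 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper filters capture more complex patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A batch of four 32x32 RGB images (e.g., scraped and resized) yields four score vectors
scores = TinyCNN()(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])
```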

Many AI and ML platforms provide APIs that allow developers to access pre-trained AI models they can use without extensive training. These are deep learning models trained on large datasets that find patterns or make predictions based on data to accomplish specific tasks. They can be used as is or further fine-tuned to fit an application’s particular needs. These models, often made by Google, Meta, Microsoft, and NVIDIA, can perform specific tasks such as creative (art, games, media) workflows, cybersecurity, image recognition, natural language processing, and sentiment analysis.

APIs enable integrating data from diverse sources, allowing Big Data applications to pull data from multiple locations and create a comprehensive dataset. APIs are used for real-time data streaming from sources such as social media platforms, financial markets, or IoT devices. Real-time APIs enable continuous data ingestion, enabling Big Data systems to analyze and respond to events as they happen.

Big Data systems often interact with databases to collect structured data. Many databases use APIs to enable programmatic access for querying and retrieving data. This practice is common in scenarios where relational databases or NoSQL databases are part of the data collection process.

Cloud providers offer APIs to access their services and resources. Big Data applications can leverage APIs to collect and process data in cloud-based storage and analytics services. This capacity facilitates scalability and flexibility in handling large datasets.

The Internet of Things (IoT) relies on APIs to enable data collection and integration between multiple devices, sensors, and applications. IoT devices collectively generate vast amounts of data that APIs collect and manage. For example, MQTT is a lightweight messaging protocol designed for low-bandwidth, high-latency, or unreliable networks and is commonly used for real-time communication in IoT environments. Also, RESTful APIs are used for building scalable and stateless web services and for communication between IoT devices and backend cloud servers. IoT applications requiring data retrieval, updates, and management commonly use APIs to provide a standardized way for AI and Big Data applications to collect data from connected devices, such as in home automation and smart city projects.
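
A minimal subscriber sketch using the paho-mqtt package (written against its classic 1.x callback API; the broker address and topic are hypothetical):

```python
import paho.mqtt.client as mqtt  # assumes paho-mqtt, 1.x-style API

BROKER = "broker.example.com"   # hypothetical broker address
TOPIC = "vehicles/+/telemetry"  # '+' wildcard matches any vehicle ID

def on_connect(client, userdata, flags, rc):
    print(f"Connected with result code {rc}")
    client.subscribe(TOPIC)  # (re)subscribe on every connect/reconnect

def on_message(client, userdata, msg):
    # Each message is a small payload published by one device or sensor
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()  # blocking network loop; handles reconnects
```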

Some companies and services that specialize in aggregating data from various sources offer APIs for accessing their aggregated datasets. Big Data applications can use these APIs to access pre-processed and curated data relevant to their analysis such as aggregated banking data.

AI both guides and uses ETL (Extract, Transform, Load) data aggregation processes. ETL pipelines often use APIs in the extraction phase but also for data transformation and enrichment. For example, data collected from one source may be enriched with additional information from another source using their respective APIs. ETL cleans and organizes raw data and prepares it for data analytics and machine learning in data warehouse environments.
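
A self-contained sketch of the ETL pattern, assuming a hypothetical JSON API as the source and a local SQLite file standing in for the warehouse:

```python
import sqlite3
import requests

# Extract: pull raw records from a (hypothetical) JSON API
raw = requests.get("https://api.example.com/v1/readings", timeout=30).json()

# Transform: skip incomplete records and convert Fahrenheit to Celsius
rows = [
    (r["sensor_id"], (float(r["temp_f"]) - 32) * 5 / 9, r["timestamp"])
    for r in raw
    if r.get("temp_f") is not None
]

# Load: write the cleaned rows into a warehouse table
con = sqlite3.connect("warehouse.db")
con.execute("CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, temp_c REAL, ts TEXT)")
con.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
con.commit()
con.close()
```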

APIs often include mechanisms for authentication and authorization, ensuring that only authorized users or applications can access specific data. This is crucial for maintaining data security and privacy while collecting information for Big Data analysis.

In summary, APIs provide a standardized and efficient means for Big Data applications to collect data from many sources, ranging from online platforms and databases to IoT devices and cloud services. They enable interoperability between different systems and contribute to the integration of diverse datasets for analysis and decision-making.

How AI and Big Data Use Web Scraping

AI and machine learning (ML) can utilize web scraping as a method for collecting data from websites. They use web scraping for: training datasets and machine learning, text and content analysis, market research, resume parsing, price monitoring, social media monitoring and data aggregation, image and video collection, financial data extraction, healthcare data acquisition, and weather data retrieval.

Natural Language Processing (NLP) models, a subset of AI and ML, benefit from gathering text data for training. Web scraping is used to extract textual content from websites, enabling the creation of datasets for tasks such as sentiment analysis, named entity recognition, or language modeling.
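
A minimal sketch of this kind of text collection, using the widely available requests and BeautifulSoup libraries (the target URL and length threshold are illustrative):

```python
import requests
from bs4 import BeautifulSoup  # assumes the beautifulsoup4 package

def scrape_paragraphs(url: str) -> list[str]:
    """Fetch a page and extract visible paragraph text for an NLP training corpus."""
    html = requests.get(url, timeout=30, headers={"User-Agent": "research-bot/0.1"}).text
    soup = BeautifulSoup(html, "html.parser")
    # Keep <p> text, skipping empty or very short fragments
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
    return [p for p in paragraphs if len(p) > 40]

corpus = scrape_paragraphs("https://example.com/article")  # hypothetical target page
print(f"Collected {len(corpus)} paragraphs")
```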

AI applications involved in market analysis or competitor tracking use web scraping to collect data from competitors’ websites. This data can be analyzed to gain insights into market trends, pricing strategies, and product features. AI applications use web scraping to monitor product prices, availability, and customer reviews from e-commerce websites. This data can inform marketing strategies and enhance recommendation algorithms.

AI-powered recruitment and job matching systems utilize web scraping to extract job postings from various websites. This acquired dataset provides a view of the job market, salary ranges, and in-demand skills. This information can be used to make informed decisions about talent acquisition, workforce planning, and skill development. Additionally, web scraping can be employed to parse resumes and extract relevant information for matching candidates with job opportunities.

AI models that analyze social media trends, sentiments, or user behavior can utilize web scraping to collect data from platforms like X, Facebook, or Instagram. This data is valuable for training models in social media analytics.

Web scraping can gather relevant and diverse datasets of imagery from the web. For image recognition tasks, web scraping can collect graphics and pictures from various sources. AI applications, especially those dealing with computer vision tasks, often use web scraping to collect image and video datasets. This is common in tasks such as object detection, image classification, and facial recognition. Full self-driving (FSD) systems draw on imagery from cameras to label potential dangers and obstacles.

AI and ML models in finance leverage web scraping to collect financial data, news, or market updates from financial websites. This data can be used for predicting financial market trends or making investment decisions.

Some AI applications in healthcare use web scraping to collect medical literature, patient reviews, and information about healthcare providers. This data can be utilized for building models related to healthcare analytics or patient sentiment analysis.

AI models predicting weather patterns may use web scraping to collect real-time weather data from various sources, including weather websites. This data is crucial for training accurate and up-to-date weather prediction models. They are also economically efficient, allowing many news sources to gather weather information from all over the planet without having to collect it themselves.

Web scraping should be conducted responsibly and ethically, respecting the terms of service of websites and relevant legal regulations. Additionally, websites may have varying degrees of resistance to web scraping, and proper measures should be taken to ensure compliance and minimize any negative impact on the targeted websites.
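
One concrete element of responsible scraping is honoring a site’s robots.txt. Python’s standard library includes a parser for it; a minimal check might look like this (the site and user agent are hypothetical):

```python
from urllib import robotparser

# Check a site's robots.txt before scraping
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical target site
rp.read()

if rp.can_fetch("research-bot/0.1", "https://example.com/article"):
    print("Scraping this path is permitted by robots.txt")
else:
    print("Disallowed: skip this page")
```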

Implications for Net Neutrality

I’m currently reviewing new technologies and devices to consider their implications for broadband policy. These include connected cars as part of my Automatrix series, Virtual Private Networks (VPNs), and Deep Packet Inspection (DPI). I intend to readdress broadband policy issues in light of the FCC’s new emphasis on net neutrality and take a more critical look at content providers. These platforms and websites collect huge amounts of data on human behavior to influence economic and political decisions.[5] It is too early to draw substantive conclusions about the amount of data traffic that AI will produce. Still, I wanted to explain the predominant collection processes and raise some issues.

Net neutrality principles have typically advocated equal treatment of data traffic and regulations restricting ISP discrimination against content providers operating at the Internet’s edge. The Internet and its World Wide Web (WWW) were designed to prioritize capability at the “host” level – the clouds, devices, and platforms at the network’s edges. AI also operates at the edges. Following historical and legal precedents that reach back to the telegraph and even railroads, the regulatory regime for telecommunications has been codified for the carrier to move information commodities and content with transparency and non-interference.

ISPs have pushed back in the computer age, looking to use the increasing intelligence in their telecommunications networks to extract additional value from informational exchanges. They argue that the capital-intensive nature of their service provision requires them to invest in the newest technologies. They further contend that their investments can also offer value-added services that would benefit their customers, such as IPTV and search engines. Content competitors have complained this gives the ISPs a competitive and potentially dangerous advantage.

Although it is early in the era of AI and Big Data collection, we can expect that they will have a major impact on network resources. Congestion issues are a major concern for ISPs that risk losing customer confidence if traffic slows, videos buffer, and games lag. Will data collection seriously affect broadband usage? Using APIs and large-scale web scraping, particularly when conducted by big entities, might disproportionately affect network speeds. API-based data collection and web scraping practices should be mindful of their impact on the broader networked world.

Notes

[1] Pennings, A.J. (2013, Feb 15). Working Big Data – Hadoop and the Transformation of Data Processing. apennings.com https://apennings.com/data-analytics-and-meaning/working-big-data-hadoop-and-the-transformation-of-data-processing/ and Pennings, A.J. (2011, Dec 11). The New Frontier of Big Data. apennings.com https://apennings.com/technologies-of-meaning/the-new-frontier-of-big-data/ Image of web scraping from https://prowebscraping.com/web-scraping/ offering related services.

[2] Data retrieval has historically drawn from the records of structured databases. IBM has made the distinction between structured and unstructured data where structured data is sourced from “GPS sensors, online forms, network logs, web server logs, OLTP systems, etc., whereas unstructured data sources include email messages, word-processing documents, PDF files, etc.” IBM’s Watson for example, was heavily dependent on the structured information model in its early days. See Pennings, A.J. (2014, Nov 11). IBM’s Watson AI Targets Healthcare. apennings.com https://apennings.com/data-analytics-and-meaning/ibms-watson-ai-targets-healthcare/

[3] AI encompasses various technologies and approaches, including machine learning, neural networks, natural language processing, expert systems, and robotics. Machine learning (ML), a subset of AI, involves algorithms that allow systems to learn from data. Neural networks teach computers to process data with deep learning that uses interconnected nodes or neurons in a layered structure that was inspired by the human brain. Natural language processing is machine learning technology that teaches computers to comprehend, interpret, and manipulate human language. Expert systems use AI to simulate the expertise, judgment, and experience of a human or an organization in a particular field. Robotics is the field of creating intelligent machines that can assist humans in a variety of ways.

[4] Heus, Pascal (2023, Jun 23). AI, APIs, metadata, and data: the digital knowledge and machine intelligence ecosystem. https://blog.postman.com/ai-apis-metadata-data-digital-knowledge-and-machine-intelligence-ecosystem/

[5] Large-scale web scraping often involves the extraction of personal data from websites, and this can raise privacy concerns. If not done responsibly, scraping personal or sensitive information might violate privacy regulations. Net neutrality discussions often extend to privacy considerations, emphasizing the need for responsible and ethical data practices. ISPs might be tempted to intervene in web scraping activities by implementing measures such as blocking or throttling, especially if the scraping activity is seen as detrimental to their networks or if it violates terms of service. Such interventions could raise questions about net neutrality, as they involve discriminatory actions against specific types of traffic.

Note: Chat GPT was used for parts of this post. Multiple prompts were used and parsed.

Citation APA (7th Edition)

Pennings, A.J. (2024, Jan 19). How Do Artificial Intelligence and Big Data Use APIs and Web Scraping to Collect Data? Implications for Net Neutrality. apennings.com https://apennings.com/technologies-of-meaning/how-do-artificial-intelligence-and-big-data-use-apis-and-web-scraping-to-collect-data-implications-for-net-neutrality/




Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband and cloud policy for sustainable development. From 2002 to 2012, he was on the faculty of New York University, teaching comparative political economy and digital economics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.

Networking Connected Vehicles in the Automatrix

Posted on | January 15, 2024 | No Comments

Networking of connected vehicles draws on a combination of public-switched wireless communications, GPS and other satellites, and Vehicular Ad hoc Networks (VANET) that directly connect autos with each other and roadside infrastructure.[1] Connecting to 4G LTE, 5G, and even 3G and 2.5G in some cases provides access to the wider world of web devices and resources. Satellites provide geo-location services, emergency, and broadcast entertainment. VANETs enable vehicles to communicate with each other and with roadside infrastructure to improve road safety, traffic efficiency, and provide various applications and services.

This image shows an early version of a connected Automatrix infrastructure, including a VANET.

This post outlines the major ways connected cars and other vehicles use broadband data communications. It builds on some earlier work I started on the idea of the Automatrix, starting with “Google: Monetizing the Automatrix” and “Google You Can Drive My Car.” It is also written in anticipation of a continued discussion on net neutrality and connected vehicles, although that is beyond the scope of this post.

Public-Switched Wireless Communications

Wireless communications include radio connectivity, cellular network architecture, and a “home” orientation. This infrastructure differs significantly from the fixed broadband Internet and World Wide Web model designed around stationary “edge” devices with single Internet Protocol (IP) addresses. Mobile devices have been able to utilize the wireless cellular topology for unprecedented connectivity by supplementing the IP address with a new number called the IMSI, which identifies the subscriber and maintains a link to a home network, usually a paid service plan with a cellular provider, e.g., Verizon, Orange, Vodafone.

The digital signal transmission codes have changed over time, allowing for better signal quality, reduced interference, and improved capacity for handling voice and data services. These included Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Code Division Multiple Access (CDMA), which support both voice and data services. GSM was a widely adopted standard for public-switched wireless communications, but it has been largely replaced by CDMA-based and Long-Term Evolution (LTE) fourth-generation (4G) networks, and by the more energy-hungry, shorter-range fifth-generation (5G) networks. With LTE, traditional voice calls became digital, and users could access a variety of data services, including text messaging, mobile internet, and multimedia content based on Internet Protocols (IP).

The public-switched wireless network divides a geographic coverage area into “cells” where each spatial division is served by a base station or cell tower that manages the electromagnetic spectrum transmissions and supports mobility as users move between cells. As a mobile device transitions from one cell to another, a “handoff” occurs that ensures uninterrupted connectivity as users move across different cells. Roaming agreements between different carriers enable users to maintain connectivity even when outside their home network coverage area. Digital switching systems are employed in the core network infrastructure to handle call routing, signaling, and management.

A key concept in the wireless public network is the notion of “home” with mobile devices typically using SIM cards with an international mobile subscriber identity (IMSI) number to authenticate and identify users on the network. SIM cards store subscriber information, including user credentials and network preferences.
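To make the SIM/IMSI idea concrete, here is a minimal Python sketch that splits an IMSI into its standard fields. The example value is fabricated, and the three-digit mobile network code is a simplifying assumption (some countries use two digits).

```python
# Minimal sketch: splitting an IMSI into its standard fields.
# The IMSI below is a made-up example, not a real subscriber.

def parse_imsi(imsi: str) -> dict:
    """Split a 15-digit IMSI into MCC, MNC, and MSIN.

    The MCC (mobile country code) is always 3 digits; the MNC
    (mobile network code) is 2 or 3 digits depending on the
    country -- 3 is assumed here for simplicity.
    """
    if len(imsi) != 15 or not imsi.isdigit():
        raise ValueError("IMSI must be 15 digits")
    return {
        "mcc": imsi[:3],    # identifies the country
        "mnc": imsi[3:6],   # identifies the home carrier
        "msin": imsi[6:],   # identifies the subscriber
    }

print(parse_imsi("310150123456789"))
# {'mcc': '310', 'mnc': '150', 'msin': '123456789'}
```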

Wireless communications incorporate security measures to protect user privacy and data. Encryption and authentication mechanisms help secure communication over the wireless networks.

Satellites

Satellites play a crucial role in enhancing the capabilities of connected cars by providing various services and functionalities. They extend connectivity to areas with limited or no terrestrial network coverage, allowing access for connected cars traveling through remote or rural locations where traditional cellular coverage may be sparse. GPS satellites provide accurate location information, enabling navigation systems in cars to determine the vehicle’s position, calculate routes, and provide turn-by-turn directions.

Satellites also support a range of location-based services, providing real-time traffic information, points of interest, and location-based notifications that enhance the overall navigation experience. Satellite connectivity facilitates remote diagnostics and maintenance monitoring for connected vehicles, and it supports remote monitoring and management of vehicle fleets. Fleet operators can track vehicle locations, monitor driving behavior, manage fuel efficiency, and schedule maintenance using satellite-based telematics solutions.

Satellites contribute to enhanced safety features in connected cars by enabling automatic crash notification systems. In the event of a collision, the vehicle can send an automatic distress signal with its location to emergency services, facilitating a quicker response. In the case of theft or emergency, satellite communication can be used to remotely disable the vehicle, track its location, or provide assistance to drivers.

Satellites also play a role in delivering over-the-air (OTA) updates to connected cars, allowing manufacturers to use satellite communication to send software updates, firmware upgrades, and map updates directly to the vehicles, ensuring they remain up-to-date with the latest features and improvements. They can also remotely assess vehicle health, identify potential issues, and schedule maintenance, reducing the need for physical visits to service centers.

Lastly, satellites support the delivery of entertainment and infotainment services to connected cars. Satellite radio services, for example, provide a wide range of channels with music, news, and other content, accessible to drivers and passengers in areas with limited terrestrial radio coverage.

Satellites can contribute to Vehicle-to-Everything (V2X) communication by providing a reliable and wide-reaching communication infrastructure. V2X communication allows connected cars to exchange information with other vehicles, infrastructure (such as traffic signals), and even pedestrians, enhancing safety and traffic efficiency.

The integration of satellite technology enhances the overall connectivity, safety, and functionality of connected cars, contributing to a more advanced and intelligent Automatrix.

Vehicular Ad hoc Networks (VANETs)

VANETs play a significant role in enhancing communication and connectivity among vehicles and with roadside infrastructure. VANETs have no base stations; devices transmit only to other devices in near proximity, such as other cars, emergency vehicles (ambulances, police cars, etc.), and roadside units.

Here are some key characteristics of vehicular networks:

– A dynamic and rapidly changing network topology due to the constant movement of vehicles. Nodes (vehicles) enter and leave the network frequently, leading to a highly active environment.
– Direct communication between vehicles, allowing them to share information such as speed, position, and other relevant data (see the code sketch after this list). V2V communication plays a crucial role in enhancing road safety and traffic efficiency.
– Interactions between vehicles and roadside infrastructure, such as traffic lights, road signs, and sensors, enable vehicles to receive real-time information about traffic conditions and other relevant data.
– In the absence of a fixed infrastructure for communication, vehicles act as both nodes and routers, forming an ad hoc network where communication links are established based on proximity.
– Broadcast mode disseminates information about traffic warnings, road conditions, and emergency alerts to nearby vehicles.
– Low-latency communication supports real-time applications like collision avoidance systems and emergency alerts. Timely information exchange is crucial for the effectiveness of these applications.
– Security and privacy techniques for authentication, confidentiality, and data integrity.
– Connected vehicles support various traffic safety applications, including collision and lane-switching warnings, as well as collaborative cruise control. These applications aim to enhance overall road safety.
– Vehicular communication is influenced by signal fading and attenuation, especially in urban environments with obstacles. These factors need to be overcome for reliable communication.[3]
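As noted in the list above, V2V data sharing can be pictured as a periodic beacon. The sketch below is a minimal, illustrative stand-in: the field names and JSON encoding are assumptions chosen for readability, while real VANET deployments use standardized binary message sets (such as SAE J2735) over dedicated short-range radio links.

```python
# Minimal sketch of a V2V safety broadcast carrying the kind of
# speed/position data described above. Field names and the JSON
# encoding are illustrative, not a real VANET message format.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SafetyBeacon:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    timestamp: float

def encode_beacon(beacon: SafetyBeacon) -> bytes:
    """Serialize a beacon for broadcast to nearby vehicles."""
    return json.dumps(asdict(beacon)).encode("utf-8")

beacon = SafetyBeacon("veh-42", 30.2672, -97.7431, 27.5, 90.0, time.time())
frame = encode_beacon(beacon)
print(frame[:60])  # first bytes of the outgoing broadcast frame
```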

VANETs play a crucial role in the development of Intelligent Transportation Systems (ITS) and contribute to creating safer, more efficient, and connected road networks. Because of the rapid mobility of vehicles, the Automatrix may experience frequent connectivity disruptions, so protocols and mechanisms that cope with intermittent connectivity are important.

One of the reasons I liked the category of the Automatrix was that the attention was on the context, not exclusively the individual vehicles. When it comes to connected cars, the implications of net neutrality are significant and can influence various aspects of their functionality and services.[4]

Connected cars contribute to the broader concept of the Internet of Things (IoT) by creating an interconnected network where vehicles, infrastructure, and users communicate and collaborate to enhance safety, efficiency, and the overall driving experience. These connected vehicles leverage various sensors, embedded systems and internal Ethernet networks, and communication protocols to tether to personal devices via Bluetooth and to access mobile cellular and satellite services.

Notes

[1] Wahid I, Tanvir S, Ahmad M, Ullah F, AlGhamdi AS, Khan M, Alshamrani SS. (23 July 2022) Vehicular Ad Hoc Networks Routing Strategies for Intelligent Transportation System. Electronics 2022, 11(15), 2298; https://www.mdpi.com/2079-9292/11/15/2298
[2] Image from Hakim Badis, Abderrezak Rachedi, in Modeling and Simulation of Computer Networks and Systems, 2015 https://www.sciencedirect.com/topics/computer-science/vehicular-ad-hoc-network
[3] https://www.emqx.com/en/blog/connected-cars-and-automotive-connectivity-all-you-need-to-know
[4] https://edition.cnn.com/2023/09/26/tech/fcc-net-neutrality-internet-providers/index.html

Citation APA (7th Edition)

Pennings, A.J. (2024, Jan 15). Networking Connected Vehicles in the Automatrix. apennings.com https://apennings.com/telecom-policy/networking-in-the-automatrix/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, teaching broadband policy and ICT for sustainable development. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

Net Neutrality and the Use of Virtual Private Networks (VPNs)

Posted on | November 26, 2023 | No Comments

Net neutrality regulations strive to treat VPNs (Virtual Private Networks) neutrally, meaning that Internet Service Providers (ISPs) should not discriminate against or block the use of VPN services. As a regulatory principle, net neutrality advocates for equal treatment of all data on the Internet, regardless of the type of content, application, or service. A VPN is a technology that establishes an encrypted connection over the Internet, allowing users to access a private network remotely. This connection provides anonymity, privacy, and security, but it may also be used for sensitive activities, including bypassing geographical restrictions imposed by licensing agreements, ISPs, or regional authorities.

In this post, I investigate the complexities of VPNs and their implications for both content providers and ISPs. First, I describe how VPNs work. Then I explore how content service providers like video streaming platforms treat VPNs. Next, I do a similar analysis of different strategies used by ISPs when they want to hamper VPN use. Lastly, I return to the VPNs’ relationship to net neutrality.

VPNs are widely used for personal and business purposes to protect sensitive data and enable secure remote access to private networks. In many cases, ISPs and other carriers, as well as OTT (Over-the-Top) content providers, may attempt to block or restrict the use of Virtual Private Networks (VPNs). However, the extent to which VPNs are blocked can vary depending on the region, the specific ISP, and local regulations.

How does a VPN work?

A VPN works by creating a secure and encrypted connection between the user’s device and a VPN server. When a user contacts a VPN, they are authenticated, typically by entering a username and password, often automatically through VPN client software. Some VPNs may also use additional authentication methods, such as multi-factor authentication, for enhanced security. When the connection is authenticated, the communication between the user’s device (computer, smartphone, etc.) and the VPN server is encrypted for security.

The encrypted data moving between user and server is encapsulated with a process known as tunneling. This creates a private and protected pathway for data to travel between the user’s device and the VPN server. Various tunneling protocols, such as OpenVPN, L2TP/IPsec, or IKEv2/IPsec, are used to establish this secure connection. The VPN server then assigns the user’s device a new IP address, replacing the device’s original IP address. This is often a virtual IP address within a range managed by the VPN server.

All Internet traffic to the user’s device is then routed through the VPN server. This means that websites, services, and online resources such as a streaming service, perceive the user’s location as that of the VPN server rather than the user’s actual location. Users can access content that may be geo-restricted or censored in their physical location by connecting to a VPN server in a different geographic location. This allows them to appear as if they are accessing the Internet from the location of the VPN server.
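A toy sketch of this encapsulation idea follows. The XOR "cipher" is only a stand-in for the real encryption (such as AES inside IPsec or OpenVPN), and the server address comes from a reserved documentation range; nothing here should be mistaken for an actual VPN implementation.

```python
# Toy sketch of VPN-style encapsulation: the original packet becomes
# the encrypted payload of a new outer packet addressed to the VPN
# server. The XOR "cipher" stands in for real encryption; never use
# it for actual security.

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encapsulate(inner_packet: bytes, vpn_server_ip: str, key: bytes) -> dict:
    """Wrap an encrypted inner packet in an outer 'tunnel' packet.

    An observer on the path sees only the outer destination (the VPN
    server), not the inner packet's true destination.
    """
    return {
        "outer_dst": vpn_server_ip,              # all traffic appears to go here
        "payload": toy_encrypt(inner_packet, key),
    }

inner = b"GET /video HTTP/1.1\r\nHost: example-stream.com\r\n\r\n"
tunneled = encapsulate(inner, "203.0.113.10", key=b"secret")
print(tunneled["outer_dst"], tunneled["payload"][:16])
```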

Anti-VPN Technologies Used by Content Providers

VPNs become a net neutrality issue when they are targeted by either content providers or ISPs. Some content providers and streaming services may block access from known VPN IP addresses to enforce regional restrictions on their content. Streaming services negotiate licensing agreements with content providers to distribute content only in specific regions. Other concerns include copyright infringement by other content providers and the quality of service of traffic routed through multiple servers. Complicated data packet routes can cause latency or buffering issues, which degrade the streaming experience. Nevertheless, VPNs can circumvent this blocking by masking the user’s real IP address and making it appear as if they are connecting from a different location.

Content services employ various techniques to detect the use of VPNs and proxy servers. They maintain databases of IP addresses associated with VPNs and proxy servers and compare the user’s IP address against these databases to check for matches. If the detected IP address is on the list of known VPN servers, the streaming service may block access or display an error message.
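The core of that database lookup can be sketched in a few lines with Python's standard ipaddress module. The CIDR ranges below are reserved documentation networks standing in for real VPN provider ranges.

```python
# Minimal sketch of the IP-blocklist check described above. The CIDR
# ranges are illustrative placeholders, not real VPN provider ranges.

import ipaddress

KNOWN_VPN_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical VPN data center
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical hosting provider
]

def looks_like_vpn(client_ip: str) -> bool:
    """Return True if the client address falls in a known VPN range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

print(looks_like_vpn("198.51.100.7"))  # True  -> block or challenge
print(looks_like_vpn("192.0.2.44"))    # False -> allow
```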

Content providers such as video streaming services may also analyze user behavior to detect patterns indicative of VPN usage. For example, if a user rapidly connects from different geographical locations, it may raise suspicion and trigger additional checks to determine if a VPN is in use. VPN detection may also involve checking for DNS (Domain Name System) leaks that reveal DNS requests, or for vulnerabilities in WebRTC (Web Real-Time Communication) protocols, which provide real-time guarantees but can expose client credentials. These leaks can reveal the user’s actual IP address, allowing content services to identify VPN usage.

Streaming services may decide to block entire IP ranges associated with data centers or hosting providers commonly used by VPN services. This approach helps prevent access from a broad range of VPN users sharing similar IP addresses. Streaming services regularly use geolocation services to determine the physical location of an IP address. If the detected location does not match the expected geographical area based on the user’s account information, it may trigger suspicion of VPN use.

VPN connections often exhibit different speed characteristics compared to regular links. Streaming services may analyze the connection speed and behavior to identify patterns associated with VPN usage. Lastly, some streaming services may employ captcha challenges or additional verification steps when they detect suspicious activity, such as rapid and frequent connection attempts from different locations. This targeting can inconvenience users but serves to identify and block VPN usage.

How ISPs treat VPNs

Net neutrality principles call for ISPs to treat all data packets on the Internet equally. It can prohibit ISPs from discriminating against specific online services, applications, or providers, including the data packets generated by VPN services. This norm means that ISPs should not block or throttle VPN traffic just because it is VPN traffic. VPN providers, like any other online service, should be able to reach users without facing unfair restrictions.

Nevertheless, ISPs may employ various techniques to block or throttle VPN traffic. These measures are often implemented for network management, compliance with regional regulations, or enforcing content restrictions. Deep Packet Inspection (DPI) is a technology that allows ISPs to inspect the content of data packets passing through their networks. By analyzing the characteristics of the traffic, including protocol headers and content payload, DPI can identify patterns associated with VPN traffic. ISPs may use DPI to detect and block specific VPN protocols or to throttle VPN traffic. Some advanced filtering technologies can detect and block VPN traffic. However, this approach is more common in regions with strict Internet censorship.

ISPs can block or restrict traffic on specific ports commonly associated with VPN protocols. For example, they might block traffic on ports used by OpenVPN (e.g., TCP port 1194 or UDP port 1194) or other well-known VPN protocols. By blocking these ports, ISPs aim to prevent establishing VPN connections. ISPs may also maintain lists of IP addresses associated with known VPN servers and block traffic to and from these addresses. This method targets specific VPN servers or services rather than attempting to identify VPN traffic based on its characteristics.
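A minimal sketch of such port-based filtering appears below. UDP/TCP 1194 is the OpenVPN default mentioned above; ports 500 and 4500, the standard IKE/IPsec ports, are added for illustration. A real router would express this as firewall rules rather than application code.

```python
# Illustrative port-based VPN blocking: drop packets destined for
# ports commonly associated with VPN protocols. Port 1194 is the
# OpenVPN default; 500 and 4500 are the standard IKE/IPsec ports.

BLOCKED_PORTS = {1194, 500, 4500}

def filter_packet(dst_port: int) -> str:
    """Return the action a port-blocking router might take."""
    return "DROP" if dst_port in BLOCKED_PORTS else "FORWARD"

print(filter_packet(1194))  # DROP: default OpenVPN port
print(filter_packet(443))   # FORWARD: looks like ordinary HTTPS,
                            # which is why obfuscated VPNs use it
```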

Some VPN protocols obfuscate or disguise their traffic, making it more challenging for ISPs to detect and block them. This subterfuge includes techniques like adding a layer of encryption or using obfuscated protocols that resemble regular HTTPS traffic. ISPs may also analyze traffic patterns and behaviors to identify characteristics associated with VPN usage. For example, rapid and frequent connection attempts from different locations might trigger suspicion and lead to traffic restrictions. VPNs can circumvent this blocking by masking the user’s actual IP address and making it appear as if they are connecting from a different location.

DNS filtering blocks access to specific domain names associated with VPN services. This method aims to prevent users from resolving the domain names of VPN servers, making it more difficult for them to establish connections. ISPs may implement filtering at the application layer to identify and block VPN traffic based on the behavior and characteristics of specific VPN applications. Instead of outright blocking VPN traffic, some ISPs may employ bandwidth throttling to reduce the speed of VPN connections. This slowing can make VPN usage less practical or effective for users, especially when attempting to stream high-quality video or engage in other bandwidth-intensive activities.
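Throttling, as opposed to blocking, is commonly implemented with a token-bucket rate limiter, sketched below with arbitrary illustrative rates: traffic classified as VPN is not dropped outright, just held to a configured average rate.

```python
# Sketch of bandwidth throttling with a token bucket, one common way
# to slow (rather than block) a class of traffic such as identified
# VPN flows. The rates here are arbitrary illustration values.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Refill tokens for elapsed time, then spend them if possible."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # queue or drop: the flow is held to self.rate

vpn_lane = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=50_000)  # ~1 Mbps
print(vpn_lane.allow(1500))  # True until the burst allowance is spent
```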

The effectiveness of these methods can vary, and users often find workarounds to bypass VPN restrictions. VPN providers may also respond by developing new techniques to evade detection. The cat-and-mouse game between VPN providers and ISPs is ongoing, with each side adapting its strategies to stay ahead. Users who encounter VPN restrictions may explore alternative VPN protocols, use obfuscation features, or consider other means to maintain privacy and access unrestricted Internet content.

Net neutrality aims to prevent anti-competitive practices by ISPs. While some telecom entities block VPNs for legitimate reasons, such as maintaining network integrity or complying with local regulations, their actions can also violate user privacy and restrict the free flow of information. If ISPs were to block or throttle VPN traffic selectively, it could impact competition by favoring certain online services over others. This interference could be particularly concerning if ISPs were to prioritize their own VPN services over those provided by third-party VPN providers. Advocates for net neutrality argue that it is crucial for maintaining a level playing field on the Internet, fostering competition, innovation, and the free flow of information.

However, the specific regulations and enforcement mechanisms related to net neutrality can differ, and debates on this topic continue in various jurisdictions. In some countries, governments or ISPs may implement restrictions on the use of VPNs as part of broader Internet censorship efforts. These restrictions can be aimed at controlling access to certain websites, services, or content deemed inappropriate or against local laws. While net neutrality principles provide a foundation for treating VPNs fairly, the actual implementation and regulatory landscape can vary by country. Some regions have specific regulations that address net neutrality, while others may not. Additionally, the status of net neutrality can change based on regulatory decisions and legislative developments.

Citation APA (7th Edition)

Pennings, A.J. (2023, Nov 26). Net Neutrality and the Use of Virtual Private Networks (VPNs). apennings.com https://apennings.com/telecom-policy/net-neutrality-and-the-use-of-virtual-private-networks-vpns/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, teaching broadband policy and ICT for sustainable development. From 2002 to 2012, he was on the faculty of New York University, where he taught digital economics and information systems management. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in South Korea.

Deep Packet Inspection of Internet Traffic and Net Neutrality

Posted on | November 4, 2023 | No Comments

With a 3-2 shift in the Federal Communications Commission (FCC) leaning towards restoring net neutrality, advocates are again arguing for the equal treatment of all data traffic by Internet service providers (ISPs). Net neutrality principles strive to prevent ISPs such as AT&T, Comcast Xfinity, Korea Telecom, Vodafone, etc., from engaging in practices that could stifle competition, limit consumer choice, or infringe on the free flow of information online. This post describes Deep Packet Inspection (DPI) and how it can give ISPs and nations the capability to discriminate against certain network traffic.

Deep Packet Inspection (DPI) is a network technology used to inspect and analyze the contents of data packets running through the Internet. It is a critical component of many network security, monitoring, and optimization solutions.[1] However, DPI can be used in ways that violate net neutrality principles, such as by degrading or blocking specific types of content, devices, services, or applications. In such cases, DPI is directly at odds with net neutrality or the “Open Internet,” which encompasses a broader range of principles and values related to maintaining a free, accessible, and inclusive Internet environment for all users.

The importance of DPI in relation to net neutrality depends on how it is used and the specific context in which it is applied. It can be both important and controversial in the context of net neutrality. When ISPs employ DPI to discriminate against or favor certain types of traffic, it can undermine the open and neutral character of the Internet. This intrusion can lead to anti-competitive behavior and harm consumers’ access to a diverse and free Internet.

DPI can also be used for legitimate network management and security purposes. For instance, it can help identify and mitigate distributed denial-of-service (DDoS) attacks, detect malware, and manage network congestion. In these cases, DPI serves to protect the integrity and security of the network without violating net neutrality.

Deep Packet Inspection examines the contents of data packets as they pass through a network, which can involve prioritizing or limiting specific types of traffic to optimize network performance. Several technologies are essential for DPI to fulfill its various functions, including network management, security, application optimization, quality of service (QoS), and traffic shaping. Advanced DPI systems may incorporate machine learning and artificial intelligence (AI) algorithms to improve accuracy in identifying new or unknown applications and to detect evolving threats by analyzing network behavior over time.

DPI begins with the acquisition of data packets from network traffic. This can be achieved using packet capture technologies, such as network taps, port mirroring, or packet sniffers. These tools intercept and copy data packets for analysis. Once captured, the data packets are parsed to extract relevant information. This process involves breaking down the packets into their constituent parts, such as headers and payloads. DPI may perform content analysis to extract valuable information from packets, such as identifying files, images, video, or text within network traffic. Once packets are captured, they must be processed efficiently. High-performance technologies, such as multi-core CPUs or specialized hardware accelerators, are essential for quickly analyzing and processing packets.
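To ground the parsing step, the sketch below unpacks the fixed 20-byte IPv4 header using only Python's standard library. The sample packet is hand-built with illustrative values; a real DPI engine would continue into the TCP or UDP header and the payload.

```python
# Minimal sketch of the parsing step: unpacking the fixed IPv4 header
# from a captured packet. A real DPI engine would continue into the
# transport headers and payload.

import socket
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Extract a few fields every DPI engine needs from raw bytes."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s",
                                                     packet[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,
        "protocol": proto,                 # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
        "total_len": total_len,
    }

# A hand-built 20-byte header (illustrative values only):
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"),
                     socket.inet_aton("198.51.100.2"))
print(parse_ipv4_header(sample))
```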

DPI systems may classify network flows based on various criteria, such as source/destination IP addresses, ports, or traffic characteristics. Flow classification is essential for monitoring and controlling different types of traffic effectively; it is useful for security, compliance, and traffic optimization, and it can also be used to block or throttle (slow down) specific websites or services.

DPI systems also need to understand various network protocols, such as HTTP, SMTP, FTP, or proprietary protocols used by specific applications. Protocol decoding engines are necessary to extract and interpret protocol-specific information. They can decode and analyze the data exchanged within these protocols, making it possible to identify the applications and services being used.

DPI relies on pattern matching algorithms to identify specific content within packets. Regular expressions, string matching, or more advanced techniques like Aho-Corasick algorithms are used to detect patterns associated with threats, protocols, or applications. Sophisticated DPI algorithms are used to analyze packet payloads, extract data, and identify application behavior, even if it uses non-standard ports or encryption.[2]
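A toy version of multi-pattern matching is sketched below. Production engines compile thousands of signatures into automata such as Aho-Corasick and scan each payload in a single pass; this linear scan over three simplified protocol signatures only illustrates the classification idea.

```python
# Toy signature matcher for the pattern-matching step described above.
# Real DPI engines use multi-pattern automata (e.g., Aho-Corasick);
# this linear scan over a handful of simplified signatures just shows
# how payload bytes map to an application label.

SIGNATURES = {
    b"\x16\x03\x01":          "TLS handshake record",
    b"SSH-2.0":               "SSH session banner",
    b"BitTorrent protocol":   "BitTorrent handshake",
}

def classify_payload(payload: bytes) -> str:
    for pattern, label in SIGNATURES.items():
        if pattern in payload:
            return label
    return "unclassified"

print(classify_payload(b"SSH-2.0-OpenSSH_9.6\r\n"))  # SSH session banner
```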

DPI often employs signature-based analysis, where patterns in packet contents are matched against a database of known patterns associated with specific applications or threats. This allows for the identification of applications, services, or security risks. DPI can also employ behavioral analysis techniques to identify anomalies or suspicious activities within network traffic. For example, it can detect unusual patterns in data transfer or deviations from expected behavior. DPI systems rely on extensive signature databases that contain patterns, behaviors, or attributes associated with specific applications, malware, or network threats, and these databases must be updated regularly, with efficient mechanisms for signature distribution and database management, to stay current with new applications, protocols, and emerging threats.

DPI technology also raises significant considerations related to user privacy and network neutrality. The use of DPI for deep inspection of user traffic often involves monitoring the content of communications without user consent or proper safeguards. DPI systems must incorporate strong security and privacy measures to protect the data they handle and to ensure compliance with legal and regulatory requirements.

Since DPI involves the inspection of data content, it must be performed securely. Data encryption and privacy measures are crucial to protect the confidentiality of network traffic and user data. DPI systems generate logs and reports for monitoring, compliance, and troubleshooting purposes. Robust reporting and logging mechanisms are essential. Ensuring that DPI respects user privacy rights is crucial in any context.

Encrypted traffic poses a challenge for DPI. Some systems incorporate SSL/TLS decryption capabilities to inspect encrypted data, although this must be done with care to protect user privacy and maintain compliance with data protection regulations.

The use of DPI for legitimate security and network management purposes should be balanced with privacy concerns and adhere to relevant laws and regulations. DPI technology may need to integrate with other network security and monitoring solutions, such as firewalls, intrusion detection systems (IDS), and intrusion prevention systems (IPS).

Net neutrality regulations often require ISPs to be transparent about their traffic management practices, and DPI can be a tool to monitor and enforce these rules. In this context, DPI can play a positive role in upholding net neutrality by ensuring that ISPs are following the established regulations.

In summary, the importance of DPI for net neutrality largely depends on how it is applied and the specific goals it serves. When used in ways that violate net neutrality principles, such as blocking, degrading, or throttling certain content or devices, DPI is detrimental to the open Internet. However, when it is employed for network management, security, and ensuring ISP compliance with net neutrality regulations, it can be an important tool for maintaining a free, fast, and open Internet while still safeguarding the network’s integrity and security. Balancing these interests and ensuring proper oversight and transparency is essential in the discussion of DPI and net neutrality.

Citation APA (7th Edition)

Pennings, A.J. (2023, Nov 4). Deep Packet Inspection of Internet Traffic and Net Neutrality. apennings.com https://apennings.com/technologies-of-meaning/deep-packet-inspection-of-internet-traffic-and-net-neutrality/

Notes

[1] See Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-5-trump-title-i-and-the-end-of-net-neutrality/

[2] Çelebi, M., & Yavanoglu, U. (2023). Accelerating Pattern Matching Using a Novel Multi-Pattern-Matching Algorithm on GPU. Applied Sciences, 13(14), 8104. https://doi.org/10.3390/app13148104


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, and a Research Professor at Stony Brook University. He teaches broadband policy and ICT for sustainable development. Previously, he taught digital economics and information systems management at New York University’s Department of Management and Technology. He also taught in the Digital Media Management MBA at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.

US Internet Policy, Part 7: Net Neutrality Discussion Returns with New FCC Democratic Majority

Posted on | October 9, 2023 | No Comments

The election of Joe Biden as US president in 2020 significantly impacted Internet policy discussions. After the Georgia senatorial runoff that shifted the balance of power to the Democrats, preparation at the Federal Communications Commission (FCC) began to target many issues that were dismissed or ignored during the Trump administration.

But plans stalled as Gigi Sohn, President Biden’s nominee to the FCC, was subjected to an intense lobbying effort from the telecom industry to block her seat at the commission. The former FCC staffer, longtime consumer broadband advocate, and first openly LGBTIQ+ nominee for commissioner eventually withdrew from consideration for the post in March 2023. Democrats finally regained majority control of the FCC when a new nominee, Anna Gomez, was confirmed by the US Senate on September 7, 2023.[1]

Pending Internet Policy Issues

– More, better, and cheaper broadband access and connectivity through mobile, satellite, and wireline facilities, especially in rural areas.
– Antitrust concerns about cable and telco ISPs, including net neutrality.
– Privacy and the collection of behavioral data by platforms to predict, guide, and manipulate online user actions.
– Section 230 reform for Internet platforms and content producers, including assessing social media companies’ legal responsibilities for user-generated content.
– Security issues, including ransomware and other threats to infrastructure, as well as Border Gateway Protocol (BGP) security between countries.
– Deep fakes, memes, and other issues of misrepresentation, including fake news.
– eGovernment and digital money, particularly the role of blockchain, CBDCs, and cryptocurrencies.
– Formation of Web 3.0, where services are monetized but ownership is democratized with new trust-based protocols using blockchain technologies, the core technology of crypto and NFTs.

Addressing Net Neutrality

FCC Chairwoman Jessica Rosenworcel has scheduled a vote for October 19 on how to proceed with new rulemaking and address some issues that have come to the forefront of public scrutiny. With two other Biden appointments, the FCC is poised to act on the party’s priorities, including restoring net neutrality regulations. Such rules barred broadband providers from interfering with web traffic but were gutted by Republican commissioners during the administration of President Donald Trump.

Net neutrality is the legal principle that Internet Service Providers (ISPs) should treat all data and online content equally. It derives from commercial law that strives to treat all customers equally. For example, a hotel should not be able to restrict certain people from lodging at its facilities. It was applied in railroad law to ensure towns along a train route would not be excluded from sending their goods, such as cattle or wheat, to market. The common carrier precedent was applied to telegraph and later to telephone regulation. The principle has been bandied back and forth in the FCC for many years, reflecting different philosophies and sympathies for lobbying arguments.

My previous posts reviewed the issues dealing with wired broadband net neutrality, grounded in FCC rulemaking under the Communications Act of 1934, which emphasized common carriage, the commercial obligation to serve all customers equally and fairly. Historically, these legislated guidelines allowed the US telecommunications system to dramatically expand voice communications from the 1930s through the 1970s.[2]

The FCC later decided that data communications and computer processing service providers operating on top of the telco infrastructure would be better served as lightly regulated Title I “enhanced” companies. This designation allowed the Internet to take off in the 1990s and fostered the growth of thousands of Internet Service Providers (ISPs). For example, it allowed dial-up phone users to connect to ISPs to connect to the Internet for long durations without paying extra toll charges. This dynamic would change as competition heated up to provide “broadband” for the Internet and interactive television.

Consolidation Under Deregulated “Information Services”

Under GOP-leaning Michael Powell’s FCC chairmanship, the ISP market structure consolidated dramatically with deregulation for both cable TV companies and plain old telephone service (POTS) companies, allowing them to enter new markets. Cable television companies had developed broadband capabilities in the late 1990s with cable modems and coaxial cables to connect to the Internet. Likewise, the Regional Bell Operating Companies (RBOCs) that had carved up America’s telecommunications after the breakup of AT&T in the 1980s developed Asymmetric Digital Subscriber Line (ADSL or DSL) broadband technologies to provide high-speed services to households over copper lines. Later versions of this service used faster fiber optic lines to reach a local node or curb and then copper lines into the premises. These companies had envisioned developing joint “information highways” going back to the Bell Atlantic/Tele-Communications, Inc. (TCI) deal that was announced in October 1993. That deal died in 1997 but was finally consummated by AT&T on March 9, 1999, in an all-stock deal worth about $48 billion.

AT&T wanted those cable lines from TCI to expand its local phone service, which it was already doing in another agreement with Time Warner. The merger would allow it to extend its markets and combine infrastructure for cost savings and efficiencies. This combination could provide a significant competitive advantage against other telephone providers and new entrants like satellite or wireless providers. It would also allow the company to offer a broader range of services, including bundled packages. But AT&T and the RBOCs were limited by the FCC’s ruling on the Telecommunications Act of 1996 that distinguished between Title II common carrier services and Title I deregulated information services. FCC decisions in 2005 facilitated significant changes in the market structure of the Internet.

In 2005, both cable and phone companies suddenly became deregulated ISPs. This change allowed significant consolidation as telephone and cable companies, competing to provide “triple play” (TV, broadband, and voice) services to households, frantically merged with other telecommunications companies to dominate “broadband.” AT&T and Verizon, traditional telephone companies, merged with cable (and mobile) companies to create telecom behemoths. The roadkill included thousands of smaller ISPs that eventually were no longer able to compete or even interconnect with the larger companies.

Two things led to sweeping deregulation. First, a U.S. Supreme Court decision (National Cable & Telecommunications Association v. Brand X Internet Services) upheld the FCC’s 2002 ruling that providing cable modem service (i.e., cable television broadband Internet) is an interstate information service. This decision, in June of 2005, confirmed that cable companies were subject to the less stringent Title I of the Communications Act of 1934. Second, two months later, Powell’s FCC allowed the former Bell telephone companies to become Title I “information services” during George W. Bush’s administration, and the RBOCs, with their ADSL “information highway” infrastructure, suddenly became deregulated ISPs as well.

Although there are currently 2940 Internet service providers in the United States, the top 8 companies have over 90 percent of the subscribers. These are the top 8 Internet providers in the U.S. as of June 2023:

– AT&T 22%
– Spectrum 20%
– Xfinity 19%
– Verizon 6%
– Cox 5%
– T-Mobile 5%
– Century Link 2%
– Frontier 2%

The Internet and its World Wide Web were designed to allow devices like PCs, laptops, and mobile phones to talk to each other without much interference from the intermediate network that moves their data. Net neutrality strives to ensure that all online content, services, and applications running through that network are treated equally, regardless of their source. This equality promotes free access to information and prevents ISPs from blocking or throttling (slowing down) specific websites or services. Net neutrality allows users to choose which websites and services they access, without interference from ISPs. Users can explore a diverse range of content and make their own decisions about what to consume. It also ensures that nonprofit organizations, activists, and community groups have equal access to the Internet, allowing them to advocate for social and political causes without discrimination. The danger is that ISPs could examine and manipulate users’ Internet traffic, compromising their privacy and secure communication.

However, the current reality is that net neutrality is not being enforced. It was defeated in the 2017 FCC decision by another vote of 3-2. Pai’s FCC was concerned that net neutrality regulations would discourage ISPs from investing in network infrastructure and improving Internet speeds, since they could not charge content providers for prioritized access. Many net neutrality critics argued that without paid prioritization, the quality of some services would suffer, particularly during peak times when networks become congested.

Big ISPs argued that without the ability to create tiered service plans or charge content providers for faster access, they would struggle to manage network traffic and recoup the costs of infrastructure investments. They suggested that net neutrality rules limit ISPs’ ability to manage and optimize network traffic efficiently, potentially affecting all users’ service quality. The general argument was that government regulation of the Internet stifles innovation and imposes unnecessary bureaucratic burdens on ISPs that hinder user performance.

It’s important to note that net neutrality is a complex policy principle, and its impact on underserved and economically disadvantaged communities depends on effective enforcement and regulatory oversight. Additionally, while net neutrality works to ensure equitable access to the Internet, broader efforts, such as affordable broadband access programs and digital literacy initiatives, are critical to addressing the digital divide and promoting digital inclusion for all, including those with lower incomes.

Notes

[1] The Federal Communications Commission (FCC) is meant to be an independent agency of the United States government responsible for regulating communications by wire and radio in the United States. It is designed to operate independently of partisan politics. The FCC comprises five commissioners appointed by the President of the United States and confirmed by the Senate. No more than three commissioners can be members of the same political party by law. The political affiliation of FCC commissioners can vary depending on the presidential administration in power during their appointments. As a result, the FCC’s policies and priorities may shift with changes in leadership and the political makeup of the commission. Therefore, the FCC’s stance on various issues, including telecommunications, broadband regulation, net neutrality, and media ownership, can change over time based on the views and priorities of the commissioners appointed by the current administration. It is important to recognize that a combination of legal mandates, policy considerations, public input, and the political environment at the time influences the FCC’s actions and decisions.

Citation APA (7th Edition)

Pennings, A.J. (2023, Oct 9). US Internet Policy, Part 7: Net Neutrality Discussion Returns with New FCC Democratic Majority. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-7-net-neutrality-discussion-returns-with-new-fcc-democratic-majority/

[2] List of Previous Posts in this Series

Pennings, A.J. (2022, Jun 22). US Internet Policy, Part 6: Broadband Infrastructure and the Digital Divide. apennings.com https://apennings.com/telecom-policy/u-s-internet-policy-part-6-broadband-infrastructure-and-the-digital-divide/

Pennings, A.J. (2021, May 16). US Internet Policy, Part 5: Trump, Title I, and the End of Net Neutrality. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-5-trump-title-i-and-the-end-of-net-neutrality/

Pennings, A.J. (2021, Mar 26). Internet Policy, Part 4: Obama and the Return of Net Neutrality, Temporarily. apennings.com https://apennings.com/telecom-policy/internet-policy-part-4-obama-and-the-return-of-net-neutrality/

Pennings, A.J. (2021, Feb 5). US Internet Policy, Part 3: The FCC and Consolidation of Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-3-the-fcc-and-consolidation-of-broadband/

Pennings, A.J. (2020, Mar 24). US Internet Policy, Part 2: The Shift to Broadband. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-2-the-shift-to-broadband/

Pennings, A.J. (2020, Mar 15). US Internet Policy, Part 1: The Rise of ISPs. apennings.com https://apennings.com/telecom-policy/us-internet-policy-part-1-the-rise-of-isps/

Related Posts

Pennings, A.J. (2023, May 6). Deregulating US Data Communications. apennings.com https://apennings.com/how-it-came-to-rule-the-world/deregulating-telecommunications/

Pennings, A.J. (2021, Sep 22). Engineering the Politics of TCP/IP and the Enabling Framework of the Internet. apennings.com https://apennings.com/telecom-policy/engineering-tcp-ip-politics-and-the-enabling-framework-of-the-internet/

Pennings, A.J. (2019, Nov 26). The CDA’s Section 230: How Facebook and other ISPs became Exempt from Third Party Liabilities. apennings.com https://apennings.com/telecom-policy/the-cdas-section-230-how-facebook-and-other-isps-became-exempt-from-third-party-content-liabilities/

Pennings, A.J. (2018, Oct 17). Potential Bill on Net Neutrality and Deep Pocket Inspection. apennings.com https://apennings.com/telecom-policy/potential-bill-on-net-neutrality-and-deep-pocket-inspection/

Pennings, A.J. (2016, Nov 15). Broadband Policy and the Fall of the ISPs. apennings.com https://apennings.com/global-e-commerce/broadband-and-the-fall-of-the-us-internet-service-providers/

Pennings, A.J. (2011, Jan 31). Comcast and General Electric Complete NBC Universal Deal. apennings.com https://apennings.com/media-strategies/comcast-and-general-electric-complete-nbc-universal-deal/


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. From 2002 to 2012, he was on the faculty of New York University, starting programs in Digital Communications and Information Systems Management while teaching digital economics and policy. He also helped set up the Digital Media Management program at St. Edwards University in Austin, Texas, where he lives when not in the Republic of Korea.
