Lotus Spreadsheets – The Killer App of the Reagan Revolution – Part 1
Posted on | September 23, 2014 | No Comments
The major feature of the “Reagan Revolution,” according to Peter Gowan’s Global Gamble, was to “put money-capital in the policy saddle for the first time in decades.”[1] From the time of his presidential inauguration in early 1981 and throughout his eight-year tenure, Reagan’s administration sought to propel the financial sector through widespread policy changes designed to “roll back” the containment of finance instituted during FDR’s reign and the early Cold War. Part of what made this political and economic movement consequential was the development of the electronic spreadsheet and its use on the newly invented personal computers such as the Apple II and IBM PC.
In this post I want to set the context for the success of the spreadsheet, particularly Lotus 1-2-3. In a future post, I will explore the formal aspects of the spreadsheet as a meaning-making application and why they are so effective in social and economic realms. I’m not taking a technological determinist position here, but rather arguing that spreadsheets and related financial technology facilitated the impact of what is sometimes called the “Reagan Revolution.” The major policy characteristics of this economic transformation included:
- the deregulation of banking and financial industries;
- relaxing the laws on anti-trust and corporate acquisitions;
- major tax cuts to privatize surplus wealth;
- securitization of student debt and other financial instruments;
- removing the caps on interest rates that banks could charge on credit cards and other loans;
- increasing US government debt to feed the bond industry and provide an additional hedge for financial risk-taking;
- selling off government agencies and assets;
- strengthening the dollar to help export production capital to low-cost countries;
- pressuring countries around the world to enhance the flow of US-produced news; and
- liberalizing global capital controls.[2]
Also important were the increased military spending and the commercialization of Cold War technology that facilitated the globalization of capital with fiber optics, microprocessors, and packet-switched data communications. These technologies were the primary drivers of financial innovation and economic activity in the 1980s, and their productive legacy continues to shape the global economy.
The ingeniously innovative “microcomputer” spreadsheet, VisiCalc, was created when Dan Bricklin teamed with his friend Bob Frankston in 1977 on a homework assignment for his Harvard MBA degree. Not surprisingly, it was an assignment to do the calculations for one corporation to take over another that sparked Bricklin’s computing solution. Faced with doing the monotonous calculations on standard green ledger sheets, he fantasized about creating a calculating tool that combined the usability of a fighter plane’s “heads-up display” with the re-editing capability of a word processor. The two of them went to work with an Apple II, and the result was a new “visible calculator” technology that rocked the financial world.[3]
The use of the spreadsheet exploded after IBM introduced its own “Personal Computer” in August of 1981. Soon after, Lotus 1-2-3 became available for the “PC” and all the “IBM-compatible” clones such as Compaq and Dell. Lotus 1-2-3 was named for its spreadsheet, graphing, and database capabilities, which combined to produce an extraordinary new facility for both conceptually and textually organizing financial information. Although the earliest PCs were weaker than their bigger contemporaries – mainframes and even the relatively large minicomputers – they had several advantages that increased their usefulness.
The main advantage of the PC-based spreadsheet was its immediacy – it put computing power in the hands of a single user and bypassed the traditional authority structures of the data processing centers organized around mainframes and minicomputers. The microcomputer was characterized in part by its accessibility: it was small, relatively cheap, and available through a number of retail outlets. It used a keyboard for human input, a cathode-ray tube monitor to view data, and the newly invented floppy disk for storage. Together they allowed users to input their own numbers and play with different combinations. The main benefit was a new flexibility in the speed and amount of information immediately available. Unlike using a spreadsheet on a mainframe, which required trips to the EDP department for each data input change, the PC-based spreadsheet allowed new data to be entered easily via the keyboard and provided immediate results on the screen.
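To make that immediacy concrete, here is a minimal Python sketch of the “change one number, watch the whole model update” loop that separated the PC spreadsheet from the batch-processed mainframe run. This is an illustration only, not Lotus 1-2-3 code, and the cell names and figures are my own assumptions:

```python
# A toy spreadsheet: input cells hold numbers, formula cells recompute
# from them whenever anything changes. Values are illustrative only.

cells = {
    "revenue": 1_000_000,   # input cell
    "cost_ratio": 0.62,     # input cell
    "interest": 80_000,     # input cell
}

formulas = {
    "gross_profit": lambda c: c["revenue"] * (1 - c["cost_ratio"]),
    "net_income":   lambda c: c["gross_profit"] - c["interest"],
}

def recalculate(cells, formulas):
    """Re-evaluate every formula cell after an input changes."""
    for name, formula in formulas.items():
        cells[name] = formula(cells)
    return cells

# Baseline scenario
recalculate(cells, formulas)
print("base net income:", cells["net_income"])

# "What-if": the analyst retypes one assumption and the model updates.
cells["cost_ratio"] = 0.55
recalculate(cells, formulas)
print("what-if net income:", cells["net_income"])
```

The point of the sketch is the last few lines: the analyst retypes a single assumption and immediately sees a new bottom line, with no trip to the EDP department.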
One implication was that frustrated accountants and financial analysts would go out and buy their own computers and software packages, often over the objections or indifference of the EDP department. People could do the calculations themselves and ignore the bureaucracy.[4] Lotus 1-2-3, with its combination of graphics, spreadsheets, and data management, caught the eye of many business entrepreneurs and corporate executives who saw the value of a computer program that simplified the monumental amount of numerical calculation and manipulation needed by the modern corporation. By October 1985, CFO magazine was reporting that “droves of middle managers and most financial executives are crunching numbers with spreadsheet programs such as Lotus 1-2-3.”[5]
Microcomputer-based spreadsheets became ubiquitous in the business world and a major productivity tool. In an era of incredible economic and financial flux, the electronic spreadsheet became the “killer app” that guaranteed the success of the PC industry and provided an extraordinary new utility for individuals in the financial sphere. They were empowered to create dramatic new numerical calculations and construct new financial “what-if” scenarios in unprecedentedly short timeframes. As the Reagan Revolution took hold, the spreadsheet was there to itemize and measure value, mobilize dormant resources, and place them into the transactional circuits of the global economy.
For example, spreadsheets were used around the world in a process called “privatization,” in which national assets were minutely valued to produce collateral for international loans or, in the case of state-owned enterprises (SOEs), turned into shares that could then be listed on national or international stock markets and sold. Margaret Thatcher started this process with the sale of British Telecom, and soon after New Zealand became the international model when it “corporatized” and sold off its telecommunications agency to retire one-third of the accumulated national debt.
Within a liberalized regulatory infrastructure and an interlinked chain of financial institutions, financial traders eager to become “masters of the universe” quickly adopted the new technology. “Spreadsheet knowledge” began to have an extraordinary ability to capture and fix value in monetary terms. Spreadsheets are not so much a reflective technology as they are a constitutive and productive technology. They do not reveal the world as much as they create new financial meaning by establishing and solidifying relationships between previously disparate resources. They were increasingly used by accountants and other financial magicians to construct value in such a way that it could be entered into the flows and accumulation processes of corporations and other organizations enmeshed in the monetary circuits of the global economy.
The PC-based spreadsheet created a new visualization process that combined financial calculation with interactive manipulation in such a way as to help create a new financial-based economic dynamism. It is this combination of financial deregulation and technological innovation that created the trajectory of digital money-capital and enshrined the legacy of the Reagan Revolution. That inheritance lives on in the disparities of debt and wealth so prevalent in today’s dystopic global economy.
In Part II, I will discuss some of the historical precedents that led up to the 1980s as a period of intensifying financialization that welcomed the use of the PC-based spreadsheets. The corporate environment was particularly vulnerable to a variety of financial raids enhanced by spreadsheet technologies like Lotus 1-2-3.
In later posts, I will examine the more formal aspects of how the spreadsheet works, using a combination of cultural and media analysis to explore its internal machinations and external implications. An important question in this inquiry concerns “spreadsheet capitalism”: the role of these calculative devices in organizing and evaluating the financial information that is central to modern organizations and the global political economy.
Citation APA (7th Edition)
Pennings, A.J. (2014, Sep 23). Lotus Spreadsheets – The Killer App of the Reagan Revolution – Part 1. apennings.com. https://apennings.com/how-it-came-to-rule-the-world/spreadsheets-the-killer-app-of-the-reagan-revolution-part-1/
Notes
[1] Peter Gowan’s (1999) Global Gamble: Washington’s Faustian Bid for World Dominance.
[2] US debt tripled under Reagan to over $2 trillion. Notable liberalization of global money flows occurred when Reagan addressed eurodollars by allowing onshore facilities. This list is compiled from my work on How IT Came to Rule the World which examined the Reagan legacy in such entries as From Sputnik Moment to Reagan Revolution and How Star Wars and Japan’s Artificial Intelligence Threat Led to the Internet.
[3] Bricklin quote from (2002) Computing Encyclopedia. Volume 5: People. Smart Computing Reference Series. p. 30.
[4] Steven Levy’s “A Spreadsheet Way of Knowledge” was an early influence – so much so that I asked one of my NYU students to create the linked website. It was originally published as Chapter 10 in Tom Forester (ed.), Computers in the Human Context: Information Technology, Productivity and People. Basil Blackwell, Oxford, UK.
[5] Quote from CFO on the impact of Lotus 1-2-3 in the corporate world from David M. Katz, “The Taking of Lotus 1-2-3? Blame Microsoft.” CFO.com. December 31, 2002.
© ALL RIGHTS RESERVED

Tags: Global Gamble > Lotus 1-2-3 > Reagan Revolution > spreadsheet > spreadsheet capitalism > VisiCalc
Management and the Abstraction of Workplace Knowledge into Big Data
Posted on | August 30, 2014 | No Comments
The factory of the future will have only two workers: a man and a dog. The human being’s job is to feed the dog, whose function is to keep the man away from the machine. – Warren Gamaliel Bennis
Understanding information technologies and the emergence of “big data” in the workplace requires some scrutiny of work processes, the relationship between labor and human bodies, and the historic role of management. In particular, how have a worker’s laboring activities been transformed into knowledge that could be collected, analyzed, and used by managers? What are the implications of this abstraction of labor and its transformation into abstract data and technology-assisted management?
This post looks at how industrial intelligence has been removed from the site of the working body and relocated to the electronic space of cybernetic analysis, control, and communication. It also discusses how this process has been transferred to other aspects of economic and social life and has become part of a new phenomenon called “big data.”
Shoshana Zuboff’s In the Age of the Smart Machine: The Future of Work and Power (1988) was one of the more interesting inquiries into the processes of computerization and electronic communications to emerge out of the 1980s. It was a significant contribution to the organizational and sociological discussion on the way information technologies were being used in manufacturing and service sectors. One of her main contributions, the verb “informating,” provided important insights into the key practice of the new technologies and the construction of digital data in the cybernetic age.
Analyzing pre-Internet computerized environments, she identified informating as the process of digitally registering a wide range of information related to computer tasks. She both connected and compared informating to the processes of automating. Computers have historically been involved in automating – the process of replacing human activities and work with machinery. Zuboff distinguished automating from informating because the latter “produces a voice that symbolically renders events, objects, and processes so that they become visible, knowable, and shareable in a new way.”[1] Consequently, informating is an effective concept for approaching that vast data gathering and analysis project that is currently consolidating a wide range of structured and unstructured data from throughout the cybersphere.
The data collection processes involved in computerization are significant. They lead to an accumulation of information that is intimately related to the individual and yet essential for the continuance of modern private and public bureaucracies. As they monitor the various activities of everyday life as well as industrial production, they also keep a record that can be accessed or fed into larger databases across the Internet. For example, in a supermarket, your groceries’ barcodes are read and fed into a computer. Not only does it tabulate the price, but it enters the information into database files for inventory, finance, and marketing that can later be analyzed, examined, graded, and shared. In conjunction with loyalty programs and recommendation engines, big data is used by supermarkets to identify customer attributes like parenting or gluten-free preferences and to tailor digital coupons and other promotions through email and social media.
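As a rough illustration of how a single scan gets “informated” for several different audiences, here is a minimal Python sketch. The field names and record layouts are my own assumptions, not any retailer’s actual schema:

```python
# One barcode scan is recorded several times over, for several audiences:
# operations, finance, and marketing each get their own version of the event.

from datetime import datetime

inventory_log, sales_ledger, marketing_profile = [], [], {}

def scan_item(barcode, price, loyalty_id=None):
    """Register one checkout scan in operational, financial, and marketing records."""
    event = {"barcode": barcode, "price": price, "time": datetime.now()}

    # Operations: decrement stock so the item can be reordered.
    inventory_log.append({"change": -1, **event})

    # Finance: the same scan becomes a revenue entry.
    sales_ledger.append({"amount": price, **event})

    # Marketing: with a loyalty card, the purchase also becomes a data point
    # about the shopper, ready for later analysis and targeted promotions.
    if loyalty_id:
        marketing_profile.setdefault(loyalty_id, []).append(barcode)

scan_item("012345678905", 4.99, loyalty_id="member-221")
print(len(inventory_log), len(sales_ledger), marketing_profile)
```

One physical gesture at the register becomes three separate records, each available for later analysis by a different part of the organization.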
Zuboff’s concern with the codification of workplace intricacies into machine-compatible texts opened up a range of inquiry that is applicable to other facets of modern life. Drawing on what she terms the dual capacity of information technology – its ability to both automate and informate productive activities – she was able to analyze how technology changes the practices of work, managerial authority, and the supervision of employees. The “Internet of Things” (IoT), a connective network of electronic devices or “things” embedded with microelectronics, algorithmic software, and multi-faceted sensors, collects and exchanges data from dispersed objects in cloud-based data facilities for analysis. For example, “solar roads” that collect sunlight for electricity can be equipped with sensors that monitor highway conditions and alert oncoming cars as well as transportation authorities. If there is debris or snow on the road, sensors in the smart pavement will detect it and relay the data.
The simultaneous growth of industry and bureaucracy at the beginning of the twentieth century created new demands for skills, machinery and control mechanisms that could be implemented in the workplace. Industrialism was maturing as was the consumer society, and manufacturing was gearing up for mass production. Work and workers became objects of intense study so that their skills and knowledge could be abstracted and translated into new work procedures and technologies. This process also created a growing class of managers whose job it was to study, refine and supervise these processes.
Frederick Taylor emerged as the leading figure in the trend toward observing, describing, and then systematizing workers’ skills so that they could be “re-engineered,” to use a modern buzzword. Taylor’s studies of minute worker activity led to “time studies” designed to refine muscular movement in manufacturing and other work activities. They were also meant to “provide the quantitative empirical basis for a more rationalized control of industrial production.”[4] In Zuboff’s terms: “Taylorism meant that the body as the source of skill was to be the object of inquiry in order that the body as a source of effort could become the object of more control.”
She elaborates on the use of the information:
Once explicated, the worker’s know-how was expropriated to the ranks of management, where it became management’s prerogative to reorganize that knowledge according to its own interests, needs, and motives. The growth of the management hierarchy depended in part upon this transfer of knowledge from the private sentience of the worker’s active body to the lists, flowcharts, and other systems of measurement in the planner’s office.[2]
Taylor’s work was published as The Principles of Scientific Management (1911). His ideas were a major inspiration for the efficiency movement that sought to identify and eliminate waste in all social and economic areas of what would be called the Progressive Era in the US.
Taylor’s “scientific management” ideas were never implemented by any one company without some modification. However, Henry Ford was able to simplify the process with his moving assembly line for automobile production. He implemented a series of conveyor belts, overhead rails, and planned sequences that would keep production in constant flow. Modeled on the Midwest’s great meat-packing “disassembly” lines, Ford aspired to the ease with which oil and other liquids and gases could be moved and processed.[4]
By further reducing the need for physical effort and skill, Ford was able to develop economies of scale and create the gigantic new automobile industry that could grow to include new unskilled immigrants and rural laborers. One of the costs involved, however, was the loss of skilled labor. Workers’ capabilities became “congealed” in the machinery, in the sense that their energies and skills were designed into the machinery of production, including robots. Also, one working body could be replaced easily by another. Often the benefit was an easier job for the worker in terms of physical toil, but it came at the price of autonomy and negotiating power.
Managers facilitated the movement of bodily effort and skill into machines and industrial techniques and then expanded into the intellectual areas of the owner/executive. Workers and managers operate with different types of literacies. Workers have generally been body-oriented, utilizing the action-centered skills developed in physical labor. They develop implicit knowledge gained through performance and learned by observation and imitation. Zuboff used the term “acting-on” for the activities in which laborers use their bodies to work on materials and tools. Whether stirring paper pulp, operating a forklift, or typing on a computer keyboard, their major concern is working with things rather than people.[5]
Conversely, white-collar workers use their bodies in significantly different ways. Although differences occur between top managers and middle managers, she uses the term “acting-with” to distinguish managers’ main responsibilities from the “acting-on” activities that monopolize workers’ time. Top managers are also very much engaged in bodily activities, but primarily those that call on their abilities to interact with other people. Bodily presence, manifested primarily through the voice but also through dress and non-verbal behaviors, is key to their success. A primary responsibility of top managers is the face-to-face verbal interchange – culling gossip, opinion, hearsay, and physical cues while communicating in a way that heightens their personal charisma and sociability. Zuboff returns to the word “sentience” to describe the way top managers develop a “feel” for people and situations.
Zuboff’s study of working environments was conducted in the era of traditional databases that collected, sorted, and retrieved data according to prescripted formats and stored it on a mainframe’s magnetic tape. With the introduction of the Internet, cheap servers, data centers, and software solutions like Hadoop, a new system became possible. It became feasible to collect unstructured data from mobile devices, PCs, and the whole Internet of “things” in the workplace. Information from data sources such as environmental sensors, production schedules, and timesheets increasingly became fodder for analysis and innovative value creation.
She also drew on the politics of Michel Foucault, who focused, in part, on the “panopticon” and the procedures of examination and file-building that were crucial for the exercise of modern power. The examination works to hold its subjects of attention “in a mechanism of objectification.”[6] Examination turns the economies of surveillance and visibility into an operation of control. It proceeds by textualization – the informating of data according to a set of prescribed protocols and knowledges. The file has an agenda; it is not just a loose collection of random documents. Under this official gaze, individuals become blank slates to be evaluated, classified, and registered in the official system of files. Max Weber had already identified the file as crucial for the organization of bureaucracy. The examination that places individuals in a field of surveillance also situates them in a network of big data collection; it engages them in a whole mass of documents that capture and fix them.[7]
These “cybernetic identities” are characteristic of the information age where the proliferation of multimediated information is changing the way people operate in the arenas of their lives. Furthermore, since information technology is largely developed out of institutional requirements, it is inherently political. Cybernetic identities are connected to the great bureaucratic spaces of credit, education, and production. They are the result of types of observation, classification, and registration. They result from a penetrating gaze which codes, disciplines, and files under the appropriate heading. Actions lose their actuality, and bodies lose their corporeality.
Mark Poster used Foucault to think about the consequences of computer databases on subjectivity and its multiplication of selves to feed an extensive array of organizational files. He was less concerned with databases as “an invasion of privacy, as a threat to a centered individual, but as the multiplication of the individual, the constitution of an additional self, one that may be acted upon to the detriment of the ‘real’ self without that ‘real’ self ever being aware of what is happening.” The texture of postmodern subjectivity is dispersed among multiple sources of information production and storage. In The Mode of Information, he warned of the “destabilization of the subject,” a fixed self no more but rather one “multiplied by databases, dispersed by computer messaging and conferencing, decontextualized and re-identified by TV ads, dissolved and materialized continuously in the electronic transmission of symbols.”[8] In an age when Google wants to “organize the world’s information,” we are still trying to determine the implications of that multiplication of identity within the networks of institutional power.
Citation APA (7th Edition)
Pennings, A.J. (2014, Aug 30). Management and the Abstraction of Workplace Knowledge into Big Data. apennings.com. https://apennings.com/technologies-of-meaning/management-and-the-abstraction-of-knowledge/
Notes
[1] Zuboff, Shoshana. In the Age of the Smart Machine: the Future of Work and Power. New York: Basic, 1988. Print., p. 9.
[2] Beniger, James. The Control Revolution: Technological and Economic Origins of the Information Society. 1986; p. 294.
[3] Zuboff, S. In the Age of the Smart Machine: The Future of Work and Power. 1988; p. 43.
[4] Beniger, James. The Control Revolution: Technological and Economic Origins of the Information Society. 1986; pp. 298-299.
[5] Distinctions between “acting-on” and “acting-with” from Zuboff, S. In the Age of the Smart Machine: The Future of Work and Power. 1988; p. ??.
[6] Rabinow, Paul, comp. The Foucault Reader. London: Penguin, 1991. Print., p. 200-201.
[7] Poster, Mark. The Mode of Information: Poststructuralism and Social Context. Chicago: University of Chicago, 1990. Print. p. 98
[8] Poster, Mark. The Mode of Information: Poststructuralism and Social Context. Chicago: University of Chicago, 1990. Print. p. 15.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor of global media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii.
Tags: Big Data > Frederick Taylor > Informating > scientific management > Shoshana Zuboff > Taylorism
Discerning Media Economics
Posted on | August 25, 2014 | No Comments
The area of media economics has made important contributions to understanding traditional media such as newspapers, television, and radio, but has only recently addressed the capabilities and consequences of digital media. This post investigates two interrelated areas – media economics and the political economy of communications/media – and then begins to apply them to the realm of new digital media and the web by reviewing some important contributions to the field.
When I arrived at New York University in the wake of the dot.com crash, one of the first courses we created was the Political Economy of Digital Media. The hubris of the “New Economy” met with a bitter setback in those first few years of the new millennium as hundreds of new companies in New York City’s “Silicon Alley” and around the country ran out of cash and gullible investors. So the course became a foundation for a digital media management program that was more attuned to the economic realities of New York City’s very dynamic but competitive media environment. The course combined microeconomic concerns about the management and operations of digital media firms with the larger macroeconomic issues of the emerging “new media” industry and its relationship to employment patterns, investment activities and international trade.
Gillian Doyle’s Understanding Media Economics (2008), for example, examined these different media industry sectors, including film and “new media,” but lacked a comprehensive understanding of the role of digitalization and its impact on the convergence of these industry sectors. A better approach was pursued in Media Economics: Applying Economics to New and Traditional Media by Colin Hoskins, Stuart McFadyen, and Adam Finn, which organized its inquiry into media activities by economic areas such as supply and demand, consumer behavior, production, and market structure. However, it still relied heavily on the analysis of traditional media, with little more than token references to digital media and the Internet.
The political economy of communication/media genre has admirably placed emphasis on the role of media in society, problems associated with monopoly, and tensions in the workplace; but it has also relied on the traditional mass media model and has failed to connect with significant audiences despite its major goal of mobilizing for social action and political intervention. Critical texts like The Business of Media: Corporate Media and the Public Interest (2006) by David Croteau and William Hoynes, while providing a very useful discussion of the role of citizen knowledge and the public sphere, failed to anticipate key aspects of digitalization, social media, and netcentric commerce that are radically changing the news industry and the organization of online knowledge.
Many of the political economy of media texts are written from a Marxist perspective, providing interesting social insights but organizing their critiques around claims to an internal validity that have not been sufficiently substantiated. Furthermore, they over-utilize insular language that reduces their external validity – their applicability to contemporary issues, and thus their relevance to the activism needed to address and confront the social problems brought on by new media. Vincent Mosco’s The Political Economy of Communication (2009), for example, was claimed to be a major rewrite of his classic manuscript by the same name, but has been criticized for its adherence to an economic reductionism forged in an era of durable goods manufacturing and an insular debate with cultural studies. It neglects to apply its analysis to the web economy, where the “click” is a new form of laboring.
Robert McChesney’s The Political Economy of Media: Enduring Issues, Emerging Dilemmas (2008), while reminding us of the problems of a media-saturated society – censorship, propaganda, commercialism, and the depoliticization of society – failed to address the relationship of media to economic sustainability and innovation, creative expression, and learning and education. As a result, his emphasis on the “critical” path of media scholarship, while dismissing what he disdainfully refers to as the “administrative” path of communications, hasn’t framed its arguments in a manner that reaches students confronting the economic issues of their lives or practitioners in the field facing highly complex design and implementation problems.
He is such a major contributor to the area, though, that it is hard to be too critical of his stance, and I invite the reader to look at McChesney’s considerable body of work at Amazon. Likewise, take a look at Gillian Doyle’s newer (2013) Understanding Media Economics, in which she made an interesting transition from examining separate media areas like film, print, and television to looking at the characteristics of a more converged digital environment. With more emphasis on network effects, technological disruption, and the economics of content distribution, her analysis transcends some of the traditional barriers between these various media.
This area of economic inquiry is very promising for the future as it now encompasses a wide realm of digital media activities, going beyond traditional media to incorporate e-commerce and a number of other digital applications from drone journalism to quantifying health technologies. What is particularly exciting is the possibility of combining the development of individual skills and productive capabilities with exposure to progressive, socially conscious media and a new dimension of overall economic analysis.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor of global media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. His first faculty position was at Victoria University in Wellington, New Zealand.
Tags: Media Economics > Political Economy of Communication > Political Economy of Media
Reviewing Castells’ Global Automaton
Posted on | August 3, 2014 | No Comments
In my long-term quest to find some answers as to what constitutes the techno-informational framework of the global financial system, I ran across Manuel Castells’ description of the “Automaton” a number of years ago. He wrote a chapter called “Information Technology and Global Capitalism” in Global Capitalism (2000), where he made some linkages between the global financial system and popular imagery about machinery and robots. He makes several incisive points about how trading in financial instruments, in conjunction with the surveying force of the informational/news infrastructure that supports it, has transformed into a sort of entity for its own sake. The trading in bonds, currencies, derivatives, and stocks in digital markets around the world has become a disciplinary mechanism with governments, corporations, and other entities in its binds. In several posts I have used Walter Wriston’s “information standard” in a similar way, but I think it’s useful to review Castells’ perspective:
The outcome of this process of financial globalization may be that we have created an Automaton, at the core of our economies, decisively conditioning our lives. Humankind’s nightmare of seeing our machines taking control of our world seems on the edge of becoming reality, not in form of robots eliminating our jobs or government computers that police our lives, but as an electronically based system of financial transactions.[1]
While that ending falls a bit flat, it is useful to pick up on that last word – transactions. The point Castells wants to make is that transactions do not equal markets. He says, “While capitalists, and capitalist managers, still exist, they are all determined by the Automaton. And this Automaton is not the market. It does not follow market rules — at least not the kind of rules based on supply and demand which we learned from our economics primers.”
Equilibrium-based market concepts have never worked exactly according to basic economic theory in the financial arena because speculative forces make rising prices attractive. Instead of reducing demand, as the “invisible hand” dictates, rising prices increase demand. As prices continue to rise, they allow an asset bubble to form. And we know what happens to bubbles.
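A stylized toy model can show the shape of that feedback loop. The sketch below is not a model of any actual market, and the coefficients are arbitrary assumptions; it simply lets rising prices attract more buyers instead of fewer, which is all it takes to generate a bubble-like trajectory:

```python
# A stylized sketch of speculative feedback: demand feeds price and price
# feeds demand. Coefficients are arbitrary assumptions, chosen only to show
# the shape of a bubble, not to model any real market.

price, demand = 100.0, 1.0
history = []

for period in range(12):
    # Textbook logic would lower demand as price rises; here rising prices
    # attract more buyers instead (momentum / extrapolative expectations).
    price_change = 0.05 * demand * price
    price += price_change
    demand *= 1.10 if price_change > 0 else 0.5
    history.append(round(price, 2))

print(history)  # prices accelerate upward until something breaks the loop
```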
In a world where banks and hedge, mutual, and sovereign wealth funds transact in sums totaling tens of trillions of dollars a day, Castells contends that the motivations involved in financial transactions are complex. “Movements in financial markets are induced by a mixture of market rules, business and political strategies, crowd psychology, rational expectations, irrational behavior, speculative manoeuvres and information turbulences of all sorts.” For Castells, what he calls the “Automaton” creates a “collective capitalist” system that operates with its own set of conditions.[2]
For Castells, this Automaton increasingly controls the operation of global financial markets. Furthermore, he suggests that this growing dependence on computer systems in the financial world means that the global economy is on the cusp of becoming, for all practical purposes, beyond the control of individuals, corporations, or governments. Felix Stalder has raised questions about Castells’ analysis of informational capitalism in his Manuel Castells and the Theory of the Network Society (2006). He questions the claim that the economy is beyond the control of anyone and specifically asks, “What do we win, and what do we lose, when we call the financial markets an ‘automaton’?”[3]
To address these concerns, it is useful to outline the ways Castells argues global financial markets have developed over the years.
One is that electronic transaction systems allow for the fast movement of capital instruments between countries. With the advent of computer systems and a world networked via fiber optic cables and artificial satellites, capital can now move across borders almost instantly. I would add that this is not just a technological feat but one that required a dramatic transformation in the way telecommunications enterprises are organized and the way they relate to their respective governments. The privatization of telecom operations around the world has allowed them to modernize with Internet-based technologies and largely transcend national borders. International regulatory bodies, such as the IMF and the World Trade Organization (WTO), have applied considerable pressure on countries to adopt policies that encourage global inter-connectivity and allow for unfettered capital flows.
Second, these global systems permit investors to rapidly make trades and transfer capital from one country to another in the search for optimal returns. Financial traders have always had incentives to invest in the newest and fastest technology. From carrier pigeons to the telegraph and stock ticker to modern computers and satellites, innovations in financial technology can provide a commercial advantage. Speed is a strategic priority for trading activities and high-frequency trading (HFT) is the latest in this historic trend. HFT uses sophisticated tools and proprietary computer algorithms to move in and out of financial positions in fractions of a second.
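As a toy illustration of that in-and-out logic – and emphatically not an actual HFT system – the following sketch flips a position on every change of direction in a stream of hypothetical quotes. The tick values are made up; in practice such decisions run on co-located servers in microseconds:

```python
# Toy "in-and-out" trading loop: react to each new quote and flip the
# position on a change of direction. Quotes and thresholds are assumptions.

quotes = [100.00, 100.02, 100.05, 100.01, 99.98, 100.03]  # hypothetical ticks
position, last = 0, quotes[0]

for quote in quotes[1:]:
    if quote > last and position <= 0:
        position = 1      # buy into a rising tick
    elif quote < last and position >= 0:
        position = -1     # sell into a falling tick
    last = quote
    print(f"quote={quote:.2f} position={position:+d}")
```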
Third, an important aspect of modern capitalism is that its representational systems and virtual markets allow for nearly instantaneous translation between types of financial instruments. Bonds can be sold quickly and invested in gold, or oil positions can be liquidated to purchase shares of a company. Furthermore, different countries offer different bonds, different currencies, and a range of different types of derivatives as well as traditional shares of listed corporations.
The development of a wide array of new financial instruments, such as the collateralized debt obligations (CDOs) and credit default swaps that ruined the housing markets and brought the global economy to its knees in the “Great Recession” of 2008, has added to global volatility. This instability further intensifies the need for portfolio diversification, which involves mixing investments across a range of different countries as risk management scenarios call for hedging financial bets across as many global markets and products as possible.
The system is facilitated by networks of global mass media that analyze and broadcast financial information that can instantaneously impact a wide range of financial decisions. I recently gave a presentation about the surveying aspects of financial media in Seoul, South Korea. I pointed out how news in general works to provide a type of surveillance of society and is becoming increasingly sophisticated with representational techniques that convey all sorts of statistical and graphical information relevant to financial transactions. The system provides a variety of general political and macroeconomic information as well as immediately actionable intelligence. Social media has now become part of the financial world, providing tweets and viral shares of news items that are potentially consequential to the pricing of financial instruments.
Another trend is the use of quantitative algorithms to discern patterns in the frenetic energies of the global markets. Wall Street has increasingly been influenced by “quants,” a new type of financial trader more reliant on computer modeling than on the gut-based decision-making built intuitively through years of “pit” experience. Castells sums it up: “All these elements are recombined in increasingly unpredictable patterns whose frantic modeling occupies would-be Nobel Prize recipients and addicted financial gamblers (sometimes embodied in the same persons).”
In short, Castells followed Walter Wriston in proposing that interconnected financial news and transaction networks, along with domestic and international deregulation, have created a radical and potentially unstable global system. They both argued that the increased flow of market-related media information across national borders and the torrent of financial transactions totaling trillions of dollars a day make the specter of financial machines taking control of our political economy an unnerving possibility.
Returning to Felix Stalder’s question above: we win in terms of a conceptualization of some relatively invisible global financial networks. This understanding is more political than the traditional narrative of free enterprise and self-regulating markets. However, we lose in terms of allocating responsibility for the system and thus a focus for policy concerns. I plan to stick with the information standard as an integrative concept in this area because I like its linkage to the gold standard and feel it is more flexible in terms of suggesting a path of analysis and reform.
Notes
[1] The “Automaton” was first named in a chapter entitled “Information Technology and Global Capitalism” in a compiled book on Global Capitalism (2000) that was edited by Will Hutton and Anthony Giddens.
[2] Quote on what drives the “collective capitalist” system from M. Castells, “Information Technology and Global Capitalism”. In (2000) Hutton, W. and Giddens, A. (eds.) Global Capitalism. NY: The New Press. p. 57.
[3] Felix Stalder. Manuel Castells and the Theory of the Network Society. Polity Press, 2006.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor of global media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. His first faculty position was at Victoria University in Wellington, New Zealand.
Tags: Automaton > financial technology > high-frequency trading (HFT) > Information Standard > Manuel Castells > quantitative algorithms
New York City’s Emphasis on Global Media Management
Posted on | July 11, 2014 | No Comments
Several years ago I started exploring whether it was prudent to create a degree program in Global Media Management. The idea was reinvigorated by being here in South Korea in a program that is focused on global issues and skills. So I’m going back to some of my initial research from when I was in New York City, when Mayor Bloomberg and others recognized the need for skills and additional emphasis in this area.
“New York City is the media capital of the world, but—with the industry undergoing profound changes—it’s incumbent on us to take steps now to capitalize on growth opportunities and ensure we remain an industry leader,” warned New York City Mayor Michael Bloomberg, himself a founder of a multi-billion dollar media empire, while announcing several initiatives to help traditional media workers transition to the digital sector.
The initiatives came out of MediaNYC 2020, a program that gathered media and information technology executives along with government officials and university faculty to develop a NYC-based research lab, create digital media apprenticeships, offer a technology-equipment bond program to provide tax-exempt financing for media and technology companies, and award fellowships to 20 “rising star” digital media entrepreneurs every year.[1]
One very active participant in the Media NYC 2020 initiative was The Levin Institute, a free-standing institution of the SUNY system dedicated to researching global issues. It received a $300,000 grant from the Carnegie Institute to conduct a research and public engagement project on the dynamics of globalization called New York in the World. The Levin Institute and the Economic Development Corporation of New York City (NYCEDC) hosted a panel discussion in 2009 called Media NYC 2020: NYC as a Global Media Center. It was part of the MediaNYC 2020 initiative and laid out the history of the media industry in NYC and the challenges to its major legacy industries: print, television, and advertising.
The panel followed a previous study examining the specific implications for education in the area of global media management. The Levin Institute interviewed more than 25 corporate practitioners, professors, and students in the media industry to gain an understanding of the issues and challenges related to globalization facing the industry. These research strategies ranged from exploratory conversations with the leaders of global firms to in-depth critical-incident assessments from leading analysts and additional input from consultants. Their conclusion: “The findings from this research confirmed and validated the urgent need and nuanced demand for a specialized, unique program in Global Media.” They elaborated:
SUMMARY OF FINDINGS
Across the board, our data gathering has revealed a critical deficit in global media talent. As the industry weathers the revolutionizing effects of consolidation and digitization, media companies both big and small must grow and innovate across borders and platforms in order to survive. This requires both region-specific and medium-specific knowledge; managers with a deep understanding of the dynamics of foreign markets and the singularity of multiple medias, who are capable of solving contextual issues and forging valuable partnerships. According to our conversations, these managers have been difficult to find.[2]
So the challenge is to continually define and address the talent needs in the global media sector. I recently defined several digital media “archetypes” of the skill sets needed, including:
- Design;
- Technology/Programming;
- Business Management;
- Communications;
- Analytics.
I might add Global Acumen to that list. Understanding the challenges and opportunities for digital firms operating, at least in part, globally requires strong localization skills as well as the ability to scale operations and platforms across wide geographical and temporal spans. It’s a big world with a lot of different regions, countries, economies, and cultures. I usually start my students off with several (4-10) required viewings of The Commanding Heights so they have some understanding of the dynamics of global political economies.
I like the phrase “the singularity of multiple medias” mentioned above. Going back to New York City, I commend Cornell University’s new program in “Connective Media.” At NYU I created BS programs in both Digital Communications and Media and Information Systems and worked hard to integrate them. Cornell has partnered with the Technion-Israel Institute of Technology to offer an MS in Information Systems with a specialization in Connective Media. It looks to address the analytics component mentioned above by integrating the expertise of software engineers and data scientists with that of media content designers, production teams, and editorial staffs.[3]
As I mentioned in “Producing Digital Content Synergies,” media firms in this new digital environment are increasingly combining multiple sets of skills and expertise to cross-produce and cross-promote content concepts (think Harry Potter series or The Hunger Games) across the organization or in cooperation with other firms. This means utilizing a wide range of available production and post-production resources to develop, package, distribute and monetize cultural products and other digital properties and promote them in a number of global/local markets.
While this emphasis on global media management reminds me of the “Silicon Alley” phenomenon in NYC during the dot.com era, it also makes me wonder to what extent they had the idea right, but just had too much money and not enough time to develop the technology and business skills to make it work.
Notes
[1] This post “New Initiatives Will Help NYC Continue as the Global Media Capital in the Digital Age” on Bloomberg’s personal blog is an informative update on these initiatives.
[2] The summary of findings is from http://www.levin.suny.edu/global-media.cfm. The page was accessed on 8/14/09 and appears to be no longer available.
[3] More information on Cornell’s new presence in NYC and Connective Media.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor of global media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii.
Tags: cross-production > cross-promotion > Global Media > Media Management > MediaNYC 2020 > Silicon Alley
Drone Journalism and Remote Sensing
Posted on | June 16, 2014 | No Comments
After 9/11, I developed and often taught a course at New York University called Remote Sensing and Surveillance. It was designed to study the promises and perils of technologies such as aerial photography, closed circuit cameras, multiple orbit-earth satellites, and a number of IP-based web surveillance systems. The course combined a social science approach with an in-depth look at the electromagnetic radiation utilized by a variety of cameras and other sensors. Concerns for individual privacy and national sovereignty were addressed, especially after the passage of the Patriot Act.
The attention to a relatively new endeavor called drone journalism has had me thinking about the class lately. The use of small unmanned aerial systems (sUAS) has been touted as supporting a variety of future investigative reporting needs. These drones collect audio, photographs, video, and other types of data that can be useful for covering newsworthy conditions and events related to criminal forensics, disaster management, environmental damage, adverse weather, traffic reporting, and sports coverage. Drone journalism received attention recently for its aerial “coverage” of the riots in Istanbul and the political unrest in Ukraine.
Legal action by major media organizations to protect drone journalism is especially noteworthy. Referring to the First Amendment, the New York Times and other media organizations are calling drone surveillance for news purposes a constitutionally protected right. A fine imposed on Raphael Pirker by the Federal Aviation Administration (FAA) for reckless flying of a drone has brought the issue to Federal court. If given constitutional protection, drone journalism is likely to become a daily practice in the media business.
In light of this development I thought it might be useful to revisit the objectives and content of the class and to see what it has to offer in terms of insights and/or criticism of this emerging aspect of the news business.
Tech-wise, the course taught the fundamentals of electromagnetic radiation and how these energies interact with Earth materials such as vegetation, water, soil and rock, as well as humans and human artifacts. It also covered how the energy reflected or emitted from these materials is detected and recorded using a variety of remote sensing instruments such as digital cameras, multispectral scanners, hyperspectral instruments, and RADAR, etc. The course also touched on the principles of visual photo-interpretation, although this is a particularly complex topic that has both denotative and connotative considerations.
Satellites provide a useful historical reference that can indicate potential directions for drone journalism. As I point out in Seeing from Space: Cold War Origins to Google Earth, satellites were initially used for spying and military photo reconnaissance. Visual acuity has improved to the point where satellites have recently developed the capacity to detect sub-pixel targets less than 9% the size of a single pixel. They can also take advantage of infra-red and other electromagnetic wavelengths to see at night and under other adverse conditions.
Many news viewers are becoming more environmentally and scientifically literate about the types of news drones can provide. Climate change and attention to other types of pollution will be a fruitful area for drone journalism. Satellites equipped for remote sensing have been used to monitor environmental conditions such as forest resources, fish migration, oil reserves, soil composition, radiation contamination, river flows, and food harvests, as well as to forecast disasters due to natural causes such as flooding or droughts. Some of this work will be picked up by drone-facilitated aerial photography.
It is no secret that drone technology has developed precise methods for defining locations. With the introduction of the Global Positioning System (GPS), satellites were used in warfare for guiding missiles to their targets, routing convoys through unknown territories, and locating lost soldiers and equipment. GPS, combined with geo-location technologies such as cell towers and Wi-Fi hotspots, provides remarkable accuracy in identifying the positions of objects, places, and people, and is now used in a wide variety of commercial activities, including mobile phone apps. For news operations, geo-location can help recover classified advertising and other revenue sources by providing location-specific news items, weather reports, and reviews of nearby establishments.
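The geo-location step behind “location-specific news” can be sketched in a few lines. The haversine formula below is a standard way to compute great-circle distance from coordinates; the reader’s position and the sample establishments are hypothetical:

```python
# Given a reader's coordinates, rank nearby points of interest by distance,
# ready to attach local reviews, weather, or classified ads to a news item.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

reader = (40.7295, -73.9965)   # hypothetical device fix
places = {"diner": (40.7308, -73.9973), "cinema": (40.7222, -74.0050)}

nearby = sorted(places, key=lambda p: haversine_km(*reader, *places[p]))
print(nearby)  # closest establishments first
```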
While drone technology conjures up thoughts of paparazzi buzzing stars and peeps looking into bedroom windows, it could also be used for more socially responsive journalism. Among other uses, it can be a deterrent for environmentally dangerous companies and crowd-abusing police teams. I’m not advocating drone technology for journalism at this time, as I think it has a number of safety and privacy issues to resolve. However, as the decision is not up to me, I thought the least I could do is outline some of the issues from my perspective.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is Professor at the Department of Technology and Society at the State University of New York (SUNY) in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii.
Tags: drone journalism > drones > First Amendment > Global Positioning System (GPS) > privacy > remote sensing
Technologies of Democracy
Posted on | June 12, 2014 | No Comments
I’m rereading a book, Technologies of Power: Information Machines and Democratic Prospects, by one of my mentors from graduate school. Majid Tehranian was a Professor of International Communications at the University of Hawaii and Founding Director of the Toda Institute for Global Peace and Policy Research. I do believe that it is one of the best books of the 20th century on the topic of media technology and democracy.[1] OK, I’m biased; as a graduate assistant, I actually did the index for the book. But looking back at it, I’m pleased to have played a small role in its birth.
At the time, we were debating Ithiel de Sola Pool‘s Technologies of Freedom. The MIT professor did a lot of the research on his book at the University of Hawaii Law Library a few years before and put forward a strong libertarian thesis about communications technologies and how technical change in this dynamic area should be treated by the legal/regulatory system.
Tehranian felt compelled to respond with his own book in which he used a “technostructuralist” approach emphasizing that new technologies are guided in their development and deployment along existing lines of corporate, governmental, and military power. However, he proposed a “dual-effects hypothesis” for studying the historical influence of new “information machines” as they have often “played a dual role in the service of centralization and the dispersion of power.” These potentially disrupting effects led him to focus on the importance and promises of information technologies for democratic processes.[2]
Originally from Iran, Tehranian was not Islamic, so returning home was problematic after the revolution in 1979. But he was mindful of its lessons, particularly the role of small media, including cassettes and photocopying. Both were used effectively in the overthrow of the Shah. Cassettes were often made by the Ayatollah Khomeini who, while exiled in France, sent recorded messages to mosques throughout Iran, stirring revolt. Tehranian sometimes referred to the impact of small media in Iran as the reign of Xeroxcracy.
Tehranian was influenced in part by University of Hawaii colleagues Ted Becker, Chair of the Political Science Department at the time, and his wife, Christa D. Slaton. Their book, The Future of Teledemocracy, offered a compelling vision for the use of computers to enhance democratic efforts. In a benchmark study that was part of the futures movement at the University of Hawaii, conducted in collaboration with Victoria University of Wellington, New Zealand, they charted a path toward a more deliberative direct democracy. Using interactive TV, televoting, and other digital initiatives, they argued that technology could be designed to engage the public and bring them back into political affairs. They particularly stressed the development of collaborative designs for scientific deliberative polling in electronic town meetings. Later they would embrace the Internet as a tool to facilitate this process.[3]
Tehranian was not prone to simple definitions, but he saw the value of conceptualizing ideas for the sake of argument. He framed one definition of democracy in terms of the contributions of technology, providing this rather technocratic perspective on the conditions for democracy: “If we view democracy as a cybernetic social system of networks in which there are many autonomous and decentralized nodes of power and information with their multiple channels of communication, the new media are increasingly providing the technological conditions for such a system.” (p. 6)
Tehranian identified six cybernetic conditions for democracy: interactivity, universality, channel capacity, content variety, speedy transmission, and low noise.[4]
Interactivity recognizes the value of horizontal communications – people talking to each other, sharing information, engaging in social discourse. Tehranian grew up in a world where media was primarily one-way and vertically oriented from top to bottom. So he was intrigued with the emerging capabilities of media technologies to provide “multiple feedback systems” and allow autonomous centers of power to engage with each other in a pluralistic system of checks and balances.
Universality is another important condition for teledemocracy. This refers to access to the technological devices needed to participate in tele-democratic deliberations. Lower costs for mobile phones and computers mean increased penetration and participation in the political process. Higher rates of literacy and education also increase the population who can follow civic activities and participate in public discussions. Social media is now seen as a potential new “networked public sphere” for political debate and sharing relevant information.
Content Variety refers to the increasing diversity of professional and user-generated programming. Tehranian was disappointed in the global diet of television programs such as Dallas and Days of Our Lives that were scheduled at the expense of more cultural and educational programs. He contrasted the official messaging associated with more authoritarian political systems with the “symbolic variety” needed for democratic communications systems. Access and active participation mean little if the content available is limited in accuracy, relevance, scope, and quality. In the age of YouTube and social media, the variety of ideas and messages has increased dramatically, although concerns have been raised about the tracking and surveillance of who watches what.
Channel Capacity involves the ability to transmit and receive high-fidelity, high-resolution professionally produced or “user-generated” information. While broadband and Wi-Fi speeds have increased steadily, the results are distributed unevenly. I'm writing this in South Korea, which, at an average connection speed of 63.6 megabits per second, is second only to Hong Kong for the highest broadband rates in the world. I have a house in Austin, Texas, one of the Google Fiber cities beginning to offer 1 gigabit per second transmission to homes and businesses.
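To make the gap concrete, here is a rough back-of-the-envelope sketch (my own illustration, not Tehranian's) of how long a 100-megabyte file takes to transfer at the two speeds mentioned above.

// Back-of-the-envelope transfer-time comparison; the 100 MB file size is an arbitrary example.
function transferSeconds(fileMegabytes: number, speedMbps: number): number {
  const megabits = fileMegabytes * 8; // convert megabytes to megabits
  return megabits / speedMbps;
}

console.log(transferSeconds(100, 63.6).toFixed(1)); // ~12.6 seconds at 63.6 Mbps
console.log(transferSeconds(100, 1000).toFixed(1)); // ~0.8 seconds at 1 Gbps (1000 Mbps)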
Low Noise is an interesting concept that has both technical and political dimensions. Technically, noise is a long-recognized impediment to message delivery. Politically, Tehranian pointed to the need for democratic rules and procedures that facilitate rational discourse. I lived a block from Washington Square Park in New York City when the Occupy Wall Street movement would meet there. The protesters gathered and used an interesting set of protocols to amplify messages and to communicate silently. The crowd nearest the speaker would repeat their words like a “human mic” so people in the back could hear, and hand signals were used to agree, disagree, or morally oppose anything being proposed. CNN reporter Jeanne Moos did a satirical piece on Occupy Wall Street's communication protocols.
Teledemocratic systems will need to develop their own rules of political communication to support a distinctive public-sphere model of citizen participation, protest, and voting.
Speedy Transmission is a related concept that also has technical and political dimensions. Faster communication has been a major motivator for information technology development, from the telegraph to the ARPANET. Tehranian also applied it to the speed at which the public can communicate concerns and demands to the political system and, conversely, at which official responses and actions are returned to the public, through initiatives such as the .gov and .us Internet domains.
The capabilities of digital media have accelerated and I plan to look at additional characteristics that might provide insights into the possibilities of teledemocracy. This list would probably include mobility, search, capture, representation, storage, and viral or spreadable media.
Notes
[1] Tehranian received a PhD in Political Economy from Harvard University and wrote his dissertation on the global politics of oil cartels.
[2] Dual-effects hypothesis from Technologies of Power, p. 53.
[3] Ted Becker is the Alumni Professor of Political Science at Auburn University.
[4] Conditions for democracy can be found in Technologies of Power in Chapter 1 primarily pp. 6-9, and at the conclusion of Chapter 2, pp. 52-53.
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is a professor of global media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii.
Tags: Content Variety > cybernetic conditions for democracy > Majid Tehranian > Technologies of Freedom > Technologies of Power: Information Machines and Democratic Prospects > technostructuralist > teledemocracy > Xeroxcracy
Revisiting Huxley and Orwell on Technology and Democracy
Posted on | June 1, 2014 | No Comments
One of the faces I miss most from my days on the NYU campus is that of Neil Postman, a professor of media ecology at the Steinhardt School. Professor Postman died a few years ago, but not without leaving behind a legacy, including one of my favorite books, Amusing Ourselves to Death: Public Discourse in the Age of Show Business (1985).
What I liked best about the book was his discussion of the differences between authors Aldous Huxley and George Orwell, whose classic novels Brave New World (1932) and 1984 (1949) helped shape the American debate on dictatorship, democracy, surveillance, and propaganda. Both addressed these issues, but from different perspectives.
Our engagement with these narratives has had significant implications for how we view computerization and the media, and how these technologies shape our society. Stuart McMillen illustrated the contrast, as laid out by Postman, in a cartoon series (which has since been taken down).
As Postman pointed out, while many people think the two authors had similar ideas about the characteristics and dangers of a totalitarian political system, a closer reading suggests otherwise. He stressed that Orwell's vision was one of a government using violent oppression to crush the spirit of its population, while Huxley's story was about a government that used distraction and pleasure to rule.
“What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy.”[1]
While repressive dictatorships around the world continue to lend credence to Orwell's vision, Postman was more concerned with Huxley's idea of a regime of amusement and triviality, hence the title of his book.[2] The brutal crackdowns in the Middle East associated with the Arab Spring were chilling reminders of the use of force and surveillance by repressive governments. But what are the dangers when a political system such as that of the US, which draws its legitimacy from its heritage of democratic citizen participation, is subjected to a barrage of emotional trivia? Even our election system, the epitome of democratic practice, has been reduced to a “race” between personalities rather than a discussion of relevant issues and related policies.
Postman’s Amusing Ourselves to Death was published in the wake of 1984, as the Western world breathed a sigh of relief. For the most part, Orwell's nightmare had not come true, despite Ronald Reagan's tirades about the intrusion of government into our lives. In fact, Reagan's turn to supply-side economics unleashed a new media philosophy articulated by his choice to head the FCC, Chairman Mark S. Fowler, whose attitude was that the public interest is simply what “the public is interested in.” Public television was cut back and private-sector forces were unleashed, particularly on children's programming, which consequently carried more and more commercial advertising.
Luckily, it was also a time of technological choice, and many parents began to switch their kids to VHS tapes and programming that was less violent and intrusive. More channels also emerged as the age of network television gave way to the multi-channel universe of cable television. Content variety is an important component of democratic participation.
Postman, a self-professed Luddite, was not a fan of either the computer or the Internet because of the parade of individualized amusements and distractions they offer. He rarely used computers and, by extension, email. A pioneer of the media ecology approach, he saw the introduction of a new technology as something that changes and disrupts our lives. Media are more than machines; they shape entire environments, structuring what we can see and say, assigning us roles and the extent of our participation, and specifying what we are permitted to do and say, and what is dangerous and forbidden.
In Technopoly: The Surrender of Culture to Technology (1992), he saw the computer, like other media technologies, as a Faustian bargain in which something is gained but much is also taken away. The computer gives us home shopping and vast amounts of data at our fingertips, but with a trade-off. His primary concern was that the computer would usher in a new era in which people, isolated in their worlds of infotainment fantasies, would lose their connections with their communities.
Citation APA (7th Edition)
Pennings, A.J. (2014, Jun 1). Revisiting Huxley and Orwell on Technology and Democracy. apennings.com https://apennings.com/dystopian-economies/revisiting-huxley-and-orwell-on-technology-and-democracy/
Notes
[1] From the Introduction to Amusing Ourselves to Death: Public Discourse in the Age of Show Business
[2] See the 1980 TV movie of Brave New World and the 1956 film version of 1984.
© ALL RIGHTS RESERVED

Tags: 1984 > Aldous Huxley > Amusing Ourselves to Death: Public Discourse in the Age of Show Business > Brave New World > Faustian bargain > George Orwell > media ecology > Neil Postman > Technopoly: The Surrender of Culture to Technology