Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Cisco Systems: From Campus to the World’s Most Valuable Company, Part One: Stanford University

Posted on | May 24, 2016 | No Comments

Cisco Systems emerged from employees and students at Stanford University in the early 1980s to become the major supplier of the Internet's enigmatic plumbing. In the process, its stock value increased dramatically, and it became the largest company in the world by market capitalization. Cisco originally produced homemade multi-protocol routers to connect campus computers over an Ethernet LAN, and throughout the 1980s it built the networking technology for the National Science Foundation's NSFNET. As the World Wide Web took off during the 1990s, it helped countries around the world transition their telecommunications systems to Internet protocols. Cisco went public on February 4, 1990, with a valuation of $288 million. In March 2000, Cisco Systems was calculated to be the world's most valuable company, worth $579.1 billion to second-place Microsoft's $578.2 billion. Microsoft had itself replaced General Electric in the No. 1 ranking in 1998.

This post will present the early years of Cisco Systems' development and the creation of networking technology on the Stanford University campus. The next post will discuss the commercialization and success of Cisco Systems as it helped create the global Internet, beginning with multi-protocol routers for local area networks.

Leonard Bosack and Sandra K. Lerner formed Cisco Systems in the early 1980s and were the driving forces of the young company. The two met at Stanford (Bosack earned a master's in computer science in 1981, and Lerner received a master's in statistics the same year) while each headed the computer facilities of a different department; they were also, incidentally (or perhaps consequently), dating. The two facilities sat at different corners of the campus, and the couple began working together to link them to each other and to other organizations scattered around the university. Drawing on work being conducted at Stanford and in Silicon Valley, they developed a multi-protocol router to connect the departments. Bosack and Lerner left Stanford University in December 1984 to launch Cisco Systems.

Robert X. Cringely, author of Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can't Get a Date, interviewed both founders for his PBS video series, Nerds 2.0.1.

Bosack and Lerner happened on their university positions at a critical time in the development of computer networks. The Stanford Research Institute (SRI) was one of the four original ARPANET nodes, and the campus later received technology from Xerox PARC, particularly Alto computers and Ethernet networking.[1] Ethernet drew on the Aloha Network technology originally developed at the University of Hawaii to connect the different islands by radio; it was improved by Robert Metcalfe and other Xerox PARC researchers and granted to Stanford University in late 1979.[2] Ethernet technologies needed routers to network effectively and to interconnect different computers and Ethernet segments.

A DARPA-funded effort at Stanford during the early 1970s had involved research to design a new set of computer communication protocols that would allow several different packet networks to be interconnected. In June of 1973, Vinton G. Cerf started work on a novel network protocol with funding from the new IPTO director, Robert Kahn. DARPA was originally interested in supporting command-and-control applications and in creating a flexible network that was robust and could adjust to the changing situations to which military officers are accustomed. In July 1977, initial success led to a sustained effort to develop the Internet protocols known as TCP/IP (Transmission Control Protocol and Internet Protocol). DARPA and the Defense Communications Agency, which had taken over the operational management of the ARPANET, supplied sustained funding for the project.[3]

The rapidly growing "Internet" was implementing the new DARPA-mandated TCP/IP protocols. Routers were needed to "route" packets of data to their intended destinations: every packet of information carries an address that helps it find its way through the physical infrastructure of the Internet. Stanford had been one of the original nodes on ARPANET, the first packet-switching network. In late 1980, Bill Yeager was assigned to build a network router as part of the SUMEX (Stanford University Medical EXperimental computer) project at Stanford. Using a PDP-11, he first developed a router and TIP (Terminal Interface Processor). Two years later, he developed a Motorola 68000-based router and TIP using experimental circuit boards that would later become the basis for the workstations sold by Sun Microsystems.[4]
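To make the routing step concrete, here is a minimal sketch in Python (not tied to any historical Stanford or Cisco software) of how a router's forwarding decision can be modeled: each packet carries a destination address, and the router looks that address up in a table to choose the outgoing interface. The prefixes and interface names below are hypothetical.

    import ipaddress

    # Hypothetical forwarding table: network prefix -> outgoing interface.
    FORWARDING_TABLE = {
        ipaddress.ip_network("10.1.0.0/16"): "eth0",   # e.g., one departmental LAN
        ipaddress.ip_network("10.2.0.0/16"): "eth1",   # e.g., another departmental LAN
        ipaddress.ip_network("0.0.0.0/0"): "serial0",  # default route toward the backbone
    }

    def route(destination: str) -> str:
        """Return the outgoing interface for a destination using longest-prefix match."""
        addr = ipaddress.ip_address(destination)
        matches = [net for net in FORWARDING_TABLE if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)  # most specific prefix wins
        return FORWARDING_TABLE[best]

    if __name__ == "__main__":
        for dest in ["10.1.4.7", "10.2.9.1", "192.0.2.55"]:
            print(dest, "->", route(dest))

Real routers maintain such tables per protocol and update them dynamically, but the lookup-and-forward step is the core of what the early multi-protocol routers did.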

Bosack and Lerner had operational rather than research or teaching jobs. Len Bosack was the Director of Computer Facilities for Stanford's Department of Computer Science, while Sandy Lerner was Director of Computer Facilities for Stanford's Graduate School of Business. Despite the fancy titles, they had to run wires, install protocols, and get the computers and networks working and usable for the university. Bosack had worked for DEC, helping to design the memory management architecture for the PDP-10. At Stanford, many different types of computers (mainframes, minicomputers, and microcomputers) were in demand and used by faculty, staff, and students.

Some 5,000 computers were scattered around the campus. The Alto computer, in particular, was proliferating. Thanks to Ethernet, computers were often connected locally, within a building or a cluster of buildings, but no overall network existed throughout the campus. Bosack, Lerner, and colleagues such as Ralph Gorin and Kirk Lougheed worked on "hacking together" some of these disparate computers into the multi-million-dollar broadband network being built on campus, but that network was running into difficulties. They needed to develop "bridges" between local area networks and then crude routers to move packets. At the time, routers were not offered commercially. Eventually, their "guerilla network" became the de facto Stanford University Network (SUNet).

Notes

[1] Stanford networking experiments included those in the AI Lab, at SUMEX, and the Institute for Mathematical Studies in the Social Sciences (IMSSS).
[2] Ethernet: U.S. Patent No. 4,063,220, Metcalfe et al.
[3] Vinton G. Cerf was involved at Stanford University in developing TCP/IP and later became Senior Vice President, Data Services Division at MCI Telecommunications Corp. His article “Computer Networking: Global Infrastructure for the 21st Century” was published on the www and accessed in June 2003.
[4] Circuit boards for the 68000-based TIP were developed by Andy Bechtolsheim in the Computer Science Department at Stanford University.





Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

The NSFNET is the Internet

Posted on | May 20, 2016 | No Comments

An important intermediary in the transition of the military's ARPANET into the commercial Internet was the National Science Foundation's NSFNET. The NSFNET adopted TCP/IP and required all connecting nodes to use these protocols as well as compliant networking technology, much of it built by a small California startup called Cisco. With government funding for advanced scientific and military research, the network expanded rapidly to form the initial Internet. Without the NSFNET, the Internet would have grown differently, probably using the X.25 protocols developed by the phone companies. Without the specification of TCP/IP, the network would have emerged with significantly less interoperability and diversity of services.

The NSFNET has its origins at the University of Maryland during the 1982-83 school year. The university was looking to connect its campus computers as well as to network with other colleges. It applied to the National Science Foundation (NSF) for funding but found that the agency was not organized to handle such a request. In response, the NSF set up the Division of Networking and Computing Research Infrastructure to help allocate resources for such projects. The Southeastern Universities Research Association Network, or SURANET, adopted the newly sanctioned TCP/IP protocols, connecting the University of Maryland to other universities. It set a precedent, and nearly two years into the project, SURANET connected to IBM at Raleigh-Durham, North Carolina.

The National Science Foundation (NSF) was formed during the 1950s, before computer science emerged as a distinct discipline. It first established areas of research in biology, chemistry, mathematics, and physics before it became a significant supporter of computing activities. At first, it encouraged the use of computers within these fields and later moved toward providing a general computing infrastructure, including university computer centers, set up beginning in the mid-1950s, that would be available to all researchers. In 1962, it established its first computer science program within its Mathematical Sciences Division. In 1968, an Office of Computing Activities began subsidizing computer networking, funding some 30 regional centers to help universities make more efficient use of scarce computer resources and timesharing capabilities. The NSF worked to "make sure that elite schools would not be the only ones to benefit from computers."[1]

During the early 1980s, the NSF started to plan its own national network. In 1984, a year after TCP/IP was institutionalized by the military, the NSF created the Office of Advanced Scientific Computing, whose mandate was to create several supercomputing centers around the US.[2] Over the next year, five centers were funded by the NSF.

  • General Atomics — San Diego Supercomputer Center, SDSC
  • University of Illinois at Urbana-Champaign — National Center for Supercomputing Applications, NCSA
  • Carnegie Mellon University — Pittsburgh Supercomputer Center, PSC
  • Cornell University — Cornell Theory Center, CTC
  • Princeton University — John von Neumann National Supercomputer Center, JvNC

However, it soon became apparent that these five centers alone would not adequately serve the scientific community.

In 1986, Al Gore sponsored the Supercomputer Network Study Act to explore the possibility of high-speed fiber optics linking the nation's supercomputers. These machines were much needed for President Reagan's "Star Wars" Strategic Defense Initiative (SDI), as well as for competing against the Japanese electronics juggernaut and its "Fifth Generation" artificial intelligence project.

Although the Supercomputer Network Study Act of 1986 never passed, it stimulated interest in the area, and as a result the NSF formulated a strategy to assume leadership responsibilities for the network systems that ARPA had previously championed. It took two steps to make networking more accessible. First, it convinced DARPA to expand its packet-switched network to the new centers. Second, it funded universities that had an interest in connecting with the supercomputing facilities. In doing so, it also mandated specific communications protocols and specialized routing equipment configurations. It was this move that standardized a specific set of data communication protocols and drove the rapid spread of the Internet to universities around the country and then around the world. Just as the military had ordered the implementation of Vint Cerf's TCP/IP protocols in 1982, the NSF directives standardized networking at the participating universities. All who wanted to connect to the NSF network had to buy routers (mainly built by Cisco) and other TCP/IP-compliant networking equipment.

The NSF funded a long-haul backbone network called NSFNET in 1986, with a data speed of 56 Kbps (upgraded to a T1, or 1.5 Mbps, the following year), to connect its supercomputing centers. It also allowed other interested universities to connect to it. The network became very popular, not because of its supercomputing connectivity, but because of its electronic mail, file transfers, and newsgroups. It was the technological simplicity of TCP/IP that made it sprout exponentially over the next few years.[3]

Unable to manage the technological demands of this growth, the NSF signed a cooperative agreement in November 1987 with IBM, MCI, and Merit Network, Inc. to manage the NSFNET backbone. By June of the next year, they had expanded the backbone to 13 cities and developed a modern control center in Ann Arbor, Michigan. Soon the network grew to over 170 nodes, and traffic was expanding at a rate of 15% a month. In response to this demand, the NSF exercised a clause in the five-year agreement to implement a newer, state-of-the-art network with faster speeds and more capacity. The three companies formed Advanced Network & Services Inc. (ANS), a non-profit organization, to provide a national high-speed network.

Additional funding from the High Performance Computing Act of 1991 helped expand the NSFNET into the Internet. By the end of 1991, ANS had created new links operating at T-3 speeds. T-3 traffic moves at up to 45 Mbps, and over the next year ANS replaced the entire T-1 NSFNET with new linkages capable of transferring the equivalent of 1,400 pages of single-spaced typed text a second. The funding also allowed the University of Illinois at Urbana-Champaign's NCSA (National Center for Supercomputing Applications) to support graduate students at $6.85 an hour. A group including Marc Andreessen developed a software application called Mosaic for displaying words and images. Mosaic was the prototype for popular web browsers such as Chrome and Internet Explorer.
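As a rough check on that 1,400-pages-per-second figure, here is a back-of-the-envelope calculation in Python. The assumptions (roughly 4,000 characters per single-spaced page, one byte per character, protocol overhead ignored) are mine, not the source's:

    # Back-of-the-envelope check of the "1,400 pages per second" claim for a T-3 link.
    # Assumed: ~4,000 characters per single-spaced page, 8 bits per character,
    # and no allowance for protocol overhead.
    T3_BITS_PER_SECOND = 45_000_000
    BITS_PER_CHARACTER = 8
    CHARACTERS_PER_PAGE = 4_000

    chars_per_second = T3_BITS_PER_SECOND / BITS_PER_CHARACTER   # 5,625,000
    pages_per_second = chars_per_second / CHARACTERS_PER_PAGE    # about 1,406

    print(f"{pages_per_second:,.0f} pages of text per second")

The result, about 1,400 pages per second, is consistent with the figure cited above.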

The NSFNET soon connected over 700 colleges and universities as well as nearly all federally funded research. High schools, libraries, community colleges and other educational institutions were also joining up. By 1993, it also had links to over 75 other countries.[4]

Pressures had been building to allow commercial activities on the Internet, but the NSF had strict regulations against for-profit activities on its network facilities. During the 1980s, the network was subject to the NSF's acceptable use policy, which restricted commercial uses of the outcomes of NSF research. Congressman Rick Boucher (D-Virginia) introduced legislation on June 9, 1992 that allowed commercial activities on the NSFNET.[5] In one of his last executive acts, President Bush finally allowed business to be conducted over the NSFNET and the networks being built around it. Several months into its newly liberalized status, the NSFNET transitioned to an upgraded T-3 (45 Mbps) backbone, much faster than its original 56 Kbps speed.

The legacy of the NSFNET is that it ensured the proliferation of TCP/IP technologies, protocols, and associated hardware. These systems were designed as an open architecture that accepts any computer and connects to any network. The design was also minimalist, requiring little from the computer and remaining neutral to the applications (e-mail, browsers, FTP) and content running on the network.

Although these are idealistic principles and not always followed in practice, they were largely responsible for unlocking the tremendous economic growth of the Internet age. For example, Marc Andreessen and some of his colleagues soon left the University of Illinois at Urbana-Champaign and formed Netscape to market their "browser." Netscape's IPO in 1995 helped spark the massive investments in the Internet that characterized the 1990s and the rise of the "dot.coms."

Notes

[1] Janet Abbate's (2000) Inventing the Internet, MIT Press, is a classic exploration of the history of the Internet, p. 192.
[2] Abbate, p. 191.
[3] Kahin, B. (ed.) (1992) Building Information Infrastructure: Issues in the Development of a National Research and Education Network. McGraw-Hill Primis, Inc. This book contains a series of papers and appendixes giving an excellent overview of the discussion and legislation leading to the NREN.
[4] Information obtained from Merit, December 1992
[5] Segaller, S. (1998) Nerds 2.0.1, pp. 297-306.




Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

We shape our buildings and afterwards our buildings shape us

Posted on | May 9, 2016 | No Comments

One of Marshall McLuhan's most celebrated intellectual "probes" was a paraphrase of Winston Churchill's famous "We shape our buildings, and afterwards our buildings shape us." Churchill was addressing Parliament some two years after a devastating air raid by the Nazis destroyed the House of Commons and was arguing for its restoration, despite the major challenges of the war.

Churchill's famous line was paraphrased in the 1960s with the more topical "We shape our tools and thereafter our tools shape us," and it was included in the recording of McLuhan's classic The Medium is the Massage. With the diffusion of television and the transistor radio, it was a time when electronic media were exploding into the American consciousness. McLuhan and others were committed to understanding the role of technology, particularly electronic media, in modern society.

The revised quote is often attributed to McLuhan, but it was actually reworded by his colleague, John M. Culkin. Culkin was responsible for inviting McLuhan to Fordham University for a year and subsequently greatly increasing his popularity in the US. Culkin later formed the Center for Understanding Media at Antioch College and started a master’s program to study media. Named after McLuhan’s famous book Understanding Media, the center later moved to the New School for Social Research in New York City after Culkin joined their faculty.

The probe/quote serves in my work to help analyze information technologies (IT), including communications and media technologies (ICT). It provides frames for inquiring into the forces that shaped ICT, while simultaneously examining how these technologies have shaped economic, social and political events and change. IT or ICT means many things but is meant here to traverse the historical chasm between technologies that run organizations and processes and those that educate, entertain and mobilize. This combination is crucial for developing a rich analysis of how information and communications technologies have become powerful forces in their own right.

My concern has to do with technology and its transformative relationship with society and institutions, in particular the reciprocal effects between technology and power. Majid Tehranian's "technostructuralist" perspective was instructive because it examined information machines in the context of the structures of power that conceptualize, design, fund, and utilize these technologies. In Technologies of Power (1990), he compared this stance to a somewhat opposite one, the "techno-neutralist" position – the view that technologies are essentially neutral and that their consequences result only from human agency.

In my series How IT Came to Rule the World, I examine the historical forces that shaped and continue to shape information and related technologies.





Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Digital Spreadsheets – Part 5 – Ease and Epistemology

Posted on | May 6, 2016 | No Comments

To pick up the story, I started this analysis of the spreadsheet by looking at the emergence of Lotus 1-2-3 within the context of the 1980s. This included the importance of the personal computer and the Reagan Revolution, characterized by the commercialization of Cold War technologies and the globalization and increasing financialization of individual and organizational life. Part 2, on "spreadsheet capitalism," showed how spreadsheets came to be used by a new breed of analysts for leveraged buyouts (LBOs) and other financial manipulations. The PC-enabled spreadsheet soon became a fixture of modern organizational practice and management because of its ability to quickly store moderate amounts of data, make formulaic calculations and associated projections, and present this newly manufactured information in intelligible formats.

Part 3 introduced the formal analysis of spreadsheets, examining how this new technology incorporates other media, starting with zero and the Indo-Arabic numerals. The base-10 decimal system, within a positional place-value notation, gave the zero-based numerical system an unprecedented calculative ability. Part 4 covered other representational systems, including alphabetical writing and list technology. Below I introduce the importance of the table as an epistemological technology that is empowered by the computerized spreadsheet and is part of a system of knowledge production that has transformed modern forms of bureaucratic and corporate organization.

Bonnie A. Nardi and James R. Miller of Hewlett-Packard Laboratories noted that the effectiveness of a spreadsheet program derives from two major properties: its ability to provide low-level calculation and computational capabilities to lightly trained users; and its “table-oriented interface” that displays and organizes information in a perceptual geometric space that shows categories and relationships.[1]

The columnar spreadsheet had an important if undistinguished history as the bedrock of the accounting and bookkeeping professions. It became much more widely relevant when it was programmed into the first electronic spreadsheet for the Apple II. VisiCalc was created when its co-inventor, Dan Bricklin, needed to make tedious calculations for an MBA course he was taking.

As its name suggests, VisiCalc combines the two most important components of the electronic spreadsheet – visual display and calculative ability. Tables are both communicative and analytical technologies that provide familiar ways to convey relatively simple or complex information as well as structure data so that new dimensional relationships are produced. These became particularly useful for accounting and sales, as well as finance.[1]

The interactivity of the personal computer keyboard and screen was a prominent advantage in the growing popularity of the electronic spreadsheet. It provided immediate feedback and reduced the learning curve of spreadsheet applications like VisiCalc and, later, Lotus 1-2-3 for the IBM PC and its clones. Getting immediate feedback rather than waiting hours or days to pick up printouts from the computer center made PC spreadsheets much more valuable than results produced on a mainframe or minicomputer. PC users could easily isolate individual cells and input information while also applying formulas that ranged from simple mathematical calculations to more complex statistical analyses.

Later the graphical user interface and the mouse made the Apple Mac and Windows-based PCs even more interactive and user-friendly. Microsoft Excel, originally designed for the Apple Mac, emerged as the dominant spreadsheet program. Drop-down menus that could be accessed with a mouse-controlled cursor provided instructions and options to help use the spreadsheet. Users could quickly learn how to complete various “high-level, task-specific functions” such as adding the values of a range of cells or finding the statistical median in a selected range within a column. More about this in the next post on formulas.
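As a simple illustration of those task-specific functions, the sketch below models a small grid in Python (rather than in any particular spreadsheet product) and computes the equivalents of a SUM over a range of cells and a MEDIAN over a column. The cell values are hypothetical.

    from statistics import median

    # A tiny table modeled as rows and columns, like a spreadsheet grid.
    # Columns: A = month, B = units sold, C = unit price (hypothetical data).
    rows = [
        {"A": "Jan", "B": 120, "C": 9.99},
        {"A": "Feb", "B": 95,  "C": 9.99},
        {"A": "Mar", "B": 143, "C": 10.49},
        {"A": "Apr", "B": 110, "C": 10.49},
    ]

    # Equivalent of a spreadsheet formula such as =SUM(B1:B4)
    total_units = sum(row["B"] for row in rows)

    # Equivalent of =MEDIAN(C1:C4)
    median_price = median(row["C"] for row in rows)

    print("Total units:", total_units)    # 468
    print("Median price:", median_price)  # 10.24

The point is not the arithmetic itself but that the grid lets a lightly trained user apply such functions to any row, column, or range without writing a conventional program.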

Nardi and Miller also pointed to the visual aspects of the spreadsheet. The table combines an easily viewable display format with a simple geometric space. The tabular grid of the computer spreadsheet interface provides a powerful visual format for entering, viewing, and structuring data in meaningful ways. Users are able to view large quantities of data on a single screen without scrolling. Relationships between variables become discernible through the table display, which facilitates pattern recognition and the ability to isolate specific items. Organized around the intersection of rows and columns at a point called the "cell," spreadsheets provide relatively simple but important computational and display solutions for information work.

Tables draw deeply on the meaning-creating capabilities of the list, an integral component of the spreadsheet. The list was one of the first uses of writing and an ancient technology of organization and management. It is a visual technology that produces a verbless and decontextualized organization of concepts or things, creating boundaries and thus organizing knowledge in new ways. In the spreadsheet, lists form the rows and columns of the table and can isolate individual items or connect them visually to other items, creating new categories and taxonomies.[2] The synergy of this new formulation in the spreadsheet, a conjunction of vertical lists, or "columns," with horizontal lists called "rows," creates a powerful new intellectual dynamism for tracking resources and assigning associated values.

Lists and tables are "remediated" in the spreadsheet.[3] In the tradition of Marshall McLuhan, new media can be seen as the incorporation and recombination of previous media. In other words, the list takes on new significance in the spreadsheet as it is re-purposed horizontally as well as vertically to create the tabular format. The term remediation derives from the Latin remederi, "to heal." The list is "healed" as it becomes part of the table because the new form attempts to provide more authentic access to reality. In the table, additional dimensions are added to the list, offering new lists and displaying the possibilities of new relationships. The history of media, including spreadsheets, is one of attempting to develop more advanced systems of representation that strive to provide new and better ways to view and understand the world.

The table acts as a problem-solving medium, a cognitive tool that not only classifies and structures data but also offers ways to analyze it. The rows and columns provide the parameters of the problem, and individual cells invite input and interpretation. The format of the table also suggests connections, dependencies, and relations between different sets of data. With the table, it is easy to discern the categories represented in the vertical and horizontal dimensions, scan for individual data values, and get a sense of the range of values and other patterns, such as a rough average.

To recap, the formal analysis of the spreadsheet focuses on the various components of the spreadsheet and how they combine to give it its extraordinary capabilities. The spreadsheet has emerged as both a personal productivity tool and a vehicle for group collaboration. Spreadsheets produce representational frameworks for garnering and organizing labor and material resources. Numerical innovation stemming from the adoption of zero and other systems of mathematical representation has made the computerized spreadsheet a central component of modern life. It has also helped establish the centrality of financial knowledge as the working philosophy of modern capitalism.

In this post, I discuss how spreadsheets in conjunction with other forms of financial and managerial representation not only produce the ongoing flows of knowledge that run the modern economy but have created a technologically-enhanced epistemology for garnering resources, registering value, and allocating wealth.[4]

Notes

[1] Bonnie A. Nardi and James R. Miller, "The Spreadsheet Interface: A Basis for End-user Programming," in D. Diaper et al. (Eds.), Human-Computer Interaction: INTERACT '90. Amsterdam: North-Holland, 1990.
[2] Jack Goody’s (1984) Writing and the Organization of Society is informative on the use of the list as a historical management tool.
[3] Bolter, J. and Grusin R. (2000) Remediation: Understanding New Media. MIT Press.
[4] Hood, John. “Capitalism and the Zero,” The Freeman: Foundation for Economic Education. N.p., 01 Dec. 2000. Web. 10 Jan. 2015.



Anthony J. Pennings, PhD is the Professor and Associate Chair of the Department of Technology and Society at the State University of New York (SUNY) in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii in the 1990s.

Black Friday and Thomas Edison’s Stock Ticker

Posted on | April 22, 2016 | No Comments

This post continues the story about gold and the greenback dollar and how their trading dynamics led to the invention of early electro-mechanical technology to indicate financial prices. During the height of speculative activity in 1869, a young telegrapher, Thomas Edison, came to Wall Street and made a fortune. He didn’t make it on Wall Street, but rather he made it by helping to make “Wall Street,” the global financial phenomenon.

The early life of Thomas Alva Edison provides a useful index of the times. It can be argued that Edison would probably not have risen to prominence without the financial turmoil of the late 1860s. Edison happened to be in New York during the famous gold speculation of 1869 that resulted in the “Black Friday” crash. The crisis was precipitated by the actions of President Grant, who wanted to squash the attempts by two major speculators to corner the market for gold. Edison’s technical expertise as a skilled telegrapher and amateur electrical engineer proved timely during the financial crash, and he was soon given a job that would alter the course of his life.[1]

Edison was born on February 11, 1847, in Milan, Ohio, one of the busiest ports in the world due to the wheat trade. This commerce also made it an important railroad center, and Edison soon obtained a job hawking candies and other foods on the Grand Trunk Railroad. When he by chance saved a young boy from being struck by a train, the boy's grateful father taught him to be a telegraph operator. At the age of 15, he got his first telegraph job, primarily decoding the news. He even printed and sold his own newspaper for a while.

Edison traveled to various cities working as a telegraph operator, making his way eventually to Boston, considered the hub of cultural and technological interchange at the time. Although working for Western Union, Edison “moonlighted” extensively and even attended lectures at Boston Tech, now known as MIT. He invented and patented a “vote-counter” for the Massachusetts legislature, but it interfered with the negotiating tactics of the minority and was never purchased. In 1869, after 17 months in Boston, Edison quit his job at Western Union, borrowed money from a friend, and took a steamship to New York City.[2]

At the time, New York City was buzzing with financial speculation, as the gold-greenback relationship was quite volatile. The Funding Act of 1866 had required pulling the paper money out of the US economy, but a recession in 1867 had dampened congressional enthusiasm for the move. The North still had a huge debt and wanted to pay off its war bonds in greenbacks that were not specified in gold. The former Union general and new president, Ulysses S. Grant, made his first act as national executive the promise to pay off U.S. bonds in “gold or its equivalent” and later redeem the greenbacks in gold. The precious metal’s value soon dropped to $130 an ounce, a low not seen since the beginning of the war.

Jay Gould, and later James Fisk, perhaps representing the worst of the "Robber Baron" era, were heavily involved in gold speculation leading up to the events of the infamous 1869 "Black Friday." Jay Gould was a country-store clerk who took his savings of $5,000 and became a financial speculator, eventually defeating Cornelius Vanderbilt and taking control of the Erie Railroad, New York City's railroads, and, for a time, the telegraph giant Western Union. James Fisk worked for a circus in his youth but made a fortune selling cotton from the South to Northern firms. He also sold Confederate war bonds to the British during the Civil War. He teamed up with Gould and Daniel Drew to take control of the Erie Railroad and then manipulated its stock, contributing to the downfall of the gigantic railroad (and of Cornelius Vanderbilt) while enriching his own bank account by millions. During the summer of 1869, Gould and Fisk set up their scheme to corner the gold market.[3]

Edison was in quite a desperate situation when he searched for work in New York’s financial district. He applied almost immediately at the Western Union office but was forced to wait several days for a reply. While literally begging for food and sleeping in basements, he was also carefully surveying the new uses of the telegraph on Wall Street. It was a time of much speculation as the word was spreading that Jay Gould and James Fisk were attempting to rig the gold market. They were buying up gold and thereby reducing the supply available. Edison had acquaintances at the Gold and Stock Telegraph Company who let him sleep on a cot in the basement and watch the increasing financial trading and expanding bubble.

He was hanging out at the office when panic struck due to an equipment failure. Hundreds of "pad shovers" converged on the office, complaining that their brokerages' gold tickers had stopped working. Within a few minutes, over three hundred boys crowded the office. The man in charge panicked and lost his concentration, especially after Dr. S.S. Laws appeared. Laws was President of the Gold Exchange and had also invented the device that displayed the price of gold. Edison made his way through the crowd and announced that he could solve the problem, having studied the machine over the previous few days. Edison later claimed that Laws shouted, "Fix it, Fix it, be quick!" [4]

Edison soon discovered that one of the contact springs had fallen and lodged itself within two gears. He removed the spring, “set the contact wheels to zero,” and sent out the company’s repairmen to reset each subscriber’s ticker. Within a few hours, the system was running normally again. As a reward, Dr. Laws offered him a job managing the entire plant for $300 a month, about twice the salary of a good electrical mechanic at the time. Edison took the job and continued to tinker with several technologies, particularly the stock ticker and a “quadruplex transmitter” for telegraphy that could send two messages each way simultaneously.

While Edison was in New York, gold speculation increased. A crucial question was whether the government was going to release its gold holdings and drive the price down. The Federal government was a significant holder of gold relative to the total market, and it was crucial to the speculating duo's plan that it refrain from selling off a substantial amount of its reserves. General U.S. Grant, the North's Civil War hero, had run for the Republican nomination and the Presidency on a hard-money stance. After his inauguration in March 1869, he continued to indicate he was not likely to sell off the nation's gold. President Grant's brother-in-law, Abel Rathbone Corbin, arranged a meeting for Gould and Fisk with the President during one of his visits to New York. (Corbin may have been involved in the scheme earlier; Grant scholar Joe Rose believes he may even have approached Gould very early on.) They tried to convince Grant that higher gold prices would benefit the country. Although Grant refused to indicate his plans, the scheming duo told the press that the President was not planning to sell gold. In early September, they began to increase their purchases substantially.

The infamous Black Friday came on September 24, 1869. Edison was operating a Laws Gold Indicator in a booth overlooking the Gold Exchange floor as prices fluctuated and the volume of trades grew heavy. The price of gold hovered between US$160 and $162 during the early hours of the day. Fisk was working the floor, claiming gold would reach $200 by the day's end. At noon, Grant allowed his Secretary of the Treasury, George Sewell Boutwell, to sell $4 million in federal gold reserves to undermine the scheme of Gould and Fisk. When the news hit the Gold Room, the price fell within minutes to $133. Edison and the other operators desperately tried to keep the indicator's wheels moving, as it worked much like a modern automobile's odometer. They even added a weight to it, but the technology could not keep up as panic hit the streets, and many people were wiped out financially.

Edison benefitted from the whole affair. Besides his $300-a-month job, he was encouraged and supported in improving the stock ticker and related telegraph transmission equipment. As a result, he rose from near starvation to being able to send money home to his parents and begin to build his invention empire. Edison would often recount the time as the most euphoric in his life because he "had been suddenly delivered from abject poverty to prosperity."

Citation APA (7th Edition)

Pennings, A.J. (2016, Apr 22). Black Friday and Thomas Edison’s Stock Ticker. apennings.com https://apennings.com/telegraphic-political-economy/black-friday-and-thomas-edisons-stock-ticker/

Notes

[1] Edison’s timely circumstances on Wall Street from Edison: His Life and Inventions by Frank Lewis Dyer and Thomas Commerford Martin.
[2] According to an article published in 1901 in the Electrical World and Engineer by Edward A. Calahan, Edison's cotton ticker was only a partial success. It was often replaced by one invented by G.M. Phelps with superior workmanship, speed, and accuracy. Its sales suffered from it being more expensive and delicate, which may account for its limited use. The article was written by the original inventor of the stock ticker and may not be entirely unbiased.
[3] A good account of Black Friday events appears on the New York Times website section “On This Day”. See http://www.nytimes.com/learning/general/onthisday/harp/1016.htm. Accessed on 2/27/15.
[4] For some background and an overview of related technology, see Introduction to Financial Technology by Roy S. Freedman
[5] The stocktickercompany website is a useful source for the history of the stock indicators and ticker-tape machines.
[6] Information on Gould, Eckert and Edison from Conot’s Thomas A. Edison: A Streak of Luck. pp. 65-69.
[7] It has been difficult to trace the exact timing of Edison’s activities at the time. Ultimately, I decided to follow the patents.
[8] http://heartbeatofwallstreet.com/PDF-Documentation/TheIllustratedHistoryofStockTickers.pdf





Anthony J. Pennings, PhD is at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Subsidizing Silicon: NASA and the Computer

Posted on | April 13, 2016 | No Comments

“I believe this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.”

– John F. Kennedy
Special Joint Session of Congress
May 25, 1961

This post is the third of three contributions to the understanding of the microprocessing revolution. The first discussed AT&T's invention and sharing of the transistor. The second post explored the refinement of the transistor under the US nuclear military strategy known as "Mutually Assured Destruction" (MAD). Finally, this post investigates the Cold War's "Space Race," which established the microprocessor industry's foundation by subsidizing the production and quality control of the computer "chip."

Kennedy's decision to put a man on the Moon by the end of the 1960s was an immediate boost to the transistor and integrated circuit industry. In May of 1961, he set out a national goal of a human-crewed space flight and a landing on the earth's only natural satellite. It was a Herculean endeavor, with many of the associated tasks seemingly impossible. In particular, the computing needs were immense, as the project required extensive calculations and simulations for its success. But NASA persisted and was committed to using computer technologies as part of its strategy to send astronauts into space. NASA conducted three manned space projects during the 1960s (Mercury, Gemini, and Apollo), and each used a different computing approach.

The Mercury capsule held only one astronaut and had little maneuvering control beyond its attitude control jets. Built by the McDonnell Aircraft Corporation, it had no onboard computer. The Atlas rocket was preprogrammed to deliver the astronaut to the desired altitude, while computers on the ground conducted re-entry calculations and sent instructions to the falling Mercury capsule by data communications in real time. Although this setup worked for the relatively simple goal of shooting a man into space, the complexities of the next two programs required new calculative capabilities aboard the spacecraft itself.

The Gemini capsule was the first spacecraft to carry an onboard computer, as NASA began to develop more ambitious plans. Gemini's missions included a second astronaut and plans for a rendezvous with an upper-stage rocket launched separately. Called the Agena, this rocket contained a restartable liquid-fuel engine that would allow the Gemini to boost itself to a higher orbit. The Gemini Digital Computer (GDC) was needed for this complex maneuver as well as for other activities, such as serving as a backup during launch, insertion into orbit, and re-entry. The computer was needed onboard because the tracking network on the ground could not monitor the Gemini's entire orbit around the planet.[1]

NASA offered a $26.6 million contract to IBM for an all-digital computer for the Gemini project. Shortly after, engineers from IBM’s Federal Systems Unit in Owego, New York were put on the project. Rather than using integrated circuits that were still in the testing phase, the Gemini Digital Computer (GDC) used discrete semiconductor components, and for memory, it used ferrite magnetic cores originally developed for the Semi-Automatic Ground Environment (SAGE) defense system. The GDC weighed about 60 pounds and operated at more than 7,000 calculations per second. It also had an auxiliary tape memory system. As the programs exceeded the core memory capacity, the tape system was needed to install new programs in-flight. For example, the re-entry program needed to be installed in the core memory just before the descent and took about 6 minutes to load.

Overall, IBM delivered 20 of the roughly 60-pound machines from 1963 to 1965 and solidified its reputation as a major contractor for the space program.[2] But the trip to the Moon and back required a more sophisticated computer, and NASA turned to Silicon Valley to provide the next generation of onboard computers.

The Apollo project used a vast number of mainframes and minicomputers for planning missions and calculating guidance and navigation (G&N) applications. Still, one of its most crucial objectives was to develop a new onboard computer system. Computers on the ground were able to monitor such information as cabin pressure and detect flight deviations via a data communications link. But these earthbound mainframes were insufficient for the complex requirements of this new goal. While many computations could be conducted on the ground and radioed to the spacecraft using NASA’s Manned Space Flight Network (MSFN), it had been decided early that computational capacity was needed onboard.[3]

The determination that an onboard computer was needed for spaceflight occurred before President Kennedy’s 1961 declaration that landing on the moon would be a national goal. An onboard computer was wanted for several reasons. One – there was a fear that a hostile source might jam the radio signals transmitted to the spacecraft. Two – concerns were raised that multiple concurrent missions could saturate the communications system. Three – it was determined that manned interplanetary missions in the future would definitely require onboard computerization, and it was better to start testing such a system as soon as possible. Four – the physics of transmitting a signal to the moon and back resulted in a 1.5-second delay making quick decisions for a hazardous lunar landing improbable. “The choice, later in the program, of the lunar orbit rendezvous method over a direct flight to the Moon, further justified an on-board computer since the lunar orbit insertion would take place on the far side of the Moon, out of contact with the earth.”[4]

Like the Gemini Digital Computer, it was crucial to develop a computer system for the Apollo spacecraft that was small enough to minimize additional weight, yet powerful enough to coordinate complex activities onboard. A few months after Kennedy’s proclamation, NASA contracted with the MIT Instrumentation Lab for the design of the Apollo’s computer. The MIT lab had worked previously for the government on the guidance systems for the Polaris and Poseidon missiles and the team was moved almost in its entirety to the Apollo project. This was also the first year that Fairchild started serious production of the integrated chip. The ICs were still a very untested technology, but NASA’s commitment meant a nearly unlimited budget to ensure their reliability, low power consumption, as well as guaranteed availability over the duration of the Apollo project. NASA agreed to use Fairchild’s 3-input NOR gate ICs to construct the AGC, despite the high probability that new developments would soon eclipse this technology. NASA committed to the new “chips,” designing both onboard and ground equipment to use them.
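The choice of a single NOR-gate part is easier to appreciate with a small illustration: a NOR gate is functionally complete, so an entire computer's logic can, in principle, be wired from that one building block. The sketch below is a Python illustration of the general principle, not of the AGC's actual circuit design; it builds NOT, OR, and AND from a single 3-input NOR primitive.

    def nor3(a: int, b: int, c: int = 0) -> int:
        """A 3-input NOR gate: output is 1 only when every input is 0."""
        return 0 if (a or b or c) else 1

    # NOR is functionally complete, so other gates can be built from it alone.
    def not_(a: int) -> int:
        return nor3(a, a, a)

    def or_(a: int, b: int) -> int:
        return not_(nor3(a, b))          # OR = NOT(NOR)

    def and_(a: int, b: int) -> int:
        return nor3(not_(a), not_(b))    # AND = NOR of the inverted inputs

    if __name__ == "__main__":
        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "AND:", and_(a, b), "OR:", or_(a, b))

Standardizing on one gate type simplified procurement, testing, and quality control, which mattered more to NASA than squeezing out maximum performance from a newer mix of parts.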

However, the IC technology was unproven and required substantial support to become viable for the Moon project. Through extensive testing, Fairchild, MIT, and NASA began to identify the main causes of IC malfunctions and to develop procedures for reducing defects and discarding bad chips. Stress tests subjected the chips to high temperatures, centrifugal forces, and extensive vibration. Early on, the major problem identified was poor workmanship. As a result, dedicated assembly lines were implemented to produce ICs only for the Apollo project's computers. Apollo crewmembers were brought in to cement a relationship between the astronauts and assembly-line workers and ensure high worker motivation. Posters were also displayed around the plant to remind workers of the historic importance of their toil. Afterward, MIT's Instrumentation Lab conducted rigorous tests to ensure reliability and returned all defective chips. As a result, a failure rate of only 0.0040% per 1000 hours of operation was achieved.[5]

In May of 1962, MIT and NASA chose Massachusetts-based Raytheon, a major military electronics contractor, to build the Apollo Guidance Computer (AGC). Called the "Block I," the first AGC was used on three Apollo space flights between August 1966 and April 1968. It used a digital display and keyboard, guided the vehicle in real time, incorporated an all-digital design, and had a compact, protective casing. It ran on a basic clock of 2.048 MHz, had a 2K core RAM, and could execute a simple instruction in roughly 20 microseconds.

When Raytheon built the 12 computers ordered by NASA, they used about 4000 integrated circuits purchased from Fairchild Semiconductor, a major portion of the world’s ICs.[6] Although initially quite expensive, the price of an IC dropped to $25 after Philco-Ford began providing them in 1964. The Block I was a transitional device, but it became an important part of the historical role of the Apollo space program.

A second AGC was designed for the final push to the Moon. A faster and lighter version was needed once mission plans called for the Lunar Module to land men on the Moon's surface and return them to the Command Module for the trip back to earth. The need for equipment that was as light as possible for the lunar landing and extremely reliable for the long voyage required a new design. While "Block I," the first version, was a copy of the Polaris missile guidance computer, the second was built on newer integrated chip technology. The "Block II" was a more sophisticated device with a larger memory (37,000 words) and weighed only 31 kilograms. On July 16, 1969, Apollo 11 blasted off from Cape Canaveral, Florida, guided by the AGC to its Moon destination. On July 20, the lunar module Eagle undocked from the command module Columbia, and its astronauts guided its descent onto the Mare Tranquillitatis (Sea of Tranquility), where it touched down safely some three hours later.

Surprisingly, the chips used in the later Apollo flights were significantly out of date. It was more important to NASA that its technologies were dependable and fit into existing systems than that they be faster or perform more instructions per second. Staying with proven chips created stability and provided a revenue base for Silicon Valley as the integrated circuit established itself as a proven technology. The "Block II" AGC with its early integrated chips remained largely unchanged throughout its use during the six lunar landings, three Skylab missions, and the linking of the Apollo with the Russian Soyuz spacecraft. Plans to expand the computer to 16K of erasable memory and 65K of fixed memory were never implemented, as the Apollo program was shut down after the Apollo-Soyuz Test Project (ASTP) in July 1975. The Space Shuttle would use the IBM AP-101 avionics computer, also used in the B-52 bomber and F-15 fighter and sharing a similar architecture with the IBM System/360 mainframes.

While the military and other government agencies (such as the NSA and the CIA) would continue to support microprocessing advances, the private sector (particularly finance) would start to play an increasingly important role. Wall Street and other financial institutions were in an automation crisis as paper-based transactions continued to accumulate, and volatility wracked the foreign exchange markets as currency controls were lifted in the early 1970s.

Although government spending would continue to support the microelectronics industry, new developments occurred in the 1970s that would drive commercial demand for silicon-based processing. First, international finance’s move away from the gold standard dramatically increased the corporate demand for computer resources. Second, the personal computer would create a new market for Silicon Valley’s fledgling microprocessing industry as computer hobbyists, business users, and then the Internet created a widespread market for devices enabled by this technology.

These new developments would be based on a dramatic new product, the “computer-on-a-chip” by Fairchild spin-off Intel. Instead of just transistor switches on an integrated circuit, this new development would begin to embed the different parts of a computer into the silicon. It was this so-called “microprocessor” that would make computers accessible to the public and general commerce.

Citation APA (7th Edition)

Pennings, A.J. (2016, April 13). Subsidizing Silicon. apennings.com https://apennings.com/how-it-came-to-rule-the-world/the-cold-war/subsidizing-silicon-nasa-and-the-computer/

Notes

[1] NASA has excellent information on the historical aspects of the space program’s use and development of computers in its Computers in Spaceflight: The NASA Experience at www.hq.nasa.gov/office/pao/History/computers/ch1-1.html. Accessed on October 26, 2001.
[2] “Chapter One: The Gemini Digital Computer: First Machine in Orbit,” in its Computers in Spaceflight: The NASA Experience at www.hq.nasa.gov/office/pao/History/computers/Ch1-5.html. Accessed on October 26, 2001.
[3] Information on NASA’s data communications has been compiled and edited by Robert Godwin in The NASA Mission Reports, published by Ontario Canada’s Apogee Books.
[4] Again I am indebted to the online version of Computers in Spaceflight: The NASA Experience at www.hq.nasa.gov/office/pao/History/computers/Ch2-1.html. Information on the need for an on-board computer from “Chapter Two: Computers On Board the Apollo Spacecraft”, accessed on October 26, 2001.
[5] Nathan Ickes’s website on The Apollo Guidance Computer: Advancing Two Industries was a valuable resource given that many of the published books about the space program fail to investigate the importance of the computer for the program’s success and coincidentally, its impact on the computer industry. See http://web.mit.edu/nickes/www/sts035/agc/agc.html (no longer available). For information on Fairchild’s quality control procedures, see “Proving a Technology: Integrated Circuits in the AGC.”
[6] Information on Apollo AGC information garnered by Dr. Dobb’s One Giant Leap: The Apollo Guidance Computer. History of Computing #6. From www.ddj.com. Also the Smithsonian’s National Air and Space Museum has a valuable site on the Apollo Guidance Computer at www.nasm.edu/nasm/dsh/artifacts/GC-ApolloBlock1.htm. Both were accessed on October 26, 2001.





Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

The MAD Origins of the Computer Age

Posted on | April 11, 2016 | No Comments

It was the "missile gap" that would impregnate Silicon Valley with the purpose and capital to grow to its famed stature as the center of computer innovation in the world. During the Kennedy and Johnson administrations, when Robert S. McNamara was Secretary of Defense, the US undertook an enormous military buildup, with the intercontinental missile as its centerpiece. The policy of "Mutually Assured Destruction" (MAD), and specifically the advancements in the Minuteman II missile, led to the development and refinement of silicon integrated circuits and ultimately the microprocessor "chip."

The computing and network revolutions were powered by successive developments in information processing capabilities that were subsidized by extensive government spending and increasingly centralized in California’s “Valley of Heart’s Delight,” later known as “Silicon Valley.”

At the center of this transformation was an innovation developed by the telecommunications monopoly, AT&T. The government-regulated monopoly developed the first transistor in the 1940s at its New Jersey-based Bell Labs. This electronic switching technology would orchestrate the amazing machinations of the 1s and 0s of the emerging digital age. William Shockley, who won a Nobel Prize for the AT&T-sponsored invention, soon decided to return to his California home to set up a semiconductor company.

Shockley had been recruited back to the area by Frederick Terman, the Dean of the Engineering School at Stanford University. Terman was a protégé of Vannevar Bush, the major science and technology director of the New Deal and World War II, including the Manhattan Project that produced the atomic bomb. Terman designed a program of study and research in electronics at Stanford and spearheaded the creation of the Stanford Research Park, which leased land to high-tech firms. During the 1950s, Lockheed set up its missile and space subsidiary in the area, and IBM also established facilities there for research on data storage.

Shockley’s company did not achieve the success it desired, and in 1957 eight of its employees left to found Fairchild Semiconductor, whose alumni later spun off companies such as Intel, National Semiconductor, Advanced Micro Devices, and Signetics. Luckily, history was on their side, as the Sputnik crisis almost immediately resulted in several government programs that targeted guidance miniaturization as a crucial component of the new rocketry programs. Transistors became the centerpiece of a set of technologies that would not only guide rocket-powered missiles but also provide the processing power for modern computers.

The Minuteman intercontinental missile project was originally approved on February 27, 1958, about six weeks after Eisenhower requested funds to start ARPA. Both were in response to the previous autumn’s successful flights that delivered two USSR Sputnik satellites into space. The Soviets had captured many German V-2 rocket scientists after World War II and quickly built up a viable space program. Meanwhile, they detonated an atomic bomb in August 1949 and in 1953 successfully tested a hydrogen bomb, vastly more powerful than the atomic device. While the second of the Sputnik launches had only placed a small dog into orbit, the fear spread that the USSR could place a nuclear warhead atop one of its rockets. Such a combination could rain a lethal force of radiation upon the US and its allies, killing millions within minutes.

The US finally achieved success with its own space program on January 31, 1958, when a modified Jupiter-C rocket carried Explorer 1, the first American satellite, into orbit. Still, the roughly 30-pound satellite was in no way capable of lifting the massive weight of a hydrogen warhead, not to mention guiding it to a specific target half a world away. Sensing the momentousness of the task, Eisenhower established NASA later that year to garner additional support for the development of rocket technology by stirring the public imagination with the prospect of humans being launched into space.

The Minuteman was conceived as an intercontinental ballistic missile (ICBM) system capable of delivering a thermonuclear explosion thousands of miles from its launch site. It was meant to be a mass-produced, quick-reacting response to the perceived Soviet nuclear threat. Named after the American Revolutionary War’s volunteer militia, who were ready to take up arms “at a minute’s notice,” the missile was a revolutionary military idea, using new advances in guidance and propulsion to deliver its deadly ordnance.

    Eisenhower left plans for a force of about 40 Atlas missiles and six Polarises at sea; in less than a year Kennedy and McNamara planned 1,900 missiles, consisting primarily of the 1,200 Minuteman missiles and 656 Polarises. Counting the bombers, the United States would have 3,455 warheads ready to fire on the Soviet Union by 1967, according to McNamara’s secret Draft Presidential Memorandum on strategic forces of September.[2]

McNamara was an intense intellectual and considered one of the “Whiz Kids” brought in by Kennedy as part of the new administration’s promise to recruit the “best and the brightest.” He was extremely good with statistics and steeped in management accounting, which he had taught at Harvard. During World War II, he left his position at Harvard to work in the Statistical Control Office of the Army Air Forces, where he planned bombing raids using mathematical techniques drawn from operations research (OR).

After the war, McNamara and other members of his office went to work at Ford Motor Company. Using these OR techniques, he achieved considerable success at the automaker, ultimately rising to become its president.[3] When the incoming Kennedy administration offered McNamara the job of Secretary of the Treasury, he reportedly replied that he had more influence on interest rates as President of Ford; he accepted the post of Secretary of Defense instead. Just a few months into his tenure, he ordered a major buildup of nuclear forces. This resolve came in spite of intelligence reports indicating that Soviet missile forces, the basis of the so-called “missile gap,” had been greatly overestimated.

McNamara was originally from Oakland, not far north of the future “Silicon Valley,” and he set out to transform US military strategy. In response to a RAND report that US bombers were vulnerable to a Communist first strike, he ordered the retirement of “most of the 1,292 old B-47 bombers and the 19 old B-58s, leaving ‘only’ 500 B-52s, to the surprise and anger of the Air Force.” He also stopped production of the B-70 bomber, which was estimated to cost $20 billion over the following decade. Instead, he pushed the Minuteman intercontinental ballistic missile project and continued to refine the notion of “massive retaliation” coined by Eisenhower’s Secretary of State John Foster Dulles in 1954.

The Cuban Missile Crisis in October 1962 seriously challenged this notion when the USSR began installing medium- and intermediate-range SS-4 (R-12) and SS-5 missiles on the Caribbean island in response to the deployment of US missiles in Turkey, a move agreed to under the Eisenhower administration. When US surveillance revealed the missile sites, President Kennedy announced that any attack on the US from Cuba would be considered an attack by the USSR and would be responded to accordingly. He also ordered a blockade of the island nation, using the language of “quarantine” that Roosevelt had invoked in 1937 in response to the aggressor states of that era.

On October 24, the Strategic Air Command elevated its alert status to Defense Condition 2 (DEFCON 2), and as the USSR responded in kind, the world teetered on the brink of World War III. Last-minute negotiations averted the catastrophe on October 28, when Premier Khrushchev agreed to remove the missiles from Cuba; Kennedy, in a secret deal, had offered to remove US missiles from Turkey. Perhaps not incidentally, the first Minuteman I missiles were rushed to operational alert during those same days.

The brain of the Minuteman I guidance system was the D-17B, a specialized computer designed by Autonetics, a division of North American Aviation. It contained an array of thousands of transistors, diodes, capacitors, and resistors designed to guide the warhead to its target. Guidance software was provided by TRW, while the Strategic Air Command provided the actual targeting. Some 800 Minuteman I missiles had been manufactured and delivered by the time Lyndon B. Johnson began his elected term as President in 1965. As the Vietnam War increased tensions between the US and its Communist rivals, research was initiated on new models of the Minuteman missile. While the first Minuteman used discrete transistors for its guidance system, the later Minuteman II used integrated circuits (ICs), which further miniaturized the guidance and other electronics by reducing the number of separate parts.[4]

Submarines carrying nuclear missiles became an extraordinarily lethal force using the new guidance technology. The first successful launch of a guided Polaris missile took place on July 20, 1960, from the submerged USS George Washington, the first fleet ballistic missile submarine, which carried sixteen missiles. President John F. Kennedy came aboard on November 16 to observe a Polaris A1 launch and subsequently ordered 40 more submarines. These submarine-based missiles required even more sophisticated guidance technology because they had to be launched from any one of a multitude of geographical positions. Submarine-launched ballistic missiles (SLBMs) like the Poseidon and Trident eventually developed capabilities that could destroy entire countries.

McNamara started the buildup under Kennedy, but President Johnson urged him to keep up the missile program, arguing that it was Eisenhower and the Republicans who had left the US in a weak military position. McNamara himself felt that nuclear missiles had no military purpose except to “deter one’s opponent from using them,” but he pressed for their development anyway.[1] The missiles required complex guidance systems, however, and these drew on the trajectory of transistor and then integrated-circuit research. As the military philosophy of “Mutually Assured Destruction” (MAD) emerged, the result was prolonged support for the semiconductor industry, leading ultimately to the information technologies of the computer and the Internet.

Minuteman II production and deployment began under the Johnson administration as it embraced the “Assured Destruction” policy advocated by McNamara. The new model could fly farther, pack more deadly force, and pinpoint its targets more accurately. Consequently, it could be aimed at more targets from its silos in the upper Midwest, primarily in Missouri, Montana, Wyoming, and North and South Dakota. Nicknamed “MAD,” for Mutually Assured Destruction, the policy recognized the colossal destructive capability of even a single thermonuclear warhead. A second strike could inflict serious damage on the attacker even if only a few warheads survived a first strike. Furthermore, a full exchange of first and second strikes could initiate a nuclear winter, bringing eventual destruction to the entire planet.

Consequently, the deterrent strategy shifted to building as many warheads as possible and placing them in a variety of positions, both fixed and mobile. Minuteman II production increased dramatically, providing a boost to the new IC technology and the companies that produced it. The D-37C computer installed in the missile was built in the mid-1960s using both the established transistor technology of the first model and the recently introduced small-scale integrated circuits. Minuteman II consumption of ICs provided the incentive to create the volume production lines needed to supply the roughly 4,000 ICs required each week for the missile deployment.[5]

The Minuteman intercontinental ballistic missile (ICBM) and Apollo space programs soon gave Silicon Valley a “liftoff,” as they required thousands of transistors and then integrated circuits. The transition occurred as NASA’s manned and satellite space programs demanded the highest-quality computing components for their spacecraft. NASA actively subsidized the integrated circuit, a risky new technology in which multiple transistors were fabricated on a single sliver of silicon, or “chip.” By the time Neil Armstrong first stepped on the Moon, more than a million ICs had been purchased by NASA alone.[6] These circuits would become integral to the minicomputer revolution that rocked Wall Street and other industries in the late 1960s.

Citation APA (7th Edition)

Pennings, A. J. (2016, April 11). The MAD Origins of the Computer Age. apennings.com. https://apennings.com/how-it-came-to-rule-the-world/the-mad-origins-of-the-computer-age/

Notes

[1] Robert McNamara quoted in Shapley, D. (1993) Promise and Power. Boston, MA: Little, Brown and Company.
[2] Shapley, D. (1993) Promise and Power. Boston, MA: Little, Brown and Company. p. 108.
[3] Information on McNamara’s background from Edwards, P.N. (1996) The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press.
[4] Information on the Minuteman missiles from Hanson, D. The New Alchemists. p. 99.
[5] Information on IC production lines from Ceruzzi, P.E. (2003) A History of Modern Computing. Second Edition. Cambridge, MA: MIT Press. p. 187.
[6] Reid, T.R. (2001) The Chip. New York, NY: Random House. p. 150.





Emerging Areas of Digital Media Expertise, Part 4: Business Acumen

Posted on | April 10, 2016 | No Comments

In previous posts, I discussed the importance of various skill sets in the emerging digital world: analytics and visualization, design, global knowledge, technical proficiency, and the strategic communication aspects of digital media expertise.

The competitive world of digital activities requires proficiency in an extensive variety of marketing, graphic design, and digital production skills. The field of media management has expanded from broadcast media to embrace digital media, especially as the creative economy and its various manifestations (the experience, cultural, and entertainment industries) become crucial parts of the global economy. In this post, I will discuss the centrality of commercial, entrepreneurial, and management skills in today’s digital media environment.

Previously, those trained in media skills were primarily focused on the production of entertainment, educational, and cultural products and services. They were somewhat removed from business decisions unless they moved up to editorial or management positions later in their careers. With the transition to digital technologies and processes, business knowledge and media abilities are increasingly intertwined.

Some of the ways digital media production and business acumen overlap:

– Managing creative workers and digital innovation
– Assessing digital threats and opportunities
– Understanding global media and cultural trends
– Marketing content and producing multiple revenue streams
– Ensuring intellectual property rights and obtaining permissions
– Preparing digital content for local and global markets
– Working with diverse in-house and third-party partners/vendors to construct working media-related applications
– Developing metrics for key performance, strategic, and management requirements
– Developing continuity plans in case of security, personnel, equipment, or environmental failures
– Recruiting, managing and evaluating other key personnel
– Evaluating and implementing various e-payment solutions for both B2B and B2C operations
– Working with senior management and Board of Directors to establish current and long range goals, objectives, plans and policies
– Working with established creativity suppliers such as stars, independent producers, and agents
– Discerning how complex international legal environments influence intellectual property rights, consumer rights, privacy, and various types of cybercrime
– Managing remote projects and collaborating with colleagues and third-party vendors via mediated conferencing
– Utilizing project management tools such as Excel spreadsheets and methodologies such as the SDLC waterfall, RAD, JAD, and Agile/Scrum
– Licensing and obtaining legal rights to merchandise and characters
– Assessing competitive advantages and barriers to entry
– Implementing and monitoring advertising and social media campaigns
– Anticipating the influence of macroeconomic events such as business cycles, inflation, and changes in interest rates and exchange rates on an organization’s sustainability

Leadership and management skills, such as knowing how to facilitate teamwork among colleagues and third-party partners/vendors to build and commercialize working media-related applications, are increasingly important. As enterprises and mission-based institutions flatten their organizational structures and streamline their work processes, fundamental knowledge of accounting, finance, logistics, marketing, and sales becomes more valuable.

Professional project management certifications are a valuable qualification on contemporary resumes and represent substantial experience as well (typically on the order of 4,500 hours of project management work for applicants with a four-year degree). Specific skills in using spreadsheet tools such as Excel and in monitoring workflows with methodologies such as the SDLC waterfall, RAD, JAD, and Agile/Scrum are also valuable to employers, as are budgeting, change management, project control, risk assessment, scheduling, and vendor management. The ability to coordinate and manage remote projects with online teleconferencing and instant messaging is also crucial in a global environment, facilitating collaboration among colleagues, different business units, and partnering B2B companies.

Last but not least, sales expertise is in high demand at new media companies. Last year I was in San Francisco talking to a venture capitalist about needed skills, and this category came up strongly. Sales success requires strong product knowledge, communication skills, planning abilities, and self-management. Customer relationship management (CRM) has transformed sales management, and with it modern retail and wholesale organizations. Just as significantly, customer expectations have risen: customers now anticipate 24/7, global service from any point in a company’s operations.





  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr


  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.