Anthony J. Pennings, PhD


Subsidizing Silicon: NASA and the Computer

Posted on April 13, 2016

“I believe this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.”
– John F. Kennedy
Special Joint Session of Congress
May 25, 1961

This is the third of three major posts about the microprocessing revolution. The first discussed AT&T’s invention and sharing of the transistor, the second explored the refinement of the transistor under the US nuclear strategy known as “Mutually Assured Destruction” (MAD), and this post investigates the Cold War’s “Space Race,” which established the foundation of the microprocessor industry by subsidizing the production and quality control of the computer “chip.”

Kennedy’s decision to put a man on the moon by the end of the 1960s was an immediate boost to the transistor and integrated circuit industry. In May of 1961, he set out a national goal of a manned space flight and a landing on the earth’s only natural satellite. It was a Herculean endeavor, with many of its associated tasks seemingly impossible. The computing needs, in particular, were immense, as success required extensive calculations and simulations. But NASA persisted and committed to the use of computer technologies as part of its strategy to send astronauts into space. NASA conducted three manned space projects during the 1960s: Mercury, Gemini, and Apollo; each used a different computing approach.

The Mercury capsule held only one astronaut and had little maneuvering capability beyond its attitude control jets. Built by the McDonnell Aircraft Corporation, it had no on-board computer. The Atlas rocket was preprogrammed to deliver the astronaut to the desired altitude, while computers on the ground conducted re-entry calculations and sent instructions to the falling Mercury capsule by data communications in real-time. Although this setup worked for the relatively simple goal of shooting a man into space, the complexities of the next two programs required new calculative capabilities aboard the actual spacecraft.

As NASA began to develop more ambitious plans, the Gemini capsule became the first spacecraft to carry an onboard computer. Gemini’s mission included a second astronaut and plans for a rendezvous with an upper-stage rocket launched separately. Called the Agena, this stage had a restartable liquid-fuel engine that would allow the Gemini to boost itself to a higher orbit. The Gemini Digital Computer (GDC) was needed for this complex maneuver as well as for other tasks, such as serving as a backup during launch, orbital insertion, and re-entry. The computer had to be on-board because the tracking network on the ground could not monitor the Gemini’s entire orbit around the earth.[1]

NASA awarded a $26.6 million contract to IBM for an all-digital computer for the Gemini project. Shortly after, engineers from IBM’s Federal Systems Unit in Owego, New York were put on the project. Rather than using integrated circuits, which were still in the testing phase, the Gemini Digital Computer (GDC) used discrete semiconductor components; for memory, it used ferrite magnetic cores originally developed for the Semi-Automatic Ground Environment (SAGE) defense system. The GDC weighed about 60 pounds and performed more than 7,000 calculations per second. It also had an auxiliary tape memory system. Because the programs exceeded the core memory capacity, the tape system was needed to load new programs in-flight. For example, the program for re-entry had to be loaded into core memory just before descent, a process that took about 6 minutes.

Overall, IBM delivered 20 of the machines between 1963 and 1965 and solidified its reputation as a major contractor for the space program.[2] But the trip to the moon and back required a more sophisticated computer, and NASA turned to Silicon Valley to provide the next generation of on-board computers.

The Apollo project used a wide range of mainframes and minicomputers for planning missions and calculating guidance and navigation (G&N) applications, but one of its most crucial objectives was to develop a new on-board computer system. Computers on the ground could monitor information such as cabin pressure and detect flight deviations via a data communications link. But these earthbound mainframes were insufficient for the complex requirements of the new goal. While many computations could be conducted on the ground and radioed to the spacecraft using NASA’s Manned Space Flight Network (MSFN), it had been decided early on that computational capacity was needed on-board.[3]

The determination that an on-board computer was needed for spaceflight preceded President Kennedy’s 1961 declaration that landing on the moon would be a national goal. An onboard computer was desired for several reasons. One, there was a fear that a hostile source might jam the radio signals transmitted to the spacecraft. Two, concerns were raised that multiple concurrent missions could saturate the communications system. Three, it was determined that future manned interplanetary missions would certainly require onboard computerization, and it was better to start testing such a system as soon as possible. Four, the physics of transmitting a signal to the moon and back imposed a round-trip delay of roughly 2.6 seconds, making quick decisions during a hazardous lunar landing impractical. “The choice, later in the program, of the lunar orbit rendezvous method over a direct flight to the Moon, further justified an on-board computer since the lunar orbit insertion would take place on the far side of the Moon, out of contact with the earth.”[4]
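That signal delay follows directly from the speed of light. A quick back-of-the-envelope check, using the average Earth–Moon distance (the spacecraft’s actual range varied over a mission):

```python
# Round-trip radio delay between Earth and the Moon.
# Both constants are well-established physical values.

SPEED_OF_LIGHT_KM_S = 299_792.458  # km per second
AVG_EARTH_MOON_KM = 384_400        # average center-to-center distance

one_way = AVG_EARTH_MOON_KM / SPEED_OF_LIGHT_KM_S
round_trip = 2 * one_way

print(f"One-way delay:    {one_way:.2f} s")     # ~1.28 s
print(f"Round-trip delay: {round_trip:.2f} s")  # ~2.56 s
```

Even before adding processing time on the ground, that is far too long a loop for a pilot reacting to terrain during a landing.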

As with the Gemini Digital Computer, it was crucial to develop a computer system for the Apollo spacecraft that was small enough to minimize additional weight, yet powerful enough to coordinate complex activities onboard. A few months after Kennedy’s proclamation, NASA contracted with the MIT Instrumentation Lab to design the Apollo’s computer. The MIT lab had previously worked for the government on the guidance systems for the Polaris and Poseidon missiles, and the team moved almost in its entirety to the Apollo project. This was also the first year that Fairchild began serious production of integrated circuits. ICs were still a largely untested technology, but NASA’s commitment meant a nearly unlimited budget to ensure their reliability and low power consumption, as well as guaranteed availability over the duration of the Apollo project. NASA agreed to use Fairchild’s 3-input NOR gate ICs to construct the AGC, despite the high probability that new developments would soon eclipse this technology. NASA committed totally to the new “chips,” designing both onboard and ground equipment to use them.

The IC, however, was an unproven technology and required substantial support to become viable for the moon project. Through extensive testing, Fairchild, MIT, and NASA began to identify the main causes of IC malfunctions and to develop procedures for screening out defective chips. Stress tests subjected the chips to high temperatures, centrifuge g-forces, and extensive vibration. Early on, the major problem identified was poor workmanship. As a result, dedicated assembly lines were set up to produce ICs solely for the Apollo project’s computers. To keep worker motivation high, Apollo crew members were brought in to cement a relationship between the astronauts and the assembly-line workers, and posters were displayed around the plant to remind workers of the historic importance of their toil. Afterward, MIT’s Instrumentation Lab conducted rigorous tests to ensure reliability and returned all defective chips. As a result, a failure rate of only 0.0040% per 1,000 hours of operation was achieved.[5]
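To put that failure rate in perspective, here is a rough estimate of the expected number of chip failures over one mission. The chip count and mission length below are illustrative assumptions, not figures from the Apollo record:

```python
# What a 0.0040% per 1,000-hour failure rate means in practice.
# chips and mission_hours are illustrative assumptions, not Apollo data.

FAILURE_RATE_PER_1000H = 0.0040 / 100  # fraction of chips failing per 1,000 hours

chips = 4_000        # assumed number of ICs in service (illustrative)
mission_hours = 200  # roughly the length of a lunar mission (illustrative)

expected_failures = chips * FAILURE_RATE_PER_1000H * (mission_hours / 1_000)
print(f"Expected chip failures per mission: {expected_failures:.3f}")  # ~0.032
```

In other words, under these assumptions there is only about a 3% chance of even a single chip failing during an entire mission, which is the level of reliability the screening program was buying.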

In May of 1962, MIT and NASA chose Massachusetts-based Raytheon, a major military electronics contractor, to build the Apollo Guidance Computer (AGC). Called the “Block I,” the first AGC was used on three Apollo space flights between August 1966 and April 1968. It used a digital display and keyboard, guided the vehicle in real-time, incorporated an all-digital design, and had a compact, protective casing. It ran from a 2.048 MHz clock, had a 2K-word erasable core memory, and could perform roughly 40,000 simple additions per second.
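The AGC’s instruction rate can be sanity-checked from its timing. The sketch below assumes the commonly cited 11.7-microsecond memory-cycle time and that a basic add instruction consumed two memory cycles; both figures are assumptions drawn from later AGC documentation:

```python
# Deriving an approximate AGC instruction rate from its timing.
# MEMORY_CYCLE_US and CYCLES_PER_ADD are assumed figures, not from this post.

MEMORY_CYCLE_US = 11.7  # microseconds per memory cycle (commonly cited)
CYCLES_PER_ADD = 2      # memory cycles consumed by a basic add instruction

adds_per_second = 1_000_000 / (MEMORY_CYCLE_US * CYCLES_PER_ADD)
print(f"Simple additions per second: {adds_per_second:,.0f}")  # ~42,700
```

Modest by any modern standard, but enough to run guidance, navigation, and display tasks in real time.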

When Raytheon built the 12 computers ordered by NASA, it used about 4,000 integrated circuits purchased from Fairchild Semiconductor, a major portion of the world’s IC output at the time.[6] Although initially quite expensive, the price of an IC dropped to $25 after Philco-Ford began supplying them in 1964. The Block I was a transitional device, but it became an important part of the historical role of the Apollo space program.

A second AGC was designed for the final push to the Moon. A faster and lighter version was needed once mission plans called for the Lunar Module to land men on the Moon’s surface and return them to the Command Module before the trip back to earth. While the “Block I” was essentially a copy of the Polaris missile guidance computer, the second version was built on newer integrated circuit technology. The “Block II” was a more sophisticated device, with a larger memory (37,000 words) and a weight of only 31 kilograms. The need for equipment light enough for the lunar landing and reliable enough for the long trip made the new version essential. On July 16, 1969, Apollo 11 blasted off from Cape Canaveral, Florida, guided by the AGC toward its Moon destination. On July 20, the lunar module Eagle undocked from the command module Columbia, and its astronauts guided its descent onto the Mare Tranquillitatis (Sea of Tranquility), where it touched down safely some three hours later.

Surprisingly, the chips used in the later Apollo flights were significantly out of date. It was more important to NASA that its technologies be dependable and fit into existing systems than that they be faster or execute more instructions per second. Staying with proven chips created stability and provided an early revenue base for Silicon Valley as the integrated circuit established itself. The “Block II” AGC, with its early integrated chips, remained largely unchanged through the six lunar landings, three Skylab missions, and the linking of Apollo with the Russian Soyuz spacecraft. Plans to expand the computer to 16K of erasable memory and 65K of fixed memory were never implemented, as the Apollo program was shut down after the Apollo-Soyuz Test Project (ASTP) in July 1975. The Space Shuttle would use the IBM AP-101 avionics computer, also used in the B-52 bomber and F-15 fighter, which shared a similar architecture with the IBM System/360 mainframes.

While the military and other government-funded programs (such as the NSA and the CIA) would continue to support advances in microprocessing, the private sector (particularly finance) would start to play an increasingly important role. Wall Street and other financial institutions were in an automation crisis as paper-based transactions continued to accumulate, and volatility wracked the foreign exchange markets as currency controls were lifted in the early 1970s.

Although government spending would continue to support the microelectronics industry, new developments in the 1970s would drive commercial demand for silicon-based processing. First, international finance’s move away from the gold standard would dramatically increase the effective demand (demand backed by the money to actually pay) for computing resources. Second, the personal computer would create a new market for Silicon Valley’s microprocessing industry, as computer hobbyists, business users, and then the Internet created a widespread popular market for devices enabled by this technology. Both developments, though, would rest on a dramatic innovation: the “computer-on-a-chip” by Fairchild spin-off Intel. Instead of just a number of switches on an integrated circuit, this device began to embed the different parts of a computer into the silicon. It was this so-called “microprocessor” that would make computers accessible to the public and to general commerce.


[1] NASA provides excellent information on the historical aspects of the space program’s use and development of computers in Computers in Spaceflight: The NASA Experience. Accessed October 26, 2001.
[2] “Chapter One: The Gemini Digital Computer: First Machine in Orbit,” in Computers in Spaceflight: The NASA Experience. Accessed October 26, 2001.
[3] Information on NASA’s data communications has been compiled and edited by Robert Godwin in The NASA Mission Reports, published by Apogee Books of Ontario, Canada.
[4] Again I am indebted to the online version of Computers in Spaceflight: The NASA Experience. Information on the need for an on-board computer is from “Chapter Two: Computers On Board the Apollo Spacecraft.” Accessed October 26, 2001.
[5] Nathan Ickes’s website, The Apollo Guidance Computer: Advancing Two Industries, is a valuable resource, given that many published books about the space program fail to investigate the importance of the computer to the program’s success and, coincidentally, its impact on the computer industry. For information on Fairchild’s quality control procedures, see “Proving a Technology: Integrated Circuits in the AGC.”
[6] Information on the Apollo AGC garnered from Dr. Dobb’s “One Giant Leap: The Apollo Guidance Computer,” History of Computing #6. The Smithsonian’s National Air and Space Museum also has a valuable site on the Apollo Guidance Computer. Both accessed October 26, 2001.



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

