Vacation
Posted on | August 28, 2016 | No Comments
Enjoyed a month of vacation, with some work thrown in of course. San Francisco to meet with my favorite venture capitalist, and home to Austin. Took a desert road trip with family from Phoenix to Las Vegas with a lovely stop in Sedona. Took my daughter to Miami, the Bahamas, Cape Canaveral and Cocoa Beach, Florida. Then New York for a few days of lake life in the Catskills, minus a day trip to Stony Brook University on Long Island. Manhattan and back to Washington Square Park where we lived for 9 years. Farewell dinner with friends at the restaurant in Koreatown where my daughter decided to be born 10 days early (spicy food!). Got an extra day in Austin when my flight was canceled due to a typhoon in Japan. Many thanks to all who contributed to our trip. Too many to mention, but so much for which to be thankful.
© ALL RIGHTS RESERVED
Xbox One – Extending Virtual Reality and Multi-Player Games
Posted on | August 2, 2016 | No Comments
Today we got the first look at the new Xbox game console, the Xbox One S. It’s been three years since the original Xbox One was introduced in a broadcast live on Spike TV from the Microsoft campus in Redmond, Washington, where representatives from Microsoft’s Xbox team and strategic partners such as Activision and EA demonstrated and hyped the new system. The new Xbox One S is smaller and lighter, with more processing power and 2TB of internal storage. More relevant for this discussion are the backward compatibility (including Xbox 360 classics) and the continuing trend towards immersion and augmented/virtual environments.
It’s clear that Microsoft doubled down on the living room and large screen 1080p HDTV. Despite the recent popularity of mobile games, the Xbox One series was designed to return us to an immersive gaming experience within the home environment that integrates games with movies, music, live TV, Skype, and web browsing.
It’s too early to evaluate the Xbox One S, but I wanted to review one related technological trajectory, virtual reality (VR) and its relationship to the gaming experience and related industries. The new Xbox One Architecture combined the Xbox OS with Windows and a new connective tissue that works with Kinect to respond to voice, gestures, and body movements. With games such as Call of Duty: Ghosts designed for it, Microsoft promised a whole new level of immersive gameplay.
For a variety of reasons, Kinect was cancelled by Microsoft in late 2017.
Virtual reality began as an idealistic notion of the early 1990s, popularized through avant-garde magazines such as MONDO 2000, non-fiction best-sellers like Howard Rheingold’s (1991) Virtual Reality, sci-fi novels like Neal Stephenson’s (1992) Snow Crash, and cyberfiction movies like The Lawnmower Man (1992). Star Trek: The Next Generation provided the most dramatic example of what virtual reality could be like with its Holodeck. But VR’s future, at least its immediate future, was in the gaming industry.
Drawing on flight simulation technology and research, VR captured the imagination of pre-Web techno-enthusiasts deliberating the future of what William Gibson termed “cyberspace.” Conceptualized with electronic accessories such as high-definition LCD goggles, surround sound, fiber-optic-laced gloves, and pressure-sensitive body suits, VR was designed to simulate the world in a computerized artificial reality. It was conceived as a system that would suspend the viewer’s awareness that the environment is artificially produced and immerse them in a highly responsive, multi-sensory apparatus. The Renaissance invention of perspectival art, with its vanishing point creating a first-person view, proved to be one of the most important drivers of VR, as it focuses attention and reinforces the ego.
While most of the technology and associated software was developed for various simulation devices, it was the digital game industry that fully capitalized on this innovation. Castronova pointed to three reasons for this new path.
- One was that the digital game environment focused for the most part on software, not hardware. From Magnavox’s Odyssey in 1972 to Microsoft’s Xbox 360 in 2005, the game console proved to be a crucial platform for video game play, but it was “killer app” software like Pong and Pac-Man that propelled the industry’s success. VR development, on the other hand, was dominated by gadgetry such as the goggled helmet, the force-feedback glove, and the sensor-laden body suit. The console was a major contributor to the video game explosion, but it was a series of good games that propelled the development of virtual game environments.
- The second reason VR was less successful than game virtual environments was that the virtual reality industry was pushed more by research concerns than by commercial concerns. The game industry on the other hand had no compunctions about its profit-making origins and goals.
- The third reason “the game version of VR” proved more successful was that it focused “on communities, not hardware.” From shoot-em-up Quake II free-for-alls on networked PCs to the programmed pandemonium of Atari Test Drive on Xbox Live, the social experience has been central to the success of the game experience.[1]
It was a young company named id Software that pioneered many of the virtual environment features that characterize the contemporary game environment. The small Texas-based company used the ego-centric perspective to create the first-person shooter (FPS) game Wolfenstein 3D in May of 1992. id followed with the extraordinarily successful DOOM in December 1993. The game extended an image of a weapon into the vanishing point to orient the player’s perspective as they hunted a variety of monsters through research facilities on the moons of Mars. DOOM combined a shareware business model with the nascent distribution capabilities of the Internet. Just weeks after NCSA released version 1.0 of its Mosaic browser as freeware over the Web, DOOM enthusiasts by the droves were downloading the game by FTP to their PCs, many of them over a 14.4 kbps modem. The first episode of the game was freeware, while two further episodes and several new weapons could be purchased for a modest sum.
In a prescient move, id kept DOOM’s level data in open WAD files and later released the game’s source code to its users. This allowed fans to create their own 2.5D (not quite 3-D) levels and distribute them to other players. Opening up the game also enabled new modifications called “mods,” including a popular one that had characters from The Simpsons animated TV show running around the DOOM environment, with Homer Simpson able to renew his health by finding and eating donuts. The US Marine Corps created a version called Marine DOOM, designed to train fire teams in coordination and decision-making. Many of the company’s new employees were recruited because they had developed expertise by designing their own mods.
id’s innovation streak didn’t stop there, as the company also pioneered multiplayer capability. While other games had developed an interactive mode between two players, DOOM allowed up to four players over local area networks (LANs) or modems. Its next games, QUAKE and QUAKE II, increased the capacity, eventually to 32 players, while using true 3-D graphics to create virtual worlds of stunningly immersive environments and player mobility.
Multiplayer games took off with QUAKE II and have since morphed into multiple variations, including the Massively Multiplayer Online Game (MMOG) that can involve hundreds of players at a time. One of the major early innovators of the MMOG was Archetype Interactive, which conceived Meridian 59 using DOOM graphics technology from id and sold it to 3DO, the company that coined the term “Massively Multiplayer” to market the innovative game.[3] It was Ultima Online that proved there was a market for online multiplayer games. Based on the popular Ultima series, its subscriber base grew to over 200,000 in over 100 countries. But Ultima Online was also the first to face a number of technical and community problems, including synchronizing the game experience for all participants and establishing a system of player etiquette. In 1999, Sony Online Entertainment (SOE) opened up its EverQuest universe online. It made national news when players started selling virtual items on eBay, and it established the validity of an online 3D role-playing game. Motivated by EverQuest’s success, Microsoft pushed up the release of its Asheron’s Call on its Zone.com gaming site.
Virtual worlds have morphed into a wide variety of environments and games for all ages. MMOGs emerged as one of the biggest revenue producers of online games and are expected to remain so in the near future, with on-demand games running a fairly close second. These games connect hundreds to thousands of game players in a virtual environment that often includes its own internal economy. The “fairies and elves” genre, and particularly World of Warcraft, reigned. At its peak it had upwards of 12 million subscribers sending in $30 million a month in subscription fees. But other games like RuneScape are challenging its dominance, and sci-fi games like Eve Online and Planet Calypso have also prospered lately.
So what is the future of gaming in virtual reality, or what the Xbox people are calling “living and persistent worlds”?[4] Nintendo released the Wii on November 19, 2006, notable for a remote that could be used as a handheld pointing device. Gamers flocked to its sports package, with games like tennis and baseball that could be played virtually using the Wii Remote as a racket or a bat. Microsoft responded with the Kinect in 2010, which could sense body movements. It immediately broke records by selling over 8 million units in its first two months. The Xbox One has an improved Kinect that reads its environment with an HD camera, taking in some 2 GB of photonic information every second with its Time of Flight (TOF) technology. Its algorithms allow it to register the details of each body it scans, gauging the direction and balance of the skeletal system, the energy of each motion, and the force of each impact, and even monitoring the heart rate of each player.
The Xbox may not be living up to VR ideals of sensory force feedback and other forms of haptic connectivity, but its level of popularity suggests that successful gameplay is often achieved. Kinect provides a level of bodily interaction that has made games like Dance Central and Dance Central 2 quite popular. The Xbox controller, despite a relatively steep learning curve and limited body engagement, provides a number of options that, once learned, add levels of complexity that reward those who master them.
For a successful virtual engagement, it appears that what is most important is that a level of psychic/cognitive stimulation is achieved by participating in an artificial challenge or conflict that operates within defined parameters or rules, and results in an observable change or quantifiable result. In other words – a game.[5] As long as these conditions are being met we can expect a rich pattern of future innovation in this area.
Notes
[1] From Castronova’s “Appendix: A Digression on Virtual Reality”, in Synthetic Worlds: The Business and Culture of Online Games. p. 285.
[2] Anthony J. Pennings, “The Telco’s Brave New World: IPTV and the “Synthetic Worlds” of Multiplayer Online Games” for the Pacific Telecommunications Council Proceedings. January 15-18, 2005 Honolulu, Hawaii.
[3] Information on the first MMOGs from “Alternate Reality: The History of Massively Multiplayer Online Games,” by Steven L. Kent, Sept. 23, 2003. Retrieved from GameSpy.com on November 28, 2005.
[4] Marc Whitten’s presentation on the technical aspects of Xbox One was broadcast live on Spike TV.
[5] For a great explanation of games and gameplay read Rules of Play: Game Design Fundamentals by Katie Salen and Eric Zimmerman.
Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.
CONTENDING “INFORMATION SUPERHIGHWAYS”
Posted on | July 16, 2016 | No Comments
During the 1980s, before the reality of the Internet, a new communications infrastructure was initiated based on digital technologies. Propelled largely by growing demand for new microprocessor-based business services and fuelled by the availability of low-grade “junk bonds,” companies like MCI, Tele-Communications Inc. (TCI), Turner Broadcasting, and McCaw Cellular raised over $20 billion to lay fiber-optic networks and implement new digital services such as videotext and interactive television.
Soon several modes of telecommunications were competing for the title of “information superhighway,” a popular metaphor for the changes happening to data communications and the potential for expanded telecommunications services such as interactive television. Generally attributed to then Senator Al Gore, the term was co-opted by the Bell companies who finally saw new opportunities coming with the digital revolution. For instance, Bell Atlantic and TCI attempted to form a merger that would offer interactive information services and video-on-demand over both cable and telephone lines.
Although the Internet was already twenty years old, it still had not achieved the type of technical robustness needed to capture popular and commercial attention. Wireless communication was growing, but it was still primarily used by an elite business class due to the lack of dedicated spectrum and wide-scale infrastructure problems. Cable TV was also a contender, with an extensive subscriber base and a coaxial and fiber infrastructure built up during the late 1980s. Satellite was also in the running, with dramatically increased power capabilities resulting from the continuing development of solar power. The ability to efficiently transform sunlight into signal-radiating power allowed smaller and smaller “earth station” antennas to pick up broadcast and narrowcast signals. The power utilities were a longshot; they had developed technology to transmit data along their electrical lines. They lacked installed capacity but had good maintenance teams, billing systems, and ready access to homes and other buildings.
Which technology was going to rise to this status? Despite two decades of existence, the Internet was relatively archaic with no World Wide Web and few high-speed backbone networks. Wireless systems lacked the spectrum or infrastructure for broadband transmission over significant geographic domains. Interactive television was becoming a possibility as the FCC rolled back restrictions on the common carriers providing content, but despite ADSL over copper and fiber-to-the-home, software and content factors proved major limitations.
Interactive consumer services got their start with videotext offerings, but the terminals were large and awkward, and they displayed only textual information. Telephone companies soon began testing other, richer electronic services.
Divested from AT&T in the early 1980s and deprived of the lucrative long-distance services, the Regional Bell Operating Companies (RBOCs) such as Ameritech, Bell Atlantic, BellSouth, and others sought to take advantage of their monopolies over local telecommunications by providing such services as ISDN and interactive television.
By the early 1990s, the Baby Bells were conducting tests using ADSL (Asymmetric Digital Subscriber Line) to provide video over existing copper lines to the home. Deterred by the costs of providing fiber to the home, telcos looked to leverage their existing plant. ADSL could send compressed video over the established telephone lines. It was suited to this task because it could send data downstream to the subscriber faster (256 Kbps-9 Mbps) than upstream (64 Kbps-1.54 Mbps) to the provider.[1]
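To make the asymmetry concrete, here is a quick back-of-the-envelope sketch in Python. It is my own illustration, not from the original sources; the specific rates and file size are hypothetical values chosen from within the ranges quoted above.

```python
# Why ADSL's asymmetry suited video delivery: the big payload flows
# downstream to the subscriber, while only small requests flow upstream.
DOWNSTREAM_BPS = 6_000_000   # e.g., 6 Mbps toward the home (hypothetical)
UPSTREAM_BPS = 640_000       # e.g., 640 Kbps back to the provider (hypothetical)

def transfer_seconds(size_bytes: int, rate_bps: int) -> float:
    """Time to move size_bytes at rate_bps (bits per second)."""
    return size_bytes * 8 / rate_bps

# A 45 MB compressed video clip:
clip = 45 * 1024 * 1024
down = transfer_seconds(clip, DOWNSTREAM_BPS)
up = transfer_seconds(clip, UPSTREAM_BPS)
print(f"downstream: {down:.0f} s, upstream: {up:.0f} s")
```

The same clip that arrives in about a minute downstream would take nearly ten minutes going the other way, which is fine when the subscriber mostly clicks and watches.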
The Bell telcos were also intensively lobbying Washington, DC to create a favorable regulatory environment, specifically trying to overturn the provisions of the Cable Communications Policy Act of 1984 that excluded them from offering television services. Bell Atlantic even attempted to merge with cable TV giant TCI in anticipation of controlling the new information highways.
In the years following the Cable Communications Policy Act of 1984, concerns grew that the cable TV industry was becoming too powerful relative to other parts of the communications and media industry. Growing horizontal and vertical integration, as well as a subscriber base encompassing over 60% of American homes, threatened the telecommunications companies, which began to press their own claim to the household imagination.[2]
The information superhighway, as envisioned by the Bell Atlantic-TCI merger, ran into a roadblock when Congress overrode President Bush’s veto of the 1992 Cable Act. The new rules allowed the FCC under the Clinton administration to lower cable rates. The FCC implemented new econometric models that allowed it to reduce cable TV rates in select markets around the country without gutting cable companies’ revenues. When the FCC announced its new rulings in February 1994, TCI’s stock dropped, the deal fell through, and both companies announced that the new regulations had killed it.[3]
Well, we know how this story turns out. With its TCP/IP software, the Internet became the world’s “information superhighway.” Its ability to connect differing computers and operating systems had given it unprecedented connectivity in the computer world, and over the course of the 1990s, it became the preferred conduit for communications and netcentric commerce.
Notes
[1] The ADSL speed ranges listed reflect more current rates, as provided by Heidi V. Anderson, “Jump-Start Your Internet Connection with DSL,” in How the Internet Works, Part I. Smart Computing Reference Series. p. 105.
[2] Logue, T. (1990) “Who’s Holding the Phone? Policy Issues in an Age of Transnational Communications Enterprises,” PTC ’90 Pacific Telecommunications Council Annual Conference Proceedings, Honolulu, Hawaii. p. 96.
[3] Hundt, R.E (2000) You Say You Want a Revolution: A Story of Information Age Politics. Yale University Press. pp. 30-34.
Digital Games and Meaningful Play
Posted on | June 30, 2016 | No Comments
What is a game? What makes it fun? How can you design a game to provide a meaningful and rewarding experience? Rules of Play: Game Design Fundamentals by Katie Salen and Eric Zimmerman is a great blend of theory and practical application that helps us understand the importance of “gameplay,” the emotional relationship between player actions and game outcomes. The book helps explain what makes games, from baseball to virtual reality games, effective and meaningful. In this post, I look at some of the key ideas involved in understanding games, using baseball as a primary example.
A game is a structured form of play built around choices of action. It offers an organized way of making choices, taking action, and experiencing some kind of feedback. In other words, game players take some visible action, and the game responds with information that provides feedback to the player and subsequently changes the status of the game. Below is a picture of my daughter playing a virtual reality game of baseball.
The actions and subsequent outcomes need to be discernible – understandable and visible. And they need to be integrated into the game. In baseball, for example, a batter makes a decision to swing at a ball thrown by the pitcher. Several things can happen based on the trajectory of the pitch and the way the batter swings. She can swing and miss, or hit the ball for one of several results: foul ball, base hit, home run, pop out, etc.
The result of the action needs to be evident and contribute to the game. The foul ball is registered as a strike; a base hit can move runners or at least get the batter on base. A home run is an ultimate action in baseball as it adds immediately to the final score of the game.
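The cycle described above, action, discernible feedback, and a change in game state, can be sketched in code. This toy Python model is my own illustration, not from Salen and Zimmerman; it reduces an at-bat to a handful of discrete outcomes that update the state of the game.

```python
class AtBat:
    """Toy state machine: each action yields feedback and updates state."""

    def __init__(self):
        self.strikes = 0
        self.result = None  # set when the at-bat ends

    def pitch(self, outcome: str) -> str:
        # outcome is the discernible result of the batter's action
        if outcome in ("swinging strike", "foul ball"):
            # a foul ball counts as a strike only up to strike two
            if outcome == "swinging strike" or self.strikes < 2:
                self.strikes += 1
            if self.strikes == 3:
                self.result = "strikeout"
        elif outcome in ("base hit", "home run", "pop out"):
            self.result = outcome
        # the feedback: the new, visible status of the game
        return f"strikes={self.strikes}, result={self.result}"
```

Each call to `pitch` is integrated into the game: a foul registers as a strike, a third strike ends the batter, and a hit or out resolves the at-bat, exactly the kind of discernible, consequential feedback the paragraph describes.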
The many recognizable actions of a game are one reason “datatainment” is a prominent part of baseball. Hits, home runs, ERA, and strikeouts are all significant acts that can be distinguished and statistically registered on a baseball scoreboard or in the season “stats” of a player. Not only are players evaluated based on these measures, but fans of professional sports often take a keen interest in these numbers as part of an identification process with players. Sports teams look to deepen fan engagement by going beyond box scores to digitally-enabled fantasy sports and other forms of social involvement and entertainment.
Choices and actions change the game and create new meanings. They move the game forward. Strikes end batters; outs end innings. As the game moves forward, new meanings are created. Heroes emerge, a team pulls ahead, a team comes from behind. A good game drives emotional and psychological interest, either through a tribal allegiance to a team, an interest in a player, or a recognition of the stakes of a game, as in a championship such as the World Series. But in every case, the game must have discernible actions that have a meaningful impact on the progress and result of the game.
Citation APA (7th Edition)
Pennings, A.J. (2016, Jun 30). Digital Games and Meaningful Play. apennings.com https://apennings.com/meaningful_play/games-and-meaningful-play/
Xanadu to World Wide Web
Posted on | June 11, 2016 | No Comments
Tim Berners-Lee, a British citizen and a software consultant at CERN (Conseil Européen pour la Recherche Nucléaire), developed what came to be known as the World Wide Web (WWW). Located in Switzerland, CERN was Europe’s largest nuclear research institute, although the name was later changed to the European Laboratory for Particle Physics to avoid the stigma attached to nuclear research.
In March of 1989, Berners-Lee proposed a project to create a system for sharing information among CERN’s dispersed High Energy Physics research participants. This information management system would form the basis of the World Wide Web, especially after 1994, when he founded the World Wide Web Consortium (W3C), a standards organization that began to guide the Web’s interoperable technologies with specifications, guidelines, software, and tools for web addresses (URLs), the Hypertext Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML). These technologies allowed web browsers like Mosaic, Netscape, Internet Explorer, and later Firefox and Chrome, to access data and display web pages.
Berners-Lee wanted to create a system where information from various sources could be linked and accessed, creating a “pool of human knowledge.” Using a NeXT computer built by Steve Jobs’ post-Apple company, he wrote the prototype for the World Wide Web and a basic text-based browser called Nexus. The NeXT computer had a UNIX operating system, built-in Ethernet, and a version of the Xerox PARC graphical user interface that Jobs had transformed into the Apple Mac. Berners-Lee credited the NeXT computer with having the functionality to speed up the process, saving him perhaps a year of coding.
Dissatisfied with the limitations of the Internet, Berners-Lee developed this new software around the concept of “hypertext,” which Ted Nelson had popularized in Computer Lib, a 1974 manifesto about the possibilities of computers. Nelson warned against leaving the future of computing to a priesthood of computer center guardians that served the dictates of the mainframe computer.
Nelson had coined the terms “hypertext” and “hypermedia” as early as 1965 in connection with his Xanadu project. Xanadu was the name of Kublai Khan’s mythical summer palace, described by the enigmatic Marco Polo: “There is at this place a very fine marble palace, the rooms of which are all gold and painted with figures of men and beasts and birds, and with a variety of trees and flowers, all executed with such exquisite art that you regard them with delight and astonishment.” Nelson strove to transform the computer experience with software and display technology that would make reading and writing an equally rich “Xanadu” experience.
An important transition technology was HyperCard, a computer application that allowed the user to create stacks of connected cards that could be displayed as visual pages on an Apple Macintosh screen. Using a scripting language called HyperTalk, each card could show text, tables, and even images. “Buttons” could be installed on each card that linked it to other cards within the stack with a characteristic “boing” sound clip. Later, images could be turned into buttons. HyperCard missed out on historical significance because of Apple’s “box-centric culture,” according to HyperCard inventor Bill Atkinson. He later lamented, “If I’d grown up in a network-centric culture, like Sun, HyperCard might have been the first Web browser.” [1]
Berners-Lee accessed the first web page, on the CERN web server, on Christmas Day, 1990. He spent the next year adding content and flying around the world to convince others to use the software. Concerned that a commercial company would copy the software and create a private network, he convinced CERN to release the source code under a general license so that developers could use it freely. One example was a group of students at the University of Illinois at Urbana-Champaign’s National Center for Supercomputing Applications (NCSA), which was part of the NSFNET. Marc Andreessen and other students created the Mosaic browser, which they distributed for free using the Internet’s FTP (File Transfer Protocol). They soon left for Silicon Valley, where they got venture capital to create Netscape, a company designed around their web browser, Netscape Navigator.[2]
Berners-Lee designed the WWW with several features that made it extremely effective.
First, it was based on open systems that allowed it to run on many computing platforms. It was not designed for a specific proprietary technology but rather would allow Apples, PCs, Sun Workstations, etc. to connect and exchange information. Berners-Lee compared it to a market economy where anyone can trade with anyone on a voluntary basis.
Second, it actualized the dream of hypertext, the linking of a “multiverse” of documents on the WWW. While Ted Nelson would criticize its reliance on documents, files, and traditional directories, the “web” would grow rapidly.[3]
Third, it used a hypertext transfer protocol (HTTP) to create a direct connection from the client to the server. With this protocol and taking advantage of packet-switching data communications, the request for a specific document is sent to the server and either the requested document is sent or the client is notified that the document does not exist. The power of this system meant that the connection was closed quickly after the transaction, saving bandwidth and freeing the network for other connections.
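This request-response-disconnect pattern can be sketched in a few lines of Python. The host and path below are placeholders of my own, and the sketch only builds and parses the HTTP messages rather than opening a real network connection.

```python
# A sketch of the short-lived HTTP transaction described above: the
# client sends one request, the server answers with a status code and
# (if it exists) the document, and the connection closes.

def build_request(host: str, path: str) -> bytes:
    # HTTP/1.0 closes the connection after each response by default,
    # which is exactly the "transact and hang up" behavior described.
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"\r\n").encode("ascii")

def parse_status(response: bytes) -> int:
    # e.g. b"HTTP/1.0 200 OK\r\n..." -> 200; 404 means the document
    # the client asked for does not exist on that server.
    status_line = response.split(b"\r\n", 1)[0]
    return int(status_line.split(b" ")[1])
```

The brevity of the exchange is the point: once the status and content come back, the connection is released for other clients.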
Fourth, it also worked with the existing Internet infrastructure and integrated many of its basic protocols, including FTP, Telnet, Gopher, e-mail, and News. FTP was particularly important for the distribution of software, including browsers. Newsgroups informed people all around the NSFNET that the technology and associated browsers were available.
Another crucial feature was that content could be created using a relatively easy-to-use markup language called Hypertext Markup Language (HTML). HTML was a simplified version of another markup language used by large corporations, the Standard Generalized Markup Language (SGML). HTML was more geared towards page layout and format, while SGML was better for document description. Generalized markup describes the document to whatever system it works within. HTML and SGML would form a symbiotic relationship and eventually lead to powerful new languages for e-commerce and other net-centric uses, like XML (eXtensible Markup Language) and HTML5.
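As a rough illustration (the page snippet is invented), Python’s standard-library HTML parser can walk this kind of tag-based markup and pick out the hypertext links that make the “web” a web:

```python
# Extract the outbound hyperlinks from a small, made-up HTML page.
from html.parser import HTMLParser

PAGE = """<html><body>
<h1>Hypertext</h1>
<p>See <a href="http://www.example.org/next.html">the next page</a>.</p>
</body></html>"""

class LinkCollector(HTMLParser):
    """Collects the href targets of every <a> anchor tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

collector = LinkCollector()
collector.feed(PAGE)
# collector.links now holds the page's outbound hyperlinks
```

Every browser does a richer version of this same parse, turning markup into layout and anchors into clickable links.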
Finally, Berners-Lee developed the uniform resource locator (URL) as a way of addressing information. The URL gave every file on the WWW, whether it was a text file, an image file, or a multimedia file, a specific address that could be used to request and download it.[4]
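A quick sketch with Python’s standard library shows the parts of a URL (a hypothetical address of my own) that the client acts on:

```python
# Splitting a URL into the components a web client uses.
from urllib.parse import urlsplit

parts = urlsplit("http://www.example.org/pub/notes.html")
# parts.scheme -> which protocol to speak (HTTP)
# parts.netloc -> which server to contact
# parts.path   -> which file to request from that server
```

Because every file has such an address, any document, image, or multimedia file on any server can be requested with one uniform mechanism.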
Together, these features defined a simple transaction that was the basis of the World Wide Web. In summary, the user or “client” establishes a connection to the server over the packet-switched data network of the Internet. Using the address, or URL, the client issues a request to the server specifying the precise web document to be retrieved. Next, the server responds with a status code and, if available, the content of the information requested. Finally, either the client or the server disconnects the link.
The beauty of the system was that its drain on the Internet was limited. Rather than tying up a whole telecommunications line as a telephone call would do, HTTP allowed the information to be downloaded (or not) and then the connection would be terminated. The World Wide Web began to allow unprecedented amounts of data to flow through the Internet, changing the world’s economy and its communicative tissue.
In another post I discuss how important hypertext clicks are to the advertising industry through the tracking of various metrics.
Notes
[1] Kahney, L. (2002, August 14). HyperCard: What Could Have Been. Retrieved June 10, 2016, from http://www.wired.com/2002/08/hypercard-what-could-have-been/
[2] Greenemeier, L. (2009, March 12). Remembering the Day the World Wide Web Was Born. Retrieved June 11, 2016, from http://www.scientificamerican.com/article/day-the-web-was-born/
[3] Banks, L. (2011, April 15). Hypertext Creator Says Structure of World Wide Web ‘Completely Wrong’. Retrieved June 11, 2016. Also Greenemeier, L. (2009, March 12). Remembering the Day the World Wide Web Was Born. Scientific American. Retrieved May 13, 2017.
[4] Richard, E. (1995) “Anatomy of the World Wide Web,” INTERNET WORLD, April. pp. 28-20.
Citation APA (7th Edition)
Pennings, A.J. (2016, Jun 11). From Xanadu to World Wide Web. apennings.com https://apennings.com/how-it-came-to-rule-the-world/xanadu-to-world-wide-web/
© ALL RIGHTS RESERVED
Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.
Tags: HTML > HTML5 > HTTP > HyperCard > NeXt Computer > Ted Nelson > Tim Berners-Lee > URL > World Wide Web > Xanadu Project
Cisco Systems: From Campus to the World’s Most Valuable Company, Part One: Stanford University
Posted on | May 24, 2016 | No Comments
Cisco Systems emerged from employees and students at Stanford University in the early 1980s to become the major supplier of the Internet’s enigmatic plumbing. In the process, its stock value increased dramatically and it became the largest company in the world by market capitalization. Cisco originally produced homemade multi-platform routers to connect campus computers through an Ethernet LAN, and throughout the 1980s it built the networking technology for the National Science Foundation’s NSFNET. As the World Wide Web took off during the 1990s, Cisco helped countries around the world transition their telecommunications systems to Internet protocols. Cisco went public on February 4, 1990, with a valuation of $288 million. In March 2000, Cisco Systems was calculated to be the world’s most valuable company, worth $579.1 billion to second-place Microsoft’s $578.2 billion. Microsoft had replaced General Electric in the No. 1 ranking in 1998.
This post will present the early years of Cisco Systems’ development and the creation of networking technology on the Stanford University campus. The next post will discuss the commercialization and success of Cisco Systems as it helped create the global Internet, beginning with the commercialization of multi-protocol routers for local area networks.
Leonard Bosack and Sandra K. Lerner formed Cisco Systems in the early 1980s and were the driving forces of the young company. The couple met at Stanford (Bosack earned a master’s in computer science in 1981; Lerner received a master’s in statistics the same year) while each managed the computer facilities of a different department and, incidentally (or perhaps consequently), they were dating. The two departments were located at different corners of the campus, and the couple began working together to link them to each other and to other organizations scattered around the campus. Drawing on work being conducted at Stanford and in Silicon Valley, they developed a multi-protocol router to connect the departments. Bosack and Lerner left Stanford University in December 1984 to launch Cisco Systems.
Robert X. Cringely, author of Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can’t Get a Date, interviewed both founders for his PBS video series, Nerds 2.0.1.
Bosack and Lerner happened on their university positions during a very critical time in the development of computer networks. The Stanford Research Institute (SRI) was one of the four original ARPANET nodes, and the campus later received technology from Xerox PARC, particularly the Alto computers and the Aloha Network technology, now known as Ethernet.[1] This technology, originally developed at the University of Hawaii to connect the different islands, was improved by Robert Metcalfe and other Xerox PARC researchers and granted to Stanford University in late 1979.[2] Ethernet technologies needed router technology to network effectively and interconnect different computers and Ethernet segments.
A DARPA-funded effort during the early 1970s at Stanford had involved research to design a new set of computer communication protocols that would allow several different packet networks to be interconnected. In June of 1973, Vinton G. Cerf started work on a novel network protocol with funding from the new IPTO director, Robert Kahn. DARPA was originally interested in supporting command-and-control applications and in creating a flexible network that was robust and could adjust to the changing situations to which military officers are accustomed. In July 1977, initial success led to a sustained effort to develop the Internet protocols known as TCP/IP (Transmission Control Protocol and Internet Protocol). DARPA and the Defense Communications Agency, which had taken over the operational management of the ARPANET, supplied sustained funding for the project.[3]
The rapidly growing “Internet” was implementing the new DARPA-mandated TCP/IP protocols. Routers were needed to “route” packets of data to their intended destinations. Every packet of information has an address that helps it find its way through the physical infrastructure of the Internet. Stanford had been one of the original nodes on ARPANET, the first packet-switching network. In late 1980, Bill Yeager was assigned to build a network router as part of the SUMEX (Stanford University Medical Experimental) initiative at Stanford University. Using a PDP-11, he first developed a router and TIP (Terminal Interface Processor). Two years later he developed a Motorola 68000-based router and TIP using experimental circuit boards that would later become the basis for the workstations sold by Sun Microsystems.[4]
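The core job of a router, picking the best matching route for a packet's destination address, can be sketched with a toy forwarding table (the prefixes and interface names are made up for illustration; real routers of the era did this in hand-tuned systems code):

```python
import ipaddress

# A toy forwarding table: destination prefix -> next hop.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "interface-A",
    ipaddress.ip_network("10.1.0.0/16"): "interface-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def route(dest: str) -> str:
    """Forward to the most specific (longest) prefix containing dest."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(route("10.1.2.3"))  # inside 10.1.0.0/16 -> interface-B
print(route("10.9.9.9"))  # only 10.0.0.0/8 matches -> interface-A
print(route("8.8.8.8"))   # no specific match -> default-gateway
```

The longest-prefix rule is what lets one table serve both a specific department subnet and a catch-all route to the rest of the network.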
Bosack and Lerner had operational rather than research or teaching jobs. Len Bosack was the Director of Computer Facilities for Stanford’s Department of Computer Science, while Sandy Lerner was Director of Computer Facilities for Stanford’s Graduate School of Business. Despite their fancy titles, they had to run wires, install the protocols, and get the computers and networks working and usable for the university. Bosack had worked for DEC, helping to design the memory management architecture for the PDP-10. At Stanford, many different types of computers (mainframes, minis, and microcomputers) were in demand and used by faculty, staff, and students.
Some 5,000 computers were scattered around the campus. The Alto computer, in particular, was proliferating. Thanks to Ethernet, computers were often connected locally, within a building or a cluster of buildings, but no overall network existed throughout the campus. Bosack, Lerner, and colleagues such as Ralph Gorin and Kirk Lougheed worked on “hacking together” some of these disparate computers into the multi-million-dollar broadband network being built on campus. But it was running into difficulties. They needed to develop “bridges” between local area networks and then crude routers to move packets. At the time, routers were not offered commercially. Eventually, their “guerrilla network” became the de facto Stanford University Network (SUNet).
Notes
[1] Stanford networking experiments included those in the AI Lab, at SUMEX, and the Institute for Mathematical Studies in the Social Sciences (IMSSS).
[2] Ethernet Patent Number: 04063220 , Metcalfe, et al.
[3] Vinton G. Cerf was involved at Stanford University in developing TCP/IP and later became Senior Vice President, Data Services Division at MCI Telecommunications Corp. His article “Computer Networking: Global Infrastructure for the 21st Century” was published on the www and accessed in June 2003.
[4] Circuit boards for the 68000 TIP were developed by Andy Bechtolsheim in the Computer Science Department at Stanford University.
Tags: Cisco Systems > Leonard Bosack > Sandra K. Lerner > Silicon Valley > SUNet
The NSFNET is the Internet
Posted on | May 20, 2016 | No Comments
An important intermediary in the transition of the military’s ARPANET into the commercial Internet was the National Science Foundation’s NSFNET. The NSFNET adopted TCP/IP and required all connecting nodes to use the protocols, along with compliant network technology, mainly built by a small California startup company called Cisco. With government funding for advanced scientific and military research, the network expanded rapidly to form the initial Internet. Without the NSFNET, the Internet would have grown differently, probably using the X.25 protocols developed by the phone companies. Without the mandate for TCP/IP, the Internet would have emerged with significantly less interoperability and diversity of services.
The NSFNET has its origins at the University of Maryland during the 1982-83 school year. The university was looking to connect its campus computers as well as to network with other colleges. It applied to the National Science Foundation (NSF) for funding but found the agency organizationally unprepared for such a request. In response, the NSF set up the Division of Networking and Computing Research Infrastructure to help allocate resources for such projects. The Southeastern Universities Research Association Network, or SURANET, adopted the newly sanctioned TCP/IP protocols, connecting the University of Maryland to other universities. It set a precedent, and nearly two years into the project, SURANET connected to IBM at Raleigh-Durham, North Carolina.
The National Science Foundation (NSF) was formed during the 1950s, before computer science emerged as a distinct discipline. It first established areas of research in biology, chemistry, mathematics, and physics before becoming a significant supporter of computing activities. It initially encouraged the use of computers in each of these fields and later moved towards providing a general computing infrastructure, including university computer centers, set up starting in the mid-1950s, that would be available to all researchers. In 1962, it created its first computing science program within its Mathematical Sciences Division. In 1968, an Office of Computing Activities began subsidizing computer networking, funding some 30 regional centers to help universities make more efficient use of scarce computer resources and timesharing capabilities. The NSF worked to “make sure that elite schools would not be the only ones to benefit from computers.”[1]
During the early 1980s, the NSF started to plan its own national network. In 1984, a year after TCP/IP was institutionalized by the military, the NSF created the Office of Advanced Scientific Computing, whose mandate was to create several supercomputing centers around the US.[2] Over the next year, five centers were funded by the NSF.
- General Atomics — San Diego Supercomputer Center, SDSC
- University of Illinois at Urbana-Champaign — National Center for Supercomputing Applications, NCSA
- Carnegie Mellon University — Pittsburgh Supercomputer Center, PSC
- Cornell University — Cornell Theory Center, CTC
- Princeton University — John von Neumann National Supercomputer Center, JvNC
However, it soon became apparent that they would not adequately serve the scientific community.
In 1986, Al Gore sponsored the Supercomputer Network Study Act to explore the possibilities of high-speed fiber optics linking the nation’s supercomputers. The supercomputers were much needed for President Reagan’s “Star Wars” Strategic Defense Initiative (SDI) as well as for competing against the Japanese electronics juggernaut and its “Fifth Generation” artificial intelligence project.
Although the Supercomputer Network Study Act of 1986 never passed, it stimulated interest in the area, and as a result the NSF formulated a strategy to assume leadership responsibilities for the network systems that ARPA had previously championed. It took two steps to make networking more accessible. First, it convinced DARPA to expand its packet-switched network to the new centers. Second, it funded universities that had interests in connecting with the supercomputing facilities. In this, it also mandated specific communications protocols and specialized routing equipment configurations. It was this move that standardized a specific set of data communication protocols and drove the rapid spread of the Internet as universities around the country, and then around the world, connected. Just as the military had ordered the implementation of Vint Cerf’s TCP/IP protocols in 1982, the NSF directives standardized networking in the participating universities. All who wanted to connect to the NSF network had to buy routers (mainly built by Cisco) and other TCP/IP-compliant networking equipment.
The NSF funded a long haul backbone network called NSFNET in 1986 with a data speed of 56Kbps (upgraded to a T1 or 1.5 Mbps the following year) to connect the high-computing power for all its nodes. It also offered to allow other interested universities to connect to it as well. The network became very popular but not because of its supercomputing connectivity but rather because of its electronic mail, file transfer protocols, and its newsgroups. It was the technological simplicity of TCP/IP that made it sprout exponentially over the next few years.[3]
Unable to manage the technological demands of its growth, the NSF signed a cooperative agreement in November 1987 with IBM, MCI, and Merit Network, Inc. to manage the NSFNET backbone. By June of the next year, they expanded the backbone network to 13 cities and developed a modern control center in Ann Arbor, Michigan. Soon it grew to over 170 nodes, and traffic was expanding at a rate of 15% a month. In response to this demand, the NSF exercised a clause in its five-year agreement to implement a newer state-of-the-art network with faster speeds and more capacity. The three companies formed Advanced Network & Services Inc. (ANS), a non-profit organization, to provide a national high-speed network.
Additional funding from the High Performance Computing Act of 1991 helped expand the NSFNET into the Internet. By the end of 1991, ANS had created new links operating at T-3 speeds. T-3 traffic moves at up to 45 Mbps, and over the next year ANS replaced the entire T-1 NSFNET with new linkages capable of transferring the equivalent of 1,400 pages of single-spaced typed text a second. The funding allowed the University of Illinois at Urbana-Champaign’s NCSA (the National Center for Supercomputing Applications) to support graduate students for $6.85 an hour. A group including Marc Andreessen developed a software application called Mosaic for displaying words and images. Mosaic was the prototype for popular web browsers such as Internet Explorer and Chrome.
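A back-of-envelope calculation shows where a figure like "1,400 pages a second" comes from, assuming a single-spaced typed page holds roughly 4,000 characters at one byte each (the page size is my assumption, not a number from the original reports):

```python
# Rough check of the T-3 throughput claim.
t3_bits_per_sec = 45_000_000          # T-3 line rate, about 45 Mbps
bytes_per_sec = t3_bits_per_sec / 8   # 8 bits per byte/character
page_bytes = 4_000                    # assumed characters per typed page

pages_per_sec = bytes_per_sec / page_bytes
print(round(pages_per_sec))           # roughly 1,400 pages per second
```

The same arithmetic makes the T-1 to T-3 jump vivid: a thirty-fold increase in raw line rate.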
The NSFNET soon connected over 700 colleges and universities as well as nearly all federally funded research. High schools, libraries, community colleges and other educational institutions were also joining up. By 1993, it also had links to over 75 other countries.[4]
Pressures had been building to allow commercial activities on the Internet, but the NSF had strict regulations against for-profit activities on its network facilities. During the 1980s, the network was subject to the NSF acceptable use policy, including restrictions on commercial uses of the outcomes of NSF research. Congressman Rick Boucher (D-Virginia) introduced legislation on June 9, 1992, that allowed commercial activities on the NSFNET.[5] In one of his last executive acts, President Bush finally allowed business to be conducted over its networks and those being built around it. Several months into its newly liberalized status, the NSFNET transitioned to an upgraded T-3 (45 Mbps) backbone, much, much faster than its original 56 Kbps speed.
The legacy of the NSFNET is that it ensured the proliferation of TCP/IP technologies, protocols, and associated hardware. These systems were designed as an open architecture that accepted any computer and connected to any network. It was also minimalist, requiring little from the computer and remaining neutral toward the applications (e-mail, browsers, FTP) and content running on the network.
Although these are idealistic principles and not always followed in practice, they were largely responsible for unlocking the tremendous economic growth of the Internet age. For example, Marc Andreessen and some of his colleagues soon left the University of Illinois at Urbana-Champaign and formed Netscape to market their “browser.” Their IPO in 1995 helped spark the massive investments in the Internet that characterized the 1990s and the rise of the “dot.coms.”
Notes
[1] Janet Abbate’s (2000) Inventing the Internet from MIT Press is a classic exploration of the history of the Internet, p. 192.
[2] Abbate, p. 191.
[3] Kahin, B. (ed.) (1992) Building Information Infrastructure: Issues in the Development of a National Research and Education Network. McGraw-Hill Primis, Inc. This book contains a series of papers and appendixes giving an excellent overview of the discussion and legislation leading to the NREN.
[4] Information obtained from Merit, December 1992
[5] Segaller, S. (1998) Nerds 2.0.1, pp. 297-306.
Tags: Advanced Network & Services Inc. > Al Gore > ANS > Congressman Rick Boucher > High Performance Computing Act of 1991 > Marc Andresson > Netscape > NSFNET > Supercomputer Network Study Act of 1986 > TCP/IP > The National Science Foundation > University of Illinois at Urbana Champaign’s NCSA (the National Center for Supercomputing Applications) > Vint Cerf’s TCP/IP protocols
We shape our buildings and afterwards our buildings shape us
Posted on | May 9, 2016 | No Comments
One of Marshall McLuhan’s most celebrated intellectual “probes” was a paraphrase of Winston Churchill’s famous “We shape our buildings, and afterwards our buildings shape us.” Churchill was addressing Parliament some two years after a devastating Nazi air raid destroyed the House of Commons, arguing for its restoration despite the major challenges of the war.
Churchill’s famous line was paraphrased in the 1960s as the more topical “We shape our tools and thereafter our tools shape us,” and was included in McLuhan’s classic recording The Medium is the Massage. With the diffusion of television and the transistor radio, it was a time when electronic media were exploding in the American consciousness. McLuhan and others were committed to understanding the role of technology, particularly electronic media, in modern society.
The revised quote is often attributed to McLuhan, but it was actually reworded by his colleague, John M. Culkin. Culkin was responsible for inviting McLuhan to Fordham University for a year and subsequently greatly increasing his popularity in the US. Culkin later formed the Center for Understanding Media at Antioch College and started a master’s program to study media. Named after McLuhan’s famous book Understanding Media, the center later moved to the New School for Social Research in New York City after Culkin joined their faculty.
The probe/quote serves in my work to help analyze information technologies (IT), including communications and media technologies (ICT). It provides frames for inquiring into the forces that shaped ICT, while simultaneously examining how these technologies have shaped economic, social and political events and change. IT or ICT means many things but is meant here to traverse the historical chasm between technologies that run organizations and processes and those that educate, entertain and mobilize. This combination is crucial for developing a rich analysis of how information and communications technologies have become powerful forces in their own right.
My concern has to do with technology and its transformative relationship with society and institutions, in particular the reciprocal effects between technology and power. Majid Tehranian’s “technostructuralist” perspective was instructive because it examined information machines in the context of the structures of power that conceptualize, design, fund, and utilize these technologies. In Technologies of Power (1990), he compared this stance to its near opposite, a “techno-neutralist” position: the position that technologies are essentially neutral and their consequences are only a result of human agency.
In my series How IT Came to Rule the World, I examine the historical forces that shaped and continue to shape information and related technologies.