Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Banner Years – The Resurgence of Online Display Ads

Posted on | March 5, 2013 | No Comments

Although second to keyword advertising, display ads remain a significant revenue source for web publishers.[1] While search-based keyword advertising continues its astonishing ascendancy, “banner ads” are still a workhorse for many marketing efforts. Revenues keep rising, and the arrival of Facebook as a potent new advertising vehicle has injected a new competitive spirit into the mix.

Part of the argument is that display ads do more for brands than initially thought and that click-through rates (CTRs) are not a full measure of their value. Display ads are also becoming more transparent, allowing advertisers to verify placements and control for ads that are never actually seen. Probably most important, the online infrastructure for selling and buying ads has become increasingly virtualized, with online markets connecting publishers and advertisers in computerized, real-time auction environments.

Online ads began with the web page, with its hypertext links and its ability to display .gif or .jpeg images. This combination soon gave birth to the controversial banner ad. Through the Hypertext Markup Language (HTML) IMG tag, images could be presented and, with a little more coding, could link to other websites. Marc Andreessen proposed the IMG tag on February 25, 1993, while working on the NCSA Mosaic web browser, the precursor to the prolific Netscape browser that kickstarted the World Wide Web and the explosion of “dot-com” companies.
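For illustration, an early banner amounted to little more than an image wrapped in a hyperlink. The sketch below uses a placeholder sponsor URL and file name; the 468×60 pixel dimensions later became the standard “full banner” size:

```html
<!-- A minimal 1990s-style banner ad: an image inside a hyperlink.
     The sponsor URL and image file are placeholders. -->
<a href="http://www.sponsor-example.com/campaign">
  <img src="banner468x60.gif" alt="Sponsor banner" width="468" height="60">
</a>
```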

The term “banner ad” was coined by HotWired when it sold the first web banner under its “corporate sponsorship” revenue model. Designed to link to a site promoting seven art museums, the first ad went online on October 27, 1994, and was paid for by AT&T Corp. HotWired, an offshoot of Wired Magazine, also pioneered “HotStats,” the first real-time web analytics. The original ad is presented below.[2]

[Image: the first banner ad, paid for by AT&T]

Advertising on the Internet was not a sure thing. It was shunned on the early network because of the ethos of the early Internet community, which was largely military and academic. It was also officially restricted: the Internet rested on the National Science Foundation’s NSFNET backbone, whose acceptable use policy explicitly forbade advertising on the network. It took an act of Congress to allow commercial activity on the Internet.

As the web exploded throughout the rest of the 1990s, ad banners grew in popularity. They had the benefit of not only presenting brand information but also letting viewers click through to a sponsor’s website. There, visitors could complete an “end action” such as signing up for additional information, downloading an application, or even making a purchase.

As the population of websites grew, more advertisers saw value in purchasing ad space. The product, the display of an ad on a webpage, became known as an “impression” or ad view. A web page can contain several banners of different sizes, placed in different parts of the page. The .gif file format was particularly useful, as it allowed multiple frames that could be timed into an animation.

Inefficiencies in connecting publishers and ad buyers quickly revealed themselves, and in response a number of third-party solutions emerged. Ad networks arose to aggregate inventory from sites with similar content and market these packages to advertisers. This was particularly attractive to smaller websites with niche audiences, as they could be sold alongside other sites that appealed to the same groups. Ad networks had faults, though: advertisers complained about a lack of transparency and flexibility, since it was difficult to determine where their ads were being placed and to adjust campaigns. Publishers complained because they could not connect with the best advertisers and lost revenue to intermediaries. Overall metrics were also lacking for all parties to the transaction.

More recently, ad exchanges are proving to be a more nimble intermediary. These computerized markets directly connect advertisers and publishers and allow real-time bidding on more targeted ad spaces. Advertisers and agencies can be more selective in choosing the web publishers that reach their preferred audiences. Ad exchanges tend to be very technology-intensive, so it is not surprising that the larger advertising technology companies like Google, Microsoft, and Yahoo! have taken the lead.
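To make the mechanics concrete, here is a minimal sketch of the second-price auction that exchanges of this era typically ran for each impression. It is written in Python for illustration only; the bidder names, prices, and floor-price parameter are hypothetical, and real exchanges layer on targeting data, fees, and much more.

```python
# Toy second-price (Vickrey) auction of the kind real-time bidding
# exchanges run for each ad impression as a page loads.
def run_auction(bids, floor_price=0.10):
    """bids: dict mapping bidder name -> bid in dollars per impression."""
    eligible = {name: bid for name, bid in bids.items() if bid >= floor_price}
    if not eligible:
        return None, None  # no bid met the publisher's floor
    ranked = sorted(eligible.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    # The winner pays the second-highest bid, or the floor if unopposed.
    clearing_price = ranked[1][1] if len(ranked) > 1 else floor_price
    return winner, clearing_price

print(run_auction({"agency_a": 2.50, "agency_b": 1.75, "agency_c": 0.05}))
# ('agency_a', 1.75): the top bidder wins but pays the runner-up's price.
```

The second-price design encourages truthful bidding: because winners pay the runner-up’s price, bidders can bid their true valuation without fear of overpaying.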

The major ad exchanges:

– AdBrite – ceased operations on February 1, 2013.
– AdECN (Microsoft) – now part of Bing Ads (formerly Microsoft adCenter and MSN adCenter).
– ContextWeb – merged with Datran to form PulsePoint.
– DoubleClick Ad Exchange – bought by Google.
– Facebook’s FBX – draws on a billion users.
– OpenX.
– Right Media – bought by Yahoo!.

Even Amazon announced plans to enter the ad exchange market in late 2012. Amazon would place cookies recording your visits in your browser, to be acted on when you visit other sites in Amazon’s exchange network, such as IMDb and DPReview, as well as other ad exchanges and publishers with relationships to Amazon.[3]

Notes

[1] Emarketer.com has useful statistics on the display ad market.
[2] Hotwired.com is generally acknowledged as having the first banner ad although some contention exists.
[3] Information on Amazon and Facebook from http://www.businessinsider.com

© ALL RIGHTS RESERVED


Anthony J. Pennings, PhD recently joined the Digital Media Management program at St. Edwards University in Austin TX, after ten years on the faculty of New York University.

Working Big Data – Hadoop and the Transformation of Data Processing

Posted on | February 15, 2013 | No Comments


One day Google downloaded the Internet, and wanted to play with it.

Well, that is my version of an admittedly mythologized origin story for what is now commonly called “Big Data.”

Early on, Google developed a number of new applications to manage a wide range of online services such as advertising, free email, blog publishing, and free search. Each required sophisticated telecommunications, storage, and analytical techniques to work and be profitable. In the wake of the dot-com and subsequent telecom crashes, Google started to buy up cheap fiber-optic lines from defunct companies like Enron and Global Crossing to speed up connection and interconnection speeds. Google also created huge data centers to collect, store, and index this information. Its software success enabled it to become a major disruptor of the advertising and publishing industries and turned it into a major global corporation, now making over US$50 billion a year in revenues. These innovations would also help drive the development of Big Data: the unprecedented use of massive amounts of information from a wide variety of sources to solve business and other problems.

Unable to buy the type of software it needed from any known vendor, Google developed its own solutions to fetch and manage the petabytes of information it was downloading from the World Wide Web on a regular basis. Like other Silicon Valley companies, Google drew on the competitive cluster’s rich sources of talent and ideas, including Stanford University. Other companies such as Teradata were also developing parallel-processing hardware and software for data centers, but Google was able to raise the investment capital and attract the talent to produce an extraordinary range of proprietary database technology. The Google File System was created to distribute files reliably across its many inexpensive commodity server/storage systems. A program called Borg emerged to automatically distribute incoming workloads among its myriad machines, a process called “load balancing.” Bigtable scaled data management and storage to enormous sizes. Perhaps the most critical part of the software equation was MapReduce, a programming model that allowed Google to write applications that could take advantage of the large data-sets distributed across its “cloud” of servers.[1] With these software solutions, Google began building huge warehouse-sized data centers to collect, store, and index information.
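To give a feel for the model, here is a minimal single-machine sketch of MapReduce’s canonical word-count example, written in Python purely for illustration (Google’s implementation ran in C++ across thousands of machines, with the shuffle performed over the network):

```python
from itertools import groupby
from operator import itemgetter

# Map: emit (key, value) pairs; here, (word, 1) for every word seen.
def map_phase(documents):
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

# Shuffle: group intermediate pairs by key. The framework does this
# across machines; here we simply sort in memory.
def shuffle_phase(pairs):
    for word, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield word, [count for _, count in group]

# Reduce: combine the values for each key; here, summing the counts.
def reduce_phase(grouped):
    for word, counts in grouped:
        yield word, sum(counts)

docs = ["the web grew", "the web exploded"]
print(dict(reduce_phase(shuffle_phase(map_phase(docs)))))
# {'exploded': 1, 'grew': 1, 'the': 2, 'web': 2}
```

The power of the model is that the map and reduce functions contain no distribution logic at all; the framework handles partitioning, scheduling, and failure recovery across the cluster.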

When Google published the conceptual basis for MapReduce in 2004, most database experts did not comprehend its implications, but, not surprisingly, a few at Yahoo! were very curious. By then, the whole field of data management and processing was facing new challenges, particularly for those managing data warehouses for hosting, search, and other applications. Data was growing exponentially; it was dividing into many different formats; data models and schemas were evolving; and, probably most challenging of all, data was becoming ever more useful and enticing for businesses and other organizations, including those in politics. While relational databases would continue to be used, a new framework for data processing was in the works. Locked in a competitive battle with Google, Yahoo! strove to catch up by developing its own parallel-processing power.[2]

Meanwhile, a programmer named Doug Cutting, who would later bring his work to Yahoo!, was also building software that could “crawl” the Web for content and then organize it so it could be searched. Called Nutch, his software agent or “bot” tracked down URLs and selectively downloaded webpages from thousands of hosts, where they would be indexed by another program he created, called Lucene. Nutch could “fetch” data and run on clusters of hundreds of distributed servers. Nutch and Lucene led to the development of Hadoop, which drew on the concepts designed into Google’s MapReduce. With MapReduce providing the programming framework, Cutting separated the “data-parallel processing engine out of the Nutch crawler” to create Apache Hadoop, an open-source project that made it faster, easier, and cheaper to process and analyze large volumes of data.[3]

Amr Awadallah of Cloudera is one of the best spokesmen for Hadoop.

By 2007, Hadoop began to circulate as a new open-source software engine for Big Data initiatives. It was built on Google’s and Yahoo!’s indexing and search technology and adopted by companies like Amazon, Facebook, Hulu, IBM, and the New York Times. Hadoop, in a sense, is a new type of operating system: directing workloads, performing queries, and conducting analyses, but at an unprecedented scale. Designed to work across many low-cost storage/server systems, it manages files spread over a wide range of servers and runs applications on top of those files. Hadoop made use of data from mobile devices, PCs, and the whole Internet of “things,” such as cars, cash registers, and home environmental systems. Information from these grids of data collection increasingly became fodder for analysis and innovative value creation.

In retrospect, the rise of Big Data marked a major transition in the economics and technology of data. Traditional database systems saved information to archival media like magnetic tape, which made it expensive to retrieve and reuse. Then low-cost servers became available with central processing units that could run programs both within an individual server and across an array of servers. Large data centers emerged with networked storage equipment that made it possible to perform operations across tens of thousands of distributed servers and produce immediate results. Hadoop and related software that could manage, store, and process these large data-sets were developed to run data centers and to access unstructured data, such as video files, from the larger world of the Internet. Big Data emerged from its infancy and began to farm the myriad of mobile devices and other data-producing instruments for a wide range of new analytical and commercial purposes.


Notes

[1] Steven Levy’s career of ground-breaking research includes this article on Google’s top secret data centers.
[2] Amr Awadallah listed these concerns at Cloud 2012 in Honolulu, June 24.
[3] Quote from Mike Olson, CEO of Cloudera.

Citation APA (7th Edition)

Pennings, A.J. (2013, Feb 15). Working Big Data – Hadoop and the Transformation of Data Processing. apennings.com https://apennings.com/data-analytics-and-meaning/working-big-data-hadoop-and-the-transformation-of-data-processing/

© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is a Professor at the Department of Technology and Society, State University of New York, Korea, where he teaches broadband technology and policy for sustainable development. From 2002 to 2012, he was on the faculty of New York University, teaching information systems management and digital economics. He also taught in the Digital Media MBA at St. Edwards University in Austin, Texas, where he lives when not in Korea.

Apollo 13: The Write Stuff

Posted on | January 27, 2013 | No Comments

I recently had the chance to visit the Johnson Space Center near Houston, Texas, with my family. I couldn’t help remembering those famous words, “Houston, we have a problem,” as I toured the facility. They were uttered when the crew of Apollo 13 discovered “a main B bus undervolt,” indicating a loss of power from their fuel cells and an associated gas leak. These technical failures changed the spacecraft’s mission from exploration to survival. The crew’s plight was cinematized by director Ron Howard in his movie Apollo 13 (1995) and is having a resurgence with a new National Geographic series based on The Right Stuff, Tom Wolfe’s (1979) book.

[Image: Saturn rockets with the burnt Command Module]

I wrote the essay below to draw attention to the new literacies and simulation techniques created and enhanced by NASA programs to guide and test the space vehicles on their historic journeys. The cybernetic process of guiding a spacecraft to the Moon is exemplified by some clever F/X and acting in this movie.

Still, more than that, it tells the story of a certain break with “reality” and a new trust in the techniques and instrumentalities of hyperreal simulation. So does the more recent Hidden Figures (2016), about the Black women who contributed so much to the engineering and mathematics needed for success in the space race.

Apollo 13 was the third spacecraft scheduled to land humans on the Moon. Two days after its liftoff on April 11, 1970, one of its oxygen tanks ruptured, destroying several fuel cells and causing a small leak in the other main oxygen tank. NASA immediately knew the mission would no longer land on the Moon. The problem became one of returning the astronauts to terra firma before they either froze to death or succumbed to CO2 poisoning, not to mention the problems of navigating back with barely any electrical energy left to run the computer or even the radio.

Unlike the macho heroics of The Right Stuff (1983), based on Tom Wolfe’s book of the same name, Apollo 13 celebrated not just the obvious bravery of the endeavor but a new type of technical and scientific literacy. The “ground controllers” in Houston had to recalibrate the mission trajectories and develop a new set of procedures to be tested and written up for the crew in space. This was done largely using the multimillion-dollar simulators the astronauts had trained in before the actual launch.

A fascinating example was when the ground crew developed procedures for using additional lithium hydroxide canisters to take CO2 out of the air. The astronauts faced the very real problem of being poisoned by their own exhalations because the square canisters from the Command Module were not compatible with the round openings in the Lunar Module’s environmental system (the crew had been forced to move into the Lunar Module after the explosion).

A group of ground controllers gathered all the supplies they knew to be expendable on the spacecraft and devised a solution. They worked out a way to attach the square canisters to the round openings using plastic bags, cardboard, tape, and the like. Finally, they wrote up the procedures, which were transmitted to the crew just in time to avoid asphyxiation.

The movie is a very interesting historical representation of the use of systems and written procedures within an organization. To some extent, the Moon landings provided the triumphant background narrative for new developments in computers and simulation. Their successes provided the aura of certainty needed for a whole host of new technological investments, from CAD/CAM 3-D production and pilotless drone warfare to space-based remote sensing and mapping and the Bloomberg/Reuters worldwide system of electronic financial markets.

© ALL RIGHTS RESERVED




Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

Virality and the Diffusion of Music Videos

Posted on | January 10, 2013 | No Comments

I’m talking at the Viral Summit next week in Las Vegas, so I thought I’d finish up some topics I’ve been working on that address viral marketing and the music industry.

With over 1.148 billion views since July of 2012, the Gangnam Style music video has us all scratching our heads. The parody of the ritzy Gangnam district in Seoul, South Korea, has rocketed its metrosexual singer to immediate international stardom. Park Jae Sang, better known as PSY, has gone from a relatively well-known rapster in his home country to an international celebrity, even making an appearance at Madonna’s latest NYC concert.

Gangnam Style also highlights the power of viral marketing. With nearly 36 million shares since its release last summer, primarily via Facebook (33,886,323 shares) but also through Twitter (1,790,190 shares), it already ranks second on the all-time viral chart. The graph below tracks the Gangnam Style “epidemic.”[1]

[Figure: daily link shares of Gangnam Style over time]

Virality refers to the diffusion of messages through the help of cooperating individuals. Often referred to as a word-of-mouth (WOM) process, it has received new emphasis with the decline of broadcasting and the rise of network effects on the Internet. The name derives from the term “virus” and the epidemiological spread of viruses from person to person until a critical mass erupts into a major outbreak.
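The analogy can be made precise with a reproduction number R: the expected number of new sharers each sharer produces. Below is a deliberately simple expected-value model in Python; the audience sizes and probabilities are invented for illustration, not fitted to any real campaign.

```python
# Toy word-of-mouth diffusion: each sharer exposes some friends, and
# each exposure converts to a new sharer with some probability.
def expected_spread(seeds, friends_per_sharer, share_prob, rounds):
    r = friends_per_sharer * share_prob  # reproduction number R
    new, total = float(seeds), float(seeds)
    for _ in range(rounds):
        new *= r       # expected new sharers this round
        total += new   # cumulative audience
    return r, round(total)

# R > 1: a small seed audience erupts into an outbreak.
print(expected_spread(seeds=100, friends_per_sharer=20, share_prob=0.10, rounds=10))
# (2.0, 204700)
# R < 1: the same seed audience fizzles out.
print(expected_spread(seeds=100, friends_per_sharer=20, share_prob=0.04, rounds=10))
# (0.8, 457)
```

The threshold at R = 1 is what makes seeding and content quality so decisive: small changes in the share probability tip a campaign from fizzle to outbreak.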

According to Unruly Media, the top spot on the list of all-time viral shares belongs to the video by Jennifer Lopez – On The Floor featuring Hispanic-American rapper Pitbull. The disco duet leads the virality list with 37,405,834 Facebook shares and 271,177 Twitter shares since March of 2011.

The success of a viral message depends on such factors as the interest in the item, the timing of the message, the network structures available, and the cost and ease of moving the message forward. Good content is obviously key, and it should be no surprise that creative composition, humor, and sex appeal are important. Also important is taking advantage of topics that are trending. In addition, knowing how and where to seed content into a target audience on the web through opinion leaders is crucial to a successful viral campaign.[2]

The attention given to music videos had been in steady decline since their heyday on MTV and VH1 during the 1980s, but social media has provided a fascinating new venue to entice audiences and distribute musical creations. YouTube has provided the main new distribution channel, but it has been Facebook and Twitter that provided the network mechanism to propel music content out to its intended (and unintended) audiences.

Compared with traditional advertising, viral marketing offers music videos better audience targeting, lower communication costs, and faster diffusion. But will it make money? Music piracy has plagued the industry since Napster was introduced in the 1990s, and a newer challenge is the number of software applications developed to rip MP3s from YouTube. Still, iTunes, Amazon MP3, and Google Play now provide easy-to-use platforms to search, sample, and buy music. The real test for viral marketing is whether the sharing of music videos will circle consumers back to sites that monetize music products for the artists.

Notes

[1] Stats on viral shares from Unruly Media’s Viral Video Chart.
[2] Check out these tips on how to make a music video go viral.
[3] Mashable maintains a top viral media list.

© ALL RIGHTS RESERVED


E-Commerce’s Billion Dollar Mondays Still Mark Holiday Shopping Season

Posted on | December 15, 2012 | No Comments

Reflecting increasing consumer confidence in a recovering U.S. economy, online shopping for the 2012 holiday season is off to a strong start.[1]

According to the web analytics company Comscore, some $30 billion had been spent online by mid-December 2012, a 13-percent increase over the same period in 2011. Shopping from home and shopping from work were statistically about the same. Digital content and subscriptions, mainly comprising digital books, music MP3s, and video downloads, gained the most with a 28% increase over last year, but products requiring physical delivery also showed significant growth. Toys (18%) showed a significant increase, and consumer electronics such as tablets (15%) and video game consoles and accessories (15%) showed equal jumps.[2]

Mondays continue to be the best days for online shopping, led by Cyber Monday with a 17% increase over last year. November 26, this year’s Cyber Monday, was the best day of online spending on record, with $1.46 billion in sales. Green Monday also had a major increase, to $1.275 billion in e-commerce sales.

Green Monday has been identified as the Monday with at least 10 days remaining before Christmas and has statistically been the second-busiest online shopping day of the season after Cyber Monday. Falling on December 10 this year, its 13% rise came from a combination of more buyers (up 7% to 9 million) and more spending per buyer (up 6% to $140.95), which in turn came from an increase in the average number of purchases per buyer (up 5% to 1.76) rather than an increase in the amount spent per transaction, which rose only 2% to $80.11.[3]
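As a quick back-of-the-envelope cross-check, the per-buyer figures quoted above are internally consistent (this uses only the numbers in the paragraph):

```python
# Cross-checking Comscore's Green Monday figures cited above.
buyers = 9_000_000             # up 7% to roughly 9 million
spend_per_buyer = 140.95       # up 6%
purchases_per_buyer = 1.76     # up 5%
spend_per_transaction = 80.11  # up 2%

# Purchases per buyer times spend per transaction should give spend per buyer.
print(round(purchases_per_buyer * spend_per_transaction, 2))  # 140.99, vs. $140.95
# Buyers times spend per buyer approximates the day's total, in billions.
print(round(buyers * spend_per_buyer / 1e9, 3))               # 1.269, vs. ~$1.275B
```

The small residuals reflect rounding in the reported averages.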

A number of trends are influencing holiday e-commerce patterns. Over 52 percent of e-commerce transactions that require shipping are being delivered for free, as competition forces retailers to lower the price threshold per item for free shipping. FedEx expects to ship about 280 million packages between Thanksgiving and Christmas Day, an increase of 13% over 2011, while UPS expects to move a whopping 528 million items. Smartphones and tablets also make it easier for consumers to search for deals and price information, if not actually facilitating transactions.[3]

Lastly, we might see the end of the trend toward “Monday shopping,” as this year Tuesday, December 4 moved ahead of Green Monday with $1.362 billion in online sales. With the deadline for guaranteed Christmas delivery as late as December 22, and with mobile solutions making shopping much more seamless in time and space, online holiday shopping is likely to be more evenly dispersed across the season.

Notes

[1] According to Lynn Franco, Director of Economic Indicators at The Conference Board, the Consumer Confidence Index reached its highest level since February of 2008.
[2] Comscore continues to be a leader in web and e-commerce statistics. Information from “U.S. Online Holiday Retail Sales Reach $29 Billion” Web. 14 Dec. 2012 was particularly useful for this report. Also, see The ComScore Data Mine.
[3] “Green Monday” was originally coined by eBay, which recognized the statistical spike; the “green” refers to cash, not the environment. Statistics listed here are from Comscore.
[4] Transworld Business lists historical details of online holiday shopping.
[5] Chase Holiday Pulse is also a useful source of information on holiday shopping.

© ALL RIGHTS RESERVED


The Qualtrics Conundrum

Posted on | December 9, 2012 | No Comments

While tech giants Facebook and Google get most of the public’s consideration, an up-and-coming company called Qualtrics deserves a slice of our attention. The company has developed an online research platform that makes it much easier to conceptualize, construct, distribute, analyze, and visualize research projects. Qualtrics is moving quickly ahead of competitors Intellisurvey and SurveyMonkey and is having a dramatic, disruptive effect on how both commercial and academic researchers gather “unstructured” external data.

Qualtrics was founded in 2002 in Utah by the Smith family, who have already rejected one $500 million buyout offer for their privately held company.[1] Scott, the father, was a professor of marketing at Brigham Young University for 30 years. He developed his ideas for an online research platform while on an extended sick leave. When his sons Ryan (now the CEO) and Jared developed an interest, they built the company into one of the top 40 companies on Business Insider’s top-100 list of private tech companies.[2]

Their mantra, “Sophisticated enough for a PhD, easy enough for an intern,” points to the trend of making research easier and quicker. This allows companies to “insource” the production of vital information on customers, employees, B2B partners, and suppliers. Previously, they hired outside firms to conduct this research, which can get pricey. Now companies are enabled by Qualtrics and other firms providing research and analytics software to do their research work in-house. Nearly 5,000 paying customers systematically collect “unstructured external data” using these online tools. The list includes Chevron, eBay, ESPN, FedEx, Geico, Microsoft, Neiman Marcus, Prudential, Thomson Reuters, Royal Caribbean, and Southwest Airlines, as well as some 600 universities and a large number of state and federal agencies.

CEO Ryan Smith returned to his alma mater at BYU to give a talk on Qualtrics at the Rollins Center for Entrepreneurship & Technology.

I started to develop a working familiarity with Qualtrics while I was the chair of a program at New York University offering an MS in Management and Systems. It had a strong thesis program, and many students conducted empirical, highly quantitative research. Besides supervising thesis students, I also distributed surveys on their behalf to the larger student body and noticed that many students had adopted the Qualtrics software for their data collection.

It’s beyond my scope here to go into much detail about the characteristics and workings of Qualtrics, but below is a list of some of the features it provides, with a short illustrative sketch after the list.

– over 100 different question types (e.g., multiple choice, true/false);
– a choice of Likert scales (e.g., Very Unlikely to Very Likely);
– easy embedding of video and audio clips;
– images chosen from a computer or downloaded from the web;
– professionally designed questions from previous surveys;
– quick survey creation using templates;
– personalization of surveys with a respondent’s name and other characteristics;
– randomization of the order of choices and questions;
– a library of questions, letters, surveys, and media for future use.[4]
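As a sketch of how those primitives fit together, consider the hypothetical survey definition below. This is illustrative Python, not the Qualtrics API; the question text, scales, and field names are all invented.

```python
import random

# A hypothetical survey definition illustrating the primitives above:
# question types, Likert scales, choice randomization, personalization.
survey = {
    "title": "Customer Satisfaction",
    "questions": [
        {"text": "How likely are you to recommend us?",
         "type": "likert",
         "options": ["Very Unlikely", "Unlikely", "Neutral",
                     "Likely", "Very Likely"]},
        {"text": "Which features do you use?",
         "type": "multiple_choice",
         "options": ["Search", "Reports", "Alerts"],
         "randomize": True},
    ],
}

def render(survey, respondent):
    """Print a personalized, per-respondent view of the survey."""
    print(f"{survey['title']} (prepared for {respondent})")
    for question in survey["questions"]:
        options = list(question["options"])
        if question.get("randomize"):
            random.shuffle(options)  # vary choice order to reduce order bias
        print(f"- {question['text']} [{' / '.join(options)}]")

render(survey, "Jordan")
```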

Although the capabilities of the Qualtrics platform are quite extraordinary, they also raise a number of issues. One concern is that the facility of Qualtrics is no guarantee of the quality of the research. Drawing questions from a theoretical base, articulating research hypotheses, and identifying dependent and independent variables can take incisive minds, sharpened by years of education and immersion in the area, to do well. The IT mantra “Garbage In, Garbage Out” applies here: a poorly designed research survey can produce incomplete or misleading results.

The Qualtrics solution also raises questions about the value of quantitative methodologies that rely on a decontextualizing process, one that ignores more meaningful frameworks such as business outcomes in the commercial world and social context in academia. The writings of web analytics experts such as Avinash Kaushik and John Lovett, who have both spent the last decade working with the metrics produced by social media and other Internet activities, confirm the dire need to separate the clicks that matter from those that don’t, and to tie them to an organization’s objectives and purpose.

Much of this research leaves decision-makers and policy-formulators hungry for the perspectives that make this type of information more significant. The challenge of this new age of research will be the articulation and visualization of the meanings these metrics produce that address and inform higher levels of purpose and understanding.

Notes

[1] Victoria Barret wrote an extensive piece, “Qualtrics: Tech’s Hidden Gem in Utah.” Forbes. Forbes Magazine, 15 May 2012. Web. Accessed 06 Dec. 2012. <http://www.forbes.com/sites/victoriabarret/2012/05/15/qualtrics-techs-hidden-gem-in-utah/>.
[2] “The Next 25 Big Enterprise Startups.” Business Insider. N.p., n.d. Web. Accessed 08 Dec. 2012. <http://www.businessinsider.com/the-hottest-enterprise-startups-that-venture-capitals-say-they-love-2012-7>.
[3] “Qualtrics Raises $70M From Accel And Sequoia: The Biggest Software Company You Haven’t Heard Of?” TechCrunch RSS. N.p., n.d. Web. Accessed 09 Dec. 2012. <http://techcrunch.com/2012/05/15/qualtrics-raises-70m-accel-sequoia/>.
[4] This list was composed in part from information from “Building Surveys.” Qualtrics. N.p., n.d. Web. Accessed 09 Dec. 2012.

© ALL RIGHTS RESERVED


Lincoln and the Telegraphic Civil War

Posted on | November 18, 2012 | No Comments

I’m looking forward to seeing Steven Spielberg’s Lincoln (2012), about the 16th president’s efforts to end slavery and win the Civil War. Abraham Lincoln was a complex and troubled corporate lawyer and state legislator who found his voice and purpose as he confronted the economic, political, and humanitarian challenges of his era. As his drive led him to the White House, he summoned the technological resources of his time to aid his purpose.

Lincoln was the first president to understand the railroad and telegraph as strategic tools to wage both politics and war. As a railroad lawyer in the 1850s, Lincoln became aware of the powerful communication and economic changes brought on by these two converging technologies, particularly as they pushed outward to expand trade and political influence into the western frontier. As president and commander-in-chief, he studied the use of these young technologies by Napoleon III in the 1859 campaign to support the unification of Italy. Lincoln aggressively executed the war and worked with his generals to use the telegraph and railroads to mobilize troops and supplies effectively and conduct successful wartime operations.

Despite its promise, technological infrastructure lagged in the US until the Civil War and Lincoln’s resolve to secure the West. He was elected president in the fall of 1860 and took office the next spring with the southern states threatening to secede. He saw the preservation of the Union not only in North-South terms but also in incorporating the vast expanses of the West. The 1859 discovery of silver in Nevada and the gold rush’s expansion to Colorado’s Pike’s Peak provided additional motivation. He supported the completion of the cross-country telegraphic link, which had to traverse hostile Indian territory and rugged mountain ranges. Finally, cables from two private telegraph companies met at Salt Lake City, and President Lincoln, in Washington, DC, received the first message from Sacramento, California, on October 24, 1861.[1]

A telegraph room was set up in the War Department next to the White House as one of the first command-and-control centers using electric technology. Lincoln would often linger there for long periods during the war, awaiting reports from various battlefields and writing memos to be transmitted. The US Military Telegraph (USMT) was created in 1862, taking over from the Union’s Signal Corps. It took over the emerging long-distance lines, but rather than militarize all aspects of the telegraph system, it chose to supervise the private sector’s telegraph operators and required them to make military messages their priority. It also helped subsidize the construction of new lines.[2]

The Union proved more successful with the telegraph than the Confederate South. The telegraph served to coordinate troop movements and critical logistical supply in conjunction with the railroads. General Herman Haupt designed the responsibilities of the North’s Department of Military Railroads: inventory the railroads and their distances, assess their condition, and determine the availability and prices of materials and labor for building and maintaining the lines. The railroads served as crucial supply lines for ammunition, other equipment, and food. They were also used to move large numbers of troops quickly to the battlefield. General Sherman, notorious for the destructive “Sherman’s March” through the South, estimated that keeping an army of 100,000 men and their animals supplied would take 160 railroad cars of supplies a day, or over 36,000 wagons drawn by teams of six mules each.[3] The South never garnered the technological sophistication or supply to adequately coordinate its railroads, and thus its troops suffered severe strategy and supply problems.


Notes

[1] I highly recommend Lubrano’s (1997) book, The Telegraph: How Technology Innovation Caused Social Change. NY: Garland Publishing. p. 10.
[2] Lincoln in the Telegraph Office: Recollections of the United States Military Telegraph Corps during the Civil War is a very old book by David Homer Bates.
[3] Information on the transcontinental link from Eicher, D.J. (2002) The Longest Night: A Military History of the Civil War. NY: Simon and Schuster. This is one of the few accounts of the Civil War that addresses the issues of communications and transportation.

© ALL RIGHTS RESERVED


Markets Fail

Posted on | November 16, 2012 | No Comments

Markets fail
That is why we have Government
Governments fail
That is why we have Democracy
Democracy fails
That is why we have Journalism
Journalism fails
That is why we have Education
Education fails
That is why we have Intellectuals
Intellectuals fail
That is why we have Research
Research fails
That is why we have Markets…

© ALL RIGHTS RESERVED

