Anthony J. Pennings, PhD

WRITINGS ON DIGITAL ECONOMICS, ENERGY STRATEGIES, AND GLOBAL COMMUNICATIONS

Origins of the “Information Economy”

Posted on | March 16, 2014 | No Comments

The economic understanding of knowledge’s contribution to the economy accelerated in the 1970s, and terms such as the “information society” and the “information economy” achieved wide circulation by the next decade. When I was at the East-West Center in Honolulu as a graduate student during the 1980s, I was part of a team of economists studying the roles of communication and information in the economy. The team, headed by Meheroo Jussawalla, looked primarily at the Asia-Pacific region, although it was necessarily attuned to the global technological and theoretical trends of the time.

The proliferation of microprocessing technologies, particularly “microcomputers” such as the Apple II and the IBM PC, as well as the commercialization of communications satellites and other digital technologies, stimulated ideas about the social impact of “high tech” and formulations about a new type of society taking shape. Competing characterizations included the “post-industrial economy” introduced by Daniel Bell in The Coming of Post-Industrial Society (1973) and the “knowledge economy” described by Fritz Machlup in The Production and Distribution of Knowledge in the United States (1962), which he expanded upon in a later series of volumes called Knowledge: Its Creation, Distribution, and Economic Significance. The “knowledge society” was also popularized by Peter Drucker in his Age of Discontinuity (1969), which included a chapter with that name.

Marc Porat’s doctoral dissertation was an important voice in the emerging discourse that recognized the increasing importance of information as a cultural and economic phenomenon. His doctoral work at Stanford University is noted for the measures of the information economy he created to analyze U.S. Department of Commerce employment data. It has often been cited as empirical proof that agriculture and manufacturing industries were on the decline and that the “information economy” had arrived.[1]

By focusing on the activities that workers performed rather than their explicit job titles, Porat was able to argue that, by 1980, more people in the United States were engaged in information work than any other kind of work. About 48 percent of the U.S. workforce was employed in one form or another of information activity, as compared to only about 3 percent in agriculture and slightly more than 20 percent in manufacturing. About 30 percent were engaged in providing services – an important category because information work and services were often statistically combined, whether flipping hamburgers or arguing a legal case. Porat helped establish information work as a separate category.

Porat and the others helped develop a conversation about a new economy based on the dynamics of information in a high-tech environment. Many saw the changes as a harbinger of a post-industrial revolution which would democratize wealth because the “nature” of information made it intrinsically undepletable and infinitely shareable. Information was also a crucial part of the automation process and would help relieve humanity of the drudgery of work. These ideas persisted well into the 1990s with the “new economy” and its resultant dot-com bubble.

Many economists did not share this utopian view. Although classical economists had based market theory on the assumption that information was free and readily available, information economists saw through this simplistic rendering. Machlup was, in fact, a student of Ludwig von Mises, who supervised his doctoral dissertation. Von Mises is better known today as the founder of the so-called “Austrian School” of economics. With other disciples such as Friedrich von Hayek, who wrote The Road to Serfdom (1944), a critique of government planning, von Mises’ group of economists examined the role of information in markets. In particular, they focused on the use of prices as communication or signaling devices in the economy. The process of exchange, they argued, is a knowledge-producing and knowledge-sharing activity.

The Austrian school seemed to stumble onto the role of information in the economy while targeting their main nemesis, socialism. It was the planning activities of communist and socialist governments that drew their ire. Their targets came to include modern democratic political economies after nearly all non-communist countries adopted the macroeconomic ideas of John Maynard Keynes. The Austrians would be instrumental, in fact, in the Reagan-Thatcher turn to market ideology in the 1980s after a turbulent decade of inflation and economic stagnation.

These liberal-minded economists had their critics. In less enthusiastic, usually Marxist-informed interpretations, “information” and related technologies were seen as enriching the rich and expanding the global sphere of capitalist control. These critics saw information as a way for corporations to extract surplus value from the production process without extensive investments in additional equipment, labor, or raw materials. Information was a way to enhance productivity, but usually at the expense of the workers involved.

Herb Schiller, in his Who Knows: Information in the Age of the Fortune 500 (1981), pointed out that information technologies were primarily being used by a few thousand “super-corporations” to form a transnational web of cultural hegemony and economic control. The Canadian take on communications and information, arguably started by Harold Innis and Marshall McLuhan, also undertook a theoretical investigation of the economics of information. Canadians were especially sensitive to the trade in information and cultural products across the border with the USA. For instance, Thomas McPhail’s Electronic Colonialism pointed to the globalization of a new type of empire due to the proliferation of mass media messages. This new media empire would operate by influencing the attitudes, values, and languages of people around the world.

For liberal theorists, the new information environment would also upset previously dominant structures of power and democratize society by providing information composed not from the “top,” but from diversified and autonomous sources. These would combat the dominant one-way, industrially produced, mass media messages with interactive technologies. Ithiel de Sola Pool, for example, in his Technologies of Freedom, mirrored this view with a “soft” technological determinism, arguing that the new converging communication and information technologies are “conducive to freedom” and more “pluralistic and competitive.”

Peace scholar Majid Tehranian, a Harvard-trained political economist, wrote Technologies of Power in response to Pool. It remains one of my favorite books because it not only addressed the issues of economics and freedom but connected information “machines” to the complexities and importance of democracy, as well.

The above is only a cursory overview of the ferment that characterized the time. Hopefully, it points to some of the thinkers who struggled to come to grips with the increasingly important roles of communications and information in the economy. It was an intellectual period in which the impact of information technologies was not at all as apparent as it is today.

As for me, I did my doctoral dissertation on electronic money and dystopias, because if there is one thing economists find inconvenient, it is money. Money is the Achilles heel of free market economics.

Notes

[1] Porat, Marc Uri (May 1977). The Information Economy: Definition and Measurement. Washington, DC: United States Department of Commerce. OCLC 5184933.


© ALL RIGHTS RESERVED



Anthony J. Pennings, PhD is Professor and Associate Chair of the Department of Technology and Society, State University of New York, Korea. Before joining SUNY, he taught at Hannam University in South Korea and from 2002-2012 was on the faculty of New York University. Previously, he taught at St. Edwards University in Austin, Texas, Marist College in New York, and Victoria University in New Zealand. He has also spent time as a Fellow at the East-West Center in Honolulu, Hawaii.

How the Web Secures Your Data

Posted on | February 13, 2014 | No Comments

Central to facilitating the usefulness of net-centric communications and commerce is the set of protocols that secure private data. Information such as usernames and credit card numbers traveling over the Internet passes through many types of host routers, as well as your ISP – any of which can constitute a security threat. Unencrypted information can be monitored and stolen at many points along the route from sender to receiver. The following explanation of the most commonly used security system is abbreviated, but it should provide a start for understanding how your data is protected on the Internet.

An initial solution that is still in use is the Secure Sockets Layer (SSL) protocol that was adopted by Netscape in the early days of the World Wide Web to protect sensitive data flowing over the Internet.

SSL starts with a “handshake,” an exchange of information that authenticates the server and sometimes the client (your computer). The process begins with the browser sending a HELLO message containing information about its security abilities and some random data that will be used later on. The server then determines which encryption methods will be utilized, based mostly on the SSL capabilities of the client’s browser, and sends its own HELLO message containing the details of the encryption method that will be used in the session. The server also presents its digital certificate so the browser can verify its identity, and the two sides use the exchanged random data to establish a shared session key. Encrypted data is then sent between the client and server.
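As a sketch of how this looks in practice, Python’s standard ssl module performs the handshake described above when it wraps a TCP socket. The hostname parameter and the open_tls helper below are hypothetical illustrations, not part of any particular site’s code:

```python
# Sketch: performing the TLS/SSL handshake with Python's standard library.
# open_tls() is a hypothetical helper; no real hostname is assumed.
import socket
import ssl

def open_tls(hostname: str, port: int = 443) -> ssl.SSLSocket:
    """Connect to a server and perform the handshake described above."""
    context = ssl.create_default_context()        # loads trusted CA certificates
    sock = socket.create_connection((hostname, port))
    # wrap_socket() runs the HELLO exchange, certificate check,
    # and session-key negotiation before returning.
    return context.wrap_socket(sock, server_hostname=hostname)

# Even without connecting, the default context shows the protections a
# browser-style client insists on: certificate verification and hostname
# checking are both enabled by default.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
print(context.check_hostname)                     # True
```

Once `wrap_socket()` returns, everything sent through the returned socket is encrypted with the negotiated session key.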

If you are starting a website and need to provide secure transmissions, you can purchase an SSL certificate from a company like godaddy.com or another web services company that acts as an SSL certificate authority, such as Symantec or the highly rated DigiCert. DigiCert offers a number of different certificates and various seals that you can display to assure visitors to your site that you have properly secured the connection and are protecting their data.

Along with the HTTPS address prefix and icons like the padlock, we are now seeing changes to the address bar such as turning it green to indicate that increased SSL-based security measures are in place.




Anthony J. Pennings, PhD recently joined the Digital Media Management program at St. Edwards University in Austin TX, after ten years on the faculty of New York University.

Bitcoins and the Properties of Money

Posted on | January 27, 2014 | No Comments

“Money is a matter of functions four, a medium, a measure, a standard, a store.” – A lyrical couplet used to memorize the roles of money.

Bitcoin emerged during the Great Recession to challenge the notion of official money, particularly its political and symbolic connection to a national system. While the US and much of the rest of the world teetered on financial ruin, it emerged as the great geek hope for a supranational currency.

Bitcoin is a complex computer-based type of digital money, proposed by the pseudonymous Satoshi Nakamoto, that utilizes the principles of cryptography and peer-to-peer networking. This “cryptocurrency” emerged from the calculative and recording abilities of modern processing technologies and the transactional capabilities of highly encrypted networks spanning the globe. It has been both embraced and vilified for its capacity to act as money, to transcend national regulation, and for the volatility it offers speculators.

Bitcoin’s value spiked in 2013 when it traded at around $1,000. A striking example of its appreciation was the pair of pizzas purchased in 2010 by programmer Laszlo Hanyecz for 10,000 bitcoins; at the end of 2013, those coins would have been worth around $10 million. One of the most intriguing aspects of this innovation is that the bitcoin money supply is “mined” by computers doing complex calculations.
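As a rough illustration of what those “complex calculations” look like, the sketch below searches for a nonce whose SHA-256 hash meets a difficulty target. Real Bitcoin mining hashes a block header with double SHA-256 against a far harder target; this toy version, with invented data, only conveys the idea:

```python
# Toy proof-of-work: find a nonce such that sha256(data + nonce) begins
# with `difficulty` zero hex digits. Illustrative only - not Bitcoin's
# actual block format or difficulty rules.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce          # this nonce "solves" the block
        nonce += 1                # otherwise keep guessing

nonce = mine("pizza for 10,000 BTC", difficulty=4)
print(nonce)  # the first nonce whose hash has four leading zero hex digits
```

Each extra zero digit multiplies the expected work by sixteen, which is why mining consumes so much computation: the only way to find a valid nonce is brute-force guessing.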

But can Bitcoin perform the activities that are required of a currency: the facilitation of exchange, the capacity to price, the ability to compare value, and the storage of wealth?

An analysis of Bitcoin, any other cryptocurrency, or even an electronic payment system like PayPal should start with an examination of money and how our current system works. Economists give a wide berth to the question of what money is. The standard agreement is that money is anything that can be used to buy goods and services, or anything that is accepted by both sellers and buyers as payment. They also generally agree that money has the following functions, which can be used as criteria to evaluate the utility of a type of money:

1) Medium of Exchange – Money facilitates the exchange of goods and services. It operates as a “symbolic third”, a separate entity that helps to reconcile the value of other goods and services. It is much more efficient than barter, which requires a “double coincidence of wants” (I want your cow, you want my chickens). Money reduces the transaction costs associated with a purchase primarily by reducing the problems associated with bargaining. Can you imagine trying to negotiate for all the products and services you use? How about standing in line when four people in front of you need to negotiate for their milk and bread? Virtual currencies like Bitcoin are seen to reduce transaction costs, especially if they can be used to avoid the billions in fees paid to companies such as Visa Inc. and Citibank for the use of credit and debit cards.

2) Unit of Accounting/Measure of Value – Money is used to price products and services and to express those prices in terms of a common unit of account. It is a measure by which prices and values are expressed. By making nearly everything price-able and exchangeable, a system of comparison is created. When the prices of goods are stated in terms of the monetary unit, the relative price-value of all items becomes comprehensible. This function also allows for prices to be displayed – crucial for a market environment. In the US we have the dollar, in Japan the yen, and in Brazil, the real. In each case, money provides a common “nominator” – a way to use numbers to value products. It can be used to add up the wealth of a person, a company, or a nation in monetary values.

Economists use the term “nominal indicator” to refer to this process of naming a price-value in terms of monetary numbers. Indo-Arabic numerals (0,1,2,3) have been nearly universally adopted as the major money nominator around the world. The power of zero and its place-value system of writing helped displace Roman numerals and provide the vehicle for new calculative innovations. Spreadsheets have given the nominal function of money extraordinary new power by creating enhanced capabilities for budgeting and other types of financial analysis at any one point in time, or over time, or projected into the future.

3) A Store of Value – Money should have the ability to hold its value over time. Although currencies do deflate and inflate in value, money needs to be able to carry most of its value (wealth) into the future. Granted, most people would actually like to see it increase in value. This capability allows money to act as a standard of deferred payment in a credit process, where money is lent out with the expectation that it will be returned, with interest, at a future date. Because of its volatility, countries like Sweden have classified Bitcoin as an asset that can be taxed, like art or jewelry – and not as a currency.

Surprisingly popular, Bitcoins are accepted by a wide number of merchants and companies such as Zynga. Network effects seem to be in play, as the currency becomes more popular with each additional site that accepts it. Bitcoins offer companies a new option for accepting payments, and announcing a willingness to accept them can be a PR event in itself. Companies should monitor the value of the cybercurrency and convert their Bitcoins to official currencies often to avoid major fluctuations in value. Bitcoins have also been subject to hacking, and thefts have occurred. A company should also be prepared to pay taxes on any transaction as if it were dealing in dollars.

Bitcoins suggest the possibilities of a secure, decentralized, and distributed currency system outside the control of national governments, but they also raise additional concerns and issues about the role of money in a globalized digital economy. Bitcoins promise a monetary system in which value and the amounts produced are determined by the cryptocurrency’s parameters and not by the influence of any individual or organization, including central banks. But how will nations control their money supply? What will it mean for organized crime? Will it create new inequalities? Will its value vary too much to be useful for anything but asset speculation or a temporary hedge?

It is likely that the international system of nation-states will continue to block Bitcoin’s growth. The Bank of Finland recently denied Bitcoin status as an official currency or even a payment instrument because Finnish law states that “a payment instrument must have an issuer responsible for its operation,” according to Päivi Heikkinen, head of oversight at the Bank of Finland. Finland has joined China and Germany in taking action to slow the adoption of the virtual currency. The US, on the other hand, has taken a cautious approach, with Fed chair Bernanke acknowledging that the Fed doesn’t have the authority to supervise Bitcoin and other virtual currencies. Bitcoin in fact broke $1,000 when Bernanke suggested that virtual currencies “may hold long-term promise.”

While the future of Bitcoin is uncertain, what we should be looking at is the platform it runs on, called a blockchain. This is a distributed electronic ledger shared among the nodes of a participating network, and it suggests a new era of online services.
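To make the ledger idea concrete, here is a minimal sketch (not Bitcoin’s actual data structures) in which each block records the hash of its predecessor, so tampering with any earlier entry invalidates every later link in the chain:

```python
# Minimal hash-linked ledger: each block stores the hash of the previous
# block, so history cannot be altered without detection. The transaction
# strings are invented examples.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical serialization so the same block always hashes the same way.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64   # genesis marker
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    # Every block's stored prev_hash must match a fresh hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
print(is_valid(chain))                              # True
chain[0]["transactions"] = ["alice pays bob 500"]   # tamper with history
print(is_valid(chain))                              # False
```

In a real blockchain, many independent nodes hold copies of this ledger and reject any chain whose links fail to verify, which is what makes the shared record trustworthy without a central issuer.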






Digital Media Archetypes

Posted on | January 16, 2014 | No Comments

The digital world is competitive, but quite enticing for people looking to explore their capabilities to produce creative and imaginative content. Companies struggle in a competitive environment, and it takes commitment and intensive skill sets to be successfully employed in the digital fields. One way to look at the requirements is in terms of some emerging digital archetypes – general job categories dealing with the demands of working in media-intensive organizations. The purpose of delineating these areas is to help people conceptualize the types of competences and skills toward which they should gravitate, and to point to some of the educational opportunities available to train and certify a set of digital media-related skills.

By archetypes, I mean recognizable personality types that emerge from the systemic organization of activities and responsibilities in modern digital environments. Individuals look to situate themselves according to their interests and perceived skills and gravitate towards particular types of jobs. The word “archetype” is derived from ancient Greek: its roots are arche, for “original,” and typos, for “model, pattern, or type.” The modern meaning implies that similar characters, concepts, or objects are copied, derived, or modeled from an original pattern. So the idea of archetypes here is a starting point for examining different digital media job categories.

Here is a list of six proposed digital media archetypes:

Design
Technology/Programming
Business Management
Strategic Communication
Analytics
Global Knowledge

The first three are fairly traditional. Technical skills involve computer programming, information management, and network administration. Design requires strong artistic and aesthetic sensibilities combined with animation and layout skills, often including the mastery of one or more applications like those in the Adobe Creative Suite. Business skills, as my friend Igor Shoifot, COO of http://www.fotki.com, argues, are crucial at all positions across the digital media sphere. Now a Silicon Valley venture capitalist, he stresses that everyone in a startup needs to keep business goals in mind, even if they are not directly involved in sales activities.

I’m adding three new categories. The area of communication has been given renewed emphasis due to the network effects and viral capabilities of the Internet. New tactics and technologies have increased the importance of public relations, content marketing, blogging and micro-blogging, as well as social media management. This important approach is now often called strategic communications.

Another area is analytics – capturing value from the growing reservoirs of “big data” – structured and unstructured data that are available in proprietary and public stores of information. Mining this information from the web, processing it into usable data sets, and organizing it into meaningful stories and visualization schemas like infographics will continue to be highly valued skills into the foreseeable future.

Lastly, global knowledge is increasingly expected in work environments, especially those dealing with digital media and cultural industries that operate across ethnic, racial, and national borders. While this area is fluid due to rapid technological and political changes, knowledge of both regions and individual countries are relevant. The globalization of media and cultural industries is very much a process of adjusting to local languages, aesthetic tastes, business conditions, and local consumer preferences – not just mass distributing standardized content.

Work environments involving global media and cultural content require and utilize various combinations of design, technical, strategic communication, analytics, global knowledge, and business acumen. Most people specialize in one or two areas of expertise; others may take on many, particularly in small organizations. Still others may take on leadership roles that require them to establish rapport with, and orchestrate, people with different types of competences to accomplish different communicative and creative tasks.



Anthony J. Pennings, PhD is Professor of Global Media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii in the 1990s.

The Fedwire Network and Open Market Operations of the Federal Reserve

Posted on | December 22, 2013 | No Comments

I’ve been studying financial technology since I did my master’s degree on global money and telecommunications networks. One of the most intriguing examples is the US Federal Reserve’s Fedwire network. Probably the most secure data network in the world, it has been designed to provide a wide range of services for financial institutions, such as interbank payments and settlement processes. Take a close look at the following figures: in 2014, Fedwire processed some 135 million transactions totaling over US$885 trillion – an average of some $2.4 trillion a day.[1] That is certainly a lot of money. The total US government budget for one year is just over $3.5 trillion.

Not surprisingly then, Fedwire is central to the nation’s money supply and the health of the economy. Not only does it move massive amounts of money around, it is also the mechanism through which the Federal Reserve’s interest rate policy is implemented. When Fed Chair Janet Yellen announces that interest rates will increase or decrease, Fed traders get on their computers and sell or buy financial instruments to reach the desired interest rate target. These transactions are known as Open Market Operations (OMO).[2]

The Federal Reserve was created in the aftermath of a devastating financial crisis in 1907. Although the crisis was alleviated, primarily by the actions of the infamous New York banker JP Morgan, it drew public anger and calls for reform. The next year, Teddy Roosevelt pushed for the Aldrich–Vreeland Act, which formed the National Monetary Commission to study banking systems around the world. Banking elites drew up a new system of money control over the next few years. A few months after JP Morgan died, President Woodrow Wilson signed the “Currency Bill,” or Federal Reserve Act of 1913, on December 23 of that year. The system opened for business the next year, on November 16, 1914, with assets from the sale of shares in the Federal Reserve Banks to stockholders of subscribing national banks.

Federal Reserve Banks began moving funds and information electronically in 1915, but it was not until 1918 that they created the proprietary telecommunications system known as Fedwire. This telegraph system connected the Federal Reserve Board in Washington, DC to the US Treasury and all 12 Reserve Banks with lines leased from telegraph companies. Initially it was used for settling gold accounts without physical transfer, and within a couple of years for transferring Treasury securities electronically. Over the next 40 years, Fedwire migrated from Morse code telegraph systems to teletype and telex switched networks and eventually to proprietary telecommunications networks.

During the 1960s, the entire global financial system began to face a “paper crisis” that pushed the move to computerized information solutions. As transactions increased across the sector due to a thriving economy, paper-based record keeping and transfer simply could not keep up. Banks, insurance companies, and securities firms began to automate their activities with computers, and clearinghouses and exchanges likewise saw the necessity of moving to computers and data networks. The Fed began to implement automated computer operations as well, starting with Xerox Sigma computers that allowed banks to enter their transactions directly into the Fed’s books without additional human intervention. The pressure continued when the US went off the gold standard under President Nixon and the two oil crises dramatically increased the amount of US dollars in global circulation, leading to extraordinary double-digit inflation.

President Jimmy Carter appointed Paul Volcker as the new Chair of the Fed in October of 1979. Volcker immediately enacted tough measures that increased interest rates by “targeting” the amount of money in the economy and in bank reserves. Overnight interest rates for banks’ borrowing of Fed Funds increased to 11.2% in 1979 and later peaked at 20% in June 1981.[3] Carter had also signed the Depository Institutions Deregulation and Monetary Control Act of 1980 (DIDMCA or MCA), which brought all banks in the country under Fed control. The Act required the Fed to price and charge for most financial services, including funds transfers, check clearing, and currency and coin services, and to provide nonmember depository institutions with access to those services. This encouraged private-sector competition while also standardizing computer systems and connecting the network throughout the entire financial system. Internet protocol (IP) and distributed processing technologies were integrated in the 1990s, and the Fed’s open market operations even replaced their famous chalkboards with computers.

How does the Fed try to control interest rates? Banks borrow money from each other through the Fed Funds Market over the Fedwire so they can increase their loan portfolios. If they determine they can make a profit by borrowing the money from other banks and lending it out at an advantageous rate, they will likely do that. Alternatively, they may decide that it is advantageous to hold interest-bearing bonds with their deposits rather than lend out the money. These transactional equations form the operational rationale for the FOMC policy prescriptions and the trading operations they direct.

The FOMC does not set the rate of interest; rather, it sets a target that actual trading operations are instructed to meet in order to influence the money and credit conditions in the economy. It does this through the Open Market Operations trading desk at the New York Fed. On workday mornings, around 9:30 AM, with the exact time of intervention determined by chance, the traders at the OMO desk intervene to influence the Fed Funds Market and guide the Fed Funds Rate towards the FOMC’s target rate.

The traders buy or sell government securities through the OMO to influence the banking system’s lending behavior, but it would be a mistake to think that the OMO trades directly in the federal funds market. Instead, the Fed participates in the Treasuries market and, more recently, agency debt, especially debt from Fannie Mae, Freddie Mac, Ginnie Mae, and the Federal Home Loan Banks. What the traders do is influence the amount of excess reserves banks have to lend overnight. The basic economic logic is as follows:

– Fed purchases
• Increase bank reserves
• Decrease Fed Funds Rate
– Fed sells
• Decrease bank reserves
• Increase Fed Funds Rate
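The logic above can be sketched as a toy model. The linear demand curve for reserves below is invented purely for illustration – it is not a real Fed model, and the dollar figures are hypothetical:

```python
# Stylized sketch of the inverse relationship between bank reserves and
# the overnight rate. The demand function and numbers are invented.
def fed_funds_rate(reserves_billion: float) -> float:
    # Invented downward-sloping demand for reserves:
    # more reserves available to lend -> lower overnight rate.
    return max(0.0, 10.0 - 0.02 * reserves_billion)

reserves = 200.0                 # hypothetical starting reserves ($ billions)
print(fed_funds_rate(reserves))  # 6.0

reserves += 100.0                # Fed *purchases* $100B of securities
print(fed_funds_rate(reserves))  # 4.0 - purchases push the rate down

reserves -= 150.0                # Fed *sells* $150B of securities
print(fed_funds_rate(reserves))  # 7.0 - sales push the rate up
```

The point of the sketch is only the direction of the arrows: purchases add reserves and lower the rate, sales drain reserves and raise it.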

The OMO conducts both permanent operations, holding debt for long periods, and temporary operations, using repurchase agreements with maturities of less than a week. It does this through trades with some 21 primary dealers that act as intermediaries with the larger banking system, including Barclays Capital, Citigroup, Credit Suisse Securities, Goldman Sachs, J.P. Morgan Securities, Morgan Stanley, UBS Securities, and Daiwa Securities America. Two primary dealers were lost with the collapse of Bear Stearns and Lehman Brothers in 2008. If the Fed’s OMO desk purchases bonds from the primary dealers, it injects money into the banking system and decreases the incentive to borrow from the Fed Funds Market, consequently reducing interest rates. If it sells bonds through the primary dealers, it absorbs money from the banks and increases the incentive to borrow through the Fed Funds Market, thus increasing interest rates. The result is that trading operations directly increase or decrease the level of Fed balances, not the flow of federal funds transactions among banks.

The buying and selling of U.S. government securities from these primary dealers influences the price of the money bankers lend to each other overnight. If the Fed purchases bonds, it increases bank reserves and decreases the Fed Funds interest rate; if it sells bonds to the primary dealers, it draws money out of bank reserves and increases the Fed Funds interest rate. By making it attractive for banks to buy government securities, the Fed can “tighten” the money supply; conversely, its purchase of bonds injects money into the economy. It is this inverse relationship between OMO trades and the Fed Funds Rate that guides the nation’s money supply. If bank reserves increase because the banks have sold their government bonds to the Fed, the interest rate – the amount lenders charge for borrowing that money – will decrease. Likewise, if bank reserves fall because banks prefer to use that money to buy interest-bearing government bonds from the Fed, the Fed Funds Rate will increase.

Whatever you think about the Federal Reserve, it is central to our current financial infrastructure. Very few people understand the processes of the central bank or the impact it can have on the economy; others fear the Fed and the power it has over the economy. In any case, it is important for people to come to grips with this dominating institution and the way it structures our current political economy. Currently, the Fed Funds Rate is being held near zero until unemployment decreases, and the Fed is purchasing an additional $75 billion a month in mortgage-backed securities and Treasury bills to provide stimulus.[4]

Notes
[1] Fedwire transfer figures are from http://www.federalreserve.gov/paymentsystems/fedfunds_ann.htm and include daily numbers, which averaged US$2.4 trillion a day.
[2] This video takes you inside the New York Fed’s OMO.
[3] More information on the Fed’s handling of inflation in the late 1970s and early 1980s.
[4] For additional economic stimulus, the Fed has been buying a combination of treasury securities and mortgage-backed assets at a rate of $85 billion a month. In December 2013 it announced the first “tapering” of $10 billion, reducing the monthly rate to $75 billion.

© ALL RIGHTS RESERVED




Anthony J. Pennings, PhD recently joined the Digital Media Management program at St. Edwards University in Austin TX, after ten years on the faculty of New York University.

Why Digital Media Firms Need to be Fed Watchers

Posted on | December 12, 2013 | No Comments

Over the course of about 10 years of teaching economics and digital media at New York University (NYU), I developed a simulation of the Federal Reserve Bank that has proved useful in engaging participants in the study of economics, monetary policy, and how the US economy works.

I developed it initially to make my macroeconomics course more interesting, but recently applied the Fed simulation to an MBA course in digital economics. Consequently, I felt an additional burden to relate the Fed’s actions to a variety of media companies. In this post, I identify key components of the larger economy that digital media companies should be following and how decisions by the Federal Reserve can influence these aspects of the economy – and ultimately their organizations.

The Fed Watcher's Handbook

The entirety of this article is published in The Fed Watcher’s Handbook.

Producers of digital goods and services need to develop educated expectations about economic conditions influencing their businesses. They need to tie observations about the economy and where it is heading to their own decisions about hiring, production schedules, and investments in permanent facilities like studios or office space. While the Fed’s actions, policies, and research are not the only concerns that can impact a digital firm, they are consequential enough to warrant sustained observation.

This material is now available at Amazon as part of The Fed Watcher’s Handbook: Simulating the Federal Reserve in Classrooms and Organizations

How do digital media managers conceptualize larger business trends and track this information while necessarily micromanaging the daily activities of their organizations? One of our media economics textbooks proved helpful: The Economics and Financing of Media Companies by Robert G. Picard points out four important areas where media companies should direct their attention:

  • the business cycle
  • inflation
  • interest rates
  • exchange rates

Although Picard barely mentions the Federal Reserve, we can address the above concerns, starting with the recurring and irregular expansions and contractions of economic activity known as the business cycle.

The Federal Reserve influences the business cycle by controlling the supply of money in the economy. By injecting more money through the financial system, it can lower interest rates and consequently increase business investments and consumer spending. Economic growth means more jobs, increased incomes and more disposable income for consumers to spend. Sales of media products and services are subject to consumer incomes and expectations about the state of the economy. Monitoring the contraction and expansion of the economy supplies vital information about consumer spending patterns and confidence.

Media goods like game consoles and HDTVs are particularly sensitive to economic fluctuations. Advertising, a key driver of media sales, reacts strongly to business conditions. But admittedly, more research needs to be done on digital media products as substitute goods for other entertainment pastimes, particularly in times of economic downturn. For example, films and radio were popular pastimes during the Great Depression as a means of escaping the harsh conditions of the time. More recently, Netflix began its historic rise during the Great Recession and video gaming also did well as consumers purchased relatively cheaper forms of entertainment that allowed them to stay home or take advantage of mobile leisure time.

Inflation can come either from too much money and demand in the economy or from too few goods and services. If the economy is growing too fast, prices for goods and wages increase accordingly. When prices inflate, it can be good for the economy, as purchases are made more quickly to avoid higher prices. Another incentive to spend is that the value of monetary savings depreciates faster. Inflation is good for borrowers, as they wind up spending less in real terms to pay off their debt. On the other hand, prolonged inflation makes it hard for producers to plan, and inventories can dry up as consumers hoard goods, contributing to shortages. Excessive inflation can be quite stressful for an economy, as the US discovered during the economic period in the 1970s known as “stagflation.”
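Why inflation favors borrowers can be shown by deflating a fixed nominal repayment into today’s purchasing power. The loan amount and inflation rates below are illustrative, not drawn from any actual data:

```python
# Inflation favors borrowers: a fixed nominal repayment buys less in
# real terms when prices have risen. Numbers are illustrative only.

def real_value(nominal, inflation_rate, years):
    """Deflate a future nominal payment into today's purchasing power."""
    return nominal / (1 + inflation_rate) ** years

repayment = 10_000                           # lump sum owed in 5 years
mild = real_value(repayment, 0.02, 5)        # 2% inflation
seventies = real_value(repayment, 0.10, 5)   # 10%, stagflation-style
```

At 2% inflation the $10,000 repayment costs about $9,057 in today’s purchasing power; at a sustained 10%, only about $6,209 — the lender absorbs the difference.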

Deflation comes from too little money or effective demand (demand backed by cash), or from an abundance of goods and services. Deflation has been a more persistent problem in our contemporary economy. This has occurred despite the massive infusions of money into the economy and rising prices of energy. While deflation mainly results from a slow economy and lack of demand, one of the reasons for our current deflationary trend has been the increasing efficiency of digital products following Moore’s Law, which predicts that the speed of digital microprocessing doubles roughly every 18 months. Digital technologies are allowing us “to do more with less” – including with fewer people. Competitive pressures due to low barriers to entry into Internet commerce also push prices down. Price stability is a major concern for the Federal Reserve, and it watches changes in prices closely.
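The compounding behind this deflationary pressure is easy to quantify. The sketch below simply assumes the 18-month doubling period cited above:

```python
# Compounding improvement under an assumed 18-month doubling period.
# Over a decade this implies roughly a 100-fold gain in performance
# per dollar, a persistent downward force on digital prices.

def performance_factor(years, doubling_months=18):
    """How many doublings compound over the given span of years."""
    return 2 ** (years * 12 / doubling_months)

decade_gain = performance_factor(10)   # roughly 100x after ten years
```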

Interest rates are the prime vehicle for the Federal Reserve to influence the price of money and consequently its supply in the economy. The cost of borrowing capital is critical for many digital media firms that need funds for new initiatives, expanding operations, or just the cash flow for meeting payroll and other expenses. The Federal Reserve conducts complex financial trading operations to meet its target for interest rates, which it announces eight times a year following a two-day meeting.

With the globalization of net-centric activities, exchange rates are also a concern for digital media firms. The Fed rarely intervenes directly in the spot markets that determine the dollar’s exchange rates; exchange rate policy is the responsibility of the U.S. Treasury. However, because the global currency markets are so huge, trading trillions of dollars each day, the Treasury sometimes coordinates with the New York Fed. Even then, it is more to signal a policy that can influence exchange rates than to implement one.

The Fed can influence exchange rates with its interest rate policy. Lower interest rates depress the value of the US dollar by discouraging purchases of U.S. dollar-denominated financial instruments. They also make it attractive to borrow cheap money and spend it internationally. Sending money offshore requires purchasing other currencies, and that depresses the value of the U.S. dollar. A weaker dollar makes exports more attractive but can drive up the prices of imports, including raw materials and other components needed for your own production processes.

It would be a mistake to think that the Fed somehow controls the economy – far from it. Despite powerful governmental influences, the US and most of the global economy operates under market (or market-failure) conditions. Also, large banks and companies can exert significant influence on markets as we saw with the events leading to the crash of 2007. Still, the Fed can have a major impact and its stabilizing effects are often underestimated. The research on the economy it produces is also extremely valuable. It would be wise for digital media firms to keep an eye on the Federal Reserve.

For a small fee, you can purchase my Fed Watcher’s Handbook.




Anthony J. Pennings, PhD is the Professor of Global Media at Hannam University in South Korea. Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Hawaii in the 1990s.

Licensing Creative Properties – Merchandise and Characters

Posted on | December 4, 2013 | No Comments

One strategy for digital media growth is the licensing of creative intellectual property, a $150.8 billion industry that is increasingly global.

Intellectual property law provides protection for creative content, and licensing is required when a desired property such as an animated character is controlled with a copyright, trademark, or even a patent. Licensing gives another company permission to use a specific form of intellectual property; alternatively, a company might want to license a creative property from another company. In either case, a Licensing Agreement defines clear conditions for the relationship, and a royalty is paid for using the protected signage or characterization on a product such as a sleeping bag, lunch box, doll, or jacket. In this post I examine licensing merchandise and character properties that are protected with a copyright or trademark.

It may be useful to understand the scope of the business and some of its biggest license holders and licensors. One would have to start with the Walt Disney Company, especially in light of its acquisitions of Lucasfilm, Marvel, and Pixar. While 2012 figures still rate the Disney Princesses, at $1.5 billion, as the highest source of licensing revenues, Star Wars merchandise, at $1.48 billion, is a close second. Add other licensed properties such as Mickey Mouse, Winnie the Pooh, Cars, and High School Musical, and you can understand why Disney controls just over half of this market. The figures are even more extraordinary when you realize that they do not include merchandise manufactured and sold by the property owner itself.

According to the Licensing Letter, some 34 entertainment/character properties in Canada and the U.S. had retail sales of licensed merchandise of over $100 million. The recent growth has been significant, with sales of licensed merchandise in the U.S. and Canada alone increasing 136% between 2011 and 2012. Along with Disney’s properties, Hello Kitty is a significant property with royalties of just over US$1 billion, and Rovio’s Angry Birds totaled over $500 million.

Angry Birds at NASA

Peter Vesterbacka, the Chief Marketing Officer of the Finnish company Rovio, publicly stated that they wanted to turn Angry Birds into a “permanent part of pop culture.” From a $0.99 game to a universe of signage, it’s difficult to walk through a store without seeing Angry Birds toys, candies, stickers, shirts, physical games, and more. I took my daughter to NASA’s Johnson Space Center near Houston, and she enjoyed playing in a huge “Kids Space Place” with Angry Birds 3-D sculptures and imaginative games, all, of course, adorned with their trademarks (see above image).

Licensing covers a wide range of other property types besides Entertainment/Characters such as Fashion, Sports, Collegiate, Art, Music, and other types of Toys and Games. For those categories as well as Media and Entertainment, a good source of information is the Licensing Letter.



JFK’s Contribution to Global Communications

Posted on | November 21, 2013 | No Comments

We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too. – President John F. Kennedy, September 1962

It was Arthur C. Clarke, the author of many science fiction classics including 2001: A Space Odyssey (1968), who published a seminal article on the possibilities of satellites in the 1945 edition of the British journal Wireless World. The former radar engineer conceived the idea of putting satellites into an orbit high enough above the earth so that they could maintain pace with the revolving ground below. He reasoned that a satellite circling the Earth at about 6,870 miles per hour, 22,300 miles above the equator, would need 24 hours to complete its orbit.
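Clarke’s figures can be checked against Kepler’s third law. The sketch below uses standard textbook values for Earth’s gravitational parameter and the sidereal day, and assumes a spherical Earth, so the result differs slightly from the commonly quoted 22,236-mile figure:

```python
import math

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM, solved for orbital radius r.
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86_164.1             # one sidereal day, seconds
R_EARTH = 6_371_000.0    # mean Earth radius, meters

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius, meters

altitude_miles = (r - R_EARTH) / 1609.344    # roughly 22,200 miles up
speed_mph = (2 * math.pi * r / T) / 0.44704  # roughly 6,880 mph
```

Both numbers land close to the figures Clarke worked out by hand in 1945.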

Three satellites, or as he called them, “rocket stations,” positioned in “geostationary orbit” approximately 120 degrees apart, could act as a worldwide telecommunications system. Each satellite could act like a fixed radio repeater tower, and together they could provide interconnections between various points over the entire Earth and between each other. A geostationary orbit meant that the antennas receiving satellite signals did not have to move to track the satellites. Called “earth stations,” these satellite dishes were initially sometimes 50 meters in diameter to pick up the faint signals from the early, pre-photovoltaic satellites. Clarke’s idea helped propel the “Space Race” and the vision of a new global communications system connecting countries and peoples around the world.

We are coming up on the 50th anniversary of the tragic death of our 35th president. On November 22, 1963, shortly after noon, President John F. Kennedy was assassinated as he rode in a motorcade through downtown Dallas. Although he was only president for three years, he had an extraordinary influence on the development of our modern technological age, especially the rise (literally) of global communications and the fulfillment of Clarke’s vision.

President Eisenhower created NASA in 1958 in response to the Soviet Sputnik satellite launches, but it was Kennedy who directed the space agency to send humans to the Moon by the end of the 1960s. Concerns mounted after the successful 1961 launch and landing of Soviet cosmonaut Yuri Gagarin, the first human in space. JFK laid out an initial vision on May 25, 1961, in a special address before Congress. It was less than half a year into his first term, and just weeks after the failed Bay of Pigs invasion of Cuba.

Although initially committed to unmanned space exploration, Kennedy saw how putting humans into space captured the public’s imagination. The space agency needed billions of dollars to design and build the rockets and systems to put larger civilian and military payloads into space. He pushed astronaut flights to gather political and popular support for funding NASA.

In his “Urgent National Needs” speech, Kennedy emphasized the need to recover from the ongoing recession and also reinforced a commitment to freedom in the developing world, particularly in Vietnam. He stressed the dangers of the Cold War and what was perceived as a USSR advantage in military superiority, especially in space, with its “powerful intercontinental striking force.”

Finally, he laid out the vision for space, specifying the importance of new and more powerful rockets and requesting $125 million for “accelerating the use of space satellites for world-wide communications.” A related concern was weather satellites, which were becoming more useful in forecasting weather conditions and helping to avoid disasters.

In all, Kennedy asked for over $7 billion for “a great new American enterprise” in space. It was within this context of national urgency that he set out the initial vision of going to the Moon by the end of the decade.

A year and a half later, Kennedy reinforced the vision of a Moon landing in his speech at Rice University. In September 1962, while dedicating the new Manned Spacecraft Center (now called the Johnson Space Center) just outside of Houston, he emphasized the importance of science and technology and reinforced the importance of communications and weather satellites. He pointed out that the Mariner spacecraft was on its way to Venus and compared it to “firing a missile from Cape Canaveral and dropping it in this stadium between the 40-yard lines.” The imagery was intentional; missiles with nuclear warheads that could pinpoint targets in the US were becoming a practical reality.

By conflating space with national and civil defense, he was able to mobilize the resources for a combined national effort to travel to the Moon and “do the other things”: close the missile gap with the USSR; circle the Earth with satellites to help ships at sea, connect military operations, and predict weather conditions; and lead international communications. The same rockets that would propel the astronauts to the Moon would first set up a global network of communications satellites.

Work had begun that year on the creation of a consortium that would field a fleet of orbiting satellites to provide global communications. The capital expenditures of space communication systems were so large that it took the Communications Satellite Act of 1962 to mobilize the resources of NASA, the Department of Defense, and AT&T, the largest corporation in the world at the time, to create the new domestic monopoly Comsat. Though initially designated as a private enterprise on February 1, 1963, Comsat required initial funding by the US government. Soon after, former Under Secretary of the Air Force Dr. Joseph Charyk was named its first CEO. Comsat was chartered as a common carrier subject to the Communications Act of 1934 and the agency it created, the Federal Communications Commission. Given the extensive foreign relations dimension of the company, the President was given the power to oversee its activities.

Comsat moved quickly to initiate the formation of Intelsat, an international telecommunications satellite consortium. In August 1964, the organization was formed with 19 other countries, with Comsat as the U.S. representative and main owner. The U.S. share was 61 percent, compared to the next two largest owners, the United Kingdom with 8.4 percent and France with 6.1 percent. Other countries were skeptical because of satellites’ ability to bypass national boundaries, but Intelsat was careful to work with established national Post, Telephone and Telegraph (PTT) entities and the International Telecommunication Union (ITU), and satellites were gradually accepted around the world.

With the backing of the United States, the Intelsat program proved to be very successful for the development of international telecommunications. Although undersea telegraph cables had been operating since 1866, and undersea voice cables since 1956, they could not keep up with global demand by the 1960s. Hughes Aircraft launched the first three experimental geostationary-orbit satellites in 1963 and 1964. While Syncom-1 never functioned adequately, Syncom-2 transmitted telephone, telex, and data communications across the Atlantic to Africa and Europe. Syncom-3 was launched over the Pacific and repeated a similar performance. Intelsat-1, or “Early Bird,” as it was named, became the world’s first commercial satellite when it was launched from Cape Kennedy on April 6, 1965.

Clarke’s vision of three geostationary “rocket stations” was realized in July 1969 when Intelsat III was placed over the Indian Ocean region. Launched just weeks before the Moon landing, Intelsat III offered 1,500 voice circuits or 4 TV channels and carried President Nixon’s congratulatory telephone conversation with the astronauts.

As more satellites were launched, Intelsat was criticized for its natural-monopoly model. Satellite proposals like Orion and Finansat challenged the status quo, but their business models were based on “skimming the cream” off prime routes such as that between the US and Great Britain. Nevertheless, President Clinton finally privatized Intelsat in 2000 to further competition in global communications.

Advances in fiber-optic cables also challenged the satellite model. An undersea communications cable can move terabits of data each second, while satellites lag at hundreds of megabits. Undersea cables such as the 18,000-kilometer SEA-ME-WE-4, linking countries including France, Egypt, Singapore, and Indonesia, have become international workhorses for the Internet. But by using spot beams, satellites can provide a tremendous amount of bandwidth for niche government, media, and corporate needs, as well as necessary redundancy in case a cable is cut or otherwise damaged.

Although the dynamics have changed, JFK’s contribution to global communications lives on in a vibrant network of interconnected components that transmit our Facebook likes, blogs like this one, and other information and news that we value in our disparate, but global civilization.


Citation APA (7th Edition)

Pennings, A.J. (2013, Nov 21) JFK’s Contribution to Global Communications. apennings.com https://apennings.com/how-it-came-to-rule-the-world/the-cold-war/jfks-contribution-to-global-communications/




Anthony J. Pennings, PhD is a professor at the State University of New York in South Korea (SUNY Korea). Previously, he taught at St. Edwards University in Austin, Texas and was on the faculty of New York University from 2002-2012. He also taught at Victoria University in Wellington, New Zealand and was a Fellow at the East-West Center in Honolulu, Hawaii during the 1990s. He lives in Austin, Texas when not working in Korea.

  • Referencing this Material

    Copyrights apply to all materials on this blog but fair use conditions allow limited use of ideas and quotations. Please cite the permalinks of the articles/posts.
    Citing a post in APA style would look like:
    Pennings, A. (2015, April 17). Diffusion and the Five Characteristics of Innovation Adoption. Retrieved from https://apennings.com/characteristics-of-digital-media/diffusion-and-the-five-characteristics-of-innovation-adoption/
    MLA style citation would look like: "Diffusion and the Five Characteristics of Innovation Adoption." Anthony J. Pennings, PhD. Web. 18 June 2015. The date would be the day you accessed the information. View the Writing Criteria link at the top of this page to link to an online APA reference manual.

  • About Me

    Professor at State University of New York (SUNY) Korea since 2016. Moved to Austin, Texas in August 2012 to join the Digital Media Management program at St. Edwards University. Spent the previous decade on the faculty at New York University teaching and researching information systems, digital economics, and strategic communications.

    You can reach me at:

    apennings70@gmail.com
    anthony.pennings@sunykorea.ac.kr


  • Disclaimer

    The opinions expressed here do not necessarily reflect the views of my employers, past or present.