
Ten Years and Beyond: Economists Answer NSF's Call for Long-Term Research Agendas

Edited by:

Charles L. Schultze, Brookings Institution and Chair, AEA Committee on Government Relations
Daniel H. Newlon, Director, AEA Government Relations

Electronic copy available at: http://ssrn.com/abstract=1886598

Introduction
We would like to acknowledge and thank the National Science Foundation's Directorate for the Social, Behavioral and Economic Sciences (NSF/SBE) for challenging economists and other relevant research communities to step outside of present demands and to think boldly about future promises. Specifically, in August 2010 NSF/SBE invited groups and individuals to write white papers that describe grand challenge questions in their sciences that transcend near-term funding cycles and are likely to drive next-generation research in the social, behavioral, and economic sciences. NSF/SBE planned to use these white papers "to frame innovative research for the year 2020 and beyond that enhances fundamental knowledge and benefits society in many ways. This request is part of a process that will help NSF/SBE make plans to support future research." At the conclusion of the submission period on October 15, 2010, NSF/SBE had received 252 papers. A compendium of abstracts of the 252 white papers [1] and most of the full texts of the white papers can be downloaded from the website http://www.nsf.gov/sbe/sbe_2020/.

We are disseminating the white papers of interest to economists independent of the NSF because these papers offer a number of exciting and at times provocative ideas about future research agendas in economics that are worth further consideration by economists. These papers could also generate other compelling ideas for infrastructure projects, new methodologies and important research topics. Also, some of these papers are not available at the NSF website because they were not submitted successfully by the deadline. We have placed 54 of the white papers on our website http://www.aeaweb.org/econwhitepapers/ and have assembled these white papers in this electronic publication.

The following white papers are of possible interest to economists:

Acemoglu, Daron – Challenges for Social Sciences: Institutions and Economic Development ....9
Alesina, Alberto – Why Certain Countries have Developed and Others Have Not? ....15
Altonji, Joseph – Multiple Skills, Multiple Types of Education, and the Labor Market: A Research Agenda ....21
Autor, David and Lawrence Katz – Grand Challenges in the Study of Employment and Technological Change ....27
Baily, Martin Neil* – Some Foundational and Transformative Grand Challenges in Economics ....37
Berry, Steven – A Proposal for Future SBE/NSF Funded Research: Refocusing Microeconomic Policy Research ....49
Bloom, Nick – Key Outstanding Questions in Social Sciences ....55
Blume, Lawrence – Robustness and Fragility of Markets: Research at the Interface of Economics and Computer Science ....59
Boskin, Michael – Ideas About Possible NSF Grand Challenges in Economics Over the Next Twenty Years ....65
Brown, Charles, Dan Brown, Dalton Conley, Vicki Freedman, Kate McGonagle, Fabian Pfeffer, Narayan Sastry, Robert Schoeni, and Frank Stafford – Future Research in the Social, Behavioral, and Economic Sciences with the Panel Study of Income Dynamics ....69
Brunnermeier, Markus, Lars Peter Hansen, Anil Kashyap, Arvind Krishnamurthy, and Andrew W. Lo – Modeling and Measuring Systemic Risk ....75
Card, David, Raj Chetty, Martin Feldstein, and Emmanuel Saez – Expanding Access to Administrative Data for Research in the United States ....81
Charness, Gary and Martin Dufwenberg – Future Research in the Social, Behavioral & Economic Sciences ....85
Cramton, Peter* – Market Design: Harnessing Market Methods to Improve Resource Allocation ....87
Cutler, David – Why Don't People and Institutions Do What They Know They Should? ....91
Darity, William, Gregory N. Price, and Rhonda V. Sharpe – Broadening Black and Hispanic Participation in Basic Economics Research ....97
Diamond, Peter – Three Important Themes: Taxation of Capital Income, Behavioral Economics in Equilibrium Analyses, and Systemic Risk ....105
Duflo, Esther – A Research Agenda for Development Economics ....111
Eaton, Jonathan and Samuel Kortum* – The Contribution of Data to Advances in Research in International Trade: An Agenda for the Next Decade ....117
Fischer, Stanley – Questions about the Future of the International Economy ....123
Fudenberg, Drew – Predictive Game Theory ....125
Gintis, Herbert – Long-range Research Priorities in Economics, Finance, and the Behavioral Sciences ....131
Goulder, Lawrence* – Integrating Economic and Political Considerations in the Analysis of Global Environmental Policies ....135
Greenstein, Shane, Josh Lerner, and Scott Stern – The Economics of Digitization: An Agenda for NSF ....139
Gruber, Jon – What is the Right Amount of Choice? ....149
Haltiwanger, John – Making Drill Down Analysis of the Economy a Reality ....151
Hanson, Gordon – Future Directions for Research on Immigration ....157
Hanushek, Eric – Developing a Skills-based Agenda for "New Human Capital" Research ....163
Hart, Oliver – Making the Case for Contract Theory ....169
Heckman, James – A Research Agenda for Understanding the Dynamics of Skill Formation ....173
Hubbard, Glenn – Some Compelling Broad-Gauged Research Agendas in Economics ....181
Imbens, Guido – Challenges in Econometrics ....183
Jackson, Matthew – Research Opportunities in Social and Economic Networks ....189
Jorgenson, Dale – A New Architecture for the U.S. National Accounts ....193
Kapteyn, Arie – Measurement and Experimentation in the Social Sciences ....199
Kroszner, Randall – Implications of the Financial Crisis ....205
Levine, David – Virtual Model Validation for Economics ....209
Lo, Andrew – A Complete Theory of Human Behavior ....215
McCloskey, Deirdre – Language and Interest in the Economy: A White Paper on "Humanomics" ....223
Moffitt, Robert – A New Household Panel in the U.S. ....229
Nelson, Julie and Evelyn Fox Keller – Economics, Climate, and Values: An Integrated Approach ....233
Nordhaus, William – Some Foundational and Transformative Grand Challenges for the Social and Behavioral Sciences: The Problem of Global Public Goods ....239
Page, Scott – Complexity in Social, Political, and Economic Systems ....245
Poterba, James – Research Opportunities in Economics: Suggestions for the Coming Decade ....251
Reis, Ricardo – Three Outstanding Challenges for Economic Research ....257
Rodrik, Dani – A Research Agenda in Economic Diagnostics ....263
Rogoff, Kenneth – Three Challenges Facing Modern Macroeconomics ....267
Roth, Al – Market Design: Understanding Markets Well Enough to Fix Them When They're Broken ....273
Samuelson, Larry – Future Research in the Social, Behavioral and Economic Sciences ....279
Stavins, Robert – Some Research Priorities in Environmental Economics ....285
Van Reenen, John – The Productivity Grand Challenge: Why do Organizations Differ so Much? ....291
Varian, Hal – Clinical Trials in Economics ....297
Weir, David – Grand Challenges for the Scientific Study of Aging ....299
Yitzhaki, Shlomo – Sensitivity Analysis through Mixed Gini and OLS Regressions ....303

* Not available at the NSF website

[1] National Science Foundation, Directorate for Social, Behavioral, and Economic Sciences. 2011. SBE 2020: White Papers; Titles, Authors, and Abstracts. Arlington, VA: National Science Foundation.

We have grouped the papers below by the infrastructure investments proposed and by the economic and social issues motivating fundamental research agendas. White papers also recommend more support by NSF for groups underrepresented in the economics profession (Darity), new NSF initiatives that transcend disciplinary boundaries (Gintis, Lo, McCloskey, Nelson), new statistical methods (Imbens, Yitzhaki), better theoretical tools (Fudenberg, Hart, Jackson, Samuelson) and NSF prizes for the research accomplishments of established researchers (Charness).

Economics has been transformed by the increased availability of data and by the methods, measures and computational power needed to analyze the data (Card, Eaton). According to some of the white papers, advances in economics could accelerate over the next decade if investments were made in:

- cross-country research data that fill gaps in international data analysis, such as the absence of international firm datasets with basic information on inputs, outputs, growth, management practices and technology of firms (Bloom, Eaton, Van Reenen), or more harmonization of measures between datasets for different countries (Alesina, Brown, Eaton, Weir);
- a new longitudinal survey of US households to address limitations in the aging data infrastructure for studying economic and social dynamics in the US (Altonji, Moffitt);
- an advanced data collection laboratory to gather longitudinal socioeconomic data for US households over the internet, from administrative records and from new forms of data collection including personal digital assistants, webcams and self-administered devices (Kapteyn, Moffitt);

- direct, secure access to the US government's comprehensive microeconomic administrative electronic files for households and businesses, so that researchers have access to a rich archive of information covering almost every aspect of socio-economic behavior at different levels of aggregation (Card, Eaton, Haltiwanger, Hanson, Hanushek);
- direct, secure access to large amounts of underutilized data collected by private sector firms and kept secret (Van Reenen, Varian);
- data infrastructure for cumulative, transparent, and high-quality research on the digital economy and on the rules and policies that govern the economic incentives to create, store and use digital information (Greenstein, Reis);
- a new system of National Income Accounts that better reflects the global economy and includes psychological measures of well-being and better measures of non-market activities (Boskin, Jorgenson, Reis);
- a special program in experimental design and analysis that would support field experiments/clinical trials designed to resolve fundamental debates in economics and encourage public-private research cooperation in this area (Varian);
- development and validation of an agent-based virtual economy with sophisticated agents that mimic human behavior and well-developed models of production, trade and consumption, scaling up existing agent-based models, and/or many small-scale research projects using agent-based models (Blume, Gintis, Levine, Page);
- sustainability science environmental observatories with social science data collection efforts sufficient to capture the bi-directional linkages between human actions and natural-environmental processes (Brown); and
- improvements in existing longitudinal surveys (Altonji, Brown, Weir) by collecting better information on job content, skill requirements, education, genetics and human-environment interactions.

Many of the white papers propose agendas of fundamental research motivated by important and persistent economic and social issues, including:

- Financial crises and economic instability (Baily, Boskin, Blume, Cramton, Diamond, Gintis, Haltiwanger, Hansen, Hart, Jackson, Kroszner, Lo, Nordhaus, Poterba, Rodrik, Rogoff, Van Reenen)
- Gaps between rich and poor countries (Acemoglu, Alesina, Baily, Duflo, Eaton, Fischer, Samuelson, Van Reenen)
- Global warming and other environmental problems (Berry, Brown, Goulder, Nelson, Nordhaus, Page, Poterba, Stavins, Van Reenen)
- Education and training (Altonji, Autor, Baily, Berry, Hanushek, Heckman, Moffitt, Roth, Van Reenen)
- US economic inequality (Autor, Brown, Heckman, Moffitt, Page, Weir)

- Health care costs and health disparities (Baily, Berry, Brown, Card, Cutler, Gruber, Hubbard, Page, Roth, Weir)
- Taxation and government spending (Boskin, Card, Diamond, Hubbard, Kroszner, Poterba, Reis, Rogoff)
- Design of efficient and robust markets (Blume, Cramton, Roth, Samuelson)
- Immigration (Hanson)
- Work and family balance (Bloom)

Some would develop new approaches to applied research, such as economic diagnostics to determine which among multiple plausible models best applies to a particular problem (Rodrik), or more collaboration within economics and with allied fields to strengthen the link between methods and policy (Berry).

The American Economic Association's Committee on Government Relations
Charles L. Schultze, Brookings Institution and Chair
Daniel H. Newlon, Director, AEA Government Relations

Challenges for social sciences: institutions and economic development


Daron Acemoglu, Massachusetts Institute of Technology

Introduction Why some countries are much poorer than others is one of the oldest questions in social science. It will also be one of the most challenging and important questions in the next several decades. This is for several reasons. First, despite spectacular growth in per capita incomes in much of the world during the 20th century, the gaps between rich and poor countries, rather than abating, have expanded. This pattern is challenging to most of our theories because many of the barriers to the spread of prosperity have disappeared: ideas travel around the world almost instantaneously, and any nation should today be able to easily copy any economic or social practice that it wishes; various impediments to trade in goods and to financial flows and foreign direct investments have largely disappeared. But the wide gaps in incomes and living standards remain. Second, these gaps have meant that while the rich world has become richer, poverty, disease and social injustice are still widespread in many parts of the world, notably in much of sub-Saharan Africa, in parts of South Asia and in various pockets of poverty in the Caribbean and Central America. Challenging though these issues may be, we are now much better equipped to understand, and perhaps work towards redressing, the causes of these widespread disparities. Much of the progress on this issue has been made in economics (see Acemoglu, 2009, for an overview), but the next step will require us to combine the insights and tools developed in economics with perspectives from other social sciences.

From proximate to fundamental causes Economic analysis has documented that differences in per capita incomes and prosperity across countries are related to differences in human capital, physical capital and technology. We understand the extent to which differences in the quantity and quality of education, differences in the availability of machines, and the differences in the use of new technologies and the allocation of resources between activities with different levels of productivity contribute to incomes. We also understand how the current large differences in prosperity have resulted from lack of steady growth in many parts of the world, while other nations achieved sustained growth. But these are only proximate causes in the sense that they pose the next question: why some countries have less human capital, physical capital and technology and make worse use of their factors and opportunities.
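To make the proximate-cause accounting concrete, here is a minimal development-accounting sketch. It is an illustration of the standard logic under assumed numbers, not a calculation from the white paper: it posits a Cobb-Douglas technology y = A * k^alpha * h^(1-alpha) and backs out the efficiency residual A implied by hypothetical factor gaps.

```python
# Development-accounting sketch (illustrative; all numbers hypothetical).
# Output per worker: y = A * k**alpha * h**(1 - alpha), with k physical
# capital per worker, h human capital per worker, A an efficiency residual.

alpha = 1 / 3  # conventional capital share

def tfp_residual(y, k, h):
    """Back out A from y = A * k**alpha * h**(1 - alpha)."""
    return y / (k ** alpha * h ** (1 - alpha))

rich = dict(y=1.0, k=1.0, h=1.0)           # rich country normalized to 1
poor = dict(y=1 / 30, k=1 / 100, h=1 / 3)  # a 30-fold income gap

gap = tfp_residual(**rich) / tfp_residual(**poor)
print(f"Implied efficiency (TFP) gap: {gap:.1f}x")  # ~3.1x here
```

Under these invented numbers, capital and schooling gaps explain only part of the 30-fold income difference; the residual efficiency gap is precisely what the search for fundamental causes seeks to explain.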

This has motivated economists and social scientists more broadly to look for potential fundamental causes. Here research and our understanding are still in their infancy. Institutions have emerged as a potential fundamental cause, contrasting, for example, with geographical differences or cultural factors (even as we recognize that cultural factors are central for understanding the evolution, and the persistence, of institutions). Institutional differences, associated with differences in the organization of society, shape economic and political incentives and affect the nature of economic equilibria via these channels. There is now vibrant theoretical and empirical research documenting the importance of institutions for economic outcomes. But the next stage, which requires an understanding of which specific configurations of institutions are most likely to encourage growth in the decades to come, why institutions differ across countries, and why they change, and why they often fail to change, is more challenging.

Institutions Douglass North (1990, p. 3) offers the following definition: "Institutions are the rules of the game in a society or, more formally, are the humanly devised constraints that shape human interaction." Three important features of institutions are apparent in this definition: (1) they are "humanly devised," which contrasts with other potential fundamental causes, like geographic factors, which are outside human control; (2) they are "the rules of the game" setting "constraints" on human behavior; (3) their major effect will be through incentives. The notion that incentives matter is second nature to economists, and institutions, as a key determinant of incentives, should have a major effect on economic outcomes, including economic development, growth, inequality and poverty. But if institutions matter so much for economic outcomes, shaping why some countries have incomes per capita 30 or 40 times greater than those of others, why do many societies choose institutions that are inimical to economic growth? To think about possible answers to these questions, it is useful to consider the relationship between three institutional characteristics: (1) economic institutions; (2) political power; (3) political institutions. Economic institutions matter for economic growth because they shape the incentives of key economic actors in society, in particular, they influence investments in physical and human capital and technology, and the organization of production. Economic institutions not only determine the aggregate economic growth potential of the economy, but also the distribution of resources in the society. Herein lies part of the problem: different institutions will not only be associated with different degrees of efficiency and potential for economic growth, but also with different distribution of the gains across different individuals and social groups. How are economic institutions determined? Although various factors play a role here, including history and chance, economic institutions are collective choices and because of their influence on the distribution of economic gains, not all individuals and groups prefer the same set of economic institutions, and often, many will prefer to maintain economic institutions that do not maximize the growth potential of a nation. This leads to a conflict of interest among various groups and individuals


over the choice of economic institutions, and the political power of the different groups will be the deciding factor.

The distribution of political power in society is also endogenous. To make more progress here, let us distinguish between two components of political power: de jure (formal) and de facto political power (see Acemoglu and Robinson, 2006). De jure political power refers to power that originates from the political institutions in society. Political institutions, similar to economic institutions, determine the constraints on and the incentives of the key actors, but this time in the political sphere. Examples of political institutions include the form of government, for example, democracy vs. dictatorship or autocracy, and the extent of constraints on politicians and political elites. A group of individuals, even if they are not allocated power by political institutions, may nonetheless possess it; for example, they can revolt, use arms, hire mercenaries, co-opt the military, or undertake protests to impose their wishes on society. This type of de facto political power originates both from a group's ability to solve its collective action problem and from its access to economic resources (which determine the capacity to use force against others). This discussion highlights that we can think of political institutions and the distribution of economic resources in society as two state variables, affecting how political power will be distributed and how economic institutions will be chosen. An important notion is that of persistence: the distribution of resources and political institutions are relatively slow-changing and persistent. Since, like economic institutions, political institutions are collective choices, the distribution of political power in society is the key determinant of their evolution. This creates a central mechanism of persistence: political institutions allocate de jure political power, and those who hold political power influence the evolution of political institutions, and they will generally opt to maintain the political institutions that give them political power. A second mechanism of persistence comes from the distribution of resources: when a particular group is rich relative to others, this will increase its de facto political power and enable it to push for economic and political institutions favorable to its interests, reproducing the initial disparity.

Despite these tendencies for persistence, the framework also emphasizes the potential for change. In particular, "shocks" to the balance of de facto political power, including changes in technologies and the international environment, have the potential to generate major changes in political institutions, and consequently in economic institutions and economic growth.
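The feedback loop just described (institutions allocate de jure power, resources confer de facto power, and whoever holds power shapes next period's institutions and resource distribution) can be caricatured in a few lines of code. The toy simulation below is my own illustration with invented parameters, not a model taken from this white paper.

```python
# Toy institutional-persistence dynamic (illustrative only).
# Two groups: an elite and the citizens. Whoever has more total power picks
# institutions that skew future resources -- hence future de facto power --
# in its own favor, unless a random shock flips control.
import random

random.seed(1)
elite_rule, elite_share = True, 0.7   # initial institutions, resource share

for t in range(200):
    de_jure = 0.5 if elite_rule else -0.5   # power from political institutions
    de_facto = elite_share - 0.5            # power from relative resources
    shock = random.gauss(0.0, 0.3)          # e.g., new technology, crisis
    elite_rule = de_jure + de_facto + shock > 0
    target = 0.7 if elite_rule else 0.4     # chosen institutions skew shares
    elite_share += 0.2 * (target - elite_share)

print("elite rule at t=200:", elite_rule, "| elite share:", round(elite_share, 2))
```

In this parameterization control changes hands only after unusually large shocks, so runs display long spells of persistence punctuated by occasional reversals, the pattern the text emphasizes.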

The challenges ahead

Despite much promising research, many fundamental and applied questions remain unanswered. However, recent research has shown how theoretical and empirical progress can be made on the effects of institutions and on the factors affecting institutional equilibria, both at the national and sub-national levels. Major questions for future research include, among others:

Why do institutions persist? Recent research has documented that several institutional features of current economies have historical roots going back several centuries or sometimes even more. There is also evidence that even after major institutional reforms, for example following the end of colonial rule in Latin America, Asia and Africa or the fall of military regimes, important institutional continuities remain. Despite the ideas on the sources of persistence mentioned above, we have made only limited progress in understanding those sources. They lie partly in expectations and beliefs. The belief among the majority of US citizens that the Constitution safeguards their rights undoubtedly plays an important role in enabling the Constitution to do just that. But the citizens of many other countries do not hold similar beliefs, and institutional outcomes are often very different. Yet appealing to such beliefs without understanding what the sources of these differential beliefs might be is not satisfactory, and theoretical and empirical investigation of various sources of institutional persistence, including the dynamics of political and social beliefs, remains a major area for future research.

Though institutions persist, they are not historically predetermined. Major institutional reforms have taken place in many countries, and in some cases, as in Botswana, South Korea and China, such changes have altered the economic trajectories of nations fundamentally. What enables institutional reform? Why do many attempts at reform fail and even backfire? How can we work towards successful reforms? These questions are both academically interesting and central to informing policy debates. Despite their importance, we have little theory to guide us and few applied insights.

While the role of secure property rights for investment and the importance of checks and balances in the political sphere for stability are well understood, we do not yet know which specific combinations of economic and political institutions are most conducive to economic growth. For example, the African evidence suggests that the weakness of the state is a major barrier to economic development, while during the early phases of the growth experiences of many East Asian nations the state was heavily involved in the economy. Yet this does not mean that greater state involvement is necessarily part of the cluster of institutions encouraging growth. Many pernicious dictatorships, from North Korea to Burma, highlight the dangers of all-powerful states. Despite much rhetoric on this topic, we currently do not know whether greater involvement of the state ensures a level playing field and facilitates economic development or whether it inexorably leads to insecure property rights and opens the way to heightened political conflict to control the all-powerful state. We also do not know which combinations of property rights, financial institutions, judicial institutions, education and various dimensions of social institutions are most conducive to economic development.

Relatedly, we are also far from a consensus on the role of democracy and checks on political power in fostering an environment that is conducive to innovation and economic growth. Even though many of the economies spearheading economic growth over the last two centuries have been relatively democratic, and many of the most disastrous economic performances have occurred under authoritarian regimes ranging from colonial rule to military dictatorships and personal rule, in the postwar era democratic countries do not have appreciably higher growth rates than nondemocratic ones.
The rapid growth of China under a highly authoritarian regime has led some commentators to conclude that authoritarian rule might be more conducive to economic growth. Nevertheless, there are good reasons to think that authoritarian regimes will ultimately become incompatible with innovation and the creative destruction that accompanies most growth experiences. The extent to which this is the case, and the various interactions between political regimes and economic growth, are other major questions that will require much future research.

References

Acemoglu, Daron (2009) Introduction to Modern Economic Growth, Princeton University Press.

Acemoglu, Daron and James Robinson (2006) Economic Origins of Dictatorship and Democracy, Cambridge University Press.

North, Douglass (1990) Institutions, Institutional Change, and Economic Performance, Cambridge University Press.



Why Certain Countries Have Developed and Others Have Not?

Alberto Alesina
Harvard University
September 2010

Question 1

The fundamental question for economists is to understand why certain countries (nations, regions) have successfully developed while others are lagging. Answering this question will of course help us understand how to defeat poverty. In recent years economists have made progress by extending the realm of variables included in their models, empirical analysis and overall thinking. This process needs to continue if we want to be successful. The most promising and exciting areas of research in economics are those which lie at the border of the field (strictly defined) and touch upon other disciplines. Examples include political economics (bordering with political science), behavioral economics (bordering with psychology), law and economics (bordering with law, of course) and, recently, cultural economics (bordering with sociology and anthropology). These developments have also led to a welcome deeper attention to long-term trends, historical analysis and the development of new and rich datasets. We are of course far from having definitive answers on many issues, and more energy needs to be devoted along these lines.


I will elaborate on probably the least known of the subject matters mentioned above, which is the most recent and, in my opinion, very exciting but very challenging: cultural economics. How many times in our casual conversations do we mention the word "culture" as an explanation of many things which are of relevance for economists, such as saving rates, trust, attitudes toward work, family relationships, the role and education of women, poverty traps, hard work? How many times in our casual conversations do we wonder where different cultures come from? How many times do we wonder which, how and how quickly different cultures melt in the pot? Many times. But then, when as economists we try to understand those variables, we ignore culture. Researchers in other fields did not forget about culture. Weber postulated a cultural root for the development of capitalism, the Protestant ethic, but neoclassical economists ignored it.

A new but rapidly growing body of research is taking up, instead, the idea of including culture in our framework of analysis. Let's begin with a definition of culture: the customary beliefs, social norms, and material traits of a national, racial, religious or social group. A paper by Guiso, Sapienza and Zingales (2006) discusses this definition and the methodological issues related to the development of this field.1

Rather than discussing the question of culture in general, let's discuss one example of a specific cultural trait: family relationships. In certain cultures families are very tight and family relationships are considered very important, for instance in Mediterranean and Latin American countries; in other cultures the family is important but attitudes are more individualistic and family relationships matter less (say, Anglo-Saxon and Scandinavian countries). How do these cultural traits affect economic decisions?

1 Guiso, L., P. Sapienza, and L. Zingales, "Does Culture Affect Economic Outcomes?" Journal of Economic Perspectives, 20 (2006), 23-48.


Cross-country comparisons are very suggestive and provocative, but from a scientific point of view they tell us very little, since too many things vary across countries. One needs to identify micro evidence within countries; that is, one has to look at how different individuals within the same country behave as a function of the strength of their family ties. By looking within a country one can hold constant all the other characteristics and institutions of a country. A paper by Alesina and Giuliano (2010) is an overview of results regarding the role of family relationships.2 The strength of family ties is measured by several answers from surveys about relationships between family members.

With strong family ties the family becomes an organized production unit, and this has important implications: 1) more home production: the stronger the ties, the more home production; 2) lower participation of women in the labor market, and lower education of women; 3) lower participation of youngsters in the labor market, as they live at home longer; 4) lower geographical mobility and, as a consequence, less flexible labor markets; 5) more reliance on the family as a producer of social insurance and of care for the elderly and children, and thus less demand for publicly provided social services; 6) more inward-looking attitudes and less trust towards non-family members; 7) a lower tendency to participate in social activities, lower political participation, and in general lower social capital.

There is obviously no attempt to be normative here. Strong or weak family ties have different effects; they lead to different social and economic organizations. One cannot be ranked above the other. However, it is clear that these correlations (and potential causation) are extremely important for understanding various aspects of economic structure, growth potential and poverty-reduction policies. For instance, certain labor market and social policies may have very different effects depending on the nature of family relationships. By ignoring these cultural aspects we may design the wrong policies, and we may not understand why certain policies, say labor market regulation, may or may not work in different countries. This, for instance, is the point of a recent paper by Alesina, Algan, Cahuc and Giuliano (2010).3

2 Alesina, A. and P. Giuliano (2010), "The Power of the Family," Journal of Economic Growth, June 2010.


Progress along the lines of uncovering causation has been made by looking at immigrants in another country, typically the US. This is how it is done. One can attribute to, say, a Brazilian immigrant in the US the average cultural trait (in this case, family ties) of his or her country of origin, Brazil. Then one can look at how a first-, second-, etc. generation Brazilian immigrant behaves in the US. If he displays behavior consistent with the strength of family ties in Brazil, this means that such cultural traits persist even in a different environment. Moreover, the objective of isolating causality is achieved by attributing to this Brazilian immigrant not his own views (as measured by his answers to polls) but the average views of Brazilians.

This opens up another fascinating question, namely how quickly and how deeply cultures melt. To some extent the US is a successful melting pot, but cultural differences in behavior persist. What determines the speed of assimilation? How does the geographical distribution of ethnic groups matter, and how does it affect such speed? The answers to these questions may lead us to better understand immigration policies and to better design policies to deal with assimilation. For instance, are small ethnic groups more likely to assimilate quickly, or, since they are small, will they have a tendency to hold on more tightly to their cultural traits? Which cultural aspects assimilate more or less quickly?

The next question is where culture comes from. Continuing with this example, why are families tighter in certain social groups, ethnicities, regions or nations than in others? One has to look deep into history to understand the answer. For instance, one hypothesis is that in the distant past the adoption of certain technologies rather than others created more or less need for women's work in the fields. That led to a certain development of the role of women as stay-at-home mothers and wives rather than workers, which may affect for centuries afterwards the role of women and the organization of the family and society.
3 Alesina, A., Y. Algan, P. Cahuc and P. Giuliano (2010), "Family Values and the Regulation of Labor," unpublished.
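The immigrant ("epidemiological") design described above maps into a simple estimating equation. The sketch below is my rendering, not the authors' specification; the file names and variable names are hypothetical.

```python
# Epidemiological design sketch: assign each immigrant the average
# family-ties measure of his or her country of origin (from an
# origin-country survey), then regress a US outcome on it.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("us_immigrants.csv")             # hypothetical micro data
origin = pd.read_csv("origin_country_means.csv")  # hypothetical country means
df = df.merge(origin, on="origin_country")        # adds 'family_ties_origin'

model = smf.ols(
    "hours_worked ~ family_ties_origin + age + female + educ_years",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["origin_country"]})
print(model.params["family_ties_origin"])
```

Clustering the standard errors by country of origin reflects that the regressor of interest varies only at the origin-country level.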


This type of analysis more generally asks the question of where preferences come from. We as economists always start from the assumption that preferences are primitive and exogenously given, and we have nothing to say about where they come from. Cultural economics will lead in the direction of being more ambitious. Perhaps we can make some progress in explaining where certain attitudes are born, how they persist and what leads to a change. As economists we think of preferences not only as primitive but also as constant over time. This cultural analysis will also help us understand the evolution of preferences and will link up with other fascinating areas of research, like that of persuasion, that is, how certain messages may change not only information and beliefs but also the utility function of individuals. Another point of contact here is with the literature on identity, pushed among others by George Akerlof.

The nature of family relationships is only one example. Another widely studied cultural trait is trust. The importance of trust in economics cannot be overemphasized. In Ken Arrow's words: "Virtually every commercial transaction has within itself an element of trust, certainly any transaction conducted over a period of time. It can be plausibly argued that much of the economic backwardness in the world can be explained by the lack of mutual confidence." What determines trust, its evolution and its implications has been at the core of research in cultural economics. Work on trust spans from corporate finance to growth and development to international trade, where bilateral trust between countries has been shown to determine trade patterns. This point highlights another fundamental issue: individuals trust and interact better with those who are more similar to themselves. The latter consideration has important implications for issues concerning the costs and benefits of ethnic fragmentation.

Religious beliefs may also matter and are certainly part of a broad definition of culture. Beliefs in the afterlife may have implications for activities in the current life. Weber's views about the differences between Protestant beliefs and Catholic beliefs are the primary example of this point.


Thrift may depend on one's religious views. The role of women varies greatly across religions.

One has to admit that studying culture is not easy. It is a concept that is hard to measure, and it is easy to fall into a trap of "anything goes." We should maintain the rigor that economists have, even in the study of culture. Identification problems are huge. Reverse causality always looms in the background of these studies of culture. But we should not shy away from tackling big issues in economics. In my opinion our profession is slipping too much into perfectly tight methodologies applied to smaller and smaller problems. We may perfectly identify certain things based upon natural experiments. But what we uncover may be quite small and not very general.

Question 2

The answer to Question 1 implicitly answers the second question as well. The domain is advanced by including important but overlooked variables in the analysis. Graduate students are trained to think outside the box and to push their creativity. The construction of new datasets has been one of the most important outputs of this research. Precisely because we are pushing the analysis towards domains not typically travelled by economists, one often feels the need to extend the coverage of data and to build new datasets, including historical datasets, geographical datasets, surveys and experiments. Economists have begun using (and extending when possible) surveys like the General Social Survey for the US, the World Values Survey and various regional surveys. Many experiments have been run, and great effort has been devoted to going back in history and collecting data on early institutions, agricultural technologies, human capital, etc. This is because one of the findings of this literature has been the long-term persistence of cultural traits. This large collection (and dissemination) of datasets that existed but were unknown to economists, together with new datasets, has been a very important outcome of this literature.


Multiple Skills, Multiple Types of Education, and the Labor Market: A Research Agenda1
Joseph G. Altonji
Department of Economics
Yale University
September 2010

Summary

I propose a major program of research on skill, education, and the labor market. The program will build on four facts. First, ability and skill are multidimensional. Second, secondary and post-secondary education is heterogeneous in quality and in the types of skills and knowledge provided. Third, jobs differ substantially in what they require. Finally, technical change, globalization, and shifts in the composition of demand for goods and services alter the demand for particular skills in the labor market relative to supply, with important implications for the wage distribution. In essence, the research program will place the multidimensionality of ability, skills, and knowledge at center stage in theoretical and empirical research on child development, educational attainment, and labor market careers. In this document, I discuss why the program is needed and why the prospects for success are high. I provide a brief sketch rather than a full-blown proposal and of necessity use a very broad brush.

Why is Research on Multiple Types of Skill and Education Needed?

1 This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Since the pioneering work of Gary Becker and Jacob Mincer, a large community of scholars has studied the demand for education and the economic return to education. There have been important advances in the use of instrumental variables methods and in the use of structural models of education choice and labor market outcomes. As a result of these developments, we know much more than before about the average return to a year in school. However, the overwhelming majority of the studies abstract from the type of education. The focus is either on years of school completed or on broad education categories such as high school, some college, etc. This is unfortunate because basic descriptive analyses show large differences in the labor market payoff across subject areas. The substantial differences by gender and race in course of study contribute to observed gaps in labor market outcomes. And mismatch between the skills and knowledge the education system produces and the types valued in the labor market is a perennial public concern.
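As a toy illustration of what is lost by abstracting from the type of education (my example, not part of the proposal): a Mincer-style log-wage regression that replaces a single years-of-schooling term with field-of-study indicators. The data file and variable names are hypothetical.

```python
# Log-wage regression with field-of-study indicators instead of a single
# years-of-schooling term. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("graduates.csv")  # hypothetical worker-level data

fit = smf.ols(
    "log_wage ~ C(field_of_study) + experience + I(experience**2) + female",
    data=df,
).fit()
print(fit.params.filter(like="field_of_study"))  # payoffs differ by field
```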

Over the past ten years, the role of noncognitive and cognitive traits in the acquisition of human capital and in the labor market return to human capital has received considerable attention in economics. Much of this work has focused upon child development, educational attainment, and early labor market success (see, for example, Cunha, Heckman and Schennach (2010)). There is also current research on how cognitive skills and personality traits arise from genetic influences, early childhood environment, and formal education. Progress in developmental psychology, cognitive psychology, genetics, and neuroscience makes this a promising area for research by economists on the production of human capital, broadly defined. So far, this research has not been systematically extended into models of the type of secondary and higher education people acquire or models of the effects of skills and education on career paths.


Finally, research on the distribution of earnings has paid increasing attention to the effects of technical change, globalization, and changes in product demand on the demand for particular types of skills. Autor, Levy and Murnane (2003) and Autor and Handel (2009) are good examples. To understand trends in the level and distribution of wages, employment, and unemployment in the US and other countries, we need models that distinguish workers along multiple dimensions of skill and knowledge, and that distinguish jobs in parallel fashion. There has been considerable progress in this area over the past decade for researchers to build on. In particular, researchers in the U.S. and Europe have had success in quantifying the importance of occupation-specific skills for wages and job mobility patterns.

Potential for Success

The time is ripe, for several reasons.

First, research on the child development process is progressing rapidly, fueled by advances in the understanding of the role of genes and environment in shaping the talents and personality traits that matter for particular programs of study and subsequent career paths. This is likely to continue.

Second, progress has been made in the development of models of course selection in high school and in college that place the role of differences in predetermined abilities, knowledge, and preferences at center stage. Advances in computer power and econometric methods have made estimation of such models feasible. Empirical research on the causal effects of particular courses of study is at an early stage but should follow two paths. The first is to use dynamic choice models to understand educational decisions and to use the restrictions of the model to account for selection bias when measuring causal effects. Arcidiacono's (2004) study of college major is one of a small set of papers that can be built upon. The second path is to use quasi-experimental methods that exploit variation in institutional features that influence how students are assigned to course sequences in high school or to majors in college. Research in other countries that rely more heavily on test scores to decide the type of primary and secondary education and admission to particular college majors will be valuable.

Third, advances in computation and in simulation-based estimation methodologies are making it possible to estimate an integrated model of child development, human capital accumulation, and labor market success that spans birth to adulthood by combining datasets that individually lack the necessary information.

Fourth, advances in modeling and especially in computation will make it feasible to incorporate multiple skill and education types into general equilibrium models of the supply of and demand for labor that have been used in macroeconomic studies of wage growth and distribution.

Fifth, the data are getting better. State-level longitudinal data systems that track individual students are revolutionizing research on the education production function. Research on the return to student achievement, high school curriculum, vocational programs, and college major requires more datasets that track individuals well into the labor market. Data for Florida and Texas have been matched to data on higher education and to earnings records. These longitudinal data systems have enormous potential for the study of how student achievement and field of study affect labor market performance. Other countries (notably Denmark) have administrative datasets that can be used to research student achievement, field of study, and labor market success. Excellent survey-based panel datasets beginning in childhood have been collected in other countries.

Sixth, the data can and should be improved. Part of the research program should be to invest in new datasets that begin in early childhood and continue until career patterns are well established. The NLSY79 Children and Young Adults data project has the promise of accomplishing this. The Early Childhood Longitudinal Study is having an enormous impact on research on the role of family and schools in child development. The longer it is continued, the more valuable it will be. To the extent that its age coverage overlaps with the early period of other datasets that follow children into adulthood, it can play a key role in research designs that use multiple datasets for estimation. Resurveying members of existing panel datasets that start in adolescence but stop in the mid-20s would have a huge payoff. Here I have in mind the National Education Longitudinal Survey: 1988, an extraordinarily rich dataset that started with a national sample of 8th graders in 1988 but ended in 2000. In a few more years the NLSY97, which started with about 9,000 children between 12 and 16 in 1997, will become an extremely valuable resource.

Extending survey data through matches to administrative earnings records is another avenue that should be explored, perhaps using advances in statistical methods for the construction of synthetic datasets that mimic the statistical properties of the original data but completely hide data on individuals.
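One crude way to convey the synthetic-data idea, under strong simplifying assumptions (this sketch fits a mean vector and covariance matrix and releases fresh draws; real disclosure-avoidance systems are far more sophisticated):

```python
# Minimal synthetic-data sketch: match first and second moments of a
# confidential dataset while releasing no actual records. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for confidential records: earnings and years of schooling.
real = np.column_stack([
    rng.lognormal(mean=10.5, sigma=0.8, size=5000),
    12 + 3 * rng.standard_normal(5000),
])

mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=5000)  # no row is a person

print(np.round(mu, 2), np.round(synthetic.mean(axis=0), 2))  # moments match
```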

A critical need is for better data on job content and skill requirements in panel datasets. The NLSY79, the NLSY97, the PSID and cross-sectional surveys such as the CPS provide very little information on the tasks people perform at work and the skills that they need to perform them. Most researchers rely upon variables from the Dictionary of Occupational Titles and its successor dataset, O*NET, that can be merged into household surveys using 3-digit occupation codes. Similar data are available for other countries. However, jobs vary a lot within a broad occupation classification. A major effort is needed to develop survey modules that can be used to provide more information about what people do and the skills that are involved. Autor and Handel (2009) achieved some success in using a short series of questions to elicit information about what people do on the job and what skills they need.

References Cited

Arcidiacono, Peter, "Ability Sorting and the Returns to College Major," Journal of Econometrics, Vol. 121, Nos. 1-2 (August 2004), 343-375.

Autor, David and Michael Handel, "Putting Tasks to the Test: Human Capital, Job Tasks, and Wages," NBER Working Paper No. 15116, June 2009.

Autor, David, Frank Levy, and Richard J. Murnane, "The Skill Content of Recent Technological Change: An Empirical Exploration," Quarterly Journal of Economics, 118(4), November 2003, 1279-1334.

Cunha, Flavio, James J. Heckman, and Susanne M. Schennach, "Estimating the Technology of Cognitive and Noncognitive Skill Formation," Econometrica, 78 (May 2010).


Grand challenges in the study of employment and technological change: A white paper prepared for the National Science Foundation*

David H. Autor, MIT and NBER
Lawrence F. Katz, Harvard University and NBER

September 29, 2010

This document contains 1,968 words excluding this title page

This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Leading economists from Paul Samuelson to Paul Krugman have labored to allay the fear that technological advances may reduce overall employment, causing mass unemployment as workers are displaced by machines. This "lump of labor" fallacy, which posits that there is a fixed amount of work to be done, so that increased labor productivity reduces employment, is intuitively appealing and demonstrably false. Technological improvements create new products and services, shifting workers from older to newer activities. Higher productivity raises incomes, increasing demand for labor throughout the economy. Hence, in the long run technological progress affects the composition of jobs, not the number of jobs.

In 1900, for example, 41 percent of the U.S. workforce worked in agriculture. After a century of astonishing agricultural productivity growth, the figure stood at 2 percent in 2000. This Green Revolution transformed physical and cognitive skill demands and the fabric of American life. But it did not reduce total employment. The employment-to-population ratio rose over the twentieth century as women moved from home to market, and the unemployment rate fluctuated cyclically with no trend increase. What is fallacious in the lump of labor fallacy is the supposition that there is a limited quantity of jobs. It is not fallacious, however, to posit that technological advance creates winners and losers. The shift from the artisanal shop to the factory with mechanization in the nineteenth century reduced the demand for skilled craft workers and raised it for more educated workers (managers, engineers, and clerks) and for less-skilled operatives. More recent technological changes, from electrification to computerization, have expanded the demand for highly-educated workers but substituted for less-skilled production workers. Technological improvements raise overall living standards but may adversely affect the quality of jobs for some workers.

Two forces are rapidly shifting the quality of jobs, reshaping the earnings distribution, altering economic mobility, and redefining gender roles in OECD economies. These forces are, first, employment polarization (a demand-side force) and, second, a reversal of the gender gap in higher education (a supply-side force), reflecting women's rising educational attainment and men's stagnating educational attainment. The result has been a labor market that greatly rewards workers with college and graduate degrees but is unfavorable to the less-educated, particularly less-educated males. The economic and social repercussions are only starting to receive study.

Employment polarization
In the United States and other advanced countries, employment growth is polarizing, with job opportunities increasingly concentrated in relatively high-skill, high-wage jobs and low-skill, low-wage jobs. Figure 1 plots changes in employment by decade from 1979 through 2009 for ten major occupational groups encompassing U.S. non-agricultural employment. These occupations divide into three groups. On the left-hand side of the figure are managerial, professional, and technical occupations. These are highly-educated and highly-paid occupations. Employment growth in these high-skill occupations was robust for the past three decades.

The next four columns display employment growth in middle-educated and middle-paid occupations, including sales; office workers; production, craft and repair; and operators, fabricators and laborers. Their growth rate lags the economy-wide average and slows in each subsequent time interval. These occupations were hard hit by the Great Recession, with absolute employment declines of 7 to 17 percent.

The final three columns depict employment trends in service occupations, involved in helping, caring for, or assisting others. Workers in service occupations disproportionately have no post-secondary education and relatively low hourly wages. Employment growth in service occupations has been rapid in the past three decades, expanding by double digits in the 1990s and the pre-recession years of the past decade. Even during the Great Recession, employment growth in service occupations was modestly positive.

The consequence has been a sharp decline in the share of U.S. employment in traditional middle-skill jobs. The four middle-skill occupations (sales, office workers, production workers, and operatives) accounted for 57 percent of employment in 1979 but only 46 percent in 2009.

The polarization of employment is widespread in the OECD. Figure 2 plots the change in the share of employment between 1993 and 2006 in 16 European Union economies for three sets of occupations (low-, middle-, and high-wage), covering non-agricultural employment and grouped by average wages. In all 16 countries, middle-wage occupations declined as a share of employment. Low-wage occupations increased as a share of employment in 11 of 16 countries, and high-wage occupations increased in 13 of 16. In all 16 countries, low-wage occupations expanded relative to middle-wage occupations.
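The tercile comparison behind Figure 2 is straightforward to compute. The sketch below is a generic reconstruction of that kind of calculation, not the authors' code; the data file and column names are hypothetical.

```python
# Group occupations into wage terciles by base-year average wage, then
# compare employment shares across years. Illustrative reconstruction.
import pandas as pd

occ = pd.read_csv("occupation_panel.csv")  # occupation, year, employment, avg_wage

base = occ[occ["year"] == 1993].set_index("occupation")["avg_wage"]
occ["tercile"] = pd.qcut(base.reindex(occ["occupation"]).to_numpy(),
                         3, labels=["low", "middle", "high"])

shares = (occ.groupby(["year", "tercile"])["employment"].sum()
             .groupby(level="year").transform(lambda e: e / e.sum())
             .unstack("tercile"))
print(shares.loc[2006] - shares.loc[1993])  # the middle share falls
```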

Employment polarization: Demand-side causes



A leading explanation for the polarization of employment in the OECD focuses on the computerization of many job tasks, which alters the composition of jobs and the tasks workers perform within jobs. The price of information technology has fallen at a stunning pace. William Nordhaus (2007) estimates that the real cost of performing a standardized set of computational tasks fell at least 1.7 trillion-fold between 1850 and 2006, with the bulk of this decline occurring in the last three decades. The rapid, secular decline in the real cost of symbolic processing creates enormous economic incentives for employers to substitute information technology for expensive labor whenever feasible. Simultaneously, it creates significant advantages for workers whose skills are complementary to computers and disadvantages those whose tasks are easily substituted by computers.

Although computers are now ubiquitous, they do not do everything. Their ability to accomplish a task depends upon the ability of a programmer to write a set of procedures or rules that appropriately direct the machine at each possible contingency. For a task to be autonomously performed by a computer, it must be sufficiently well defined (i.e., codifiable) that a machine can execute the task successfully by following the steps set down by the programmer. We refer to the procedural, rule-based activities to which computers are currently well-suited as routine tasks. Job tasks that primarily involve organizing, storing, retrieving, and manipulating information are increasingly codified in computer software and performed by machines. These advances have also dramatically lowered the cost of offshoring information-based tasks to foreign worksites.

Measures of job task content uniformly find that routine tasks are most pervasive in middle-skilled cognitive and manual jobs, such as bookkeeping, clerical work, repetitive production, and monitoring jobs. The substantial decline in clerical and administrative occupations is in large part a consequence of the falling price of machine substitutes for such tasks. The automation and offshoring of routine tasks reduces the domestic demand for workers performing them but, as in the Green Revolution example, does not necessarily reduce overall labor demand. Rather, it raises relative demand for workers who can perform non-routine tasks that are complementary to the automated activities. These non-routine tasks can roughly be subdivided into two major groups on opposite ends of the occupational-skill distribution: abstract tasks and manual tasks.

Abstract tasks are activities that require problem-solving, intuition, persuasion, and creativity. These tasks are characteristic of professional, managerial, technical and creative occupations, such as law, medicine, science, engineering, and design. Workers who are most adept in these tasks typically have high levels of education and analytical capability.

Manual tasks, on the other hand, are activities that require situational adaptability, visual and language recognition, and in-person interactions. Driving a truck through city traffic, preparing a meal, or installing a carpet are all activities that are intensive in non-routine manual tasks. Such tasks demand workers who are physically adept and, often, able to communicate fluently in spoken language. Yet such jobs are often organized in ways that require little or no education beyond high school. This latter observation applies with particular force to service occupations, e.g., food preparation and serving, cleaning and janitorial work, and maintenance. These jobs demand interpersonal and environmental adaptability, which are precisely the job tasks that are challenging to automate because they require responsiveness to unscripted interactions. Such jobs are also difficult to offshore because they typically must be performed in person, often in direct contact with final consumers (e.g., haircutting, food service, house-cleaning).

A consequence of these forces (rising demand for highly-educated workers performing abstract tasks and for less-educated workers performing manual or service tasks) is the partial hollowing out, or polarization, of employment opportunities seen in Figures 1 and 2. This hypothesis is supported by a rapidly growing body of research that links the process of computerization to occupational change over time and across countries. Employment projections from the U.S. Bureau of Labor Statistics forecast these shifts to continue for (at least) the next decade.

Educational gender reversal


The polarization of employment opportunities in the last three decades has been accompanied by a substantial secular rise in the earnings of those who complete post-secondary education. The hourly wage of the typical college graduate in the U.S. was approximately 1.5 times the hourly wage of the typical high-school graduate in 1979. By 2009, this ratio stood at 1.95. This enormous growth in the earnings differential between college- and high-school-educated workers reflects the cumulative effect of three decades of more or less continuous increase. Many other OECD countries have seen increases in the wage gap between college and non-college workers, though the U.S. case is more extreme.

The polarization of job opportunities is half the explanation for the growing wage gap. If the growth of educational attainment had kept pace with the rising relative demand for highly-educated workers, the increase in these earnings differentials might have been held in check. But it did not in the United States. The explanation for why it did not is a puzzle and a cause for concern.
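For a sense of scale, the quoted ratios are easy to annualize:

```python
# The college/high-school hourly wage ratio rose from about 1.5 (1979)
# to 1.95 (2009); annualize the implied growth of the log wage gap.
import math

annual = math.log(1.95 / 1.5) / (2009 - 1979)
print(f"College wage premium grew ~{annual:.2%} per year in log points")
# ~0.87% per year, sustained for three decades
```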


As shown in Figure 3, female educational attainment rose substantially in these decades. Throughout the OECD, the share of females attaining post-secondary (tertiary) education increased remarkably in this period. Comparing the fraction of women ages 25-34 with college education in 2009 with that of women ages 45-54 in the same year, we can see that college attainment among women more than doubled in many countries over two decades. In Spain, it rose from 23 to 43 percent. In the U.S., the gains were more modest but substantial, rising from 28 to 35 percent.

The counterpoint to gains in female skill investment is the lackluster increase among males. Figure 4 shows that male college attainment rose only weakly in most countries over the same period. In Spain, it rose from 27 to 33 percent. In the U.S., college attainment in 2009 was several percentage points lower among males ages 25-34 than among males who completed schooling two decades earlier. Figure 5 shows that female rates of college attainment now greatly exceed those of males in most industrialized countries. Indeed, in 2009, the ratio of female-to-male college attainment exceeded parity among younger cohorts (ages 25-34) in all eleven countries in the figure. For the European countries this ratio averaged 1.3, almost identical to the U.S. ratio. For cohorts that were ages 45-54 in 2009, however, female-to-male college attainment was roughly at or below parity in eight of eleven countries.

The rising educational attainment of women is good news, but males' failure to keep pace is problematic. It means fewer young males will gain entry to high-end occupations, and that the supply of workers who can perform high-end abstract tasks is not increasing as fast as demand. This exacerbates rising wage inequality and retards the growth of advanced economies, which depend on their best-educated workers to develop and commercialize the innovative ideas that drive economic growth.

The cross-national phenomenon of polarizing employment growth and stagnating male educational attainment presents a grand research challenge on two fronts: (1) understanding the sources of gender differences and trends in college attainment; and (2) analyzing the social and political implications of job polarization and the decline in traditional middle-class jobs. The extent to which in-person services can be reorganized and professionalized into higher-skill, higher-wage jobs may be a key to whether we see continued economic polarization or the emergence of new middle-class jobs and shared prosperity.


[Figure 1. Percent Change in Employment by Occupation, 1979-2009. Occupations grouped by wage tercile (lower, middle, and upper thirds); changes plotted for 1979-1989, 1989-1999, 1999-2007, and 2007-2009.]

[Figure 2. Change in Employment Shares by Occupation, 1993-2006. Occupations: Managers; Professionals; Technicians; Sales; Office/Admin; Production Operators/Laborers; Protective Service; Food/Cleaning Service; Personal Care. Countries: USA, EU average, Portugal, Ireland, Finland, Norway, Netherlands, Greece, UK, Sweden, Germany, Spain, Belgium, Denmark, Luxembourg, France, Austria, Italy.]

[Figure 3. Female College Education Attainment Rates in 2009, by Birth Cohort and Country. Cohorts ages 25-34, 35-44, and 45-54 in 2009; countries: United States, E.U. (10 countries), Germany, Netherlands, Greece, Spain, United Kingdom, Ireland, France, Denmark, Portugal, Italy. Source: Eurostat, U.S. Census Bureau.]

[Figure 4. Male College Education Attainment Rates in 2009, by Birth Cohort and Country (same cohorts, countries, and source as Figure 3).]

[Figure 5. College Education Attainment Rates in 2009: Ratio of Female Rate to Male Rate, by Birth Cohort and Country (same cohorts, countries, and source as Figure 3).]

Some Foundational and Transformative Grand Challenges in Economics
Suggestions from Martin Neil Baily, The Brookings Institution
Bullet point format

The fundamental question: There is no generally accepted theory of macroeconomics that is adequate to respond to the challenges thrown up by the financial crisis and the resulting deep and persistent recession.

o Much of the research in macroeconomics over the past forty years has been focused on developing models of rational behavior with rational expectations that are consistent with observed macro data. This has resulted in a gap between macroeconomic policymakers and the research community because, in practice, policy is made with a hybrid of Keynesian, neoclassical, and monetarist ideas. The macroeconomic forecasting models that serve policymakers, such as the Fed, or the business sector are extended IS-LM models (the textbook core is sketched after this list) that work pretty well under normal circumstances but do not predict sharp changes, such as severe recessions.

o The gap between theory and practice has consequences. 1. Although some economists saw the dangers of a housing bubble, almost no one predicted how severe the housing collapse would be and how it would damage the financial system and the economy. 2. The Fed Chairman and successive Treasury secretaries were forced to improvise over weekends in the face of collapsing financial institutions and a potential breakdown of the financial system. 3. At the present time, neither monetary nor fiscal policy is capable of ensuring a solid economic recovery.

o The NSF should launch a large-scale effort to re-examine what we know and do not know about the macro-economy and how research can move forward constructively. There should be a willingness to examine new approaches and not be tied to past frameworks. For example, behavioral economics has looked at issues such as how and why people save. To improve macroeconomics, we need to develop these new economic tools to understand how people and markets behave when there is widespread fear and panic. Tversky and Kahneman taught us in the 1970s that people do not make good economic decisions under uncertainty, but their insights have not been incorporated into mainstream macroeconomics. Other important topics are: 1. Defining and measuring systemic risk. 2. Revisiting the optimal inflation question, given that monetary policy is being limited by the zero bound on interest rates. 3. Understanding the cyclical role of technology and productivity shocks: as a comprehensive model of business cycles, the real business cycle model has failed dismally, but there is evidence that such shocks can have major impacts on cyclical movements, as happened in the late 1990s and as seems to be happening in the current recovery.
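For reference, the textbook IS-LM core that these forecasting models extend (a standard sketch in our notation, not Baily's):

```latex
\text{IS:}\quad Y = C(Y - T) + I(r) + G
\qquad\qquad
\text{LM:}\quad M/P = L(r, Y)
```

Estimated, many-equation versions of this system track normal fluctuations in output Y and interest rates r reasonably well, but nothing in the structure generates the nonlinear dynamics of a financial panic, which is one way to read the failure to predict sharp changes noted above.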


o Macroeconomics needs to re-boot, not discarding what has been learned, but reassessing what is of value and what needs to be done to provide a better understanding of the economy.

o In a related argument, Alice Rivlin and Isabel Sawhill commented that the NSF should fund research that embeds what we know about behavior at the microeconomic level into macroeconomics. Sawhill pointed to the gap between Guy Orcutt's and Alice's original vision of building microsimulation models that can be used for policy analysis and the reality. Other countries have developed such models with government funds and use them for all kinds of analysis. We have not, except for a few underfunded and modest efforts.

The fundamental question: Most young people in the United States will not complete a two- or four-year college degree, and most of them will end up in jobs that do not pay well. How can we structure the US education and training system so that it provides the skills that young people need to earn middle-class wages?

o The combination of changing technology and globalization has resulted in a relative decline in the earnings of people without high levels of skill or education. Much of the economics literature on education and skills has focused on tracing the relationship between the amount of education and the subsequent wage, or, for job training programs, the extent to which these programs improve subsequent wages.

o There is a strand of thinking about policy in this area that is essentially nostalgia-based. In the past, wages were set with a large institutional component, through union bargains or routine pay increases. These approaches will not work in the future, given the current structure of the economy. Companies and unions that refuse to face competitive forces will go broke. In order to receive middle-class earnings, workers must have the productivity to match. It is essential that the education and training system in the United States do a better job for non-college-bound students.

o Teaching watered-down academic curricula to non-college-bound students does not appear to be working, because they do not see the value of what they are asked to learn. Despite the earnings premium associated with holding a high school diploma and with college completion, an increasing number of students, especially young men, are dropping out. Education is something they hate and feel humiliated by.

o How can curricula be reformed to provide economic value and attract students? The current training of teachers does not equip them to train students in skills that are marketable in the job market. How can teacher training and pay structures be reformed so that schools can serve non-academic students?

The fundamental question: What is the value of work and how does it change with age? (Based on a suggestion by Henry Aaron.)


o As policymakers face the pressure of budget deficits, it is almost inevitable that the retirement age will rise and people will be required to work longer.

o European countries have taken a very different path than the United States, choosing policies that result in much shorter working hours per year through shorter work weeks and longer vacations. In some (not all) European economies, labor force participation is also much lower than in the US, with more early retirement and fewer women working. Faced with budget pressures, European policymakers are now moving to increase the amount of time spent working.

o Economics treats work as a source of disutility, at least at the margin. But work produces huge consumer surplus for many people, and in the modern world perhaps for most. Furthermore, limitless leisure can be devastating for some people and may be devastating for all. There are all sorts of margins that are relevant: at what age one starts working, how long one works each day, how many weeks a year, how many years, when one retires, and whether one retires abruptly or gradually. Given population aging, it is important to have a better understanding of the value and cost of work.

o Some college professors would choose to continue working until they drop, but those who make a living carrying pianos feel differently. If everyone is required to work longer, what policy steps would be helpful as older workers manage the transition from jobs requiring strength and endurance to jobs that are less physically demanding?

The fundamental question: How are innovation, productivity and employment being affected by globalization and the rise of new economic powers?

o Popular discussion of the challenges facing the US economy almost always ends up talking about India and China and what is happening in those economies. Some economists, for example Paul Krugman, Alan Krueger, Gene Grossman, Lawrence Katz and Robert Lawrence, write about these and related issues, but a casual check of recent economics journals does not reveal a broad enough interest.

o How important is innovation in emerging markets? By some accounts China's manufacturing sector remains concentrated in low-value activities, but there are also reports that it is moving up the technology ladder. India is clearly generating innovation in its provision of offshore services and is starting to make manufacturing innovations in autos and medical devices. Does innovation in emerging markets result in a reduction in innovation in the United States, or is it complementary? Do American consumers benefit from overseas innovation?

o How do innovation and productivity growth affect employment? It seems that industries that developed within a national market and are then exposed to global competition are forced to restructure and make sharp employment reductions as they increase productivity. Other industries are on the steep part of their S-curve, where innovation spurs rapid productivity


increase and price declines that leads to increased demand and stable or increasing employment. Successful export industries are able to drive increased domestic employment. The fundamental question: There is a wide variation in the cost of treating any given disease, depending on the hospital or region where it is treated. There is a similar large variation across advanced economies in health care costs. There seems to be little or no relation between the cost of treatment and the effectiveness of the treatment. How can this be? How can this result be utilized to improve the quality and reduce the cost of the US health care system? (Also suggested by Henry Aaron). o The Dartmouth studies have revealed wide variations within the United States and they found best practice treatments were not the most expensive. The McKinsey Global Institute compared health care costs in the United States to other OECD economies and found that costs were much higher here and that health outcomes were generally as good or better in other countries. o The National Institutes of Health is barred from considering costeffectiveness as they evaluate different treatment protocols. This puts the onus on economists and other social scientists to undertake studies of this area. o Despite statements to the contrary, there is evidence that treatment protocols used by doctors are heavily influenced by the economic incentives that they face. For example, Medicare-driven reimbursement patterns have driven very short hospital stays in the United States. In Germany hospital stays are much longer because they provide an economic return to doctors and hospitals. For the treatment of cancer, surgeons recommend surgery and radiation oncologists recommend radiation. o The health sector is very backward in its use of information technology, not because of problems in the technology, but because incentives to increase efficiency are not in place. The fundamental question: Can evolutionary economics provide new insights into areas where conventional economics has proven inadequate? o Attached to this statement is a copy of the recent speech by Charles Taylor in which he makes the case for evolutionary economics. Taylor is at the Pew Charitable Trusts but is expressing his own views.


Attachment to: Some Foundational and Transformative Grand Challenges in Economics

Macro-Prudential Regulation and the New Road to Financial Stability

Looking Through Darwin's Glasses
Charles Taylor
Director, Financial Reform Project, Pew Charitable Trusts
(Chicago Federal Reserve Bank/IMF conference: Macro-prudential Regulatory Policies: The New Road to Financial Stability; September 23-24, 2010)

Thank you to the IMF and the Federal Reserve Bank of Chicago for inviting me to speak to you tonight. I speak on my own behalf and not that of my employer, Pew Charitable Trusts.

When I was at the Group of Thirty, I got to know Brian Quinn, who was Executive Director of Supervision at the Bank of England. Brian said to me once that a good year for him was one in which a small bank or two failed somewhere in the United Kingdom. That was a provocative thing to say on the face of it, because his job was to make sure that banks didn't fail. But he had a good reason. He wanted his staff to keep their hand in. With a small failure or two each year, they would know what they were doing if, heaven forbid, one day something more serious happened.

I think in fact there may be a deeper reason why some positive rate of failure among financial institutions is good. You need turnover in any population for it to prosper in a changing environment, and financial institutions most certainly live in a changing economic, social and technological environment. It's a complex environment too. There is no way to understand everything that's happening, and so there is no way to rely entirely on planned adaptation. Individual institutions will inevitably make mistakes, however good their managers, shareholders and regulators. It is not just greed that's good. So is some failure.

Then again, consider the spread of best practice. Adopting best practices is by definition beneficial for any individual institution that does it, whether we are talking about marketing, product development, risk management or any other aspect of business. But if everyone tries to adopt best practice in all things, the population will become increasingly homogeneous. Carried to an extreme, two things happen. Herd behavior will be the result if any small thing shocks the system, increasing the chances of instability. And something that might cause one member to fail is increasingly likely to wipe out everyone. The universal and strict adoption of best practices can be highly destabilizing for any system as a whole.


***

Both of these insights are paradoxical if you think in comparative-static terms. Both, however, seem sensible enough if you are thinking in terms of evolution. For evolving populations, death makes way for new life, and diversity is a form of insurance. Indeed, I believe we should think in terms of evolution when we think about financial systems and macro-prudential regulation. Macro-prudential regulation is then simply the art of constraining evolution. Looking at financial stability through Darwin's glasses will bring a great deal into focus.

Why is this? Well, evolutionary theory provides an array of insights into systemic instability: its causes, what to watch for and what to do about it. Evolutionary theory is a big enough tent to accommodate several other theories and insights about systemic stability. Because it is comprehensive and deep, it may help us pick up signals of future instability rather than just explanations for what went wrong in the past. Let's examine each of these assertions.

1. The Financial System is an Evolutionary System

As many of you know, evolution can be viewed as a family of algorithms for changing populations. Evolutionary algorithms work locally on individuals or families, changing them or their descendants with an element of unpredictability involved; history matters, in that evolution usually tinkers and only occasionally does something radical, so what comes after usually resembles what went before; there is selection based on fitness; the environment changes unpredictably; and the environment provides only limited resources, so that competition is inevitable as successful populations grow.

There is no doubt that the financial system evolves in the colloquial sense all the time. It certainly has through my sixty years on this planet, pretty relentlessly. Still, if you stop and think for a minute, you can see that the financial system also changes in large measure as an evolutionary system in the strict scientific sense: change is local, initiated by individual players or small groups clubbing together; history does matter; there certainly is selection based on profitability, at least when governments don't intervene; the technological, economic and social environment is always changing unpredictably; and the competition for funds, markets, talent and technology is fierce.

Now, given enough time and a large enough population, it is a mathematical certainty that many things happen. The list is long: diversity, complexity, speciation, cooperation, specialization, symbiosis, co-evolution and predator-prey relationships emerge. And networks of interaction and interdependency appear for all of the above. Remarkably, the initial population can then spawn, so to speak, evolving secondary populations of


networks, characteristics, processes and strategies. And I mean that strictly, not just metaphorically, in the sense that networks, characteristics, processes and strategies are all populations where change is local, history matters, selection rules, the environment changes and resources are limited. Evolutionary theory predicts all these things. So much for the canard that evolution cannot make predictions! Because it is an evolutionary system in the strict sense, we should expect to see all of these things occur in the financial system, and indeed we do. The facts confirm the theory remarkably well.

One more thing before we turn to financial instability. Some evolutionary systems feature intelligent populations. They never feature the omniscient, all-knowing, all-anticipating intelligence of econ 101, because for such brainiacs there is no uncertainty and, frankly, no need to evolve. Those creatures inhabit timelessness heaven, with all the boredom and none of the joy. Dismal indeed. Evolving populations can only ever have limited intelligence: limited knowledge of the past and the present and limited foresight into the future. Limited foresight. Ah, I am sure you are thinking, "That sounds more like the bond trader I know." Me too. Anyway, it turns out that heritable limited knowledge and foresight have interesting implications in evolutionary theory, especially when it comes to instability. We will come back to that in a moment. Let's talk about instability.

2. Evolution is All About Instability

While evolution always changes a population, it doesn't always improve its fortunes. Leaving aside extinctions, which clearly are sub-optimal from the viewpoint of those concerned, evolution is a tough taskmaster that often throws a curve ball. Instability arises because of:

Resource Constraints: When populations grow steadily for a long time, they bump up against resource constraints. If they can't then adjust to zero growth, or if the resources are not renewable, they will either break into sub-populations that compete, or bump along, or just die out. The investment bank extinction could be viewed that way: they had reached the limits to growth and had begun to compete in ways that undermined their own resilience and reliability, sacrificing capital and liquidity for profits.

Co-Evolution: When species arise and try to co-habit, and either cooperation, competition or predator-prey relationships develop between them, they start to co-evolve. What happens to one of them changes the environment of the others. There is a rich literature of mathematical modeling and agent-based simulation that tells us that co-evolution can be highly unstable. Nature does too. Consider the uneasy equilibrium between foxes and rabbits in the English countryside. If


foxes start to grow more cunning and hunt down more and more rabbits, the pressure is on to become more cunning still as the rabbit population thins. There is positive feedback driving cycles of increasing foxy cunningness. Less attentive, slower rabbits, however, are being culled continually. On average, the rabbit population is becoming better at evading foxes, creating negative feedback (see the sketch after this list). If rabbit slyness emerges fast enough, or, to put it another way, if negative feedback counteracts positive feedback quickly enough, then cycles can continue forever. If, however, negative feedback comes on the scene too late, it can be very destabilizing. Financial positive feedback loops that eventually abate include credit, liquidity and leverage cycles, and, if they emerge too quickly or last too long, they can often be found at the scene of the crime when booms, busts and collapses play out.

Networks of Interdependence: Two good survival strategies for species are to cooperate, and to get good at preying on others. Both require networks of interdependence, which quite evidently co-evolve whether we are talking about nature or finance. Speciation or specialization begets more speciation and specialization, increasing both complexity and diversity. Network theorists and epidemiologists have a thing or two to tell us about network stability: the importance of super-spreaders and critical nodes, the dangers of concentration, and other characteristics of networks that can hamper or help contagion.

Limited Intelligence: Intelligence is a powerful advantage for any species. Intelligence allows a species to anticipate the responses of predator and prey. It can eliminate a lot of really bad variations in thought experiments, avoiding costly and time-consuming random variation. Put another way, adaptive Lamarckian evolution can enhance Darwinian evolution in intelligent species, a huge advantage. But, unfortunately, it adds significantly to instability because it makes strategies of extrapolation and imitation more likely. These can be powerful survival tools for the below-average much of the time, and for everyone when times become uncertain: moving with the crowd offers some protection. (When I was a currency analyst on Wall Street, I was fascinated to see that when times got uncertain the dispersion of forecasts from different analysts diminished. If we were likely to be wrong, we wanted to be wrong together.) But extrapolation and imitation tend to produce herd behavior and homogeneity. What works for krill or starlings as a defense against whales or eagles can make matters a lot worse in a financial boom or a bust.

Complexity: One of the best things about evolution is that it explains complexity so satisfactorily. Complexity, however, adds to instability for those of limited intelligence. It obscures the past and the present and makes the future hard to predict. Particularly where evolution is accelerating and new complexity is popping up all over the place, it gets easier to make mistakes, and extrapolation and imitation strategies become increasingly attractive. Management fashions certainly influence the senior managers of large banks, who face extraordinary


complexity both within and without. But, as I said, extrapolation and imitation strategies breed herd behavior and homogeneity, which are destabilizing.

Self-criticality: A sixth source of instability is the tendency for evolutionary systems to be self-critical -- not in the sense that Wall Street traders spontaneously start to organize after-work classes where they can criticize themselves -- but rather that evolutionary systems have a strong tendency of their own accord toward states in which they are teetering on the edge of failure. Evolutionary systems are competitive, and they drive individuals toward the edge of their abilities, exhausting their reserves. They tend to over-specialize: think of developing a particular skill but neglecting your general education. Moreover, across any network of interdependency, filaments of fragility of uncertain length develop and from time to time generate avalanches of failure. The sizes of these avalanches are often governed by a power law: there is a constant that dictates how the frequency of avalanches declines as they get bigger. It is not just that networks provide pathways for domino effects but that, given half a chance, evolution lines up wobbly dominos.
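Two of these mechanisms have standard textbook formalizations worth recording (our gloss, not Taylor's). The fox-and-rabbit feedback story is the classic Lotka-Volterra predator-prey system, and the avalanche claim is a scaling law:

```latex
\dot{R} = \alpha R - \beta R F, \qquad
\dot{F} = \delta R F - \gamma F, \qquad
\Pr(\text{avalanche size} \ge s) \propto s^{-\kappa}
```

Here R and F are the rabbit and fox populations; the interaction terms \beta R F and \delta R F are exactly the positive and negative feedbacks Taylor describes, and for generic parameter values the system cycles indefinitely rather than settling down. In the power law, a single constant \kappa governs how much rarer large avalanches are than small ones.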

3. Implications for Macro-Prudential Strategy

I could go on, but I think I have made my point. Evolution suggests plenty of things to monitor for early signs of instability: homogeneity of organizational structures, practices, and strategies; co-evolution of processes, practices, institutions, markets, strategies, products and services; concentration, a form of unstable interdependence; declining robustness and resilience (or declining wellness, one might say), meaning declining excess capacity, increasing leverage, or declining capital or liquidity; positive feedback mechanisms and things that encourage positive feedback, such as information asymmetries and misaligned incentives (including moral hazard); complexity, opacity and speed of change, including complexity of organizational structure, incomprehensibility of new instruments and trading strategies, and rapid growth in activity and profits, which can be symptoms of something going wrong as often as something going right; incentive misalignment, a form of informational asymmetry that creates corrosive positive feedbacks; turnover, which should be neither too low nor too high but just right; interconnectedness, which can create super-spreaders, critical nodes and new pathways for other kinds of instability; and multiple small causes that are associated with impending avalanches of self-criticality.


While some of these can be measured by traditional types of economic indices and surveys, for others, such as homogeneity or process evolution, there is experience in management and the sciences to draw on.

What can evolution tell us about policy levers? Well, first of all, tread carefully. The possibilities for undesirable side-effects are not only legion but forever changing. Beyond that, there are some familiar ideas to consider, such as discouraging concentration, aligning incentives, raising capital and liquidity requirements, lowering permitted leverage and loan-to-value ratios, counteracting protracted positive feedback loops in the credit and liquidity cycles, increasing transparency and making sure that institutions of all sizes can be broken up or allowed to disappear when they fail. Then there are some less familiar ones, such as encouraging (and certainly not inhibiting) diversity; avoiding threshold effects that can precipitate rapid positive feedback in micro-prudential and market regulations; monitoring the co-evolution of markets, products and processes along with institutions; and looking for circumstances in which many small things may be going wrong together, lengthening those filaments of instability associated with self-critical systems.

Evolution also gives us a more balanced theory of micro-prudential regulation and market discipline. Both are capable of adding to stability by raising wellness. But both are also capable of adding to instability through positive feedback loops and increasing homogeneity.

Evolution is a forward-looking context for other theories and insights. Agent-based simulation (ABM), complex adaptive systems (CAS) analysis and network analysis all clearly apply to evolutionary systems. CoVaR and credit exposure mapping address network instability by addressing the relationship between the size of links and vulnerability: how close and how wobbly the dominoes are.

Finally, evolution may help us ready ourselves to fight the next war rather than the last one. That is only some comfort, because we are creatures of limited intelligence and, while we may be looking at the right thing when we study evolution, critical details are bound to elude us from time to time.

4. Conclusions

I am reminded of an old friend of mine who was a civil engineer. In the late 1960s, he supervised the construction of the M4, the motorway that runs west from London across the Cotswolds toward Bristol and Bath. I was studying mathematics at Cambridge at the time, and one day when he came to pick me up at my college for a lift to London, he told me all about a new piece of software his firm had developed. He was very excited because it allowed him to try out different routes and then to drive along them to see how the lay of the land changed. Over the succeeding months, he and his colleagues


worked out a route of gentle turns, elegant bridges and lovely vistas. The result is arguably the most beautiful motorway in England.

Macro-prudential regulators are also trying to look across the landscape and try out different routes. They are not building a motorway but rather shepherding an independent-minded flock through the trees. In the past, they focused on the trees ahead and the individual sheep. They rarely looked at the woods or the flock as a whole. If they did look up, the view across the hills and valleys was blurred in the extreme. In the future, we need to study the lay of the land and nudge the financial system away from cliffs and precipices. We need to ensure that the co-evolving populations of which the system is made up are strong enough to run the occasional rapids and weather the occasional storm. For that, we need glasses that will help us focus on the landscape both near and far.

I hope that Darwin's glasses were bifocals. I think we should try them on.


A Proposal for Future SBE/NSF Funding: Refocusing Microeconomic Policy Research
Steven Berry
Yale University Department of Economics & Cowles Foundation and NBER

Abstract

How can the NSF harness large and vital research efforts in econometrics and economic theory to address our era's most important microeconomic, social and climate policy questions? The goal presented in this white paper is to refocus the economics profession's more technical fields of inquiry on ideas and tools that are relevant to policy, while making sure that the most useful newly developed ideas and tools are actually adopted in policy analysis. Examples of applications include bio-fuels / global warming, health care reform and education choice. Policy analysis in each of these examples requires the use of econometrics and economic theory together with a well-informed understanding of institutions and policy. Increased cross-sub-disciplinary efforts within economics and allied fields could have important social payoffs.
This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to

Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Introduction

The broad field of economics invests tremendous resources (including money from the NSF) in various kinds of theoretical and methodological endeavors, which oftentimes show great promise as potential tools for policy-relevant research. However, method and policy often remain disconnected, to the detriment of both. This white paper highlights the potential for collaborative research efforts that focus on applying new ideas and methods to important policy questions, with the aim of providing better policy analysis while also ensuring that the more technical researchers receive important feedback as to what tools are actually of real-world use. It is already the case that many economists are working on this kind of collaboration, either directly in teams of researchers or via less formal interactions across fields. But as is usual in academia, there is also strong pressure for increased specialization. At a time when a vocal minority of empirical economists actively rejects the usefulness (for their particular research agendas) of much of econometrics and almost all of economic theory, the incentives for econometricians and theorists to

49

define themselves as "pure" practitioners of their craft, devoid of policy concerns, increase. And once this happens, a negative feedback loop can set in, where policy-minded researchers correctly observe that (for example) many parts of recent economic theory do not seem to have much policy relevance. The purpose of this proposal is not to argue against "pure" econometrics or theory (which have great long-run value), and neither is it to argue against simple empirical strategies that are useful for policy analysis (whose value is obvious). The opportunity for the NSF is to tilt the profession's portfolio of research in a socially productive way. This white paper provides three, very much non-exhaustive, policy examples where the more rapid dissemination of methods into a policy arena would have high value: bio-fuels and global warming, educational choice, and health care reform.


Drawing on the author's particular expertise, the examples mostly focus on the problem of policy in market equilibrium, which is a classic setting where econometrics, theory and policy analysis have to work together. However, the intent is not to restrict an NSF initiative to such examples, but rather to suggest that the NSF solicit novel and important proposals for tying together theory, data, econometrics and policy. Another possibility would be to start with an initial inquiry into which broad policy areas would most benefit from the integration of methodology and policy researchers and then call for detailed proposals in those fields. While this proposal emphasizes cross-sub-disciplinary research within economics, the logic will obviously often extend to policy experts outside of economics, for example in health care or climate change. When the model under consideration is behavioral, organizational or political, researchers from allied social science disciplines may be the appropriate policy-oriented analysts. Indeed, recent technical advances in econometric method and economic theory may sometimes have particularly high value in just those allied fields where they are least disseminated.

Methods and Empirical Policy Analysis

Recent empirical economics has often emphasized the learning that can take place without the use of economic models, as through field experiments or through thinking directly about "causal effects" of a policy change. For example, one can learn a lot about the effectiveness of a particular policy intervention in African villages by running a randomized experiment across a set of representative villages. In other cases, some economists argue that a particular policy change represents a "natural experiment" that allows one to infer the effect of a policy fairly directly from data, without much of an economic model. However, economists traditionally teach their undergraduate and graduate students that many kinds of counterfactual policy analysis require us to uncover an underlying policy-invariant function (or parameter) that cannot directly be observed from data. A classic, and still vividly relevant, example is the need to uncover supply and demand elasticities in order to predict the effects of a change in a tax, for example a gasoline tax that is intended to reduce carbon emissions. These elasticities are called

50

"structural" because they represent the underlying structure of the model that allows us to make a prediction about a policy that has not been previously implemented. The idea of "structural estimation" of such parameters is controversial, in part because of unproductive feuds over what is meant by the term. However, unless economists and policy-makers believe that nearly the entire approach of traditional undergraduate and graduate economics ought to be declared irrelevant to real-world policy debates, we will often need to estimate (somehow) such underlying models and interpret their implications in light of some more or less explicit economic theory. In the case of the gasoline tax (as in much equilibrium policy analysis), theory, econometrics and policy are tied up in an inextricable way. The theory tells us which elasticities we need to know, and only theory together with econometrics can tell us how to estimate them. This is because the "demand elasticity" is not observed directly from data on market outcomes, and representative experimental data is likely unavailable. Even if we can run localized experiments, the experiments have to be designed to tease out the separate demand and supply price-elasticities that are required by the theory (as opposed to, say, a vaguely defined single "price effect").

Now, the supply-and-demand model is not at all new, and the basic econometrics of "instrumental variables" that would allow us to estimate simple linear demand and supply functions is not new either, so it might be argued that any well-trained empirical economist could tackle the problem and that there is no need for "cross-sub-disciplinary" research. However, it turns out that new instrumental variable methods are an extremely hot topic in theoretical econometrics, with much work that focuses on relaxing traditional assumptions that might be inappropriate or ad hoc. Such methods are sometimes implemented on a question of policy relevance, but often (as it turns out) by a relatively small handful of researchers who work on a handful of problems. In practice, "policy applications" using new methods are often presented as mere stylized examples of the method, rather than as serious policy research.
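To make the structural logic concrete: in the textbook linear case, the share of a small gasoline tax passed through to consumers is ε_s/(ε_s − ε_d), so the policy prediction is impossible without both elasticities. Below is a minimal simulation sketch of the instrumental-variables idea, with entirely hypothetical data and a supply-side cost shifter as the instrument (ours for illustration, not an estimator from any particular paper):

```python
# Illustrative two-stage least squares (2SLS) for a demand elasticity,
# using only numpy. All variables here are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

z_cost = rng.normal(size=n)      # instrument: supply-side cost shifter
u_dem = rng.normal(size=n)       # unobserved demand shock
log_p = 0.5 * z_cost + 0.8 * u_dem + rng.normal(size=n)   # endogenous price
true_elasticity = -1.2
log_q = true_elasticity * log_p + 2.0 * u_dem + rng.normal(size=n)

X = np.column_stack([np.ones(n), log_p])    # regressors: constant, log price
Z = np.column_stack([np.ones(n), z_cost])   # instruments: constant, cost

# OLS is biased: price is correlated with the demand shock u_dem.
beta_ols = np.linalg.lstsq(X, log_q, rcond=None)[0]

# 2SLS: project the regressors on the instruments, then run OLS on the fit.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, log_q, rcond=None)[0]

print(f"OLS elasticity:  {beta_ols[1]:6.2f}  (biased by the demand shock)")
print(f"2SLS elasticity: {beta_2sls[1]:6.2f}  (true value: {true_elasticity})")
```

The instrument works only because a cost shifter moves price for supply-side reasons unrelated to demand; the point of the white paper is that judging which variables plausibly satisfy that exclusion restriction is itself a question of theory and institutional knowledge, not of econometric technique alone.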
Policy Examples

This section considers richer examples of policy questions that could greatly benefit from interaction with new methods, leading (one hopes) to both better method and better policy analysis. Much of the focus is on the interaction of policy and equilibrium markets, but that (again) reflects the knowledge base of the author as opposed to the limits of a broader strategy for supporting research. The list is intended to be purely illustrative of the kinds of policy questions that would benefit from the proposed initiative.

Global Warming and Bio-fuels. Crop-based bio-fuels (like soy diesel) have been suggested as one part of a solution to global warming, since the carbon released from burning the bio-fuel is recaptured when the next crop is grown. The US and the EU are both considering policies that would effectively require that a significant fraction of world crop output be converted to bio-fuels. Recent research (see, for example, the 2008 paper in Science by Searchinger et al.) points out that the source of increased crop production for bio-fuels is critically important. To the degree that new land is cleared for bio-fuel production, the carbon released in the process of land-clearing may more than offset any carbon gain. The new literature on bio-fuels makes explicit the fact that crops used for bio-fuels have to come from some combination of new land, yield increases and demand reduction.
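In our notation (not the literature's), that three-way accounting can be written as a market-clearing condition: a bio-fuel mandate B raises the crop price p, and

```latex
B \;=\; \underbrace{\left(\varepsilon_{\text{land}} + \varepsilon_{\text{yield}}\right) Q \,\frac{\Delta p}{p}}_{\text{new land and higher yields}}
\;+\; \underbrace{\left(-\varepsilon_{d}\right) Q \,\frac{\Delta p}{p}}_{\text{reduced food demand}}
```

where Q is baseline crop output, ε_land and ε_yield are the supply elasticities on the land and yield margins, and ε_d < 0 is the price elasticity of food demand. The carbon consequences depend entirely on how B splits across these terms, which is why the elasticities discussed next are the crux of the policy question.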


Thus, the policy analysis of bio-fuels requires us to know the market price-elasticity of crop yields and of land use, and also the price-elasticity of demand. It turns out that the empirical agricultural economics literature on these topics fails to use even modestly up-to-date econometrics and ignores the fact that crop prices, yields and demands are jointly determined in market equilibrium. In addition to the classic supply-and-demand issues of endogeneity and equilibrium, an appropriate set of empirical models would need to be able to move from disaggregated output and land-use data up to higher levels of regional and world aggregation. For example, recent work by the present author and colleagues (Berry, Levinsohn and Pakes, 2004) presents a strategy for dealing with micro and macro data in an equilibrium context. It is key to note that the policy application in that methodological paper is not nearly as important as the bio-fuels policy debate, and that the general methods have not spread to the empirical studies relevant to bio-fuel policy.

On the theory side, bio-fuel policy makers are using very old Computable General Equilibrium (CGE) models that ignore the recent decades of research on world equilibrium trade. Notable examples include contributions by Melitz and by Eaton and Kortum. A number of recent economics Ph.D. theses have shown that the new trade theory can have important real-world applications that improve on older models. Bio-fuel policy-makers clearly realize that some kind of world-equilibrium model, fit to empirically estimated elasticities, is necessary. Recently, modeling exercises by the California Air Resources Board and by the EU indirect land use research initiative have placed large weight on such exercises. The problem is that policy-makers are not being offered the insights of many years' worth of advances in applied equilibrium trade theory and empirics. If bio-fuels policy is made on the basis of incorrect elasticity estimates, or poorly specified models, a policy designed to combat global warming could instead result in the Brazilian rainforest being cleared to grow soybeans for bio-diesel, a serious unintended consequence indeed.

Health Care and the Role of Markets. Much policy research on health care looks at the "directly observed" effects of various health-policy experiments and interventions, a frequently useful exercise. However, much of the 2008-2010 health care debate revolved around the role of health care markets in a partly competitive equilibrium. Counterfactual policy analysis about introducing competition to health care markets, by necessity, involves thinking about the economic primitives of supply and demand, which can be quite complex in the health-care context. Market-wide experiments in changing the amount and nature of competition are likely to be very limited in scope, so models and estimates are necessary. Recent work by Katherine Ho, David Dranove, Mark Satterthwaite, Gautam Gowrisankaran, Josh Lustig and others shows that progress is possible. Recognizing the importance of appropriate models, the FTC has apparently adopted some recently developed discrete-choice demand methods as an official basis for hospital merger analysis. However, there is only a little research effort currently being placed on improving the methods to account for the unique features of health care competition.
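For readers outside the field, the simplest random-utility form of such discrete-choice demand models (our sketch; any specification actually used in merger review is richer) gives patient i's utility from hospital j as

```latex
u_{ij} = \delta_j + \varepsilon_{ij}, \qquad
s_j = \Pr\!\left(u_{ij} \ge u_{ik}\ \text{for all } k\right) = \frac{e^{\delta_j}}{\sum_k e^{\delta_k}}
```

where δ_j is the mean utility of hospital j and ε_ij is an i.i.d. extreme-value taste shock, so that predicted market shares s_j take the logit form. Letting δ_j vary with patient characteristics such as diagnosis and location is what makes the estimated substitution patterns informative about post-merger pricing power.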


Educational Choice. Much useful empirical research on education focuses on the possible outcomes of direct policy interventions like changing class size. To answer some of these important questions, it is not clear that complicated econometrics or economic theory is necessary. However, there are other examples of educational research where models of demand, equilibrium and selection are all critical to policy analysis. School choice is a good example. The way that different students are sorted into a set of possibly very different quality schools is complicated. Simple models that ignore the heterogeneity of schools and students, and the role of potentially unobserved variation in tastes and school quality, are likely to provide misleading results. There is a lot of research on very sophisticated models of choice among differentiated alternatives in equilibrium, but they are often applied in situations of limited policy relevance. Some researchers -- Patrick Bayer, Justine Hastings, and Holger Sieg (among others) -- have made good progress in applying empirical equilibrium models to this class of policy-relevant questions, so progress is clearly possible. As in many other policy areas, society overall would benefit if a new group of highly skilled methodological researchers would add their efforts to the existing research agenda.


This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

White Paper for NSF Grand Challenges
Nick Bloom (Stanford and NBER)
September 15th 2010

Abstract

This short piece outlines some of the topics I see as part of the Grand Challenges for the social sciences over the next 10-20 years. I have focused on three areas where I see the policy agenda being particularly constrained by the lack of high-quality research.

1) Balancing work and family

The President and First Lady jointly launched the Council of Economic Advisers paper "Work-life balance and the economics of workplace flexibility"1 at the White House in March 2010. This report makes clear the importance to policymakers, voters and firms of balancing economic growth with family-friendly working practices. Changes in technology and the demographics of the US workforce have led to increasing conflict between work and family life. In Europe these pressures are claimed to be one of the key reasons that birth rates have now fallen below 2 (and below 1.5 in much of Southern Europe), far below the replacement rate needed to sustain population levels. So a key policy question is whether the developed world should follow the French and Scandinavian model of regulating holidays to force workers to spend time away from work, or the US and UK model of allowing firms and individuals the freedom to organize their time.

Despite the huge economic and policy interest in this question, the empirical evidence is extremely limited. I was personally involved in assessing the prior literature for the CEA report, and was amazed to discover that the prior research is primarily case-study or cross-sectional-survey based.2 The CEA viewed this prior literature as so limited that even in its Executive Summary it devoted one of its main bullet points to a call for more substantive research, stating:

"A factor hindering a deeper understanding of the benefits and costs of flexibility is a lack of data on the prevalence of workplace flexibility arrangements, and more research is needed on the mechanisms through which flexibility influences workers' job satisfaction and firm profits to help policy makers and managers alike" [Executive summary, preceding page 1]

1 "Work-life balance and the economics of workplace flexibility", Council of Economic Advisers, March 2010. http://www.whitehouse.gov/files/documents/100331-cea-economics-workplace-flexibility.pdf
2 This is not meant as an attack on these research approaches. I regularly teach case studies and am a co-author of one of these survey articles (Bloom, Kretschmer and Van Reenen, 2010). I just highlight that neither approach is well suited to the type of causal analysis that is required for policy making.


The reason for the poor state of the literature is that it is inherently difficult to examine the impact of work-life balance practices on firms and households. Surveys struggle to elicit causation: for example, in my paper (Bloom, Kretschmer and Van Reenen, 2010) we find that more women managers work in firms with a better work-life balance culture. But what does this mean? It could be a causal effect (women push for better work-life balance), or reverse causation (women are attracted to better work-life employers), or some other correlated factor (maybe firms with more enlightened managers hire more women and provide better working conditions). Without further evidence distinguishing these stories, undertaking evidence-based policy is extremely difficult.

To advance our knowledge we have to employ the tools of modern economics: natural experiments and field experiments. For example, we could run experiments within firms, allowing some randomly chosen groups of employees better working conditions and comparing the impact against control groups. This is a challenging research study to pull off, potentially spanning several disciplines, although the tools of economics in pushing for causal identification are clearly at the core of this research. It would move beyond the assumption that correlation implies causation, with which the CEA was unsurprisingly not satisfied in the prior research. Given the top-level policy interest, along with the weak state of the current research base, I think this area deserves special NSF support.

2) Building cross-country micro databases for productivity and growth research

The increasing globalization of economic activity means that answering any domestic economic question increasingly requires a global analysis. This is true at the macro level but also at the micro level: for example, to what extent is Chinese trade now driving US manufacturing productivity and technology upgrading, and what should policymakers do in response? But collecting firm-level data has traditionally been the preserve of national agencies, for example the US Census Bureau and the Bureau of Economic Analysis, with very little international comparison. The organizations that do provide international comparisons, like the OECD, tend to provide industry- or macro-level aggregates, because comparability is only possible at these broad levels. Comparing, for example, the growth rates, R&D expenditure and education levels of firms on a broad country-by-country basis is currently impossible.3

So to date, research undertaken across countries has either been set up and funded by individual research groups, like the World Values Survey or the World Management Survey,4 or carried out by international organizations like the World Bank, which have focused solely on developing countries. This has led to huge gaps in international data analysis: for example, no international datasets exist with basic information on inputs, outputs, growth, management and technology.

The challenge, as I see it, is funding large-scale, long-lived international research projects. From my own experience running international management surveys, it is feasible to raise grants of a few hundred thousand dollars to run one-off small-scale surveys, but extremely difficult to fund large-scale, international panel surveys.5 My advice is, first, to increase absolute funding for primary data collection: without better basic international data, no amount of clever estimation or clever theory will answer basic questions about the drivers of productivity and growth. Second, I would reallocate funding away from dozens of small-scale proprietary research projects to a few large-scale public projects. I often see papers using individually collected data samples which, after the paper is published, never see the light of day. It would be much more useful, for example, to fund one or two large projects that provide public-access longitudinal data across countries than 50 smaller projects which collect their own data but keep it proprietary. This is a common issue across the whole of the social sciences, but again I think economics is the central discipline here because of its emphasis on large-scale, rigorous panel data collection.

3 This is not to say no international data exists (for example Compustat in the US, Datastream or Amadeus in Europe), but the coverage of these is limited to a small set of countries, and even within these countries to publicly listed firms (Compustat and Datastream) or to a few basic data fields (Amadeus).
4 See http://www.worldvaluessurvey.org/ and http://worldmanagementsurvey.org/
5 See, for example, the overview of the international management surveys in Bloom and Van Reenen (2010).

3) Causal evidence on the impact of management on productivity

After the crisis of 2008 the US government considered ways to assist its domestic manufacturing industry, including providing a massive increase in funding to the Manufacturing Extension Partnership (MEP). The MEP is a government-funded agency that provides management assistance to US firms; it currently has a budget of around $100m per year, and the debate in 2008 and 2009 was about whether this should be doubled. But from my discussions with economists in the administration, one major reason holding this back was the lack of evidence on the impact of management practices on firms' productivity. Senior policymakers including Larry Summers had apparently asked what evidence there was for the positive impact of improved management practices on firms' performance, and what evidence there was for market failures in this area.

There has been a long debate on the importance of management practices in social science, but unfortunately with very little consensus. While researchers in management and strategy often claim overwhelming evidence for large impacts of management on productivity, economists have typically been skeptical. The reason for their skepticism is the lack of (arguably) causal evidence: research to date has been based on surveys and case studies, both of which are problematic for drawing causal inferences. For example, correlations of good management with productivity could be due to reverse causation: productive firms have the resources to hire management consulting firms.

This is clearly a major research and policy hole, given the important role that management practices presumably play in driving US economic growth. This is true not only in manufacturing and retail but also in public sectors like healthcare and education. Several recent articles in the New York Times have, for example, highlighted the waste in the US healthcare system from poor management practices. So my final suggestion is substantially increased funding for research to evaluate the causal impact of management (and organizational) practices on firm performance. To do this I would suggest two things:

A) Building a large-scale public-access management database: This would start to build up a strong common survey infrastructure that could be used to exploit natural experiments to estimate the impact of management on performance. I am involved in running a wave of such a survey at the US Census Bureau covering around 50,000 US manufacturing plants, and think it would be valuable to repeat this in the future and extend it to

other industries like healthcare and retail. This will be a public-access database in that all researchers will have access (within the limits of protecting the confidentiality of Census data). To date nothing like this exists, so the returns to building a large-scale public panel database on management practices would be very high.

B) Running management field experiments to uncover the causal impact of management practices on firm performance: A number of researchers like Dean Karlan, Antoinette Schoar, Miriam Bruhn, Chris Udry, Greg Fischer and myself have run management field experiments in developing countries (see Bloom et al. 2010 and the references therein). These have been extremely informative on the impact of management on firm performance. But nothing remotely similar has been undertaken in the US. Running such field experiments in the US, in which a randomly chosen set of firms is helped to improve its management practices and its performance is compared to that of a control group, would be invaluable in filling the current gaps in management research. Again, given the extensive experience of economists in large-scale data collection, natural experiments, and field experiments, this would naturally be an area to have them involved in. Discussions with other disciplines, like organizational behavior, management, and sociology, about measurement and practices would also be very helpful.

References:
Bloom, Nicholas and John Van Reenen, "Why do management practices differ across firms and countries?", Journal of Economic Perspectives, March 2010.
Bloom, Nicholas, Toby Kretschmer and John Van Reenen, "Determinants and consequences of family friendly workplace practices", Strategic Management Journal, Fall 2010.
Bloom, Nicholas, Benn Eifert, Aprajit Mahajan, David McKenzie and John Roberts, "Does management matter: evidence from India", Stanford Mimeo, 2010.
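To make the field-experiment proposal in (B) concrete, here is a minimal sketch of how such an experiment could be analyzed. The sample size, effect size, and noise level below are invented, and the difference-in-means estimator is simply the most transparent choice, not a prescription from this white paper.

import random
import statistics

random.seed(42)

N = 1000        # hypothetical number of plants
EFFECT = 0.10   # assumed gain in log productivity from the intervention (invented)

# Randomize: half of the plants receive management assistance.
ids = list(range(N))
random.shuffle(ids)
treated_set = set(ids[: N // 2])

# Simulate log TFP: idiosyncratic noise plus the assumed effect for treated plants.
log_tfp = {i: random.gauss(0.0, 0.5) + (EFFECT if i in treated_set else 0.0)
           for i in range(N)}

treated = [log_tfp[i] for i in range(N) if i in treated_set]
control = [log_tfp[i] for i in range(N) if i not in treated_set]

# Because assistance is randomly assigned, the difference in means is an
# unbiased estimate of its average causal effect on log productivity.
ate = statistics.mean(treated) - statistics.mean(control)
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5
print(f"estimated effect: {ate:.3f} (standard error: {se:.3f})")

In a real evaluation covariates, clustering, and attrition would all matter, but the core identification logic is exactly this simple, which is what distinguishes such experiments from the survey and case-study evidence discussed above.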


Robustness and Fragility of Markets


Research at the Interface of Economics and Computer Science

Lawrence Blume
Cornell University

Abstract
Market behavior is the central topic of economics. Yet while economists have a good understanding of the behavior of well-functioning markets, we have little to say about market fragility, market resiliency, and market collapse. Research emerging at the frontier between computer science and economics offers new ways of addressing this important issue.

This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA. I am grateful for the many interesting discussions on the topic of disordered markets I have had with my colleagues David Easley, Joe Halpern, Jon Kleinberg, Robert Kleinberg, and Eva Tardos.

Markets are the central topic of economics. Economists now have a broad, if abstract, understanding of the virtues of market allocation. This understanding, however, is based on a coarse description of how markets function. This is entirely sensible: a key insight of economics is that at a certain level, well-functioning markets all look the same. But while well-functioning markets all look alike, we have little in the way of a classification, taxonomy, or typology of broken, or disordered, markets. We have a very incomplete understanding of the causes and remedies of market breakdown. We know even less about sound principles of market design, especially from the standpoints of robustness and resiliency. We know next to nothing about the transmission of failure across interlinked markets. These issues appear to be particularly important in light of recent financial history: the credit freezes in overnight lending and commercial paper in September and October 2008, and the May 2010 flash crash. They are just as important in commodities markets, and are especially important for economies in the developing world, a point made by Amartya Sen in his path-breaking work on famines.

The workhorse model of market behavior is the general equilibrium model of production and exchange. Consumers and producers are black-boxed, roughly modeled as responders to the incentives provided by market prices. Often (but not always) these black boxes are derived from reduced-form descriptions of behavior, specifically utility- and profit-maximization, and the exogenous market environment is specified by a list of tastes, technologies, and initial resource allocations (and perhaps information and beliefs) from which consumer and producer behaviors are derived. The market outcome function maps environments into equilibrium (that is, market-clearing) prices and their associated market resource allocations. This model contains no description of the transaction rules, social norms, and other institutional arrangements under which trade takes place. The claim is that the performance of all well-functioning markets can be captured at this level of abstraction. This claim has been validated by decades of economic practice.

The key phrase in the preceding discussion is "well-functioning." When markets collapse, there may be no price at which to trade, or desirable trades at a quoted market price may not execute. Allocation in disordered markets can no longer be described by the scissors of supply and demand. Consequently, calibrating general equilibrium models to determine the effects of, for example, regulatory policy on the frequency and magnitude of such events is a pointless exercise. The transition from well-functioning to disordered is determined by the institutional and social arrangements of the markets and the volatility of their environment. Moving beyond simple empirical descriptions of market collapse requires theories of market performance that are based in the social, legal, and technological description of market institutions.

Market design and market collapse have not been totally ignored by economists. There are three different research programs that address the social and institutional frameworks of markets. First, one particular form of market organization, the auction, has been intensively studied. Second, the finance market microstructure literature purports to model financial markets as non-cooperative games. The gap between real markets and their game-theoretic models is huge, and this literature has not provided much guidance in developing design principles for new financial asset markets or in understanding the consequences of regulation for market performance. Finally, there is a literature on the behavior and design of matching markets that is small, but perhaps the most interesting of the three in its deployment of different research methodologies and in the conversation between theory and data. Nonetheless, the fact that no paradigm for understanding market robustness, fragility, and collapse has emerged suggests that a new approach and a fresh set of ideas are needed.

All markets that operate at an interesting scale share several important features: interconnected groups of economic agents that act and learn in response to incentives; a set of different possible global outcomes that range from highly efficient to catastrophic; and a level of complexity that makes it difficult to determine how these aggregate outcomes arise from the behavior of the participants. While the conventional economics toolkit has made little progress on these issues, a number of crucial tools for reasoning about them have been developed over the past several years by researchers at the interface of computer science and economics. There are natural reasons for this: over the past decades, both disciplines have been trying to design and analyze complex interconnected systems, with adaptive agents, in the presence of incentives. Furthermore, both disciplines are concerned with the consequences of agents interacting through networks. As a result, the two fields have increasingly interacted, with a strong research interface forming between them. This inter-disciplinary area of CS/Econ has had a number of significant successes, providing important insights both into new styles of economic interaction facilitated by computing technology, and into fundamental research on economic systems more broadly through computational ideas and models. Beyond this, work in CS/Econ has contributed to the development of new kinds of markets, such as the market for search advertising.

Research at the CS/Econ interface is concentrated on three themes: networks, mechanisms, and individual decision-making.

Computer science has for many years been concerned with the performance of systems in which agency is distributed across some network. While interest originally focused on networks of machines, CS in recent years has become interested in networks with human actors. Issues relevant to market breakdown include the effects of network topology on information and liquidity flows in markets and the contagion of market collapse. Little is known about the co-evolution of individual behavior and network structure as agents seek out advantageous network connections. Issues relevant to market creation include the possibilities new technology affords for the creation of new markets and the reorganization of older markets. Obvious examples include online auctions (eBay), job search, and other business-related social network sites (monster.com, LinkedIn). Equally fascinating is research conducted by a community of scholars, including economists and computer scientists, on the economic effects of cell phones in rural Africa and India.

Markets are important exemplars of resource allocation mechanisms, and market design is a special case of the general problem of mechanism design, which considers the problem of institutional design, that is, designing incentives to guide the behavior of self-interested agents toward a collective goal. Mechanism design had been a popular research topic among economic theorists; more recently it has become important to computer scientists, and CS researchers have raised a new set of questions that are now capturing the interest of economists. These questions include the computational feasibility of mechanisms, the robustness of mechanisms to bad behavior by individuals and to environmental shocks, the identification of second-best mechanisms when mechanisms that actually achieve the social goals do not exist, and the analysis of mechanisms under a wide variety of behavioral postulates that go beyond the classic decision-theoretic models that economists favor.

Even before the satisfactory axiomatization of the now-dominant expected utility theory emerged in the early 1950s, dissenting economists were decrying its limitations. Nonetheless, it was only in the 1980s that the discussion moved from a few well-conceived examples to a systematic critique. Despite this critique, however, few decision-theory models have emerged that are sufficiently expressive to model alternatives to the behavioral hypotheses that comprise EU and sufficiently tractable to deploy in problems such as dynamic choice and portfolio choice, where the structure of the decision problem is complicated. Computer scientists bring new problems and solutions to the table. Issues include the design of computationally feasible heuristics for complicated choice problems, machine learning models for the analysis of high-dimensional data sets, principles of learning other than the Bayesian formulation that dominates economic analysis, and models of knowledge and belief alternative to the probabilistic model underlying dynamic expected utility.

The interaction between computer science and economics has not been ignored by the NSF. In particular, the awkwardly named CISE-CCF ICES program, "Interface between Computer Science and Economics and Social Science," is now collecting its first round of proposals. But the difficulties of enabling the emergence of a new research community extend beyond sources of available research funding. A pervasive challenge in this area is the lack of people who have expertise in all the different facets of reasoning about complex economic systems, including their interconnectedness, feedbacks, and the sources of their inherent complexity. The shared interests of computer science and economics can only be fully explored by a new generation of graduates who are well-trained in both disciplines. An NSF-sponsored conference on the emerging collaboration between economics and computer science, surveying current work and exploring future possibilities, was held at Cornell University in 2009. (Incidentally, this conference also celebrated the birth of the Cornell Center for the Interface of Networks, Computation, and Economics.) The final report expands on some of the themes discussed here.

The research program described here is part of a broader theme that has captured attention in different parts of the economics community: that institutions (sometimes) matter. A proper study of the transition between a given market's well-functioning and disordered regimes depends on the details of market organization, and this includes informal social arrangements governing market organization as well as formal transaction rules. The study of these arrangements is an active research area in sociology, and sociologists and economists have been exploring their shared interests for decades now. Sociologists have also been collaborating with computer scientists in the study of on-line communities. There is every reason to believe that these three disciplines together will have interesting things to say about the behavior of disordered markets.
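The contagion questions raised above can be made concrete with a toy example: a threshold cascade on a random counterparty network. The network size, link probability, failure threshold, and seed failures below are all assumptions made for illustration, not parameters from any calibrated model in this literature.

import random

random.seed(0)

N_BANKS = 100
P_LINK = 0.05      # probability that any two banks are counterparties (assumed)
THRESHOLD = 0.25   # a bank fails once this share of its counterparties has failed

# Build a random undirected counterparty network.
neighbors = {i: set() for i in range(N_BANKS)}
for i in range(N_BANKS):
    for j in range(i + 1, N_BANKS):
        if random.random() < P_LINK:
            neighbors[i].add(j)
            neighbors[j].add(i)

# Seed a few initial failures, then iterate the threshold rule to a fixed point.
failed = set(random.sample(range(N_BANKS), 3))
changed = True
while changed:
    changed = False
    for bank in range(N_BANKS):
        if bank in failed or not neighbors[bank]:
            continue
        share = len(neighbors[bank] & failed) / len(neighbors[bank])
        if share >= THRESHOLD:
            failed.add(bank)
            changed = True

print(f"{len(failed)} of {N_BANKS} banks fail in the cascade")

Even this crude model exhibits the hallmark of fragility discussed above: small changes in connectivity or in the threshold can move the outcome from a handful of failures to a system-wide cascade.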

REFERENCES
Blume, L., D. Easley, E. Kalai, J. Kleinberg, and E. Tardos, "Research Issues at the Interface of Computer Science and Economics: Report on an NSF-Sponsored Workshop," available online at http://www.cis.cornell.edu/conferences workshops/CSECON 09/post-workshop.pdf.


September 17, 2010

To: Dr. Myron Gutmann, Head, Directorate for Social, Behavioral and Economic Sciences, National Science Foundation
From: Michael J. Boskin, Tully M. Friedman Professor of Economics, Stanford University
RE: Request for ideas about NSF Grand Challenges

Dear Myron and Colleagues:

This brief note is in response to your request for ideas about possible NSF grand challenges in economics over the next twenty years. Before presenting a few ideas, let me congratulate you for being proactive and focusing on a long time frame, rather than purely reactive.

Let me begin by saying that I think of the National Science Foundation's economics program as the core component in funding basic economics research of potentially broad applicability, as opposed to very narrow, program-specific research, for which funding is available from other government agencies and private sources. I also believe economics as a discipline is in better shape than recent commentaries would suggest. I do not believe the primary causes of the housing bubble and severe recession were big gaps in economic knowledge. Nor do I think that the evidence of little response to the 2008-2009 stimulus bills is inconsistent with much of what is known in economics. I think the problem has been much more the failure to implement policies based on sound economics. Excessively loose monetary policy was the biggest cause of the bubble. The serial social engineering of housing was a predictable off-budget political response to budget restrictions. The failure of the fiscal stimulus was a playing out of permanent income theory, and of the tiny, perhaps negative, multipliers in New Keynesian theory when short-run fiscal stimulus is accompanied by expectations of large future expenditures and taxes beyond the period when the Fed is at the zero lower bound on interest rates. Indeed, I think the main issue for economics is explaining why frenzies, manias, and bubbles persist and why people keep thinking, in the words of Ken Rogoff and Carmen Reinhart, "This Time is Different." Perhaps the answer is a combination of neuroscience and the difficulty of judging when a boom has really become (or almost become) a bubble.

All that said, economics still has plenty of challenges. I would put these in five interrelated categories: theory, measurement, econometrics, aggregation, and people. Let me briefly say a word or two about each.

Theory. In my career, which now spans four decades, I have generally focused on applying theory and econometrics to real-world problems and policies. What theory I have developed myself would more properly be called applied theory, not fundamental; and while early in my career I developed some new econometric estimators, they were for use on particular problems. But we need to think of economic research as a portfolio. For those who think as I do that it is important to input the best economic research into the design and implementation of economic policy, at least as much as can be done given political and other constraints, economic theory is more than just an exercise for bright people. It has real uses. As an example, recall my discussion opening this letter on permanent income theory, or alternatively life cycle theory, vs. short-run Keynesian consumption out of disposable income, and the implications for the design and evaluation of policy responses to deep, long-lived recessions. So I have always viewed, and indeed encouraged in my own department, economic theory as the R&D part of the portfolio, some fraction of which will help generate new technologies that are useful in analyzing more applied problems. This should continue to be an important part of the NSF portfolio.

Measurement. Despite numerous improvements made by our government statistical agencies, many economic statistics and data lag behind, in some cases considerably behind, modern economic theory, research, and measurement. Sometimes the shortcomings are immensely consequential when compounded over a long period of time. Some of this is certainly inevitable in a flexible, dynamic, constantly evolving economy. Some of it reflects agency budgets, institutional issues, and a proper reticence to adopt new procedures until they are extremely well tested. Of course, there is considerable ongoing research in academe and within the statistical agencies themselves, some part of which is shared and eventually incorporated into government and private measurement. Important recent examples are the New Architecture for the National Income Accounts, and new approaches to fiscal measurement and analysis, where in each case advances in recent years from a variety of agencies and academe have produced a set of important ongoing revisions and perspectives. I believe a renewed effort on the theory underlying basic measurement, importantly including the units of account in the real world, which have changed radically due to demography and social and economic patterns, would be well worth the investment.


I can certainly report that increasing the accuracy of economic data would be most welcome in the private sector and among policy makers. The measurement issues are closely linked to the discussion of aggregation below.

Econometrics. The structural revolution in econometrics was an immense improvement. We have a much greater understanding of the likelihood that we are identifying something that we think of as a relevant economic parameter rather than a reduced-form mishmash that is scientifically hard to interpret. The development of controlled experiments, most extensive in recent years in development economics, and the related econometrics are also important arrows in the economist's quiver. But they raise difficult questions. The experimental studies, often analyzing differences in differences, beg the question of whether the measured responses are permanent or transitory, and whether the policy interventions are deemed likely to be temporary or permanent, among a host of other factors (see Angus Deaton's piece in a recent Journal of Economic Literature). The structural revolution in econometrics, while on balance an immense improvement, leaves economists with a very thorny problem: the cases in which we have a truly convincing identification strategy, with instruments that we would widely agree are appropriate, tend to be few. Ingenuity and data development have sometimes solved this problem, but on many of the big issues of economics convincing instruments remain elusive. For example, on the effects of public debt on growth, there are several suggestive studies (Reinhart and Rogoff, for example), and certainly official agencies make estimates and use them in their policy prescriptions, recommendations, and critiques. The IMF, for example, bases its entreaty for the U.S. to embark on a larger, quicker fiscal consolidation on its estimate that each 10% increase in the debt-GDP ratio decreases the growth rate by almost one-quarter point; but again, a really convincing identification of cause and effect is far from established. I would like to see renewed emphasis on alternative econometric approaches to confronting these issues, perhaps deriving from a decision-theoretic framework (as the minimax regret principle led to Stein estimators in statistics).

Aggregation. The most commonly used model in economics, e.g. in macroeconomics and public finance, is that of a representative consumer. That is convenient, a good teaching device, and on some occasions a decent rough first approximation. But it runs up against some serious problems. In macroeconomics, for example, Euler equations don't aggregate if you have two classes of consumers, say patient and impatient, with very different saving propensities. It is unclear for what purposes, and under what conditions, a weighted average works or does not work. And we know from microeconomic data, say household data such as the Survey of Consumer Finances or the Consumer Expenditure Survey, that observationally equivalent households in terms of family composition, age, education, location, etc., can have very different patterns of behavior. Importantly, as demography evolves, we are going to have a larger and larger fraction of the population dissaving late in life relative to those saving (for college education for their kids, retirement, etc.) earlier in life. Low interest rates are good for borrowers, terrible for savers. Capital income taxes affect savers and dissavers differently, etc.
There has been some work on trying to analyze how far off one gets from adopting representative agent models, but it seems to me that much more work needs to be done in this area.
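To see the Euler-equation point concretely, consider a minimal two-type sketch (the notation here is assumed for illustration, not taken from the letter). With log utility, each type i in {P, I} (patient and impatient, with discount factors satisfying beta_P > beta_I) satisfies its own Euler equation:

\[ \frac{1}{c_{i,t}} \;=\; \beta_i (1+r)\, E_t\!\left[ \frac{1}{c_{i,t+1}} \right], \qquad i \in \{P, I\},\quad \beta_P > \beta_I . \]

Under certainty this implies \( c_{i,t+1} = \beta_i (1+r)\, c_{i,t} \), so aggregate consumption \( C_t = c_{P,t} + c_{I,t} \) grows at rate \( (1+r)\left[\beta_P s_t + \beta_I (1-s_t)\right] \), where \( s_t \) is the patient type's consumption share. Because \( s_t \) drifts toward one over time, no constant "representative" discount factor reproduces the aggregate dynamics: a single Euler equation in \( C_t \) holds only in knife-edge cases.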


More generally, one can think of the economy as a complex dynamic interaction of a variety of agents making decisions under incomplete information, uncertainty, and differing beliefs (an idea that has such distinguished forebears as Hayek and Solow). That's not easy to model. It is not clear how far off the representative agent models are in addressing many concerns. They may be a good first approximation in many, perhaps even most, cases. But it seems to me important to delve much farther in this direction.

People. Finally, let me make my most important point. The future of economics research, and the social value of that research and development, depend most of all on attracting bright, creative, energetic, committed scholars to the serious study of economics and to a career in cutting-edge economic research. NSF funding has played a valuable role in that regard, and I would view this as the single most important aspect of the program: it attracts, helps retain, and, importantly, frees time from teaching and administrative duties for productive scholars advancing the frontiers of economics. This is more important than defining in advance specific areas into which to allocate research dollars. I certainly feel strongly about the areas mentioned above, but the general support of economics research and the research enterprise is likely to continue to be the most valuable public good financed by NSF.

In this regard, let me also say something about interdisciplinary research. It is in vogue; in fact, it is a large part of Stanford's capital campaign and has lots of people excited. There certainly are some areas where it is important: the typical applied medical researcher is now a systems engineer. But there are innumerable examples historically of vast amounts of funding poured into interdisciplinary research that wound up producing very little. Scholars were cobbled together but didn't produce much serious research. Almost always, the best research on topics and issues that span disciplines is done by great scholars in the core disciplines, not by people trained in interdisciplinary issues. The best environmental economics is done by high-quality economists. Interdisciplinary programs generally do not attract scholars of as high quality as the core disciplines. There certainly are some potentially valuable areas, and it is important for NSF to be funding them, but an overemphasis on interdisciplinary research at the expense of continued R&D investment in core economics is likely to provide some short-run popularity at the expense of the serious long-run mission of the NSF Social, Behavioral, and Economic Sciences Directorate.

I hope these comments are useful. If I may be of any further assistance, please do not hesitate to call upon me.

Sincerely,

Michael J. Boskin

MJB:jb


Future Research in the Social, Behavioral, and Economic Sciences with the Panel Study of Income Dynamics

15 October 2010

Charles Brown, University of Michigan
Dan Brown, University of Michigan
Dalton Conley, New York University
Vicki Freedman, University of Michigan
Kate McGonagle, University of Michigan
Fabian Pfeffer, University of Michigan
Narayan Sastry, University of Michigan
Robert Schoeni, University of Michigan
Frank Stafford, University of Michigan

Abstract: There are extraordinary opportunities to address the next generation of research challenges in the social, behavioral, and economic sciences that build on the Panel Study of Income Dynamics (PSID). First, PSID offers untapped opportunities to examine questions of relevance to our understanding of environmental sustainability. Second, cross-national harmonization of PSID with other national panel surveys will be instrumental for developing and facilitating new research on the effects of policies and institutions. Third, measuring genetic information in PSID will open a wide range of new studies on social and economic behavior and outcomes. Advances in these areas will provide a foundation for future research and for new interdisciplinary collaborations.

Abstract word count (200 word limit): 108 words
Main word count (2000 word limit): 1950 words


Introduction

There are extraordinary opportunities to address the next generation of research challenges in the social, behavioral, and economic sciences. We describe three such opportunities relating to the Panel Study of Income Dynamics (PSID) that focus on human-environment interactions, cross-national research, and genetics. PSID is the longest-running nationally representative panel survey in the world and is an important component of the National Science Foundation's investment in research infrastructure for the social, behavioral, and economic sciences.

PSID and Human-Environment Interactions

First, PSID offers untapped opportunities to examine questions of relevance to our understanding of environmental sustainability. Sustainability science has emerged as an important paradigm for investigating the bi-directional linkages between human actions and natural-environmental processes, with the goal of helping to solve a variety of environmental problems. The existing NSF Program on the Dynamics of Coupled Natural and Human Systems aims to investigate these problems, with a particular emphasis on modeling approaches. As this program and others like it have matured, research results have highlighted the need for data on the longitudinal, economic, geospatial, cultural, and behavioral dimensions of human activity that can complement process-level understanding and data in the natural sciences. Although some new social science data collection programs are emerging as part of existing or planned environmental observatories (like NEON and WATERS; Braden et al. 2009), social science data collection efforts are still not implemented at the scale envisioned for these systems. PSID offers a unique opportunity to demonstrate the value of individual-level longitudinal information in the investigation of the human role and response in environmental systems. We identify two major areas of environmental research that might benefit from engagement with and expansion of PSID. We believe that pursuing this research will not only generate important new understandings of the role and response of humans in environmental processes, but also demonstrate to the scientific community and to society the value of further investment in significant empirical social scientific research.

One important area that can benefit from existing and expanded PSID data is our understanding of the social and economic determinants of energy, water, and materials consumption. Although not measured directly in PSID, consumption in these areas is strongly related to existing PSID data (including information on travel, housing, utilities, and location). Some additional information, through new questions or a new module, would allow investigation of the direct consumption variables and their association with these more indirect measures. Furthermore, existing information on various expenses, contributions to charities, income levels, education, and so forth can be used to investigate the social, economic, and cultural determinants of consumption behaviors. Influences of intergenerational processes on consumption behaviors might also be investigated. An important outcome of these investigations might be a better understanding of which changes in the economic, informational, or natural environments would be most likely to yield changes in behavior.


Another important area for investigation and application of the PSID is the locational characteristics of participants' places of residence, work, and travel. These studies would likely be most profitable in combination with environmental data on land use and cover, linked through the geocode. They could be pursued at multiple scales: focusing on demographic processes of migration and urban-rural movements; on neighborhood social and physical characteristics, to understand residential preferences across a number of land markets; and at lot scales, to investigate characteristics of residential land consumption and management. Detailed information on housing- and land-related expenditures, house and land values, and moves can be used to better understand the residential choices of participants, how those choices are influenced by contextual factors at multiple scales, and, ultimately, how those choices influence the development of the urban forms and structures we observe.

PSID and Cross-National Research

Second, in today's increasingly interconnected world, many industrialized countries face common challenges, such as population aging, the integration of immigrants, and social and economic inequalities. Although many of these problems affect modern nations similarly, their severity and impact on individual lives can differ markedly across countries. High-quality, nationally representative data that cover a wide array of topics from different spheres of life have already served to illuminate some of the common challenges as well as their differential consequences. Many of the most pressing concerns, however, can often be captured only from a dynamic perspective. For instance, the complexity of educational careers and life-long learning in the knowledge society, and transitions into and out of unemployment and poverty in times of economic downturn, require this dynamic perspective and consequently rely on longitudinal data. Beyond description, future research must strive to identify policy solutions that have proven successful in other nations. There is an ample supply of novel policy approaches and alternative institutional arrangements around the globe; in a globalized world, nations may serve as the new laboratories of social and economic policy. However, studying the causal role of specific aspects of different policies and institutions through cross-national comparative research suffers from an important inherent problem: there are many more explanations for cross-national differences than there are countries to compare.

There are several strategies for investigating the role of institutional characteristics and policies in explaining observed cross-national differences. One promising strategy begins by reliably establishing the individual-level mechanisms that account for the observed phenomena in each nation. Understanding why given social or economic phenomena occur increases our chances of understanding how a given policy or institutional arrangement may affect these phenomena. For example, detecting barriers to educational access among disadvantaged children yields important information for inferring why different forms of educational financing do or do not impact educational opportunities.
In short, pinning down the causal mechanisms that are at work at the individual-level is integral to our effort to make meaningful cross-national comparisons that have the potential to identify a best practice policy or institutional arrangement that may be transferable to a different nation.

71

Large-scale longitudinal surveys provide a strong foundation for the study of the causal mechanisms underlying a wide range of social and economic dynamics. The most important data requirement for future research, however, is that of cross-national comparability, in terms of both sample construction and measurement. The social sciences have profited immensely from existing large-scale projects that provide comparable data for a number of nations. Different organizational models have been successful. Some international collaborations are dedicated to the ex-post harmonization of existing surveys (such as the Luxembourg Income Study), while others have accomplished ex-ante standardization of a set of core questions that are asked in a large number of countries (for instance, the World Values Survey). Another successful model, and one that may be predicted to gain in importance in the future, is that of harmonization by imitation, in which important data collection efforts in one country inspire and guide similar projects in other countries. For instance, PSID has served as a model for the German Socio-Economic Panel, the British Household Panel Study, and many other national panel studies. Because these data sources provide the most potent basis for cross-national research, the main future challenge will be to further increase the harmonization of measures between these datasets. So far, ex-ante harmonization has mostly involved informal cooperation among the founding survey administrators, while ex-post harmonization is beginning to take place in more formal initiatives, such as the Cross-National Equivalent File project. Although the latter efforts should be expanded, continued opportunities for ex-ante harmonization should be pursued wherever possible, for instance in the case of new panel surveys or new topical modules in existing surveys. NSF support will be instrumental for developing and facilitating cross-national data harmonization. This will improve the foundation for fruitful cross-national comparative research by providing high-quality, partially harmonized, nationally representative, longitudinal data that not only allow a dynamic view of important social and economic phenomena but also facilitate the search for best practices that have proven successful in other countries and that hold promise for application in the U.S.

PSID and Genetics Research

Third, from the days of Francis Galton's eugenic theories of the heritability of intelligence and criminality through the controversial, bestselling book The Bell Curve (1994), introducing genetics into discussions of human social behavior has been morally suspect. This has led to an intellectual firewall between mainstream social science and biological data. Although there has recently been increased interest in collecting biomarkers in general, and genetic data in particular, in social science surveys, no existing social science study collecting genetic information is intergenerational in nature, nor are these studies nationally representative samples of the entire adult population across the age spectrum. Further, the focus of such studies to date has generally been on health dynamics. Meanwhile, the U.S. studies that focus primarily on collecting social, demographic, and economic data have not yet embraced the integration of genetic information. The time is right for a nationally representative socioeconomic study to collect genetic markers.
And there is no study better positioned to maximize the intellectual return on investment in this area than the PSID. Only the PSID would provide a full-service socioeconomic dataset with gene markers. For example, no present study, aside from the PSID, provides an opportunity to obtain genetic information across three generations. If the PSID were to collect genetic markers, a number of important research questions could be answered, such as:

How does the distribution of haplotypes (unique sets of polymorphic markers in an individual) vary by race, class, and region in the U.S.? Has accelerated immigration since the 1960s affected this distribution? How much genetic in-breeding occurs in the U.S.?

How do the phenotypes and genotypes of our family and household environments affect individual outcomes? Does, for example, the expression of genetically based propensities toward depression depend on growing up with a depressed parent? Does the effect of an individual's genetic background on social and economic outcomes depend not only on the observed behavior of family members but also on their (unexpressed) genetic makeup? For example, it could be adaptive to have a putatively more emotionally reactive allele when one is the only offspring to be homozygous for this allele, thereby garnering more parental attention. If the behavioral phenotype of an individual is contingent not just on her/his own genotype but on that of her/his siblings, then this suggests non-independence of the units of analysis for classic heritability analysis.

How do genes interact with exogenous economic shocks? The basic logic until now has been the following: a certain proportion of a population sample is found to have a variant of a particular allele. If this allele is shown to be randomly distributed across demographic subgroups (or, for example, within a particular subgroup such as an ethnic group), and, likewise, it is found to be associated with a specific social outcome or tendency (such as addictiveness, shyness, or schizophrenia) within that same population (or subgroup), then researchers often look for specific outcomes that covary with the presence or absence of that particular allele. This has been the approach of most work to date in both the social and biological sciences using observational data. However, a problem is that alleles are not necessarily distributed randomly across sub-populations, potentially biasing the observed phenotypic associations with those alleles. PSID would allow for within-family (cross-sibling or cross-cousin) and across-time (within-person) analyses, which would alleviate some of these population stratification concerns.

In closing, the three research opportunities described above will provide a foundation for the next generation of social, behavioral, and economic research on human-environment interactions, cross-national research, and genetics. They will be a national resource for conducting transformative research that will also strengthen links between the social, behavioral, and economic sciences, on the one hand, and the environmental, development and learning, and genetics sciences on the other.

References

Braden, J.B., Brown, D.G., Dozier, J., Gober, P., Hughes, S.M., Maidment, D.R., Schneider, S.L., Schultz, P.W., Shortle, J.S., Swallow, S.K., and Werner, C.M. 2009. Social science in a water observing system. Water Resources Research, 45: W11301.

Herrnstein, R., and Murray, C. 1994. The Bell Curve. New York: Free Press.
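The population-stratification problem and the within-family remedy described above can be illustrated with a small simulation; the allele frequencies, the stratum effect, and the zero true genetic effect below are all invented for illustration. A pooled regression of the outcome on the genotype picks up the between-group confound, while the sibling-difference estimate does not.

import random
import statistics

random.seed(1)

FAMILIES = 2000
pooled = []          # (genotype, outcome) pairs across all individuals
sib_diffs = []       # (genotype difference, outcome difference) within families

for _ in range(FAMILIES):
    # The subpopulation ("stratum") shifts BOTH the allele frequency and the
    # outcome, so pooled genotype-outcome correlations are confounded.
    stratum = random.choice([0, 1])
    p_allele = 0.2 + 0.4 * stratum       # invented allele frequencies
    sibs = []
    for _ in range(2):
        genotype = sum(random.random() < p_allele for _ in range(2))  # 0, 1, or 2 copies
        outcome = 1.0 * stratum + random.gauss(0, 1)  # true genetic effect is zero
        sibs.append((genotype, outcome))
    pooled.extend(sibs)
    sib_diffs.append((sibs[0][0] - sibs[1][0], sibs[0][1] - sibs[1][1]))

def ols_slope(pairs):
    # Slope of a simple regression of the second element on the first.
    xs, ys = zip(*pairs)
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - xbar) * (y - ybar) for x, y in pairs)
    var = sum((x - xbar) ** 2 for x in xs)
    return cov / var

print(f"pooled estimate (biased upward):  {ols_slope(pooled):+.3f}")
print(f"sibling-difference estimate (~0): {ols_slope(sib_diffs):+.3f}")

Differencing between siblings removes anything shared within a family, including subpopulation membership, which is what makes the within-family designs that PSID supports attractive for the questions listed above.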


This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Modeling and Measuring Systemic Risk


Markus Brunnermeier, Lars Peter Hansen, Anil Kashyap, Arvind Krishnamurthy, and Andrew W. Lo

October 15, 2010

Abstract
An important challenge worthy of NSF support is to quantify systemic financial risk. There are at least three major components to this challenge: modeling, measurement, and data accessibility. Progress on this challenge will require extending existing research in many directions and will require collaboration between economists, statisticians, decision theorists, sociologists, psychologists, and neuroscientists.

Proposal

An important challenge worthy of NSF support is to quantify systemic financial risk. The recent financial crisis has focused widespread attention on systemic risk in the global financial system. It is neither feasible nor desirable to eliminate all aggregate risk; investment in risky ventures can be socially productive even when this risk cannot be diversified away. Calls for regulation based on concerns about systemic risk are premised on the concern that potential excess risk-taking within the financial system will lead to government bailouts when losses mount. Designing appropriate policy interventions that do not create perverse incentives for the private sector is important. However, any meaningful discussion and implementation of such policy requires better measurements, and better models of the role of financial markets in the macroeconomy to motivate or justify those measures. Key questions include: What components of the aggregate risk exposure of the private sector are problematic for a society? How might we measure these in meaningful ways, and what data can be used to support these measurements? What guidance do models provide on the best way for regulators and private agents to manage systemic risk?

Prior to the crisis, financial regulation around the world largely consisted of a patchwork arrangement, with a bevy of regulators overseeing various institutions and markets in isolation. No single regulator was responsible for looking across the global financial system and identifying vulnerabilities that might be building up from the complex interactions of actors throughout the economy. As Federal Reserve Chairman Ben Bernanke put it, "We must have a strategy that regulates the financial system as a whole, in a holistic way, not just its individual components."1 The global regulatory response to the crisis has followed Bernanke's dictum, creating various agencies and committees that are charged with monitoring and controlling these risks. In the United States, a substantial portion of the Dodd-Frank Wall Street Reform and Consumer Protection Act details how systemic risk should be regulated.

But fulfilling this objective will be extremely challenging. Currently, we lack not only an operational definition of systemic risk but also the data needed to measure it. Without the potential for measurement, the term "systemic risk" is mere jargon that could support the continued use of discretionary regulatory policy applied to financial institutions and lead to ad hoc policies that are inconsistent and fraught with unintended consequences. The transparency and rationality of regulatory policy would be greatly enhanced by thoughtful modeling and reliable measurement of systemic risk. Policy concerns along these fronts have been articulated by many, including former Fed Chairman Paul Volcker in a September 24th speech at the Federal Reserve Bank of Chicago. Unless we are able to measure systemic risk objectively, quantitatively, and regularly, it is impossible to determine the appropriate trade-off between such risk and its rewards and, from a policy perspective and social welfare objective, how best to contain it. This is the grand challenge that faces us today.

In the last decade a substantial literature has emerged that explores dynamic stochastic equilibrium models estimated by formal econometric methods. These models have gained considerable prominence in the research departments of central banks and have improved our understanding of price stability. In contrast, there is a much smaller literature on equilibrium models that include a role for financial market frictions and can speak meaningfully to financial stability. There is a sharp contrast between our understanding of price stability and our understanding of financial stability and systemic risk, where the gaps in our knowledge are much more pronounced.

There are at least three major components to the challenge of monitoring these risks: modeling, measurement, and data accessibility. Meaningful measurement requires a clear definition of systemic risk and thoughtful modeling of this construct. But modeling in this area is still primitive. We argue that systemic risk is a major social problem because of the potential for significant spillover from the financial sector to the real economy, yet existing models that identify externalities in the financial system with macroeconomic consequences are highly stylized and fall short of generating formal guidance for statistical measurement.

1 See Ben S. Bernanke, "Financial Reform to Address Systemic Risk," speech at the Council on Foreign Relations, Washington, D.C., March 10, 2009.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/, or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Thanks to basic macroeconomic models from decades past that motivated national income accounting measures, we can quantify the state of the economy in many ways. For instance, we know GDP growth (1.7% for 2010Q2), how non-farm payrolls have changed (95,000 in September 2010), the level of unemployment (9.6% as of September 2010), the number of housing starts (598,000 in August 2010), and the rate of inflation in consumer prices (0.3% relative to the previous month in August 2010). We can measure the current risk of the U.S. stock market through the implied volatility of the S&P 500 index (19.88% as of October 14, 2010). And we can measure the relative value of the U.S. dollar compared with other currencies (76.666 as of October 14, 2010). What is the current level of systemic risk in the global financial system? We cannot manage what we do not measure.

Some research does exist that builds on measures of the risk exposures of stochastic cash flows in asset pricing models, characterizing risk-return relations using statistical methods. This research has a long history, including discussions of volatility fluctuations and tail risk, but it is not tailored to the regulatory challenges going forward. The models required for measuring systemic risk will need to have quantitative ambitions of sufficient scope to confront the real externalities that are induced by financial market behavior. To support this new research agenda, additional data must be collected, and the newly created Office of Financial Research offers one promising avenue to meet this challenge. Also, the Census Bureau currently supports empirical investigations with confidential data, and it may be necessary to draw on its experience.

Given the complexity of the financial system, it is unlikely that a single measure of systemic risk will suffice. We anticipate that a variety of inputs, ranging from leverage and liquidity to codependence, concentration, and connectedness, will all be revealing. Moving beyond standalone inputs to a joint study will be difficult but is necessary if this task is to be achieved. The increased complexity and connectedness of financial markets is a relatively new phenomenon that requires a fundamental shift in our linear mode of thinking with respect to risk measurement. Small perturbations in one part of the financial system can now have surprisingly large effects on other, seemingly unrelated, parts of that system. These effects have been popularized as so-called Black Swan events (outliers that are impossible to predict), but they have more prosaic origins: they are the result of new connections between sectors and events that did not exist a decade ago, thanks to financial innovation and technological progress. A more integrated approach to studying these challenges will lead to an enhanced understanding of their economic interactions and statistical relationships. This will push modeling in new directions and reveal new challenges for measurement.

Existing research from a variety of areas may provide useful catalysts for this new research agenda, but it requires significant modification, extension, and integration. For instance, one intriguing approach to modeling the interaction of financial firms is to view the financial industry as a network. Network models have been used in a variety of scientific disciplines, including economics and other social sciences. When applied to financial markets, they capture direct spillover effects such as counterparty credit risk. The study of systemic risk also requires the study of indirect spillovers that occur through the prices that clear markets, because in a crisis these indirect effects might be even more potent. Nevertheless, a network structure, with the appropriate enrichments, promises to provide one way of better understanding the systemic consequences of the failure of key components of a financial network. Pushing this approach in quantitative directions will require building on prior research from other fields that features quantitative modeling and empirical calibration.

How individuals, firms, and other entities respond to uncertainty in complex environments remains a challenge in economics and other social sciences. Concerns about ambiguity and, more generally, the challenge of learning and assigning probabilities in complex environments motivate the study of alternatives to the simple risk-aversion model that has been a workhorse in economics. There is a variety of advances in decision theory, probability theory, and the cognitive neurosciences that give some guidance for how people do and should confront uncertainty, and there is scope for productive exchange with closely related literatures in sociology, psychology, and neuroscience. Converting these various insights into operational quantitative models is only in the early stages, but such models offer promise in helping us better understand the challenges of measuring systemic uncertainty.

Research on mechanism design and incentives in the presence of private information has been a demonstrably successful research program. This program, however, has been more qualitative than quantitative in nature. In the crisis, policy-makers had to fall back on qualitative models of systemic failure, such as the well-known Diamond-Dybvig model of bank runs. While these models have provided useful insights, policy could have been better calibrated if regulators had been able to rely on more sophisticated representations of the financial system. Going forward, insights from corporate finance and asset pricing, including research on asset prices that confronts financial market frictions, the nature and dynamics of liquidity, and corporate governance structures related to risk management, are critical to building rational and practical models of systemic risk. Mechanical models of market frictions run the danger of failing to provide reliable guides to behavior in response to changes in the underlying governmental regulation of financial firms.

As mentioned previously, there is an extensive literature on measuring risk-return relations using statistical methods. Along some dimensions, this literature is now quite advanced. It features time variation in volatilities, typically measured using high-frequency data. There are interesting extensions that confront tail risk using so-called Lévy processes as alternatives to the mixture-of-normals models that have been analyzed extensively. This line of inquiry may provide some valuable inputs going forward, but the systemic risk research challenge will require that this statistical literature be pushed in new directions: away from the problem of characterizing risk-return patterns and providing inputs into pricing formulas for derivative claims, and towards identifying and characterizing the systemically important components of existing financial enterprises. New measures of risk or uncertainty will need to confront and quantify the spillover effects that should be the target of regulation. High-frequency risk measures that are now commonly employed in the private sector and in academic research will have to be supplemented by low-frequency quantity information that measures the magnitude of imbalances that can trigger so-called systemic events. Finally, systemic risk presents an attractive and intellectually stimulating area of inquiry that will attract young researchers.

In summary, this is an exciting research challenge that can build upon a variety of previously disparate literatures to provide valuable insights, with major challenges going forward that involve collaboration among several disciplines in the SBE Directorate and beyond.
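As one minimal illustration of the codependence inputs discussed above, the sketch below computes how often a simulated firm and a simulated financial-sector index are simultaneously in their worst 5% of days. The factor structure and all parameters are invented, and this is a toy diagnostic rather than any of the specific measures proposed in the literature.

import random

random.seed(7)

T = 5000  # trading days of simulated returns

# A common factor induces codependence in the tails.
common = [random.gauss(0, 1) for _ in range(T)]
sector = [0.8 * c + 0.6 * random.gauss(0, 1) for c in common]
firm = [0.5 * c + 0.9 * random.gauss(0, 1) for c in common]

def lower_tail_cutoff(series, q=0.05):
    # Return the value below which the worst q share of observations fall.
    return sorted(series)[int(q * len(series))]

s_cut = lower_tail_cutoff(sector)
f_cut = lower_tail_cutoff(firm)
joint = sum(1 for s, f in zip(sector, firm) if s <= s_cut and f <= f_cut) / T

# Under independence the joint frequency would be about 0.05 * 0.05 = 0.0025;
# anything well above that signals tail codependence.
print(f"joint lower-tail frequency: {joint:.4f} (independence benchmark: 0.0025)")

A measure of this kind speaks only to pairwise tail comovement; the research agenda described above calls for joint treatment of leverage, liquidity, concentration, and connectedness, which no single statistic of this sort can deliver.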



Expanding Access to Administrative Data for Research in the United States

David Card, UC Berkeley
Raj Chetty, Harvard University
Martin Feldstein, Harvard University
Emmanuel Saez, UC Berkeley
Abstract We argue that the development and expansion of direct, secure access to administrative micro-data should be a top priority for the NSF. Administrative data offer much larger sample sizes and have far fewer problems with attrition, non-response, and measurement error than traditional survey data sources. Administrative data are therefore critical for cutting-edge empirical research, and particularly for credible public policy evaluation. Although a number of agencies have successful programs to provide access to administrative data most notably the Centers for Medicare and Medicaid Services the United States generally lags far behind other countries in making data available to researchers. We discuss the value of administrative data using examples from recent research in the United States and abroad. We then outline a plan to develop incentives for agencies to broaden data access for scientific research based on competition, transparency, and rewards for producing socially valuable scientific output. A Wealth of Administrative Data Governments create comprehensive micro-economic files to aid in the administration of their tax and benefit programs. The Social Security Administration (SSA), for example, records annual data on earnings and retirement and disability benefit payments for virtually the entire US population. State agencies collect quarterly earnings reports from firms on behalf of the Department of Labor for nearly all paid workers in the private sector. The Internal Revenue Service and the various state income tax administrations compile income data for all individuals and businesses. The Medicare and Medicaid programs record information on the health care services received by their beneficiaries. School districts record detailed information on academic outcomes, classes and teachers for all public school students. Counties record every real estate transaction. A rich archive of information covering most aspects of socio-economic behavior from birth to death, including education, earnings, income, workplace and living place, family composition, health and retirement, is recorded in administrative data. With the advent of modern computer systems, all these administrative data are stored in electronic files that can be used for statistical analysis. Indeed, government agencies are required to produce statistical reports that inform the public about their activities, and hence have already established statistical offices and set up the necessary files to produce such information. Eroding US Leadership Traditionally, empirical research in social sciences has relied on survey data sources such as the decennial Census, the Current Population Survey (CPS) or the Panel Study of Income Dynamics. In the post-war period the US led the way in the development of modern survey methods, and not coincidentally, in the development of statistical techniques for analyzing these data. The combination of data and methods established the nations dominant position in the conduct of empirical social science research. During the second half of the 20th century, the fields of political science, sociology, and economics were all revolutionized by US researchers using US-based survey data sources. Unfortunately, that dominant position is now at risk as the research frontier moves to the use of administrative data. Administrative data are highly preferable to survey data along three key dimensions. 
First, since full population files are generally available, administrative records offer much larger sample sizes. The full population earnings data from SSA or tax records are about 2,000 times larger than the CPS sample. Larger sample sizes can be harnessed to generate more compelling research designs and to study


important but relatively rare events, like a plant downsizing that affects some workers but not others, or a severe local weather event. Second, administrative files have an inherent longitudinal structure that enables researchers to follow individuals over time and address many critical policy questions, such as the long-term effects of job loss, or the degree of earnings mobility over the life cycle. Third, administrative data provide much higher quality information than is typically available from survey sources, which suffer from high and rising rates of non-response, attrition, and under-reporting.

Because of confidentiality and security concerns, administrative data cannot be made publicly available. However, numerous examples -- from the Centers for Medicare and Medicaid Services (CMS), from other countries, and from a variety of pilot efforts at federal, state, and local government agencies -- show that it is possible to provide researchers with secure access to de-identified administrative data, i.e., data that have been stripped of individual identifiers such as names, addresses, and social security numbers (a stylized illustration of de-identification appears at the end of this section). To the best of our knowledge, research access to de-identified data has never resulted in the improper disclosure of confidential information. The record shows that access can be achieved in a way that maintains the strictest standards of privacy while still allowing researchers direct access to individual records.

A leading example of the research impact of routine access to administrative micro-data is CMS. Many hundreds of medical studies each year use the agency's Research Data Assistance Center (ResDAC) to develop requests for micro-data files (including data protection plans), which are then reviewed by CMS. Routine access to Medicare and Medicaid files has enabled US healthcare researchers to maintain their global leadership position in the field and has yielded many important public benefits.

Outside the US, many countries have developed systems to allow access to administrative data for research purposes. In Denmark, for example, Statistics Denmark prepares de-identified data by combining information from administrative databases for approved research projects. The data extracts can then be accessed by researchers remotely (from any computer, including the researcher's office desktop) through a secure server. Researchers apply for data access through accredited "centers" at major universities, and access is provided through an open competition process based on scientific merit.

The availability of detailed administrative data abroad has led to a shift in the cutting edge of empirical research in many important areas of social science away from the United States and toward the countries with better data access. Because the US retains worldwide leadership in the quality of its academic researchers, US-based researchers are often involved in research using administrative data from other countries. However, this situation is less than ideal for at least two reasons. First and most important, many questions of central importance for US policy making cannot be tackled using evidence from other countries. Access to existing US administrative data is required to evaluate the effects of various specific US government policies, such as stimulus spending, on job creation and overall personal income. US public policy would be far better served by having top researchers focus on US policy issues using US data. Second, in the long run, the development of administrative data access abroad will foster the development of empirical and econometric research programs in those countries, in the same way that the development of US survey data was accompanied by great scientific progress in empirical methods in the social sciences in the United States in the 20th century.
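To make the de-identification idea above concrete, here is a deliberately simplified sketch: direct identifiers are dropped but replaced with a stable pseudonym, so the same person can still be followed across years. The column names, the example records, and the salted-hash scheme are our own illustrative assumptions, not any agency's actual procedure.

```python
import hashlib

import pandas as pd

# Illustrative only: the column names and the salted-hash scheme are
# assumptions for this sketch, not any agency's actual procedure.
SECRET_SALT = "held-by-the-agency-never-released"

def pseudonymize(ssn: str) -> str:
    """Map an identifier to a stable pseudonym so that records for the
    same person can still be linked across years after the direct
    identifiers are dropped."""
    return hashlib.sha256((SECRET_SALT + ssn).encode()).hexdigest()[:16]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["person_id"] = out["ssn"].map(pseudonymize)  # stable linkage key
    # Strip direct identifiers; keep the analytic content (year, earnings).
    return out.drop(columns=["ssn", "name", "street_address"])

# Two years of invented earnings records for the same (fictional) person.
raw = pd.DataFrame({
    "ssn": ["123-45-6789", "123-45-6789"],
    "name": ["Jane Doe", "Jane Doe"],
    "street_address": ["1 Main St", "1 Main St"],
    "year": [2008, 2009],
    "earnings": [41000, 43500],
})
print(deidentify(raw))  # same person_id on both rows, no identifiers left
```

The essential point is that the agency keeps the salt: without it the pseudonyms cannot be reversed, yet researchers can still link records for the same individual over time, which is exactly the longitudinal structure emphasized above.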
Regaining US Leadership

Over the years, the United States has developed a number of initiatives to provide access to administrative data for research, particularly in the fields of health and K-12 education. However, access to data on income and earnings is not as satisfactory, although some valuable initiatives exist. In principle, unemployment insurance records for many states can be accessed through the LEHD program at the Census Bureau, although only onsite at a Census Research Data Center. In recent years, SSA earnings data have been


accessed by researchers through internships or co-authorship with SSA researchers. The Statistics of Income division of the US Treasury has also launched a promising tax data access program for statistical research purposes. In all these cases, however, the lack of sufficient resources and cumbersome data access severely limit the research potential.

Based on experiences from other countries and these pilot initiatives, we believe that five conditions must be satisfied to make a data access program sustainable and efficient:

(a) fair and open competition for data access based on scientific merit;
(b) sufficient bandwidth to accommodate a large number of projects simultaneously;
(c) inclusion of younger scholars and graduate students in the research teams that can access the data;
(d) direct access to de-identified micro-data through local statistical offices or, preferably, secure remote connections;
(e) systematic electronic monitoring to allow immediate disclosure of statistical results and prevent any disclosure of individual records.

We emphasize that direct access to micro-data is critical for success. Alternatives such as access to synthetic data or submission of computer programs to agency employees will not address the key problem of restoring US leadership with cutting-edge policy-relevant research. Synthetic data are simulated micro-data constructed to mimic some features of the actual data. This approach is much less attractive than providing direct access to the full administrative data set because in practice it is virtually impossible for researchers to fully specify the contents of the ideal synthetic dataset in advance. The option of sending computer programs, while providing some data access, is also substantially inferior to direct data access because it does not allow for the inductive phase of data analysis that is critical for many empirical projects.

The Value of Competition

In principle, a centralized agency that obtains administrative data from all government branches, maintains it, and supplies de-identified data to approved research projects, as in the Danish case, is an attractive model. However, in the US this model is less attractive, for three reasons. First, relative to other countries, the US government is far more decentralized, with multiple agencies at three different levels covered by different privacy laws, and statutory limits on inter-agency data sharing. Second, there is a long tradition of distrust of centralized government in the US, and in particular of monopoly control by a single government agency. Any successful data access program must acknowledge the salience and value of this tradition. Finally, from the perspective of both privacy and efficiency, it would seem reasonable to leverage the existing statistical offices of US administrative agencies, both for their expertise and as a base for access to such confidential data.

We therefore believe that it is preferable to leverage the multiple-agency setting and the principle of interagency competition by allowing and encouraging different agencies to provide their own data access systems. This could be achieved by rewarding agencies for performance. Performance in scientific production is easily measurable via metrics such as peer-reviewed publications. Rewards to agencies could take the form of resources provided by the major research funders (NSF and NIH) that would help agencies strengthen their statistical offices and develop partnerships with researchers.
Currently, the main hurdle in the development of research partnerships between agencies and external researchers is the lack of internal incentives and the lack of dedicated agency resources. A well designed system would encourage agencies to improve their statistical capabilities and data access, subject to agency-specific rules that ensure the strictest standards of privacy. This model -- which closely parallels the model of the Centers for Medicare and Medicaid Services -- is much more robust than the centralized agency model, and would unleash the forces of innovation as agencies compete for the best research projects. This model can also be extended to private institutions that gather data valuable for research (such as utilities, for the analysis of energy and resource conservation) to create incentives for research


partnerships. Both government agencies and private institutions already have multiple business contracts for data work in which outside contractors access the data for a specific business purpose. Scientific research should follow the same model, with NSF or NIH funding researchers to carry out scientific projects with the data.

The Value of Cooperation

Experience from abroad and from the United States shows that there is tremendous value in carrying out research that merges data sources, for example educational data and earnings data. A centralized agency, as in Denmark, naturally allows such merging. However, starting from the decentralized landscape we have described, it should be possible to encourage partnerships between two government statistical agencies (or between a statistical agency and an external partner such as a non-profit or business) to accommodate research requiring merged data. Such cooperation will naturally arise if all parties can share the benefits of the scientific output. Precedents for this kind of cooperation exist even in the US. Recently, the Florida Department of Education has teamed up with the state UI agency to allow linking of student education records to subsequent earnings outcomes.

Another important case where data cooperation is valuable is the long-term analysis of randomized field experiments. Field experiments are a powerful but costly method for scientific evaluation of alternative policy choices, and the US was an early leader in the use of field experiments to evaluate negative income tax policies in the 1960s. The ability to systematically merge experimental data with administrative data can overcome difficulties of tracking, non-response, and under-reporting in conventional survey-based measures, and allow the analysis of long-term outcomes, hence substantially expanding the scientific value of randomized experiments at low cost.


White paper for NSF/SBE 2020: Future Research in the Social, Behavioral & Economic Sciences (October 14, 2010)

Title: Prize good research!

Authors: Gary Charness (UCSB) & Martin Dufwenberg (U of Arizona)

Abstract: We propose that rather than financing projects that have been proposed, the NSF should award prizes for research that has already been done.

Main text: NSF/SBE has invited individuals and groups to contribute white papers outlining grand challenge questions that are foundational and transformative. We consider the following question important: Which procedures should be used for evaluating and incentivizing research? We propose a new approach: rather than financing projects that have been proposed, the funding agency should award prizes for research that has already been done. We believe this proposal has several benefits and few disadvantages:

- As regards incentives for researchers to do good research, little will change, and any change will be for the better. Under current conditions, researchers have to make a case that they will do well in the future to get supported. Under our new proposal, they actually have to do well in the future to get supported (again).

- Under our scheme, evaluating applications will take less time and effort than now, since evaluators can (to some extent) view that work as delegated to the referees of the journals that have accepted the work.

- There will be less risk of mistakes; evaluating the quality of research already done is easier than evaluating research to be done, since published research has already gone through a review process.

In regard to the NSF's scope question #1, our proposal is important because whatever the goal for the research that NSF/SBE wishes to support, our method may improve the accuracy of getting there. In regard to scope question #2, the method should require less infrastructure for conducting the evaluation. Our proposal has one obvious drawback: young researchers may be disadvantaged if they have not had time to establish good track records. That takes more or less five years. Therefore we propose a junior exception: researchers less than five years out of their PhD may choose to apply for funding for research they propose to do rather than for prizes for research that they have already completed.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.



Market Design: Harnessing Market Methods to Improve Resource Allocation 1


Peter Cramton 2
15 October 2010
Abstract

The emerging field of market design applies auctions and matching to solve resource allocation problems. This paper focuses on auction design, the branch of market design where money is used to facilitate the exchange of goods and services. Within auctions, the paper examines applications involving government regulated resources. Who should use the scarce radio spectrum, and at what prices? How should electricity markets be organized? How should financial markets be regulated? And how should runway access be assigned at congested airports? All of these are important questions in major industries. Researchers in market design have made substantial progress in answering these questions over the last fifteen years. The efforts, although at the forefront of theory, have been closely tied to practice, and have involved interdisciplinary teams of economists, computer scientists, and engineers, all working to solve real problems. Despite this rapid progress, the field holds much promise to provide better answers in even more complex economic environments over the next two decades. The rewards to society from improved markets will be immense.

As deficits grow and baby-boomers age, governments face increasing challenges in making the best use of public resources. One successful innovation to improve the allocation of scarce public resources is for the government to harness market methods to improve decision making. The spectrum auctions, conducted by the Federal Communications Commission (FCC) since 1994, are an excellent example. These auctions, which arose from a collaboration of scientists with auction expertise, industry, and the FCC, have led to over $100 billion in new non-distortionary U.S. government revenue and, more importantly, have put the scarce spectrum resource into the hands of those best able to use it. This innovation has been a win-win for taxpayers, the companies participating in the auctions, and the hundreds of millions now enjoying advanced wireless communications services. The auction program has been replicated worldwide and remains a key example of effective government, both in the U.S. and abroad (Milgrom 2004).

The spectrum auction program stimulated key scientific innovations in our understanding of how to auction many related items. These innovations have been applied not just to spectrum auctions but to e-commerce, both public and private. The advances to date, while important, are only the tip of the iceberg. Tremendous opportunities lie ahead and will be realized in coming decades with further scientific advancement in auction design. Auction applications are rapidly expanding. Communication and computational advances have certainly played an important role, but the development of simple and powerful auction methods has been important too. Market designers now have a much richer set of tools to address more complex problems.
1 This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
2 Professor of Economics, University of Maryland. www.cramton.umd.edu.


One example is a package auction (or combinatorial auction), in which bidders can bid on packages of items (Cramton et al. 2006). In a package auction, bidders can express preferences for complementary items without running the risk that they will win just some of what they need. This is important, for example, in spectrum auctions in which different technologies require that the spectrum be organized in different ways. In the past, the regulator has been forced to decide how the spectrum is organized with a specific band plan, effectively deciding how much spectrum is available for each technology. A package auction enables the regulator to conduct a technology-neutral auction, which lets the bidders determine the band plan through their competitive bids. A good example is the recent spectrum auctions in Europe, in which the quantity of paired versus unpaired spectrum is determined in the auction, not by the regulator.

One of the challenges of package auctions is finding an effective way for bidders to convey preferences. There are simply too many packages to ask for preferences for all possible packages. A common approach is to begin with a clock auction. The auctioneer names a price for each product, and bidders respond with their most preferred packages. The price is then raised on all products with excess demand, and the bidding continues. This price-discovery process focuses the bidders' attention on the packages that are most relevant (a toy sketch of the clock phase appears below). Once price discovery is over, the bidders are in a much better position to submit any additional bids, as well as improve the bids already submitted. An optimization is then done to determine the value-maximizing assignment, as well as competitive prices that satisfy the stability constraints. Typically, there are many such prices, so a further optimization is done to find the prices that provide the best incentives for truthful bidding.

Package auctions are also proposed for auctioning takeoff and landing rights at congested airports, such as the three New York City airports. The goal of the auction is to make the best use of scarce runway capacity. Left to their own devices, airlines will overschedule flights during peak hours, creating congestion and costly delay. The package auction enables each airline to bid for its preferred package of slots. The resulting competitive prices motivate airlines to substitute away from expensive slots, either by shifting flights to less expensive times or by using larger aircraft to carry the same number of passengers with less runway use.

Another example of market design is electricity markets. Modern electricity markets are organized as a number of auction markets. The markets, taken together, are designed to provide reliable electricity at the least cost to consumers. Spot markets determine how much each supplier is generating on a minute-by-minute basis; forward energy markets enable customers and suppliers to lock in medium-term prices for electricity; and long-run investment markets coordinate new entry to cover any expansion in electricity demand. These auction markets must be carefully designed to work together to achieve the goal of least costly, reliable supply. Design failures can be quite costly, as the California electricity crisis of 2000-2001 demonstrated. When the stakes are high, an important step in market design is building prototypes and then testing those prototypes in the experimental lab or in the field before full-scale implementation.
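The toy sketch of the clock phase promised above follows. Everything in it (the valuations, the supply, the price increment, and the one-unit-per-product demand rule) is a hypothetical illustration of the price-discovery loop described in the text, not production auction software; real designs add activity rules, package bids, and a final winner-determination optimization.

```python
# A toy clock-auction price-discovery loop. All numbers are invented
# for illustration; the loop raises prices only on over-demanded
# products, exactly the process described in the text above.

SUPPLY = {"A": 2, "B": 1}   # units of each product available
INCREMENT = 10              # price step applied to over-demanded products

def bidder_demand(values, prices):
    """A bidder's most-preferred package at current prices: one unit of
    every product whose price is still below the bidder's value."""
    return {p for p, v in values.items() if v > prices[p]}

bidders = [
    {"A": 100, "B": 80},    # per-unit valuations (assumed)
    {"A": 90,  "B": 60},
    {"A": 70,  "B": 95},
]

prices = {p: 0 for p in SUPPLY}
while True:
    demand = {p: 0 for p in SUPPLY}
    for values in bidders:
        for p in bidder_demand(values, prices):
            demand[p] += 1
    over = [p for p in SUPPLY if demand[p] > SUPPLY[p]]
    if not over:              # no excess demand: the clock phase ends
        break
    for p in over:            # raise prices only where demand exceeds supply
        prices[p] += INCREMENT

print(prices)  # approximate market-clearing prices found by the clock
```

Running the sketch, prices rise on each product until demand falls to supply, which is the sense in which the clock focuses attention on the relevant packages.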
Design failures are all too common and persistent in government settings. For example, the Medicare competitive bidding program, which began over ten years ago, is still in a pilot stage and experiencing serious problems, in large part as a result of the implementing agency's failure to apply state-of-the-art methods and principles to the problem of how to price Medicare equipment and supplies. Good auction design in complex environments involves more than good intentions: it requires exploiting the substantial advances that we have seen in market design over the last fifteen years. The recent financial crisis is another example where the principles of market design, if effectively harnessed by regulators, could have prevented or at least mitigated the crisis. Such failures can involve trillions of dollars of cost to society, and certainly involve many billions.


One exciting aspect of market design is working at the forefront of theory and bringing that theory to practice. In both auctions and matching, solving real problems has proved to be an excellent way to develop new theory. The applications benefit from the improved markets, and the theory is enriched in the process. The process typically has involved scientists from several disciplines, especially economics, computer science, operations research, and engineering. New and powerful specialties have emerged, such as algorithmic game theory within computer science (Nisan et al. 2007).

Market design is a young and vibrant field. Specialized interdisciplinary conferences are common, even within research organizations that have historically focused on traditional fields. For example, the National Bureau of Economic Research now has an interdisciplinary market design group that meets annually. About one-half of this year's attendees were advanced doctoral students from elite research universities around the world. Courses in market design are now offered at many leading research universities, at both the undergraduate and graduate levels.

Over the last fifteen years, the emerging field of market design has demonstrated the power of harnessing market methods to allocate scarce resources. The process has involved interdisciplinary efforts among economists, computer scientists, and engineers, focused on solving practical problems of resource allocation. The societal gains from these efforts have been substantial in several major industries, such as telecommunications, energy, and transportation. The field holds much promise for future advances in both theory and application. Given the close and complementary connection between the science and the practice, these advances will produce substantial and lasting welfare gains to society over the next twenty years.

References
Cramton, Peter, Yoav Shoham, and Richard Steinberg (2006), Combinatorial Auctions, Cambridge, MA: MIT Press.
Milgrom, Paul (2004), Putting Auction Theory to Work, Cambridge: Cambridge University Press.
Nisan, Noam, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani (2007), Algorithmic Game Theory, Cambridge: Cambridge University Press.



Why Don't People and Institutions Do What They Know They Should?

David M. Cutler
Harvard University and NBER

Allegheny General Hospital is a 728-bed academic health center located just outside of Pittsburgh and serving the surrounding five-state area. The hospital is big and complex. In 2003, the medical and cardiac intensive care units at Allegheny saw 1,753 patients and placed 1,110 central lines -- tubes leading to a main artery to administer nutrition and monitor blood gases. That year, there were 49 Central Line Associated Bloodstream Infections (CLABs), resulting in 19 deaths. A CLAB rate of 4.4 percent is the norm for American hospitals, and certainly good enough for a hospital with many other pressing issues. But it was not good enough for the chief of medicine at Allegheny General. Over the next few years, the chief introduced several changes in the hospital's central line practice. The hospital standardized the placement and duration of central lines and authorized everyone involved in patient care to stop the process if a step was not followed. It monitored infections in real time and undertook corrective action when an infection was observed. The intervention worked. Within just three years, the rate of central line infections fell by 95 percent. Since the cost of a central line associated bloodstream infection is about $50,000 per case, the hospital saved nearly $2 million. 1
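These figures fit together with simple back-of-the-envelope arithmetic (our check, not the hospital's own accounting):

$$\frac{49}{1{,}110} \approx 4.4\%, \qquad 0.95 \times 49 \approx 47 \ \text{infections avoided per year}, \qquad 47 \times \$50{,}000 \approx \$2.3 \ \text{million},$$

consistent in order of magnitude with the savings cited above.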
1 Insert cite to Allegheny General.


So far, so good. The problem is what comes next. In economic theory, other hospitals observe what has happened at Allegheny General and imitate it, and health care as a whole gets cheaper and safer. But that has not happened. Despite widespread publication of the results at Allegheny General and a few like institutions, rates of hospital-acquired infection are going up. This problem is not a minor one. Nationally, about one in twenty hospital patients is harmed because of the care provided in the hospital, and medical errors are among the leading causes of death. Hospital-acquired infections cost the medical system about $30 billion annually. Hospital infection control officers -- every hospital has one -- are frustrated. They know that medical errors lead to death and higher cost, but they can't get their institutions to focus on the problem. Standards, monitoring, and the ubiquitous checklist exist in theory, but not yet in practice. Indeed, when asked how long it would take for checklists to diffuse throughout the medical system, the leading evangelist for them, Peter Pronovost of Johns Hopkins, replied, "At the current rate, it will never happen."

Throughout the medical system -- indeed, in every facet of life -- people and institutions do not do things that are valuable, inexpensive, and relatively straightforward to do. In addition to the central lines example, consider a few others:

- American automobile firms never found a way to match the quality practices of Japanese automakers, despite a willingness of Japanese firms to share best practices;

- Only 69 percent of Americans always wear a seatbelt when they drive, even though 95 percent of Americans believe that a seat belt would help them in an accident;

- Three-quarters of Americans prescribed a drug for a chronic condition have stopped taking the medication by one year later. Even when the drug is free, long-term adherence is low.


These examples share common features. In all cases, everyone agrees on the right thing to do. There is little serious debate about whether seat belts save lives and no debate that giving people infections is a bad idea. Further, the monetary costs of undertaking the actions are low. The monetary cost of the infection reduction program at Allegheny General was trivial, and fastening a seat belt costs nothing. And yet the actions are not taken. As the list illustrates, these features are common to many economic and social settings. I propose as a central question for the social and behavioral sciences the understanding of such problems: why do people and institutions not do things that are so obviously in their self-interest, even when they want to do so?

The literature in the social sciences has addressed this question in various guises. Behavioral economists have examined individual propensities to engage in different actions. Why do people not save for retirement or give up smoking? A major theme of that research is that people are prone to procrastination. People do not take their medications because the cost of not taking today's pill is trivial, if one will start taking pills tomorrow. This theory is relevant in some settings; there are demonstrated successes in getting people to save more by reducing the ability to procrastinate. But the theory is not right in all settings. When queried, hospital managers rarely announce that they will start infection control operations next month. Rather, they assert that they are already doing the best they can, the Allegheny General experience notwithstanding.

In sociology, the peer effects literature confronts similar questions. People wear their seatbelts if others around them do as well. Again, this theory has strengths. Smoking is clearly a social action, and so too are obesity and mood. But the theory fails in other settings. Allegheny General Hospital did not get better because like-minded people came to the conclusion that it had


to change, or because of peer interactions. It improved because the Chief of Medicine imposed changes. Further, the change had to be continually monitored and stressed, or gains made one month were at risk of being undone.

In organizational behavior, there is a large focus on principal-agent problems within the firm. The firm's manager wants to do something new, but does not want to create new problems while addressing existing ones. There are a variety of strategies that firms might use to surmount this issue. Organizational behavior specialists study the combinations of hiring, compensation, and promotion processes that lead to better and worse outcomes. The question then becomes why some firms successfully tackle the problem and others do not. Aside from a specific person, what is different about Allegheny General relative to the thousands of other US hospitals that still have high rates of hospital-acquired infections?

All of these disciplines are right in some circumstances, but they all have limits. What we need to make progress is a scientific study of doing the right thing: what makes the right outcome happen or not, and what are the barriers to repeating success? I do not know what the answer to this question will be. But I believe there are some ways to address it. Three features of inquiry strike me as particularly salient.

First, we need to better understand how people view their social environment. Some people are motivated by the desire to fit in with others; they go with the flow as much as possible. Others have a strong moral compass to always do what they perceive as right. Still others are motivated to be at the top of the hierarchy, or to avoid being at the bottom. What are the characteristics of people in each of these groups? Do people of similar types cluster together, or do different types co-exist? To date, we are not good at this type of measurement. Analyzing


individual personality is likely to involve standard survey methodology, but the type of questions asked will be different from what is usual.

Second, we need to understand the processes of group decision-making. When people in an organization disagree about the best strategy, how are decisions made? Initial decisions are often made in a top-down setting -- witness Allegheny General -- but they are sustained by a culture of individual belonging and empowerment. Even at Allegheny General, infection control would not happen if every nurse and every doctor did not participate. When asked, employees describe it as part of the culture. How are organizational cultures born, and how do they spread? Not all cultural changes are the same; many firms have imitated Toyota's production methods, for example, but not all have been successful. For this type of analysis, we will almost surely need new measurement techniques. There is relatively little literature on how to characterize an organization, as opposed to a collection of individuals.

Third, we need to conduct experiments to understand different theories of behavior and test different interventions. The most influential studies in economics have come from interventions: changes in the information people possess, the incentives they face, or the environment they operate in. The use of experiments has revolutionized the study of economic development, labor economics, and health economics, to name just a few areas. Experiments are by nature costly and time-consuming. But the return more than justifies the cost.

To see this, return to the health care example. In the past 18 months, the United States engaged in a prolonged health care debate. One of the major points of contention was whether health care reform could bend the cost curve, that is, limit the increase in medical spending over time. If we can bend the cost trajectory, health reform will be a huge success. If we cannot, reform will be a failure, and we may well repeal the recent reform legislation.


Bending the cost curve is ultimately in the hands of institutions like Allegheny General. If hospitals in general can do what Allegheny General has done, overall medical costs will fall more than enough to pay for the promises made. Thirty billion dollars of medical errors, after all, is a lot of money to save. In contrast, if Allegheny General remains an outlier a decade from now, the health reform effort will have failed. At present, we know that savings are possible. If the social and behavioral sciences can turn possible into certain, or even probable, we will have contributed more than our fair share to improving human welfare.


A Challenge For the National Science Foundation: Broadening Black and Hispanic Participation In Basic Economics Research

William A. Darity Jr.* Gregory N. Price** Rhonda V. Sharpe***

Summary

This white paper considers the low participation rate of black and Hispanic Principal Investigators (PIs), and the distribution of awards by institution, among National Science Foundation (NSF) basic economics research grants. An analysis of NSF economics grants between 1990 and 2010 shows that black and Hispanic PIs received a very small share of awards and that 15 institutions received over 50% of the funds awarded. Such an outcome represents a challenge for science policy if indeed broadening participation is a serious objective. We conclude that NSF economics should 1) aim to cultivate and sponsor research that examines the causes and consequences of black and Hispanic underrepresentation among NSF economics grantees; 2) make concerted efforts to recruit proposal reviewers and proposal review panelists from a diverse set of institutions; and 3) incentivize broad participation and racial/ethnic diversity in basic economics research by penalizing institutions that do not achieve respectable levels of racial/ethnic diversity on their economics faculties.

October 15, 2010

_____________
*Sanford Institute of Public Policy, Duke University, 302 Towerview Rd., Durham NC, 27708, (919) 613-7336, email: william.darity@duke.edu
**Department of Economics, Morehouse College, 830 Westview Dr. SW., Atlanta GA, 30314, (404) 653-7870, email: gprice@morehouse.edu
***Division of Business & Economics, Bennett College for Women, 900 E. Washington St., Greensboro, NC 27401, (336) 517-2193, email: rsharpe@bennett.edu


As one of its core science policy goals, underscoring its commitment to promoting science in a racially/ethnically diverse society, the National Science Foundation (NSF) has gone on record as being committed to: "Broadening participation in terms of individuals from underrepresented groups as well as institutions and geographic areas that do not participate in NSF research programs at rates comparable to others." 1

Indeed, as the U.S. becomes increasingly racially/ethnically diverse, so should the pipeline

of potential and actual scientists. As research funds are a core input into the growth of basic scientific knowledge, the NSF's role as a provider of basic research funds is important to the community of scientists. Arguably, innovations and new discoveries in economic science have significantly transformed society over the past century. Many of these innovations and new discoveries have been funded in part by NSF grants from the Economics Program, the largest disciplinary program in the Social, Behavioral and Economic Sciences (SBE) Directorate at NSF. In this context, broadening participation to ensure that individuals from underrepresented racial/ethnic groups receive adequate basic research funding is a sound science policy goal.

Notwithstanding the importance of the NSF's Economics Program to sustaining the growth of knowledge in economic science, our analysis suggests that it falls far short of satisfying the NSF's goal of broadening participation in the basic research enterprise. As it currently stands, the participation rate of blacks and Hispanics -- as measured by the percentage of grants they received in recent history -- is in our view intolerably low and incompatible with a science policy goal of diversifying our nation's cadre of economic scientists engaged in the basic research enterprise.

A consideration of NSF economics awards made during 1990-2010 suggests that one challenge NSF faces is broadening the participation of black and Hispanic PIs in basic economic research. As it stands, the funding rate to black and Hispanic economic scientists mimics the

1 See: Broadening Participation at the National Science Foundation: A Framework for Action, National Science Foundation, Arlington VA, 2008.


apparent color line in the hiring of economics faculty in U.S. colleges and universities (Price, 2009). An analysis of economics awards by race/ethnicity also reveals the extent to which the NSF replicates existing racial/ethnic inequality in academia, namely the underrepresentation of blacks and Hispanics on the economics faculties of research universities (Price, 2009). 2

If one considers, for example, the number of economics awards made to black and Hispanic Principal Investigators (PIs), the results are rather sobering. Table 1 reports the black and Hispanic PIs we could identify over the 1990-2010 period from data made publicly available by the NSF. 3 Black PIs were identified on the basis of a roster of known black economists in academia, as reported in Price (2009). Hispanic PIs were inferred by selecting those individuals who received an economics award and who had either a recognizably Hispanic first name and/or surname. 4 Our analysis reveals that over the 1990-2010 period NSF awarded grants to 31 black and 50 Hispanic PIs respectively.

Table 2 provides an overview of the economics awards with respect to their distribution across race/ethnicity. 5 The shares of economics awards to black and Hispanic PIs were approximately 1.0 and 1.7 percent respectively. In dollar terms, black and Hispanic PIs received approximately 0.5 and 1.1 percent respectively of the more than 700 million dollars awarded over the time period under consideration. In general, the economics award distribution reported in Table 2 underscores a vulgar racial/ethnic inequality in access to basic research funds in economics. At best, a continuation of this funding policy by NSF will only serve to replicate the existing racial/ethnic inequality on the economics faculties of our nation's research universities. At worst, a continuation of this funding policy by NSF sends a signal that black and Hispanic research scientists are less capable and/or worthy of engaging in research that merits NSF support. As we
2 See Gregory N. Price. 2009. "The Problem of the 21st Century: Economics Faculty and the Color Line," Journal of Socio-Economics, 38(2), pp. 331-343.
3 These data are available at http://www.nsf.gov/awardsearch. We use award data for which only the Principal Investigator can be identified.

4 We recognize that this approach to imputing Hispanic PIs may impart an upward bias, as non-Hispanic females who marry a Hispanic male may retain a Hispanic surname. In addition, our counts of awards made to blacks and Hispanics could be downwardly biased, as our data only reflect individuals who were PIs and not Co-PIs.
5 As there were instances in which black and Hispanic PIs had multiple awards during 1990-2010, the percentages reported in Table 2 do not reflect the raw counts of the PIs identified in Table 1.


see it, this is a fundamental challenge for NSF economics. The award distribution data in Table 2 provide evidence of a vulgar color line in the funding of basic economics research that denies black and Hispanic economists an opportunity to be full participants in the funded research enterprise. It is our hope that in this 21st century, NSF economics will rise to this challenge and eradicate this apparent color line in the funding of basic economics research.

The analysis of awards by race/ethnicity provides insight into the pattern of awards important for the professional development opportunities afforded by NSF grants. However, it does not shed light on the pattern of awards to institutions that either do not house, or house only a small fraction of, black and Hispanic PIs. The examination of awards to institutions is an approach to identifying the effects of exclusionary social capital associated with particular institutions. Table 3 supports the findings of Feinberg and Price (2004) that National Bureau of Economic Research (NBER) membership matters with respect to NSF funding. Between 1990 and 2010, the NBER received 536 awards (18%), nearly 5 times as many awards as the second-ranked institution, Northwestern University, with 112 awards. 6 While the number of awards received by NBER is alarming, 15 institutions received 55 percent of all awards funded and 71 percent of all dollars allocated by the economics program (element code 1320).

The inequality portrayed in Table 3 suggests that Feinberg and Price's (2004) recommendation that NSF and other funding agencies consider supporting "alternative scholarly networks which could generate social capital for economists who are not NBER associates" 7 may not be a sufficient remedy alone. These institutional inequalities could indeed be a barrier to broadening the participation of black and Hispanic PIs as NSF Economics Program grantees. In the case of black economists, the institutions in Table 3 account for approximately 5 percent of all known black economists in academia. 8 The dominance of these institutions in the awarding of grants by the Economics Program thus crowds out the possibility of success by a broader cross-section of black PIs at other institutions. This is particularly ominous if these institutions have advantages
6 See: Feinberg, Robert M. and Gregory N. Price. "The Funding of Economics Research: Does Social Capital Matter for Success at the National Science Foundation?" The Review of Economics and Statistics 86.1 (2004): 245-252.
7 Ibid, page 9.

8 See Price (2009) for an itemized roster of black economists on the faculties of colleges/universities.


in the grant awarding process that are not necessarily tied to merit, but to the fact, say, that these very institutions dominate the ranks of NSF Economics advisory panels and external reviewers.

We recommend that NSF economics consider and implement strategies that would increase the fraction of awards going to black and Hispanic economists. This would include, for example, sponsoring research that explores the causes and consequences of the submission and success rates of black and Hispanic economists, which would inform strategies for broadening their participation in the basic economics research enterprise that requires funding. Additionally, we urge NSF to make a concerted effort to diversify advisory review panels, the population of external referees, and rotating program officers. To the extent that the low participation of black and Hispanic PIs reflects their absence on the faculties that typically receive economics grants, we encourage NSF to consider putting real teeth into its "Broader Impact" merit criteria for evaluating grants by penalizing institutions that fail to achieve respectable levels of racial and ethnic diversity on their faculties. The penalty could include not making grants to institutions that, for example, have either never hired a black or Hispanic economist or have a persistently small share of black or Hispanic economists. A simple perusal of Table 3 can reveal economics programs that have benefitted tremendously from NSF economics support, but have never had a black or Hispanic economist on their faculty. We suggest that this is very low "Broad Impact for the Buck." NSF can, and should, do better by incentivizing behavior that is consistent with good science policy in a society that is becoming increasingly racially/ethnically diverse.

While not reflected in our data, NSF Economics does warrant some noteworthy praise for its support of mentoring/education programs that sustain a pipeline of minority economic scientists. For at least 20 years, NSF economics has supported the American Economic Association Summer Minority Program. In more recent years, it has supported the Minority Pipeline Project and the Diversity Initiative for Tenure in Economics (DITE). Such support is indeed laudable, and if these programs are effective, they will significantly promote racial diversity in the supply of capable economic scientists. Nonetheless, our view is that minority economic scientists would also benefit from having fair and reasonable access to basic research support. We suspect that the ratio of research to non-research support to minority economists is


rather low, and should be increased to enable the minority pipeline catalyzed by NSF support to develop further into capable and effective research scientists. 9

However challenging the implications of our analysis are for NSF science policy, we are confident that our descriptive analysis of NSF Economics funding provides a useful framework for future science policy interventions that would make for a substantive Broader Impact as it relates to racial/ethnic diversity. Underlying our descriptive analysis is evidence that providing funding for basic economics research to black and Hispanic economists has beneficial effects on both the pipeline and the practicing community of minority economic scientists. Chung (2000), for example, demonstrates that an increase in the proportion of minorities on the faculty has the effect of enhancing confidence among minority students that they too can succeed as college professors. 10 Agesa, Granger and Price (1998, 2000) find that the research productivity of minority economics faculty is positively correlated with the production of minority graduates who go on to earn doctorates in economics. 11 Lastly, Price (2007) reports evidence suggesting that the receipt of NSF funding by minority economists has a substantial effect on their research productivity as measured by publication in refereed science journals. 12
All of these findings suggest that broadening the participation of black and Hispanic economists in the receipt of NSF economics grants would promote a valuable social goal by effecting racial/ethnic diversity in both the pipeline and the practicing community of research scientists.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
9 Collectively, we suspect that total NSF economics support for the AEA Summer Minority Program, the Minority Pipeline Program, and DITE between 1990 and 2010 is a nontrivial fraction of total research support to black PIs over the same period, which was $3,342,630 (see Table 2).
10 See: Chung, Kim-Sau. 2000. "Role Models and Arguments for Affirmative Action," American Economic Review, 90:3, pp. 640-648.
11 See: Agesa, Jacqueline, Maury Granger and Gregory N. Price. 2000. "Economics Research at Teaching Institutions: Are Historically Black Colleges and Universities Different?", Southern Economic Journal, 67:2, pp. 427-447; Agesa, Jacqueline, Maury Granger and Gregory N. Price. 1998. "Economic Research at Historically Black Colleges and Universities: Rankings and Effects on the Supply of Black Economists," Review of Black Political Economy, 25:4, pp. 41-54.
12 See: Price, Gregory N. 2007. "Would Increased National Science Foundation Research Support to Economists at Historically Black Colleges and Universities Increase Their Research Productivity?", Review of Black Political Economy, 38(1/2), pp. 87-109.

Table 1: Black/Hispanic Principal Investigators: 1990 - 2010

Black Principal Investigators

Name                  Institution                                    Economics Pipeline   # of Awards   Value of Awards ($)
Elizabeth Asiedu      University of Kansas                           No                   1             39,999
Sylvain Boko          Wake Forest University                         No                   1             12,500
Donald Brown          Yale University                                No                   1             225,771
Myra Burnett          Spelman College                                No                   1             51,905
Darnell Cloud         North Carolina A&T                             No                   1             20,000
Susan Collins*        NBER                                           Yes                  1             131,307
William Darity        University of North Carolina/Duke University   No                   4             701,992
Kaye Fealing*         Williams College                               Yes                  1             50,000
Roland Fryer*         Harvard University/NBER                        Yes                  1+Career      469,789
Maury Granger         North Carolina A&T                             No                   1             17,972
Peter Henry           Stanford University                            No                   Career        250,001
William Horrace       Syracuse University                            No                   1             108,000
Caroline Hoxby        NBER                                           No                   2             129,398
William Jackson       University of North Carolina                   No                   2             169,887
Philip Jefferson*     Columbia University                            Yes                  1             88,690
Yaw Nyarko            New York University                            No                   1             248,272
William Rodgers       Rutgers University/William and Mary            No                   4             360,994
Rhonda Sharpe         University of North Carolina                   No                   1             20,000
Kasaundra Tomlin      University of Central Florida                  No                   1             20,000
Ebonya Washington     Yale University                                No                   Career        73,858
Warren Whatley        University of Michigan                         No                   1             152,280

Hispanic Principal Investigators

Name                        Institution                                       Economics Pipeline   # of Awards   Value of Awards ($)
Fernando Alvarez            NBER                                              No                   2             386,666
Manuel Amador               Stanford University                               No                   1+Career      374,320
Andres Aradillas-Lopez      Princeton University/University of Wisconsin      No                   3             360,339
Ricardo Caballero           NBER/MIT/Columbia University                      No                   7             1,055,064
Graciela Cabana             None                                              No                   1             150,000
Ann Carlos                  University of Colorado                            No                   5             753,260
Kathryn Dominguez           NBER                                              No                   1             81,283
Linda Fernandez             UC-Santa Barbara                                  No                   1             20,367
Raquel Fernandez            NBER/New York University                          No                   5             966,363
Ivan Fernandez-Val          Boston University                                 No                   1             130,359
Jesus Fernandez-Villaverde  NBER/University of Pennsylvania/Duke University   No                   4             686,050
Edward Miguel               UC-Berkeley                                       No                   3             571,007
Jose Victor Rios-Rull       University of Pennsylvania/Carnegie-Mellon        No                   4             675,789
Julio Rotemberg             NBER                                              No                   2             216,670
Emmanuel Saez               NBER/UC-Berkeley                                  No                   2+Career      801,946
Xavier Sala-i-Martin        NBER                                              No                   2             428,779
Manuel Santos               Arizona State                                     No                   1             20,000
Jose Scheinkman             Princeton/University of Chicago                   No                   5             717,278

*Denotes alumni of the AEA Summer Program and Minority Scholarship Program


Table 2: Summary of NSF Economics Awards by Race/Ethnicity: 1990 - 2010

                                  Black        Hispanic     All Other
Number of Awards                  31           50           2861
Percentage of Awards              .010         .017         .973
Average Award Size ($)            107,826.5    161,686.2    241,561.9
Aggregate Value of Awards ($)     3,342,630    8,084,309    703,000,000
Aggregate Value Share             .005         .011         .984
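The quantities in Table 2 are simple counts and ratios over the public awards file. A minimal sketch of the computation follows; the three example records are invented placeholders, not actual NSF award data.

```python
# Sketch of the Table 2 computation; the records below are invented
# placeholders, not actual NSF award data.
from collections import defaultdict

awards = [
    ("black", 39_999),
    ("hispanic", 386_666),
    ("other", 241_562),
]

counts = defaultdict(int)
dollars = defaultdict(int)
for group, amount in awards:
    counts[group] += 1
    dollars[group] += amount

total_n = len(awards)
total_dollars = sum(dollars.values())
for g in counts:
    print(g,
          f"award share = {counts[g] / total_n:.3f}",
          f"dollar share = {dollars[g] / total_dollars:.3f}",
          f"average size = {dollars[g] / counts[g]:,.1f}")
```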

Table 3: Schools with the Largest Dollar Awards: 1990 - 2010

Organization                                   Aggregate Value of Awards ($)   Total Number of Awards   Percentage of All Awards
University of Illinois at Urbana-Champaign     127,092,200                     27                       0.92
National Bureau of Economic Research Inc       110,494,312                     536                      18.22
University of Michigan Ann Arbor               76,248,000                      67                       2.28
Stanford University                            17,786,924                      111                      3.77
National Opinion Research Center*              16,566,380                      31                       1.05
Northwestern University                        16,371,027                      112                      3.81
University of Pennsylvania*                    14,833,949                      86                       2.92
Princeton University                           14,260,936                      102                      3.47
University of Minnesota-Twin Cities*           14,110,274                      69                       2.35
University of Wisconsin-Madison*               14,020,081                      86                       2.92
New York University                            13,749,870                      84                       2.86
University of California-Berkeley*             13,390,484                      77                       2.62
Santa Fe Institute*                            12,974,533                      9                        0.31
Yale University                                12,874,816                      81                       2.75
University of Chicago*                         11,013,098                      68                       2.31
Harvard University                             11,000,721                      59                       2.01
Total                                          496,787,605                     1,605                    54.57

Source: NSF Awards Data Base, http://www.nsf.gov/awardsearch
*Denotes institutions that have never hired a black economist.


Three research themes

Peter Diamond 1

I am an applied theorist and policy analyst. I want to identify three areas of research that are important and have large potential payoffs. In all three areas I focus on needs and opportunities for theoretical analyses, since these are the areas and research methods with which I am familiar, without any intention to underplay the importance of other research inputs, including empirical and experimental work. The first, optimal taxation of capital income, is an area of steadily advancing normal science that is making significant progress. The other two, incorporating behavioral economics into equilibrium analyses and understanding systemic risk, are more foundational. It seems to me important for NSF to have a balanced portfolio, incorporating areas where important progress is likely and others where there is higher risk and the potential of more seminal advances.

Taxation of capital income

As long as income has been taxed, an issue has been how capital income should be taxed relative to labor income. The optimal tax literature addresses tax setting to accomplish social goals in light of the constraints and behavioral dimensions in the economy. Different mixes of social goals, different revenue needs, and different behavioral parameters call for different tax solutions. In other words, there is an equity-efficiency tradeoff which needs to be understood when thinking about how to accomplish social goals. While democratic governments will inevitably compromise among different goals, understanding the links among tax structures, behavioral parameters, and equilibrium outcomes is central to having an informed debate about tax policies.

The famous Mirrlees (1971) optimal income tax paper launched the modern analysis of progressive taxation of earnings. Since then, there have been repeated advances in theory, extending the framework to incorporate elements not present in the initial approach, and in understanding how to use the insights from theoretical, empirical, and simulation analyses for policy recommendations. As one example, I cite the analyses by Saez and by Judd and Su that moved beyond the assumption of a one-dimensional distribution of workers differing only in skill, not preferences, to a recognition of how to approach optimization for a higher-dimensional and so more diverse population. Another example is the work of Diamond, Saez, and Laroque incorporating an extensive margin (the participation decision) for lower paid workers as more important than the intensive margin (number of hours worked) in the response of workers to taxes. The assumptions used by Mirrlees did not allow for a relevant extensive margin. The Earned Income Tax Credit (EITC) approach of subsidizing work by low earners is consistent with optimal taxation with an important extensive margin, while such subsidizing of work is not part of an optimum in the Mirrlees model. With a better developed theory, we can better examine and analyze the choice of parameters for the EITC in a coherent theoretical framework.
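The flavor of these extensive-margin results can be stated compactly. In a stylized participation model along the lines of Saez (2002) -- our simplified notation, with intensive-margin responses assumed away -- the optimal schedule satisfies

$$\frac{T_i - T_0}{c_i - c_0} \;=\; \frac{1}{\eta_i}\,\bigl(1 - g_i\bigr),$$

where $T_i$ and $c_i$ are the tax paid and the consumption of workers in earnings group $i$, $T_0$ and $c_0$ the corresponding values for non-workers, $\eta_i$ the participation elasticity of group $i$, and $g_i$ the social marginal welfare weight placed on that group. If society values redistribution to low earners enough that $g_i > 1$, the participation tax $T_i - T_0$ is optimally negative: an earnings subsidy of the EITC type, which cannot arise in the intensive-margin Mirrlees model.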
1 Institute Professor, MIT. pdiamond@mit.edu


There is a standard model of taxes and labor supply in a single year that is the widely used starting place for analyzing earnings taxation, along with some recognition of intertemporal connections, as workers may forego some earnings in order to accumulate more human capital. While intertemporal concerns are real, they have not been viewed as so central as to undercut the value of the insights from the one-period analyses. In contrast, one cannot make sense of savings decisions and the proper role of the taxation of capital income without paying attention to the dynamic perspective that saving is being done for future rewards.

Further complication comes from the diversity of savings behavior identified in empirical studies. While the life-cycle model is the standard starting place for savings analysis, there is wide evidence of, and there are modeling advances reflecting, the limits on the applicability of this model. Neither precautionary balances in the presence of an uncertain future nor very large accumulations by some fit in the standard model. Moreover, savings behavior has been one of the prime areas in which behavioral economics has identified and documented widespread responses that are not consistent with the standard individual choice model.

While analysis of optimal taxation of capital income began in the mid-1970s, there has been significant, important progress in recent years. I cite the work of Farhi and Werning, of Golosov and Tsyvinski with various co-authors, and of Diamond and Spinnewijn. Analyses now recognize a key role for uncertainty about future earnings in optimizing capital income taxation. A start has been made on incorporating diversity of savings behavior, modeled as diverse preference parameters among standardly modeled savers. While Social Security analyses have considered a mix of underlying behavioral savings models, this has not yet happened with tax analyses. Thus the literature has not yet come to grips in detail with the best tax treatment of retirement savings, including 401(k)s, IRAs, Roth IRAs, and the Saver's Credit.

The fundamental question for this theme is how taxes ought to be set to best accomplish social goals. Taxation is extremely important for the functioning of an economy. Continuing support for advances in both theoretical understanding and empirical findings on behavioral responses is central to the potential for improving government policy. Improved access to government tax data would help this effort. Moreover, advances in techniques of analysis of intertemporal behavior under uncertainty will influence other areas of analysis.

Incorporation of behavioral economics in equilibrium analyses

The explosion of empirical findings and policy modifications arising in behavioral economics is extremely exciting. I have long thought that incorporating empirically supported behavioral models holds great potential for the usefulness of economics. Indeed, I was first exposed to the cognitive psychology that underlies much of the recent advances in behavioral economics over 40 years ago, through the late Amos Tversky. From time to time I tried writing theoretical analyses incorporating insights from

106

cognitive psychology, with limited success. The limits came not from any resistance to psychological ideas nor from over-attachment to the standard model, but from the difficulty of making valuable theoretical advances incorporating behavioral insights. Great advances have been made in recent years. So far the advances have been in empirically documenting and formally modeling behaviors that show the workings of psychological insights in actual markets, while identifying policies that can improve outcomes. The speed with which some insights and empirical documentation have moved into government policy has been heartening. However, the impact on models of entire markets or the entire economy has been limited. One important advance is by Gabaix and Laibson. That is not to say that people do not recognize the presence of behavioral biases in phenomena such as the recent housing bubble, but that our ability to incorporate realistic aspects of such behavior in formal models has not advanced very far.

The fundamental question for this theme is how more realistic pictures of individual decision-making affect the allocation of resources throughout the economy. Supporting the full range of behavioral economics, experimental, empirical, partial and general equilibrium modeling must be a very high priority. The potential for improved understanding of the workings of the economy and for improved individual, business, and government policies is enormous.

Systemic risk

The global financial crisis and the great recession are critical events for the economy and for recognition of research needs. The magnitude of the crisis came largely as a surprise to economists and non-economists alike. Finance economists and macroeconomists need to revisit the foundations of their subjects. There have been heated discussions of the extent to which existing macroeconomic studies have contact with recent experience and can help inform the design of policy. My view is that many of the important elements have been identified in individual research papers. However, these papers stand, by and large, as small parts of the macro research output and have not been put together into a systematic, widely analyzed overall view.2 Of course we do need macroeconomics to deal with more than just great crises. But at present supporting research on great crises seems very valuable.

Narrative histories of the crisis and its effects have identified a number of elements that, in combination, contributed to the magnitude of the crisis.

2 See for example the statement of Robert Lucas: "The problem is that the new theories, the theories embedded in general equilibrium dynamics of the sort that we know how to use pretty well now - there's a residue of things they don't let us think about. They don't let us think about the U.S. experience in the 1930's or about financial crises and their real consequences in Asia and Latin America. They don't let us think, I don't think, very well about Japan in the 1990s. We may be disillusioned with the Keynesian apparatus for thinking about these things, but it doesn't mean that this replacement apparatus can do it either. It can't. In terms of the theory that researchers are developing as a cumulative body of knowledge - no one has figured out how to take that theory to successful answers to the real effects of monetary instability." ("My Keynesian Education," History of Political Economy, 2004, 36(4), page 23.)

I identify three examples of central elements: bubbles, maturity mismatch, and financial engineering. These are not the only areas needing significant research support, but ones where I have identified a need and opportunity in some detail.

Bubbles have happened for a very long time. Better studies of the psychological and rational bases that lead people to generate and continue to participate in a bubble are needed, as are analyses of the nature of the generating process. Out of such studies should come a picture of how policies can limit the magnitude and time extent of bubbles, and when such policies might be called for. Based on their long history and on experimental findings, I think bubbles will always be with us, so deep understanding will be of long-lasting importance. The work of Case and Shiller and of Campbell and Shiller has been valuable, but there is much to do.

Financial intermediation is central for a well-functioning economy, with a particularly important, and dangerous, role for mediating between the different maturity desires of savers and investors. This issue has been well identified by Hellwig. Currently, to a large extent this intermediation is done by large financial institutions that deal with each other a great deal. Models based on price-taking behavior cannot come to grips with the risks from such interconnections among large firms. (Gorton and Stein have identified some key issues.) Yet price-taking models have, historically, played a key role in the development of finance theory. Making significant advances in understanding the roles of large players in financial markets, which has begun, will be difficult and important.

Financial innovation, that is, developments in financial engineering, is very important for how the economy deals with risk. There have been important advances. And yet the properties and limited understanding of some financial assets have been important ingredients in the financial meltdown, along with the housing bubble, the behavior of large financial institutions, and their interactions. Better understanding of how to use, and how to limit the use of, existing risk-sharing innovations, as well as new ones bound to appear, is of critical importance. We should understand behavior by both buyers and sellers with limited understanding of financial engineering, the role of such contracts in response to differences in subjective probability beliefs (and not just differences in information) as well as in shifting risks, and the role of large institutions and their interactions in counter-party risk.

Economic equilibrium is an inherently complex phenomenon, made more so in recognition of the fact that the allocation of resources plays out in real time, not in some timeless, coordinated way as in the long-standing general equilibrium (Arrow-Debreu) model. The fundamental question for this theme is how to avoid a repeat of the global financial crisis that we have just experienced and, more generally, how to better understand the workings of asset markets in order to have better individual decisions and better regulatory policies.
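As a concrete illustration of why interconnections among large intermediaries resist price-taking analysis, here is a minimal loss-cascade sketch (my own toy construction, not a model from the literature cited above; the balance-sheet numbers are hypothetical). A shock to one institution's assets propagates through interbank claims and can wipe out counterparties' capital:

```python
import numpy as np

# Toy interbank contagion cascade (hypothetical balance sheets).
# exposure[i, j] = amount bank i is owed by bank j.
exposure = np.array([[0., 30., 10.],
                     [20., 0., 25.],
                     [15., 10., 0.]])
capital = np.array([12., 18., 9.])  # equity buffers

def cascade(shock_bank, shock_size, capital, exposure, recovery=0.5):
    """Propagate losses: when a bank's capital is exhausted, its
    creditors lose a fraction (1 - recovery) of their claims on it."""
    capital = capital.copy()
    capital[shock_bank] -= shock_size
    failed = set()
    while True:
        newly_failed = [i for i in range(len(capital))
                        if capital[i] < 0 and i not in failed]
        if not newly_failed:
            return capital, failed
        for j in newly_failed:
            failed.add(j)
            # Creditors of j write down their claims on j.
            capital -= (1 - recovery) * exposure[:, j]

cap, failed = cascade(shock_bank=0, shock_size=20., capital=capital,
                      exposure=exposure)
print("post-shock capital:", cap.round(1), "failed banks:", failed)
```

With thinner buffers or a lower recovery rate, the second-round write-downs breach further banks' capital and the failure set grows, which is the sense in which interconnection amplifies rather than attenuates shocks.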


Advancing the domain, building capacity, and providing infrastructure

The three themes I have explored all center on basic theoretical research. Having better models and a better understanding of the determinants of equilibrium in models incorporating intertemporal decisions under uncertainty, behavioral decision-making, diversity of individual decision types, large financial firms, and complex assets will advance the fundamental science of economics, with the tools and approaches spreading to many other areas. The same applies to the techniques developed for optimizing government tax and regulatory policies. Successful advances in any of these areas would immediately appear in graduate education, opening up new avenues of research for students and faculty alike. Both the model structures and the analytical methods are likely to have a wide range of applications.


The Contribution of Data to Advances in Research in International Trade: An Agenda for the Next Decade

Jonathan Eaton 1 and Samuel Kortum 2

September 2010

Abstract

Observations from new sources of data spawned at least two revolutions in research in international trade during the last several decades. Yet many sources of data remain inaccessible to researchers. The situation calls for both the gathering and dissemination of data and the construction of modeling frameworks that can link data of various types at different levels of aggregation. Applying such a framework to the appropriate data will link the aggregate outcomes that policymakers focus on with their implications for individual households and producers in the economy. Such an agenda has the potential to confront a wide range of issues. An important one is understanding the connections between the invention and international diffusion of technology and growth, employment, and welfare.

1 The Pennsylvania State University
2 The University of Chicago


In recent decades, how mainstream economics approaches research has undergone a rapid evolution. Economists are less willing to develop theories solely for the sake of their logical elegance or their contribution to a canonical tradition. The analysis of data has played a much more central role, not only as a means of testing theory but as a guide to developing theory. Economists have made use of a vast array of data sets to learn about the world and how to model it.

This evolution is a response to two related shifts in the environment. One is much easier access to a wide range of data, both at the aggregate level and in data sets of individuals, families, establishments, and firms. The second is the computational power to handle large data sets and to estimate models that exploit them. Hence gathering new sources of data and making them accessible to a wide range of researchers is a major public good worthy of support.

We provide a brief overview of how this rapid evolution has transformed the field of international trade and what we think are the remaining challenges in that field. We then discuss what this transformation may have to say about the direction of research in the discipline generally.

New data sets changed the direction of research in international trade:

Before looking forward a decade to 2020, it's useful to look back at recent progress in international trade. Many individual researchers have contributed to this progress, but what stands out is the crucial role played by the introduction of rich new data sources.

Before the 1980s, research in international trade typically involved the construction of elegant theoretical models and the exploration of their internal logic. Occasionally implications of these models were tested on some available numbers, but the analysis of the data themselves was never seen as the driver of research. What data were available pertained to a small range of industries and products. Hence the theory never delved beneath these broad aggregates, treating all producers in these sectors as using common technologies to make a homogeneous good.

When more detailed product-level data on international trade became available, an early empirical study by Grubel and Lloyd (1975) documented the prevalence of intra-industry trade, a finding totally at odds with the standard approach to thinking about international trade. This provocative result led to the development of the new trade theory, with its emphasis on product differentiation. An important insight was that trade not only benefitted people by exploiting comparative advantage and differences in factor endowments, but gave producers and consumers access to a much wider variety of inputs and products.

Initially getting access to these data was difficult, so few researchers made use of them. They were made widely available (and clearly documented) by Feenstra, Lipsey, and Bowen (1997). These data cover annual trade in goods between essentially all countries, disaggregated into hundreds of individual products. Their availability stimulated a vast literature on estimation of gravity equations, and on trade theories compatible with the gravity equation. This work has proven useful for a wide range of questions, including assessing the welfare benefits of tariff reductions. Recently, even richer data have become easily accessible from COMTRADE and WITS.
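The gravity literature mentioned above rests on a simple empirical workhorse: bilateral trade rises with the partners' economic sizes and falls with distance. As a hedged illustration (synthetic data; the elasticities and sample size are made up, not estimates from the actual literature), the log-linear version can be estimated in a few lines:

```python
import numpy as np

# Synthetic gravity-equation example:
# log X_ij = a + b1*log(GDP_i) + b2*log(GDP_j) + b3*log(dist_ij) + e
rng = np.random.default_rng(1)
n = 500  # country pairs
log_gdp_i = rng.normal(4, 1, n)
log_gdp_j = rng.normal(4, 1, n)
log_dist = rng.normal(8, 0.5, n)
log_trade = (1.0 + 0.9 * log_gdp_i + 0.8 * log_gdp_j
             - 1.1 * log_dist + rng.normal(0, 0.3, n))

# Ordinary least squares on the log-linear specification.
X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print("intercept, size elasticities, distance elasticity:",
      beta.round(2))  # distance coefficient recovered near -1.1
```

Real applications add importer and exporter fixed effects and must confront zero trade flows, which is precisely where the firm-level perspective discussed below becomes essential.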


More recently, data on the exporting behavior of individual producers have become available. Bernard and Jensen (1995) introduced such data for U.S. plants, as did Roberts and Tybout (1997) for developing countries. Biscourp and Kramarz (2007) went directly to French Customs for micro data to analyze imports and labor market outcomes. Eaton, Kortum, and Kramarz (2004, 2008) examined the exporting side of these data, using them to estimate a model of firm heterogeneity and export activity. While these micro data sets typically cannot be widely distributed, as can the bilateral trade data, researchers know where they can go to work with them. Each year we see data from additional countries becoming available. These new data have stimulated a host of new models that incorporate the underlying heterogeneity of firms participating in international trade, and the resulting distributional consequences of trade policies. A key insight was that international competition can increase productivity by weeding out inefficient firms and giving efficient ones room to expand.

The results of all these new data are a better understanding of basic facts, theories that are motivated by these facts rather than only by their own internal logic, and more precise estimates of model parameters. Yet, while new data have stimulated much empirical work, they have not undone the long tradition of careful general equilibrium reasoning that has been a hallmark of the international trade field for nearly 200 years. Thus, the quantitative models coming out of this work remain useful for policy analysis. But to do so successfully they need to integrate the detail in the micro-level data with the aggregates of interest to policymakers.

Lack of data impedes the investigation of key questions about technology and employment:

While much progress has been made in the last decade, some basic questions are still unanswered. Looking forward, perhaps the most important question in the field is whether economic openness has consequences for living standards that differ substantially from estimates obtained from static models of trade.

A leading candidate for these dynamic gains is the diffusion of technology from one place to another, facilitated by international commerce. Here, the lack of good data has held back progress. Whereas flows of goods are measured directly, flows of knowledge are not. One source of information is patent citations, which have been usefully exploited thanks to the work of Jaffe and Trajtenberg (2002) in making such data widely available. Other sources are international patents, royalty payments, and foreign direct investment positions. Each of these sources of information is valuable but suffers from its own set of problems. Creative ideas about new measures or indicators in this area, followed up by the hard work of assembling them, are of the utmost importance.

While the effect of international trade on labor market outcomes is a constant subject of popular discussion, our knowledge of the connection between the two remains very limited, mainly because little information is available about workers and where they worked. Exceptions are France and Denmark, which have detailed data sets matching workers and firms. These data sets tell us how the employment history of workers shapes their earnings, and how exposure to foreign markets, both as a destination for sales and as a source of inputs, affects labor market outcomes. Data sets of these sorts pose serious challenges for economic theory as well, as they require rich and flexible frameworks for
understanding them. Two studies that successfully dealt with these challenges are Postel-Vinay and Robin (whose 2002 Econometrica paper was awarded the Frisch Medal) and recent work by Lentz and Mortensen (2010). But these studies are only first steps of a highly challenging research agenda.

What is particularly needed here is access to data on firms' decisions about investment, in particular the nature of the capital goods they are using, about the inputs that they use and where they come from, and how these interact with employment and productivity.

The challenge across fields is to design frameworks to accommodate data at different levels of aggregation:

Economists have typically addressed critical issues in economic policymaking, such as: (1) the sources of economic growth, (2) the determinants of economic fluctuations, (3) labor market transitions and unemployment, and (4) how the first three relate to the interactions among individual countries in the global economy, with aggregate data. Examples of such data sets include the Penn World Tables, national accounts, unemployment statistics, and COMTRADE, data now readily available to researchers. As a consequence, researchers working on these problems have tended to develop models capable of explaining the data at this level of aggregation. The real business cycle model, for example, interprets national accounts data as reflecting the decisions of a representative consumer. Another example is the traditional analysis of international trade data in terms of a model in which firms in sectors of the economy share a common technology.

The aggregate data themselves, however, are constructed from records of individuals and establishments that economists rarely see. Data at this level are usually confidential and are very difficult for researchers to access, with a few exceptions.

For some time, however, economists have had access to data on individual households and firms, largely based on surveys. These data have led to significant advances in a number of fields such as labor economics and industrial organization. But the survey data are not typically the basis of what goes into the aggregate data. Hence research on individual units has proceeded quite independently from research at the aggregate level. In order to address macroeconomic policy issues, the micro data economists use need to serve as the basis for the aggregate data of interest to policymakers.

International trade data provide one example of why models that only operate at the level of aggregates can be inadequate. Bilateral trade flows are not the consequence of some aggregate forces, but of decisions by individual firms to sell to individual buyers in a foreign market. In some cases the number of agents involved is quite small, to the point of becoming zero. Understanding the decision to participate in trade is thus as important as understanding how much to trade. Confronting this feature of the data requires a very different approach to modeling bilateral trade flows, building from the individual decision maker up. For other macroeconomic data enough agents usually participate so that their individual decisions do not show through so strongly, but it is just as crucial to understand that aggregate investment or research and development data, for example, reflect the very heterogeneous decisions of individual firms.
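To illustrate the point about building bilateral flows from individual decisions, here is a minimal sketch in the spirit of the firm-heterogeneity models discussed above (a stylized toy with hypothetical parameters, not the Eaton-Kortum-Kramarz model itself): firms draw productivities, only the most productive cover the fixed cost of entering a given foreign market, and small markets can end up with zero trade.

```python
import numpy as np

# Toy firm-level export participation (hypothetical parameters).
rng = np.random.default_rng(2)
n_firms = 200
productivity = rng.pareto(a=3.0, size=n_firms) + 1.0  # fat-tailed draws

def bilateral_flow(market_size, fixed_cost):
    """Aggregate exports to one destination: each firm enters only
    if its variable profit there covers the fixed entry cost."""
    profit = market_size * productivity   # stylized variable profit
    entrants = profit > fixed_cost
    return entrants.sum(), profit[entrants].sum()

for size in [0.05, 0.5, 5.0]:  # destination market sizes
    n, x = bilateral_flow(market_size=size, fixed_cost=2.0)
    print(f"market size {size}: {n} exporters, flow {x:.1f}")
```

The zeros arise endogenously from the entry margin, which is why aggregate-only gravity models have trouble with them.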


What is required is not only access to the data themselves, but the design of analytic frameworks consistent with data at these different levels of aggregation. Researchers are just beginning to meet these challenges.

What is the challenge question, the capability to be created, and the scientific strategy?

We pose one challenge question facing international economics. How can we access and combine data on trade, foreign direct investment, patenting, royalty payments, labor flows, and other measurable but perhaps overlooked phenomena, to quantify the invention and international diffusion of technology and its effects on employment and welfare?

The capabilities to be created are data sets, readily accessible to researchers, that provide information about the individual units behind the aggregate data. To address the issue of confidentiality, progress has been made on the creation of fuzzy data sets that hide the identities of individual agents while revealing the moments in the data of interest to researchers.

The scientific strategy is the development of models that can connect the aggregate measures of interest to policymakers with what is going on at the level of heterogeneous households and producers.

References

Bernard, A. B. and J. B. Jensen. 1995. "Exporters, Jobs, and Wages in U.S. Manufacturing Plants, 1972-1986." Brookings Papers on Economic Activity: Microeconomics, 67-119.

Biscourp, P. and F. Kramarz. 2007. "Employment, Skill Structure, and International Trade: Firm-Level Evidence for France." Journal of International Economics, 72: 22-51.

Eaton, J., S. Kortum, and F. Kramarz. 2004. "Dissecting Trade: Firms, Industries, and Export Destinations." American Economic Review, Papers and Proceedings, 94: 150-154.

Eaton, J., S. Kortum, and F. Kramarz. 2008. "An Anatomy of International Trade: Evidence from French Firms." NBER Working Paper No. 14610.

Feenstra, R. C., R. E. Lipsey, and H. P. Bowen. 1997. "World Trade Flows, 1970-1992, with Production and Tariff Data." NBER Working Paper No. 5910.

Grubel, H. G. and P. J. Lloyd. 1975. Intra-industry Trade: The Theory and Measurement of International Trade in Differentiated Products. New York: Wiley.

Jaffe, A. and M. Trajtenberg. 2002. Patents, Citations, and Innovations: A Window on the Knowledge Economy. Cambridge, MA: MIT Press.

Lentz, R. and D. Mortensen. 2010. "Labor Market Frictions, Firm Heterogeneity, and Aggregate Employment and Productivity." Manuscript.


Postel-Vinay, F. and J.-M. Robin. 2002. "Wage Dispersion with Worker and Employer Heterogeneity." Econometrica, 70: 2295-2350.

Roberts, M. and J. R. Tybout. 1997. "The Decision to Export in Colombia: An Empirical Model of Entry and Sunk Costs." American Economic Review, 87: 545-564.


This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Questions about the Future of the International Economy

Stanley Fischer, Bank of Israel

The problems that stand out are:

(1) Assuming the center of gravity of the global economic system is moving towards Asia and the emerging market countries more generally, what are the implications for the management of the international economy, and for the future behavior of the international economy? (Among the issues worth thinking about: there's an article in a recent, or perhaps the latest, Foreign Affairs, by Jorge Castaneda, called something like "Not Ready for Prime Time," which is about the move from the G-7 to the G-20. The title explains what he contends.)

(2) What can/should be done to try to channel this process in a constructive direction?

(3) What are the political implications of this shift?

(4) What are the factors that could derail this process (remember that in the 1980s many believed that Japan would take over the world), and what would be the political and economic implications of such a derailing?

(5) It's clear that the futures of China and India are critical to this process, and I don't know whether enough work is being done on those sets of questions.

In addition:

(6) The information explosion/Google/Facebook/government censorship of their activities in many countries is a critical and little-studied issue.

(7) Demography and demographic trends. It's hard to believe that Russia, Japan, China, and Europe are simply going to stand by while their countries and economies become smaller and relatively less significant as a result of demographic trends. Presumably at some point they will make intensive efforts to reverse current trends (in China's case, the trend that must bother them is that India will become more populous). (a) What will/can they do to this end? (b) What consequences will such attempts have? (Note this is a long-term issue, but it's clear you're looking for long-term and profound issues.)

(8) The energy issue, including the potential role of nuclear power and all its geopolitical ramifications, is not going away. NSF could advance understanding of this issue, which is important not only for the supply of energy, but also for the global power balance and for international imbalances.

No doubt all these subjects are already on your list. In any case, good luck with your well-timed and much-needed initiative.


Predictive Game Theory

Drew Fudenberg
Harvard University

Abstract: Game theory is used in a variety of fields inside and outside of social science. The standard methodology is to write down a description of the game and characterize its Nash or subgame-perfect equilibria, but this is only sometimes a good approximation of observed behavior. The goal of predictive game theory is to develop models that better predict actual behavior in the field and in the lab. Core questions include: What determines people's behavior the first time they play an unfamiliar game? When are social or altruistic preferences important, and what do people believe about other people's social preferences? How do people update their play based on their observations? What sorts of theories of mind, if any, are commonly used to guide play? How do people think about games with a very large number of actions, and what sort of pruning is involved? When will play resemble an equilibrium of the game, and which equilibrium will tend to emerge? Similarly, in a decentralized matching market, when will play converge to a stable outcome, and which one? To develop answers, researchers will need to combine insights from behavioral economics and psychology with formal modeling tools from economics and computer science.


1. Research Agenda, Importance, and Context

The standard methodology in applying game theory is to write down a description of the game and characterize its Nash or subgame-perfect equilibria. This was a good starting point for game-theoretic analysis, and it has provided a number of qualitative insights. It also yields a good approximation of observed behavior in some cases, but in many others it is either too vague to be useful or precise but at odds with how games are actually played. With the increased use of game theory in a variety of fields inside and outside of social science, it is time to go beyond equilibrium analysis to get more accurate predictions of behavior in the field and in the lab. There have already been some tentative steps towards this goal, from several different directions; the challenge is to extend and perhaps unify these initiatives to build a coherent predictive theory.

A. Relaxing Equilibrium Analysis

A key component of this program is the further development of the adaptive justification for equilibrium, which holds that equilibrium arises as the long-run outcome of a non-equilibrium process of learning or evolution. Existing work has focused on tractable learning rules that yield qualitative insights about long-run outcomes. Researchers should now consider learning rules that more accurately describe how subjects update their play in light of their observations. One possibility is to take into account various cognitive limitations on learning that have been observed in decision problems, such as the use of coarse categories, errors in computing posterior probabilities, and so on. Also, the literature on adaptation and learning in extensive-form games should move beyond the rational or almost-rational approach to off-path experimentation by considering other reasons that subjects might test the consequences of an apparently suboptimal action. Another avenue for improvement is the addition of explicit models of the subjects' theories of mind, that is, their beliefs about how other subjects think about the game. In addition, researchers should begin to complement results on asymptotic behavior with results on the rate of convergence, and also with results that apply to laboratory settings, where subjects typically play ten, and at most fifty, repetitions of the game.

In an extensive-form game, even experienced players may not have learned how opponents respond to actions that have rarely if ever been used; as a result learning processes can converge to non-Nash outcomes such as those of self-confirming equilibria. Furthermore, in many cases in the lab and in the field, agents do not have enough experience with the game to learn even the path of play, so that their initial beliefs and attitudes can play a large role in determining what is observed over the relevant horizon. This motivates a more careful and less agnostic treatment of the players' initial beliefs and attitudes. This is related to the second key component of the program, the further development of models of cognitive hierarchies and level-k thinking. These models, which describe the outcome the first time people play an unfamiliar game, take as a primitive the players' beliefs about the play of unsophisticated level-0 agents. Early work focused on simple matrix games, and supposed that level-0 agents give each action equal probability, but fitting these models to more complex games requires alternative ad-hoc modifications of level-0 play, and when all distributions over level-0 play are allowed the theory has very little predictive content. Thus, the cognitive hierarchy models should be complemented with an a priori method of determining level-0 play. We also need a theory of how these beliefs are updated in light of observations and what the resulting play will be, which is especially important for making the technique useful for field data. Once again insights from behavioral psychology and economics should be brought to bear.
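As a concrete instance of the adaptive justification for equilibrium discussed at the start of this subsection, here is a minimal sketch of fictitious play, one of the classic tractable learning rules (the game and payoffs are my own illustrative choices): each player best responds to the empirical frequency of the opponent's past actions, and in this zero-sum game the empirical frequencies approach the mixed Nash equilibrium.

```python
import numpy as np

# Fictitious play in matching pennies (illustrative game choice).
# Row wants to match; column wants to mismatch. Zero-sum payoffs:
row_payoff = np.array([[1., -1.],
                       [-1., 1.]])

row_counts = np.ones(2)  # row's (smoothed) counts of column's actions
col_counts = np.ones(2)  # column's counts of row's actions
row_actions = []

for t in range(5000):
    # Each player best responds to the empirical frequency of the
    # opponent's past play.
    a_row = int(np.argmax(row_payoff @ (row_counts / row_counts.sum())))
    a_col = int(np.argmax(-(col_counts / col_counts.sum()) @ row_payoff))
    row_counts[a_col] += 1
    col_counts[a_row] += 1
    row_actions.append(a_row)

freq = 1 - np.mean(row_actions)
print(f"row's empirical frequency of action 0: {freq:.3f}")  # ~0.5
```

Convergence of the empirical frequencies is known for zero-sum games like this one; part of the agenda sketched above is characterizing what variants of such rules do in games where convergence fails.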

B. Multiple Equilibria

Many games of interest have multiple equilibria, even when restricting to standard solution concepts, and allowing players to have incorrect off-path beliefs (as in self-confirming equilibrium) only makes the set of equilibria larger. Yet there is no general and empirically valid way of selecting between them. There is a sizable theoretical literature that provides evolutionary/adaptive arguments for why cooperation should be observed in repeated games, but the existing theories are a poor match for the data from lab experiments: subjects do seem to cooperate when the gains to cooperation are sufficiently high, but do not cooperate in some settings that have cooperative equilibria. So one research question is to empirically characterize when cooperation occurs (varying payoff functions, what subjects observe about other subjects' play, etc.) and to then organize the findings in a way that makes testable predictions. There is also a sizable theoretical literature on equilibrium refinements, and a literature using stochastic stability to select equilibria. The smaller experimental literature has focused on the special cases of coordination games and signalling games; once again what is needed is an empirical characterization of behavior to serve as a constraint on theories of equilibrium selection.
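The observation above that subjects cooperate when the gains are sufficiently high has a textbook theoretical benchmark: under grim-trigger strategies in a repeated prisoner's dilemma, cooperation is an equilibrium only when the discount factor is high enough relative to the temptation to defect. A minimal sketch with hypothetical payoffs:

```python
# Grim trigger in a repeated prisoner's dilemma (textbook logic,
# hypothetical payoffs). Per-period payoffs:
R = 3.0  # mutual cooperation
P = 1.0  # mutual defection
T = 5.0  # temptation payoff from defecting on a cooperator

# Cooperating forever yields R/(1-d); a one-shot deviation yields
# T today then P forever: T + d*P/(1-d). Cooperation is an
# equilibrium when R/(1-d) >= T + d*P/(1-d), i.e. d >= (T-R)/(T-P).
critical_d = (T - R) / (T - P)
print(f"cooperation sustainable for discount factors >= {critical_d}")

for d in [0.3, 0.5, 0.7]:
    coop = R / (1 - d)
    deviate = T + d * P / (1 - d)
    print(f"d={d}: cooperate={coop:.1f}, deviate={deviate:.1f}, "
          f"sustainable={coop >= deviate}")
```

The experimental puzzle noted above is that observed cooperation tracks such thresholds only loosely, which is what an empirical characterization would discipline.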


C. Heuristics for Tree Pruning and Similarity

How do people simplify complex strategic interactions? What classes of strategies are viewed as equivalent and which ones are discarded? How do people extrapolate from past experience in one game to play in a similar one, and what sorts of games are viewed as related? Ideas from computer science as well as psychology may be helpful here: computing the set of Nash equilibria of arbitrary large games is complex, but some classes of games have more parsimonious representations that allow polynomial-time computation. These same ideas may permit more efficient estimation of behavior rules in complex economic environments, as the behavior rules are based on the agents' simplified models of the environment as opposed to the environment itself.

D. Matching Theory

Classic matching theory is based on the idea of a stable match, but stability is not a good approximation of the outcomes of laboratory experiments on decentralized matching except in extremely small markets with a unique stable outcome. When there are multiple stable outcomes, the analysis of decentralized markets closely parallels that of equilibrium analysis, and raises similar questions: when will a stable outcome arise, and when it does, which one?


E. Empirical Validation

Work on predictive game theory should draw on lab and field data, and in many cases will be accompanied by explicit data analysis. Individual learning rules are notoriously hard to identify from laboratory data, so one focus will be the aggregate consequences of a population of agents using a distribution of rules. Another possibility is the use of exit surveys and in-game belief elicitations. A challenge in using field data is that the standard methodology imposes a form of subgame-perfect equilibrium as an identification condition to estimate model parameters. Recent work by Fershtman and Pakes relaxed this, allowing players to maintain incorrect beliefs that are consistent with their observations. The challenges here are (1) to theoretically identify the sorts of equilibria that their algorithm tends to select, (2) to test whether the implicit equilibrium selection is stable over time and to changes in government policy, and (3) to develop a way of testing whether the equilibrium assumption is valid or whether players have not even learned the path of play. A further challenge is to study non-equilibrium adaptation and learning with field data; this could be facilitated by running field experiments on the internet, either on laboratory sites or on commercial ones. Moreover, the current wave of internet-based field experiments would benefit from a grounding in the theory of non-equilibrium learning.

2. Implications

This program will require the use and support of existing game theory labs, and may well justify the construction of new ones. It will also require graduate students who are trained in game theory, experimental methods, and econometrics; at present many of the best theory students neglect these more applied domains. The program would also benefit from a more modern program for lab clusters than z-Tree, with cleaner code and a more intuitive interface. Both the experimental and field components would benefit from improvements in computational game theory: this literature should continue to improve methods for computing Nash or subgame-perfect equilibria in economically relevant games, but it should also take up the problems of computing and estimating equilibrium concepts that allow for incorrect off-path beliefs and/or cognitive errors, and of simulating and estimating non-equilibrium dynamics.

3. Who is Doing Provocative Research?

The following very incomplete list is intended to give a sense of the scope of this agenda; it is far from exhaustive and reflects the availability biases of the author. Colin Camerer, Miguel Costa-Gomes, Vince Crawford, Teck Ho, Rosemarie Nagel, and Dale Stahl are leading the surge in work on cognitive hierarchies. Pedro Dal Bó, Anna Dreber, Guillaume Fréchette, and Dave Rand are doing intriguing experimental work on cooperation in repeated games; Andrew Schotter has made provocative use of in-game belief elicitation. Ignacio Esponda, Philippe Jehiel, and David K. Levine are leaders in studying adaptive processes in extensive-form games, and the sorts of non-Nash equilibrium outcomes that can persist even when players have a lot of experience with the game. Michel Benaïm, Josef Hofbauer, William Sandholm, and Sylvain Sorin are making important advances in the mathematics of dynamical systems and applying them to non-equilibrium dynamics. Many people are doing exciting work on cognitive limitations in decision problems, including Xavier Freixas, David Laibson, Sendhil Mullainathan, and Matt Rabin, but so far little of this work has been applied to learning in games. Constantinos Daskalakis and Tuomas Sandholm are exciting algorithmic game theorists with an interest in economic problems. Federico Echenique, Muriel Niederle, and Leeat Yariv are studying decentralized matching in the lab. Tim Salmon and Nathaniel Wilcox are pioneers in the econometrics of laboratory learning rules; Chaim Fershtman and Ariel Pakes are developing estimation methods for field data that allow for incorrect off-path beliefs. Bernhard von Stengel is a leader in computational game theory, and Jeff Shamma is a pioneer in bringing techniques from the feedback-control literature to the study of learning in games.


Long-range Research Priorities in Economics, Finance, and the Behavioral Sciences

Herbert Gintis
Santa Fe Institute
September 16, 2010

Abstract

In macroeconomic theory, I suggest supporting agent-based models of decentralized market systems with sophisticated financial sectors, as well as theoretical research that provides the analytical foundation for the phenomena discovered through agent-based models. In rational choice and game theory, I suggest increasing support for laboratory and field experiments in choice and strategic interaction, as well as support for analytical modeling of the phenomena discovered through experimental interventions. Finally, I urge the formation of a transdisciplinary department of NSF devoted to peer-reviewed support of transdisciplinary work, with advisors drawn from all the behavioral sciences.

My remarks will cover two areas: macroeconomic theory and transdisciplinary research in rational choice and strategic interaction.

1 Macroeconomic Theory
Traditional macroeconomic theory has focussed on problems of monetary and fiscal policy in handling the stochastic nature of output and employment. This body of theory is not suited for dealing with financial instability. Yet the financial interdependence induced by globalization and the increasingly critical role of finance in the modern economy have elevated financial instability to a preeminent position in the theory of economic fluctuations. Traditional macroeconomic theories are not equipped to handle these new problems because these models use highly aggregated models with one financial instrument (money) and they carry out only comparative static as opposed to dynamic analyses. In their stead, we need models of the price and quantity adjustment processes in a highly disaggregated, decentralized market economy. The obvious candidate, and a strong one, I believe, is the Walrasian general equilibrium model.

There is, however, a serious impediment to its use: while existence theorems for Walrasian general equilibrium were perfected some sixty years ago, there has been virtually no progress in analytically modeling the dynamics of general equilibrium. In Gintis (2007a) I locate the problem with attempts to dynamicize the Walrasian model in the incoherent notion that out of equilibrium, there exists a common system of prices, although one that does not clear markets. In fact, out of equilibrium there are no public prices at all; rather, each agent has a set of private prices that he deploys in engaging in transactions (individuals and firms trade when their private price systems overlap appropriately). Using agent-based modeling techniques (an agent-based model is constructed so as to be directly implemented in a computer program), I show in Gintis (2007a) and other papers that the general equilibrium system is stable, and private prices converge rapidly to quasi-public prices that entail a quasi-equilibrium. However, I also show that the resulting model is fragile (amplifying rather than attenuating random shocks), and the economy occasionally experiences significant excursions from its quasi-equilibrium state (so-called bubbles). Figure 1, taken from Gintis (2007a), shows the history of the prices of goods in a ten-sector agent-based model. After about 100 periods, prices settle down to an average of unity, but in many periods, significant excursions from equilibrium are experienced by one sector or another. A next step, which would require considerably more computer power, would be to extend this to one hundred sectors with endogenously generated heterogeneous sector sizes and a more realistic (set of) financial sector(s).

[Figure 1: Sectoral Prices in an economy with ten sectors. All relative prices are analytically computed to be unity in equilibrium.]

Based on my research in this area, I project that there is much to gain from financing research in agent-based models of the macroeconomy with two interrelated goals. First, use the insights gained from agent-based models to develop an analytical model of a decentralized Walrasian system, and apply this in a way that includes a sophisticated, highly articulated financial sector. Second, use large-scale agent-based models to predict economic fluctuations. This endeavor will involve collaboration between economists and computer programmers, of course. Economists funded in this area should be thoroughly capable of writing such agent-based programs at a professional level and supervising the work of programmers who are not trained in economic theory.

It would be prudent in this rather novel research area for funders to encourage many small-scale pilot research projects for five or ten years, and avoid the temptation to fund a massive blockbuster project with pretensions to forecasting real-economy fluctuations until we know a lot more about the algorithmic and analytical foundations of dynamic market economies. This sponsored research should go hand-in-hand with support for economic theory that provides analytical foundations for the phenomena discovered through agent-based modeling.
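To make the private-price mechanism concrete, here is a minimal agent-based sketch in the spirit of the setup described above (my own stripped-down illustration, not the model of Gintis 2007a): each agent carries a private price for one good, a transaction occurs only when a buyer's private price exceeds a seller's, and simple adaptive adjustment pulls the private prices toward a common quasi-public level.

```python
import numpy as np

# Toy private-price adjustment in a decentralized one-good market
# (illustrative only). Agents are randomly paired each period; both
# sides nudge their private prices after each match.
rng = np.random.default_rng(3)
n, step = 1000, 0.02
price = rng.uniform(0.5, 2.0, n)  # each agent's private price

for period in range(300):
    perm = rng.permutation(n)
    buyers, sellers = perm[:n // 2], perm[n // 2:]
    trade = price[buyers] > price[sellers]
    # After a successful trade, the buyer tries to pay less next time
    # and the seller asks for more; after a failed match, the buyer
    # bids up and the seller comes down.
    price[buyers[trade]] -= step
    price[sellers[trade]] += step
    price[buyers[~trade]] += step
    price[sellers[~trade]] -= step
    if period % 100 == 0:
        print(f"period {period}: price dispersion {price.std():.3f}")

print(f"final dispersion {price.std():.3f}, mean price {price.mean():.2f}")
```

In this toy the dispersion of private prices collapses to the order of the adjustment step, a crude analogue of the convergence to quasi-public prices reported above; richer versions with production, many goods, and credit are what the proposed research program would investigate.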

2 The Age of Transdisciplinary Research


Over the past two decades, the natural sciences have experienced an explosion of transdisciplinary research. Cross-disciplinary research, once considered unfruitful and ad hoc, has become creative and innovative in dealing with foundational problems in the interactions among physics, chemistry, biology, geology, and other basic disciplines. Moreover, such research has been invaluable in dealing with environmental problems, epidemiology, biomedicine, and other areas of applied research. Few students of the behavioral sciences doubt that similar rewards will flow from an increased emphasis on transdisciplinary research in biology, economics, psychology, sociology, and political science. Indeed, we have already seen some of these rewards in the maturation of experimental economics and economically-oriented experimental psychology, and in tantalizing preliminary findings in neuroeconomics. These transdisciplinary areas, which can be given the transdisciplinary name behavioral game theory, have revolutionized economic theory and have provided the empirical evidence for new theories of individual choice behavior and strategic interaction. It is quite clear that there are huge gains to be made in the support of behavioral game theory executed by researchers in psychology, economics, sociology, anthropology, and biology. This research should involve both laboratory and field studies, and theorists should be involved both in specifying what are the important questions to be empirically addressed and in formulating models that explain the empirical data generated thereby.

It must be recognized, however, that transdisciplinarity in the social sciences, unlike the natural sciences, is hampered by the persistence of fundamental differences in modeling human behavior across the disciplines (Gintis 2007b, Gintis 2009). For instance, the rational actor model and game theory, which are part of the theoretical core in economics, and the biology of social behavior are widely shunned in psychology, sociology, and anthropology. Conversely, foundationally important insights from sociology and anthropology are simply ignored in disciplines that rely on game theory and the rational actor model. This situation, and especially the reluctance of most social scientists to address the situation, is a deep embarrassment to the scientific status of the social sciences, and must be overcome if transdisciplinarity in the social sciences is to have more than a brief life. I have sketched in Gintis (2009) how this might be achieved, but my efforts are at best a first step that needs much more supported research. To this end, I think NSF should set up a transdisciplinary department or section with peer review boards drawn from all the social sciences, staffed by researchers who are intellectually at home with the theoretical and empirical materials from diverse disciplines.

REFERENCES
Gintis, Herbert. 2007a. "The Dynamics of General Equilibrium." Economic Journal 117 (October): 1289-1309.

Gintis, Herbert. 2007b. "A Framework for the Unification of the Behavioral Sciences." Behavioral and Brain Sciences 30(1): 1-61.

Gintis, Herbert. 2009. The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Princeton, NJ: Princeton University Press.


A Grand Challenge for the Social and Behavioral Sciences: Integrating Economic and Political Considerations in the Analysis of Global Environmental Policies
Lawrence H. Goulder 1

October 15, 2010

Abstract: Economists and other scholars have offered useful diagnoses of the market failures that underlie major global environmental problems. However, scholars have been far less successful in devising policy approaches that can overcome political barriers. They have identified the market failures without offering solutions to political failures. Three sources of political failure seem especially important: (1) special interests, (2) the public goods nature of global environmental problems and the associated problem of free-riding, and (3) problems in news reporting stemming from changes in the technology for communicating environmental (and other) information to the public. I recommend that the NSF give considerable support to studies that combine attention to the economic impacts with attention to any of these three underlying sources of political failure.

This note is in response to the invitation by Myron Gutmann (Assistant Director of the National Science Foundation) to contribute white papers outlining grand challenge questions that are both foundational and transformative. The note suggests that NSF offer support to research that can help overcome critical bottlenecks associated with pressing global environmental problems. The environmental problems are unprecedented in scope, and some of the bottlenecks stem from new technological developments.

Environmental degradation resulting from human activities is nothing new. But human activities are now having an unprecedented global impact, a reflection of greater human numbers and higher output or consumption per capita. Particularly severe problems are global climate change, the depletion of marine fisheries, and the loss of biodiversity.

These problems are especially worrisome because of system inertia and associated irreversibilities. These features imply that, to avoid huge losses of human welfare, action needs to be taken well before the worst costs or damages are observed. For example, efforts to reduce emissions of greenhouse gases must begin early enough to prevent atmospheric concentrations from reaching a level implying very serious damages from climate change. If instead emissions reductions are initiated after this point, the damages will persist for centuries, given the long atmospheric lifetimes of these gases. Similarly, given the irreversibilities, overfishing needs to stop before the fisheries in question are depleted, and species loss must be curtailed before the ecosystem services of critical species are lost.
1 Shuzo Nishihara Professor of Environmental and Resource Economics, Stanford University. E-mail: goulder@stanford.edu.


Economists and other scholars have offered useful diagnoses of the market failures that underlie these problems. In addition, they have provided helpful templates of first-best solutions: the sorts of policies that would cure the market failures and produce efficient outcomes. However, scholars have been far less successful in devising policy approaches that can overcome political barriers. They have identified the market failures without offering solutions to political failures. This white paper encourages the NSF to devote considerable support to studies that aim to identify policies that attend to both failures, that is, studies that identify policies that are both economically attractive and politically feasible.

Three sources of political failure seem especially important: (1) special interests, (2) the public goods nature of global environmental problems and the associated problem of free-riding, and (3) changes in the technology for communicating environmental (and other) information to the public.

Special interests. Concentrated groups can block efforts that would be beneficial to society as a whole. Arriving at practical solutions requires attention to the distribution of policy outcomes. In particular, it seems to require finding ways to achieve distributional outcomes that can oil the squeaky wheels. By definition, with lump-sum side payments any policy that yields aggregate net benefits could be designed to be Pareto-improving: no party would be made worse off. But institutional and informational barriers make such payment schemes difficult or impossible. There's a strong need for innovative research that identifies how policies can be designed to achieve distributional outcomes consistent with political feasibility. The targeted distributional outcome could be achieved either through specific elements of the central policy instrument (for example, the way fishing licenses are allocated) or through side-payments that accompany the central instrument (for example, the way revenues from auctioned fishing licenses are rebated to fishing enterprises).

The public goods nature of global environmental problems and the associated problem of free-riding. Nations are sovereign, and participation in international environmental efforts is voluntary. Because of the public goods nature of environmental problems (including the three problems mentioned above), any individual nation may have an incentive to free-ride rather than make the economic sacrifices associated with participation in an international environmental agreement. There's a strong need for analyses that indicate how international agreements can be made sufficiently attractive to overcome the free-rider problem.

Changes in the technology for communicating environmental (and other) information to the public. Nowadays any individual has a choice of thousands of news outlets. As a result, he or she can select whatever radio, television, or internet source reinforces his or her prior convictions. Also, specific news networks may find it profitable to present a view that is deliberately slanted or customized to appeal to particular political groups. These demand- and supply-side factors together imply a system of news generation that hardens prior convictions rather than educates. It can also lead to political paralysis.


There is no simple solution to this problem. There appears to be a need for better oversight regarding news accuracy and balance. Clearly there is great potential for abuse here. How can balance be fairly measured? How can freedom of individual expression be safeguarded? New regulations to address the selection problem could easily violate basic values and individual rights. However, the selection problem seems sufficiently serious to deserve attention, despite the risks. Indeed, the problem seems so central to the effective political functioning of democracies that it seems critical to address it.
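Stepping back to the second source of political failure above, the free-rider logic can be illustrated with a textbook public goods calculation (a deliberately simple sketch with hypothetical numbers, not a model of any actual negotiation): each country bears the full cost of its own abatement but captures only a fraction of the global benefit, so the non-cooperative outcome falls far short of the global optimum.

```python
# Toy N-country abatement game (hypothetical parameters).
# Each unit of abatement costs the abating country c and yields a
# benefit b to every country. A country abates unilaterally only if
# its own benefit covers its own cost (b >= c); the world gains
# whenever the total benefit does (N * b >= c).
N = 20     # countries
b = 0.3    # per-country benefit of one unit of abatement
c = 1.0    # cost borne by the abating country

unilateral_payoff = b - c        # what the abater itself nets
global_payoff = N * b - c        # what the world as a whole nets
print(f"abater's own net benefit: {unilateral_payoff:+.1f}")
print(f"global net benefit:       {global_payoff:+.1f}")
# Output: -0.7 vs +5.0 -- privately irrational, globally valuable,
# which is exactly the wedge an international agreement must close.
```

Analyses of treaty design ask how transfers, trade measures, or participation thresholds can close this wedge.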

I would recommend that the NSF give considerable support to studies that combine attention to the economic impacts with attention to any of these three underlying sources of political failure. Such studies would need to harness expertise from a range of social and behavioral sciences, including economics, political science, and ethics. I hope these comments are useful.

This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


The Economics of Digitization: An Agenda for NSF By Shane Greenstein, Josh Lerner, and Scott Stern

This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Motivation

Our starting point is the gap between research and the recent changes brought about by digitization. The increasing creation, support, use, and consumption of digital representations of information has touched a wide breadth of economic activities. In less than a generation, digitization has transformed social interactions, facilitated entirely new industries and undermined others, and reshaped the ability of people (consumers, job seekers, managers, government officials, and citizens) to access and leverage information.

The increasing scale of digitization has not generated a similar increase in theoretically grounded empirical research on the economic consequences of digitization. While a dispersed set of researchers addresses some important questions, no research community with a recognizable identity exists to facilitate cumulative research. Also, the data infrastructure for cumulative, transparent, and high-quality research on digitization is poorly developed. For example, no research institution has as a primary mission developing novel databases related to digitization that are easy to access for follow-on researchers and policy analysts.

A related gap also motivates this agenda.


The impact of digitization on the economy and society at large depends on the rules and policies that govern the economic incentives to create, store, and use digital information. Yet, in markets shaped by digitization, objective policy analysis of key policy issues is rare, particularly from an economics perspective. This is particularly true regarding recent (or proposed) changes in governance and policy, such as in the area of copyright. Too often the policy discussion is dominated by the narrow concerns of individual stakeholders rather than objective policy analysis. An established community of economic researchers could help to bring high-quality research to the discussion, reshaping policy evaluation of the consequences of digitization.

To accurately account for the impact of digitization, key characteristics of digital content must be taken into account. These features include increasing returns, zero marginal cost, and a "long tail" pattern of usage of digitized content. Traditional calculations evaluating the benefits of digitization have, at best, taken these fundamental features into account in only a limited way, and there are key features that have been overlooked entirely. Most notably, the existence of a wealth of free digitized information on the Internet (including a significant amount that is created by and for users) has effectively eluded systematic economic measurement.

We have identified several key areas we consider priorities for investigation.

Understanding changes in market structure and market conduct: Increasing digitization initiated significant shifts in market structure and significant revisions in long-standing competitive behavior in newspapers, music, movies, and other media markets. What are the relevant economic frameworks for analyzing behavior in markets for information where the costs of distributing, accessing, and sharing digital information are much lower than the costs of creating it? What determines market value when the fixed costs of creating information are large, but the costs of distributing and accessing it are low or approach zero?
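The "long tail" feature mentioned earlier is easy to make concrete. A hedged sketch (synthetic power-law data, with a made-up catalog size and exponent, purely for illustration) shows why a large share of total usage can come from items that are individually almost never consumed, a pattern that conventional product-level measurement tends to miss:

```python
import numpy as np

# Synthetic long-tail usage: item k's usage proportional to 1/k^s
# (a Zipf-style power law; catalog size and exponent are made up).
n_items, s = 1_000_000, 1.1
ranks = np.arange(1, n_items + 1, dtype=float)
usage = ranks ** (-s)
usage /= usage.sum()

top_1000 = usage[:1000].sum()
tail = usage[1000:].sum()
print(f"share of usage from the top 1,000 items: {top_1000:.2f}")
print(f"share from the remaining {n_items - 1000:,} tail items: {tail:.2f}")
```

Whether that tail share translates into large welfare gains from variety is exactly the kind of measurement question this agenda highlights.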


Rethinking the design of copyright: If the digitization of information has dramatically changed the form of expression, its delivery, and its use, then, perhaps, the policies governing its ownership may need to change as well. Copyright law is one of the most important mechanisms for protecting intellectual property in information industries. What are the short- and long-run economic implications of increasing digitization for the design of copyright? What would be the economic effects of various alternative copyright arrangements and proposals for its redesign?

Redesigning incentives for innovation and creativity: In broad context, copyright law is one of several mechanisms for protecting intellectual property and regulating market behavior. What new tradeoffs shape the design of other forms of intellectual property, such as patents, trade secrets, trademarks, and database protection? What types of institutions and organizations enhance the incentives for innovation and creativity while also enhancing its diffusion and impact?

The economics of commons: Experimental forms of copyright, such as the Creative Commons license, have begun to play a prominent role in online experimentation and everyday online activity. Several related experimental forms of commons licenses have begun to play an important role in scientific discourse. What factors shape the returns society receives from employing commons licenses? What factors shape the effectiveness of different governance structures for commons licenses?

The economics of privacy: Increasing digitization has altered the fundamental costs of collecting, retaining, and distributing personal information. Commercial actors have the ability to learn an enormous range of details about consumer conduct, both in consumers' online behavior and offline. What economic factors shape privacy in the new economics of digitization?


Measuring digitization with an eye towards open policy issues: Many of the topics just discussed require measuring one or another aspect of digitization. The measurement questions are central in some cases, supplemental in others. Measuring digitization is, alas, an underdeveloped field, so the policy discourse is underdeveloped as a result. What is the appropriate way to measure the extent of activity in digitization? What framework appropriately assesses the rate of return on investments in digitization by public and private organizations?

The absence of analysis untied to stakeholders: A range of policy questions requires many voices, but there is not yet a recognizable independent academic voice on the range of governance and economic policy issues affected by digitization. That independent academic voice could express skepticism in the face of slanted or parochial views, and could work alongside practitioners to assemble the relevant data and answer the basic economic questions that occupy public conversation. Where will such a community come from? What needs to happen to hasten the development of such a community?

Why these topics, and why an economic approach to them? It is not difficult to notice that digitization shapes a large part of the economy. While there is no definitive estimate of the size of the impact from increasing digitization, there is general agreement that digitization has affected a broad range of economic activities, and symptoms arise in many domains. Moreover, because the change has arisen so rapidly, its effects have been dramatic. There has been a tremendous change in the allocation of time within many households, and these changes have arisen in less than one generation. According to the NTIA, for example, over 70 million US households have Internet access. More than 90% of those households have a broadband connection, up from 4% a decade earlier. According to the latest data from the Census Bureau, adding across various household and business markets, the Internet access market alone accounts for $45 billion in revenue. Two decades ago this market did not register more than several hundred million dollars in household revenue (from bulletin board services). There has also been a tremendous change in business investment. According to the Bureau of Economic Analysis (BEA, 2009), investment by US businesses in information and communication technology in 2009 was $522 billion (down from prior years). This number is symptomatic of the large changes ICTs have brought to how firms relate to their employees, customers, and business partners. For some industry specialists, the motivation seems self-evident, and no economic number can summarize the disruption of the last two decades. Increasing digitization has vastly reshaped economic activity in markets related to, and organizations built around, music, news, research, enterprise IT, logistics, retailing, reservations, and advertising, to name a few areas.

What do stakeholders and researchers do in the areas related to this proposal? It is not in any present stakeholder's interest to provide the institutional commitment, funding, and management needed to coalesce a dispersed economic research community. Nor is it in any present stakeholder's interest to develop an institutional home and support for expertise founded in data analysis. Nor is it in any stakeholder's interest to develop institutional support to collect, archive, and standardize data related to measuring the digital economy, particularly in a manner not beholden to any stakeholder. The existing economic researchers in this area lack a community with a recognizable identity, and researchers lack the institutional commitment to support a community focused on economic analysis of digitization, as well as policy analysis based on economic analysis and on the measurement of digitization.

These topics have attracted some interest in economics, but in no way should this activity be characterized as a unified field, or a large one. One of the healthier niches is the economics and marketing literature on the pricing of goods on and off line. Another is research on market design in the search engine market, and especially the related market for keyword auctions, which is a subset of the broad literature on multi-sided platforms. Researchers in these two communities do not yet address many open issues in copyright, privacy, and related topics in economic measurement. There is also a small literature on online piracy (particularly in music) and its consequences for offline sales. The literature on Schumpeterian competition has largely not focused on recent events in markets affected by digitization, except in a few select areas, and usually in areas where existing stakeholders have interests in funding research. There is a substantial group of legal scholars who study issues in intellectual property, copyright, and privacy. At the same time, and in spite of some work on open source, the economics of online communities governed by Creative Commons licenses is far less developed. We perceive considerable room for more economic analysis of these licenses, especially of how they shape market conduct. Once again, a small amount of funding could yield high returns. Despite all the recent attention from governments, we do not see much research on the aspects that overlap with this agenda, namely, how broadband (that is, widespread inexpensive access to high-speed internetworking) has changed user and vendor behavior in economic activity shaped by digitization.


Various pieces of the economy shaped by digitization do, in fact, get measured. For example, Census Service Supplements provide measures of total revenue, oriented to traditional definitions of industries, not digitization per se. The Bureau of Economic Analysis recently initiated publication of an aggregate stock of information and communication technology. There are also initiatives underway to map broadband infrastructure at the Federal Communications Commission and the National Telecommunications and Information Administration. The Census Bureau's E-Stats program reports the value of goods and services sold online, whether over open networks such as the Internet or over proprietary networks such as Electronic Data Interchange (EDI) systems. The National Telecommunications and Information Administration also publishes a report about Internet access at home, using data collected as a supplement to the CPS, which the Bureau of Labor Statistics administers. Each of these efforts has its respective strengths and deficiencies, and, as a result, very little economic research has made use of these data.

Besides government-sponsored surveys, a great deal of measurement of the digital economy is ad hoc. For example, there is a range of quasi-public Internet traffic statistics and surveys: Akamai publishes statistics about web traffic, as does Andrew Odlyzko at the University of Minnesota. As another example, Alexa and Google both publish statistics about web traffic for specific sites. The Pew Internet and American Life Project also releases general results from its many surveys about Internet adoption, and each year the Pew survey focuses on different (social) aspects of American use of the Internet. Unfortunately, the Pew survey tends to change focus frequently, so it has little consistency over time.

There is no sustained effort to develop a theoretically grounded empirical research tradition on the economic consequences of digitization, or on related policy issues. We also draw attention to the lack of institutional commitment to provide forums for objective policy research with economic foundations, so there is no home for the economics and governance of copyright or Creative Commons, for example. Finally, we note the (almost) complete absence of any institutional support for data collection and economic measurement of digitization.


Further reading

Greenstein, Shane. 2010. "The Economics of Digitization, An Agenda." V4.0, July 20, 2010. Accessed at http://www.kellogg.northwestern.edu/faculty/greenstein/images/research.html

Stern, Scott, and Michael Zhang. 2010. "The Economics of Digitization and Copyright: Theoretical and Measurement Challenges." August 2010. Mimeo, MIT Sloan School of Management.


Jon Gruber
Grand Challenges in Economics: What is the Right Amount of Choice?

A fundamental tenet of neoclassical economics is that more choice is good. More choices expand the possibility set and can only lead to individuals finding outcomes that they prefer. Many econometric models of choice, such as the standard logit choice model, by definition have error structures that imply an increase in welfare as the number of choices increases (see the sketch below). Yet what has been apparent to lay-people for many years has become clear to economists as well in recent years: too much choice can reduce welfare. A wide variety of papers in behavioral economics has shown how increasing the size of choice sets can reduce participation in the market. Other papers have shown consumers choosing clearly dominated options in choice environments, particularly the elderly, who may face cognitive challenges in making appropriate choices. For example, in recent work with Jason Abaluck, I have found clear evidence that the substantial majority of elders choosing prescription drug plans under the Medicare Part D program do not choose the cost-minimizing option.

This existing literature suffers, however, from the standard problem with empirical work in behavioral economics: it clearly documents a positive anomaly, but leaves us with little normative guidance as to the policy implications. This research suggests that in a variety of contexts we may want to limit choice, but how much? And, if choice is to be limited, should it be limited by simply reducing the number of options, or by restricting the space in which suppliers can compete, so as to provide a more organized choice framework?

This issue is highlighted by the recent move of government social insurance policy away from government-mandated monopoly options to marketplaces where individuals can choose from a variety of government-subsidized options. This approach was pioneered by the Medicare Part D program, and has reached a new level with the insurance exchanges that are included in the recently passed health care reform legislation. A central element of health care reform was the establishment of state-level insurance exchanges where individuals and small firms will be offered a choice of a variety of health insurance options. But the reform provides little guidance as to the proper design of these exchanges, and in particular the number and diversity of options that should be offered through the exchange. Should states operate a "yellow pages" sort of exchange, where any licensed insurer is allowed to offer any product it likes? Or should there be a more restricted set of choices with limited differences between plan options?

Addressing this question potentially requires tackling a very difficult question for economists: normative analysis with deviations from the neoclassical model. The welfare analysis in basic economic research is predicated on the premise that individuals are making the right choice. If there are failures in choice, then we need a new welfare framework, and there is no alternative framework that has gathered broad acceptance. Alternatively, recent research along the lines of that by Raj Chetty and others on "sufficient statistics" approaches to policy analysis may allow researchers to avoid the thorny problem of specifying an alternative welfare function. If there is a reduced-form approach to documenting clear welfare improvements from local changes in budget sets, then a larger welfare structure is superfluous for evaluating these local changes.
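The mechanical property of the logit model referenced above can be seen in a few lines of code. This sketch is illustrative, not drawn from the Abaluck-Gruber work: it computes the standard "log-sum" welfare index, which under i.i.d. Gumbel errors equals log(sum_j exp(v_j)) up to a constant, and which can only rise as options are added, even when every added option is an exact clone.

    import numpy as np

    def logsum_welfare(v):
        """Logit expected-maximum-utility index, up to an additive constant."""
        return np.log(np.sum(np.exp(v)))

    # Add identical options (mean utility 0) to a two-option choice set:
    for n_options in (2, 4, 10, 100):
        v = np.zeros(n_options)
        print(f"{n_options:>3} options: welfare index = {logsum_welfare(v):.3f}")

With identical options the index is simply log(n), so measured "welfare" grows without bound in the number of clones, which is exactly the feature that makes the standard model a poor guide to the costs of choice overload.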


The challenge for economists is therefore to move beyond documenting choice anomalies to actually developing guidance for policy makers as they design choice mechanisms such as insurance exchanges. This is a tremendous challenge but a critical one.


Grand Challenges: Making "Drill Down" Analysis of the Economy a Reality

By John Haltiwanger

The vision

Here is the vision. A social scientist or policy analyst (denoted "analyst" for short hereafter) is investigating the impact of the Great Recession and anemic recovery (as of September 2010) on businesses and workers. The analyst begins by exploring the latest aggregate data showing economy-wide, sector-level, and broad regional-level variation in business productivity, output, capital investment, prices, wages, employment, unemployment, and population. The data on employment changes can be decomposed into hiring, quits, layoffs, job postings, and job creation and destruction. The data on unemployment can be decomposed into gross worker flows tracking movements into and out of unemployment. The data on workers are linked to measures from household data tracking income, consumption, wealth, consumer finances, and household composition. The data are high frequency (monthly or quarterly) and timely (data for the most recent quarter or month). The data are available not only for the present period but historically for several decades, permitting analysis of both secular trends and cyclical variation.

Starting at the economy-wide level, the analyst can drill down into the various key indicators by detailed worker and firm characteristics, such as gender, age, immigration status, and education for workers, and business size and business age for firms. The business data permit identifying firm startups and tracking firm exits. The firm characteristics include measures of intangible capital assets, such as investments in R&D and innovation, as well as foreign trade and outsourcing activity. The sources of financing for businesses are available by type of financing and by type of business (e.g., by business size, age, industry, and geographic location): the amount of financing from debt, the debt type, debt terms such as the interest rate, the amount of financing from equity, and the equity type, such as angel financing, venture capital financing, hedge fund financing, private equity financing, and public equity financing.

The analyst can conduct empirical studies at the economy-wide, broad sectoral, and broad regional level with data broken down by all of these dimensions. In addition, the analyst can drill down to the individual and firm level, creating a longitudinal matched employer-employee data set with all of this information at the micro level. This permits panel data analysis using rich cross-sectional and time variation tracking the outcomes of businesses, workers, and households. These outcomes can be tracked at a very detailed location (Census block or tract) and detailed characteristics level. The drilled-down data aggregate to the national key indicators that receive so much attention. The analyst can ascertain, for example, whether the anemic recovery as of September 2010 really is accounted for by small, young businesses that normally would be creating jobs, given their productivity and profitability, but cannot get credit. The analyst could track what type of financing has especially decreased relative to other economic recoveries. The analyst could analyze the impact of policy interventions historically and how they have or have not influenced different types of businesses and, in turn, the workers employed by these businesses.

Beyond analysis of the recession and recovery, the drill down infrastructure would permit a range of analyses of the factors driving economic growth and other important economic outcomes for households and businesses. The role of business startups can be tracked in terms of their contribution to productivity, innovation, and job growth. The origins of business startups can be tracked given the longitudinal matched employer-employee data: e.g., what is the career path of entrepreneurs, and what factors impact that career path as well as the success and failure of business startups? The drill down data infrastructure also permits rich studies of the outcomes of households and workers. The impact of immigrant flows on native and immigrant labor market outcomes can be tracked and studied. The drill down data infrastructure enables tracking the education experiences and outcomes of individuals, so the factors impacting human capital investment can be studied. In principle, the drill down data infrastructure is also integrated with a variety of other measures of experiences: the housing experiences of children and adults, the health experiences of children and adults, and criminal activity as well as crime victimization, both as children and as adults. With these added dimensions, a wide range of socio-economic issues can be investigated.

The Reality as of 2010

The above vision is not as far from reality in 2010 as one might first surmise, but achieving it faces many different challenges. The good news is that progress has already been made on many of these challenges. The bad news is that many very difficult challenges remain that may take decades to overcome.

First, the good news. The U.S. federal statistical agencies, state agencies, and private sector data developers have been working on various components of this vision for the last decade or more. The rapid increases in computer speed and disk storage have meant that the ability to track every business, household, and worker in the U.S. on many of these outcomes is already a reality. The U.S. Census Bureau has developed longitudinal business databases tracking every establishment and firm in the U.S. that can be integrated into the rich surveys and censuses of businesses conducted by Census. The Bureau of Labor Statistics has developed a similar longitudinal business database tracking every establishment in the U.S. that can be integrated into the surveys on businesses conducted by BLS.1 The Census Bureau has also developed a longitudinal matched employer-employee database that tracks the employment relationships
between all employers and employees in the U.S. These data can in turn be linked to the business data at Census as well as to household databases such as the Current Population Survey and the American Community Survey. A variety of administrative databases on housing, education, and health outcomes can be integrated into these data. State agencies are likewise developing longitudinal databases tracking education experiences and outcomes, linked in turn to labor market outcomes for their workers. Other major efforts to develop longitudinal micro data on households and businesses include the tracking of health and other outcomes for older Americans in the Health and Retirement Survey by the University of Michigan, as well as efforts by the private sector (such as Dun & Bradstreet) to track U.S. businesses. The Federal Reserve has rich data on balance sheets of the financial sector and is working on developing longitudinal data on financial sector firms and activity.

1 Of course, one might immediately ask: why this duplication? We turn to this below.

The other part of the good news is that it is not only the hardware of computer processing that has advanced rapidly but also the software. The software and methodology are increasingly available for massive data integration, permitting matching of records at a variety of levels of aggregation using all available information. The software and methodology are also increasingly available to address the inherent problems of protecting the confidentiality of such drill down data.

However, in spite of enormous progress, the bad news is that the U.S. statistical system (broadly defined to track economic, health, education, housing, and other outcomes) remains incredibly balkanized. Legal issues still block the major U.S. federal agencies from sharing their data (hence the duplication between BLS and Census, which not only wastes resources but also significantly limits the quality of key economic indicators like U.S. productivity and GDP). Legal issues block the federal and state agencies from working collaboratively. Beyond the legal issues, many technical challenges for the cyber infrastructure needed for a drill down data infrastructure remain. We turn to such challenges in the next section.

The Challenges

The challenges are many. At a broad level, the challenge is that attaining this vision requires integration of a vast array of administrative and survey data from a variety of sources with different objectives and legal requirements for using and protecting the confidentiality of the data. There are many, many details; a non-exhaustive list is as follows:

1. Attaining this vision requires a change in the way data are collected and processed. The statistical system needs to be "smart" in using scarce resources, both to avoid duplication and to collect the many different components in a fashion that allows them to be integrated. Even within statistical agencies, there is a silo approach towards data collection: collection and processing methods are designed to produce the specific micro and aggregate data of interest without regard to data integration. Of course, the problem is much worse across federal and state agencies. The efforts discussed above to integrate administrative and survey data have been severely hampered by the disparate nature of the data. The development of standards and the agreed-upon use of common identifiers would greatly facilitate data integration. Of course, the use of common identifiers raises many privacy concerns, which we discuss below.

2. Overcoming the legal challenges is a significant obstacle in its own right. This vision can likely only be achieved with the intensive use of administrative data that are collected for other purposes (i.e., the administration of some other program, including the collection of taxes and the monitoring of public programs). There needs to be a mandate that administrative data from all of the types of sources discussed above can be used for this type of statistical analysis.

3. Privacy advocates rightfully express concerns that the above vision creates "big brother." There are many issues in dealing with privacy concerns which I will not deal with adequately here. One critical issue is ensuring a secure environment for developing and accessing this type of data infrastructure. One working model so far is to create secure enclaves (the Census-NSF research data centers). Secure enclaves have shown that they can provide access to many valuable projects from the research community without endangering the privacy and confidentiality of respondents. While this represents great progress relative to 20 years ago, it is likely not the long run solution. Still, even if it is not the long run solution (more on this below), if over the next decade or so the Census-NSF data centers can become Federal-State data center enclaves where a host of federal and state data can be housed, this would represent major progress. This has already happened to some degree, with agencies such as NCHS now housing their data at the Census RDCs.

4. The long run solution is to create a cyber-infrastructure environment that creates the data infrastructure, is securely protected from hostile attacks as well as inadvertent disclosure, is accessible to the user community from many locations (ideally the desktop of the user), and generates statistically valid inferences from the above data infrastructure without the user ever being able to observe enough details to identify any business or person. This requires real-time disclosure protection on an interactive basis. It requires a 21st century approach to disclosure protection: not simply cell suppression or top coding of some sort, but rather statistical methodology that protects confidentiality in a flexible manner. The use of synthetic methods to create and analyze micro data has made great progress in the last decade, but the statistical and research community has much work to do before these methods are ready for widespread use.

5. Further advances in this drill down data infrastructure will require further advances in both hardware and software. Vast amounts of data will need to be stored and processed in a timely fashion. In addition, data integration methods need further development and
refinement. Currently, the data matching programs are capable of developing data matches using a variety of criteria, with measures of the quality of matches (a stylized example appears in the sketch following this list). But we often need more than this: we need an ability to deal with differences in units of observation and frequency. Smart systems are needed that can readily integrate weekly, monthly, quarterly, and annual data, in terms of both disaggregation (e.g., creating the best possible estimates of weekly data using available data) and aggregation (e.g., creating the best possible annual data). The smart systems need to be able to take data using a variety of different units of observation (e.g., households vs. individuals, establishments vs. firms, larger geographic areas and smaller geographic areas, detailed industry and broad sectoral classifications) and integrate them. The smart systems need to use the insights and continuing developments from statisticians on missing data imputation, with statistical software packages that can adjust standard errors appropriately based upon sampling variation as well as the associated imputation. Smart systems are also needed for the collection of data. Many administrative data sources are in principle available on almost a real-time basis (often with filings in the last week, month, or quarter) but only become available to the statistical system with a considerable lag.

6. The private sector is already in many ways further along in attaining this vision for profit-making purposes using data mining techniques. This competition with the public sector in the provision of statistics is not necessarily a bad thing. However, the ability of the scientific community to develop the methodology and standards for data integration and associated analysis is limited for most private sector developments. Of related concern, many in the social science community are already beginning to conduct studies using these private sector data sources. Again, this is not a bad thing per se, but often the representativeness and statistical properties are not known, and access is not regulated in a manner that permits replication, a critical feature in the advancement of science. A very bad outcome would be that in a few decades the private sector developments have superseded the public sector developments so much that social scientists are essentially forced to use private sector data without adequate quality and methodological standards. The private sector data mining developments will not go away, nor should they. Finding a way to create public-private partnerships and to help set standards for methodology, access, and privacy protection that have some commonality between the public and private sector developments would be valuable.

7. Achieving this data vision requires champions. It requires champions in the executive and legislative branches of government who are committed both to making this type of data infrastructure available and to protecting the privacy and confidentiality of respondents. It is hard to find champions from these branches of government, since data integration is not a hot button topic. The ironic thing, of course, is that these branches of government continually ask questions that have critical consequences for the future of the U.S.
economy and involve billions or even trillions of dollars, and the only way to answer those questions would be to have this type of data infrastructure available.
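As a concrete illustration of the matching technology mentioned in item 5, here is a hedged sketch in the spirit of Fellegi-Sunter probabilistic record linkage; the field names, agreement weights, and threshold are illustrative assumptions, not a description of any agency's actual system.

    from dataclasses import dataclass

    @dataclass
    class Record:
        name: str
        zip_code: str
        birth_year: int

    # log(m/u)-style agreement weights: how informative agreement on each
    # field is about a true match (assumed values, for illustration only).
    WEIGHTS = {"name": 4.0, "zip_code": 2.5, "birth_year": 3.0}
    MATCH_THRESHOLD = 5.0

    def match_score(a: Record, b: Record) -> float:
        """Sum field weights: positive for agreement, negative for disagreement."""
        score = 0.0
        for field, w in WEIGHTS.items():
            score += w if getattr(a, field) == getattr(b, field) else -w
        return score

    survey = Record("JANE DOE", "20746", 1975)
    admin = Record("JANE DOE", "20746", 1976)   # birth year keyed differently
    s = match_score(survey, admin)
    print(s, "match" if s > MATCH_THRESHOLD else "send to clerical review")

Pairs scoring between the clear-match and clear-nonmatch regions are the cases that consume clerical review resources, which is one reason the common identifiers and data standards of item 1 would pay such large dividends.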


Future Directions for Research on Immigration

September 2010

Gordon Hanson, UC San Diego and NBER

Abstract. How does the international migration of talent affect the creation of knowledge, the organization of work, and the rate of economic growth across nations? In recent decades, much of the intellectual firepower in research on immigration has been aimed at estimating the impact of the inflow of low skilled foreign labor on the economic well-being of native-born workers in the United States and other high income countries. For the United States, at least, it is not at all clear that low skilled immigration matters very much for national welfare. In coming decades, it is how the world allocates skilled labor that will help determine which countries advance economically and which do not. Currently, policy makers are setting immigration policy governing skilled labor flows largely in the dark. The literature has yet to produce compelling empirical evidence on the costs and benefits of skilled migration for either origin or destination countries. Future research on immigration should focus on the empirical analysis of how the inflow of skilled foreign labor affects productivity growth and innovation in receiving countries, and how the outflow of talent affects prospects for growth and development in sending countries. Sound empirical analysis requires exploiting natural experiments or conducting experiments in the field. Recent events suggest prospects are favorable on both fronts.

Challenge Question

How does the international migration of talent affect the creation of knowledge, the organization of work, and the rate of economic growth across nations?

1. Background

A primary lesson of research on economic growth over the last several decades is that the pace of economic progress in a country is strongly associated with its access to highly skilled labor. Countries produce skilled labor by educating and training their own workers or by importing talent from other nations. Virtually all major US corporations in technology fields search for talent internationally, no longer viewing labor markets as defined by national borders. American universities, as well as educational institutions in other advanced countries, have long sought to attract the best and the brightest worldwide. Students see studying abroad as a way to get their foot in the door of foreign labor markets. The United States, the dominant country in higher education for the last half of the 20th century, is seeing its market lead erode as other countries, including China, rapidly improve their educational institutions, introducing more
competition in the global search for skilled labor (Freeman, 2009). Governments, for their part, often appear ambivalent about skilled immigration, being relatively generous in providing visas to foreign students but stingy in granting work visas to these students upon graduation or to other prospective skilled immigrants.

One would think that, given the importance of skilled labor for economic growth, research on immigration would have made skilled labor flows a central focus of study. Alas, this is not the case. Over the last three decades, there has in fact been an outpouring of research on immigration, motivated in part by the increase in labor inflows in high income countries. In the United States, for instance, the share of foreign born individuals in the population increased from 5% in 1970 to 13% in 2008. While the literature has covered a wide range of topics (including immigrant assimilation, the education of foreign students, the causes of illegal immigration, and the political economy of immigration policies), the bulk of intellectual firepower has been aimed at estimating the impact of low skilled immigration on the economic well-being of native-born workers in the United States and other high income countries (Hanson, 2010).

To be sure, the consequence of low skilled immigration for wages is an important issue. Because immigration changes the national supply of labor, economists are predisposed to consider the impact of such supply shifts on prices. Since 1980, earnings inequality in the United States has increased sharply, leading many economists to ask whether the arrival of large numbers of low skilled foreign workers could be driving changes in the US wage structure. Despite the immense volume of work, we are far from a consensus on how much immigration matters for the low end of the US labor market. Reputable economists can be found on both sides of the issue, with some claiming that immigration has significant negative effects on wages and others claiming that no such effects exist. For the United States, at least, it is not at all clear that low skilled immigration matters very much for national welfare. Whether or not immigration hurts low skilled workers, most economists agree that the aggregate effects (which involve summing gains to employers and losses to workers) are small. The academic debate, then, has been primarily about the distributional consequences of immigration. The impact of immigration on growth has been lost in the mix.

2. Immigration and economic growth

If immigration is going to transform an economy, it must be through its effect on innovation and total factor productivity. Once we raise the issue of productivity, the focus immediately shifts from low skilled to high skilled immigration. Research on economic growth identifies the relative supply of high skilled workers, in particular those in science and engineering, as a key factor underlying a country's R&D capacity and thereby its growth rate (Jones, 1995). In the United States, foreign born students account for over 40 percent of PhDs awarded in science and engineering. While doubling the supply of illegal immigrants (currently 5% of US workers) in
the US labor force would likely have at most second order effects on economic growth, doubling the supply of high skilled immigrants could have a first order effect.

Of course, if the US absorbs more high skilled labor from the rest of the world, the countries losing these workers will be affected. If an engineer leaves, say, Pakistan to work in the United States, the supply of engineers will change by a larger percentage amount in Pakistan than in the US, owing to the fact that in Pakistan skilled labor is relatively scarce. That scarcity, however, isn't sufficient to yield high wages for Pakistani engineers. The paucity of physical capital, the use of outdated technology, and the persistence of weak legal and political institutions hold down the productivity of high skilled labor in Pakistan, keeping wages low. The development economics literature has devoted considerable attention to "brain drain" from developing countries, arriving at a conclusion that any two-handed economist would love. One possibility is that the exodus of skilled labor hurts Pakistan by directly reducing the supply of human capital. Another possibility is that the prospect of migrating to the United States is sufficiently attractive that capable students in Pakistan obtain more education than they would have absent the opportunity to emigrate, giving Pakistan more human capital on net with emigration than without it (yielding a "brain gain"). Whether high skilled emigration raises or lowers the stock of human capital in developing countries is therefore an empirical question, which the literature has failed to answer. The literature has produced intriguing evidence in international cross-sectional data in support of the brain gain hypothesis, but we have not yet seen convincing time series or panel evidence showing that increasing prospects for emigration cause students in a country to increase their schooling by enough to offset the exodus of talent.

3. The future of immigration research

We are left, then, with two fundamental and interrelated questions about international labor flows: if we move one skilled worker from a low income country to the United States, by how much does US productivity growth change, and by how much does the low income country's stock of human capital change? In coming decades, how the world allocates skilled labor will in part determine which countries advance economically and which do not. Currently, policy makers are setting immigration policy governing skilled labor flows largely in the dark. The literature has yet to produce compelling empirical evidence on the costs and benefits of skilled migration for either origin or destination countries.

How should we go about attempting to understand the impact of international migration on growth? Arguably, theory is well ahead of empirical analysis. We have well developed bodies of theoretical work on how migration affects growth rates internationally. What we lack is empirical analysis that identifies the sign and magnitude of these effects and the theoretical mechanisms that account for them. Simply correlating the supply of immigrants or emigrants with productivity growth or other outcomes is not informative, as the migration of labor is not
random. Labor moves across borders in response to economic incentives, leaving countries with poor growth prospects and moving to ones with better prospects. Rigorous empirical analysis requires sound experimental design, either by exploiting natural experiments in the data or by conducting experiments in the field. Regarding natural experiments, environmental shocks (e.g., earthquakes in Haiti, tsunamis in Indonesia, floods in Pakistan) and geopolitical events (e.g., the events of 9/11) disrupt either the outflow of labor from sending countries or the inflow of labor to receiving countries, providing opportunities for causal identification of migration's impacts on productivity growth in one set of countries or the other (the receiving countries in the first case; the sending countries in the second). One or two recent papers exploit such an approach. Field experiments would require getting governments to agree to randomize how they allocate visas across individuals, companies, countries, and/or time. Such randomization may not be as farfetched as it sounds. Currently, the United States already allocates about five percent of its permanent residence visas through an annual lottery. And when applications for temporary work visas for high skilled labor (H-1B visas) exceed the allocated quota in a given year, the entire stock of visas is allocated through a lottery among applicants (as occurred in 2007 and 2008). Simply giving researchers access to data on these randomization episodes would advance migration research immensely. Further, given the desire for knowledge among government officials regarding how immigration affects growth, one would expect at least some high income countries to be willing to subject their immigration policies to rigorous analysis involving at least some degree of randomization.

4. What disciplines would be involved?

Economists come first to mind, as scholars in the discipline have spent a great deal of time thinking about the determinants of economic growth and the consequences of international migration. However, the set of questions involved extends well beyond the economics discipline and even beyond the social sciences. For political scientists, there are the questions of what determines political support for high skilled immigration, why countries are more open to international trade than to international labor flows (a question which has received some attention in the recent literature), and how the exodus of skilled labor affects decisions governing economic policy in origin countries. For sociologists, the questions include how combining native and foreign workers within a business affects the organization of firms, how corporations manage innovation across borders, how international labor markets are organized, and the manner in which migration affects the international transmission of ideas. And for engineers, there are questions about how combining native-born and foreign workers, whether in one country or in multiple countries, affects the optimal organization of production processes both within and across firms.
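To make the lottery-based designs of Section 3 concrete, the following sketch simulates the intent-to-treat comparison a visa lottery would permit; the sample size, take-up rate, and outcome equation are all invented for illustration and are not drawn from any actual visa data.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    won_lottery = rng.integers(0, 2, n).astype(bool)     # randomized visa offer
    took_visa = won_lottery & (rng.random(n) < 0.8)      # 80% take-up among winners

    # Hypothetical outcome, e.g., log earnings some years later, with an
    # assumed true effect of 0.5 for those who actually migrate:
    outcome = 1.0 + 0.5 * took_visa + rng.normal(0.0, 1.0, n)

    itt = outcome[won_lottery].mean() - outcome[~won_lottery].mean()
    late = itt / took_visa[won_lottery].mean()           # Wald estimate for takers
    print(f"ITT = {itt:.3f}, LATE = {late:.3f}")         # LATE ~ 0.5 by construction

Because the offer is randomized, the simple difference in means is unbiased for the effect of the offer, and scaling by take-up recovers the effect on actual migrants, which is precisely what cannot be done with observational comparisons of movers and stayers.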


5. Closing thoughts

The political debate on immigration in the United States, as in other advanced countries, remains mired in tired invective about the adverse consequences of admitting individuals from poor countries. While the popular discussion may occasionally make for entertaining political theatre, it is a poor guide for rigorous analysis of international migration. The literature has misallocated time and energy on the low skilled end of the spectrum, where the aggregate welfare consequences for the United States are likely to be small. It is time that empirical research shifted towards the consequences of skilled labor flows for economic growth.

References

Freeman, Richard. 2009. "What Does Global Expansion of Higher Education Mean for the US?" NBER Working Paper No. 14962.

Hanson, Gordon. 2010. "The Economic Consequences of International Migration." Annual Review of Economics, 1: 179-208.

Jones, Charles. 1995. "R&D Based Models of Economic Growth." Journal of Political Economy, 103(4): 759-784.


Developing a Skills-based Agenda for "New Human Capital" Research


Eric A. Hanushek
Stanford University

Recent research points to a need for expanding the research agenda related to the production and the impact of human capital. The central element of this is expanding analysis to identify and to incorporate different dimensions of skills, including new study of the underlying measurement issues. Newly available data and newly minted researchers make this a propitious research investment.

1. How would an expanded and refined "new human capital" concept improve our understanding of economic and social outcomes?

For the last half century, the concept of human capital has become thoroughly integrated into theoretical and empirical studies in economics and other social sciences, so much so that policy makers routinely pick up on it and infuse discussions of a wide variety of policies with this terminology. At the same time, much of the discussion, both in research and in its public incarnations, has been reduced to very simplistic shells of the underlying ideas. In empirical work, for example, human capital is frequently taken as synonymous with school attainment, requiring no discussion or explanation. In other cases, investment in human capital is measured simply by spending on schools or other training activities. This simplification to spending is perhaps even more prevalent in theoretical work. These narrowed perspectives have resulted largely from efforts to develop testable hypotheses, and they represent clever and powerful adaptations to available data. But there is now substantial reason to believe that many of our models and perspectives have been seriously distorted in the process.

Understanding the role of schools in society and the economy needs little justification. The $1.1 trillion annual spending on formal schooling ($660 billion for K-12) represents 7 percent of GDP. But more than that, it represents the largest component of state and local budgets, reflecting the belief in the central role of schooling for the future of society and our economy. The current state of research on schools and human capital does not, however, reflect either its importance or the possibilities that currently exist for a much deeper and more useful research program.

A number of factors point to the productivity of a new initiative that would take research and analysis into new lines of inquiry related to the skills of individuals. First, recent research has highlighted the importance of both cognitive and noncognitive skills for individual earnings and careers.1 While measurement issues remain, the existing analyses show clearly how expanded measures of human capital, ones that indicate more reliably the
variations in skills across individuals, vastly improve our ability to understand the underlying economic outcomes and processes. This improvement in explanation holds for individual income and employment determination, for consideration of the distribution of incomes, and for aggregate productivity and economic growth.

1 Heckman, Stixrud, and Urzua (2006); Hanushek and Woessmann (2008).

Second, once the focus turns to the analysis of more refined skills, other research becomes relevant and suggests a modified direction for much analysis. Specifically, work on the determinants of achievement and cognitive skills (often labeled "educational production functions") suggests that skills come from a range of inputs, including families and neighborhoods in addition to schools. And, because these factors are correlated, simple analyses of schooling, say in the context of income determination, cannot generally yield unbiased estimates of the impacts of schools (a point illustrated in the sketch following this list of factors).

Third, recent advances in understanding the analysis of causal effects provide relevant approaches to refining our understanding of the role of human capital in determining outcomes and of how skills are produced. The recent, and warranted, skepticism about the interpretation of many past statistical analyses indicates that much of what we know should be revisited. This work also shows various paths to refining our knowledge.

Fourth, the potential availability of extensive administrative data on schools, public support programs, labor market outcomes, incarceration, military service, and the like indicates a dramatic expansion in the detailed information that is relevant for consideration of human capital issues.

Fifth, the considerable increase in the study of education issues, particularly among economics PhD students, indicates a vast group of bright, energetic, and well-trained scholars who can take a new research agenda in human capital forward.
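To make the second point concrete, here is a minimal simulated sketch, not drawn from any actual state database, of the omitted-inputs problem in an educational production function; all variable names and coefficients are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000
    prior_score = rng.normal(size=n)                       # last year's test score
    family_input = 0.5 * prior_score + rng.normal(size=n)  # correlated with prior score
    school_input = rng.normal(size=n)                      # e.g., a teacher quality measure

    score = (0.7 * prior_score + 0.2 * family_input + 0.3 * school_input
             + rng.normal(size=n))

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    ones = np.ones(n)
    full = ols(np.column_stack([ones, prior_score, family_input, school_input]), score)
    short = ols(np.column_stack([ones, prior_score, school_input]), score)
    print("with family inputs:    ", np.round(full, 2))   # ~ [0, 0.7, 0.2, 0.3]
    print("omitting family inputs:", np.round(short, 2))  # prior-score effect biased upward

Omitting the family input loads part of its effect onto the correlated prior score, which is the generic problem the text flags: without measuring the full range of inputs, estimates of school effects are generally biased.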

The "new human capital" agenda would pursue a rich view of the measurement and range of skills that are important. It would consider both the variety of factors that influence the success of investments in these skills and how these skills affect lifetime outcomes.

2. The foundation elements for a new and exciting initiative around the "new human capital" now exist.

Such an initiative would serve to develop the intellectual roots of much of the current policy discussion, and it would contribute quite directly to governmental policy making at all levels. It would capitalize on the growth in well-trained new researchers and on the new data that are becoming available. There are two complementary lines of research that are important: one looks at how skills are produced, while the other looks at the effects of skills on individual and societal outcomes. Central to both, however, is an overarching third question of basic measurement that will fit into both fundamental lines of research.


The measurement issues reflect the fact that most empirical research into either cognitive or noncognitive skills has been opportunistic. It has relied on the measures currently available, such as measures from math or science tests used in school accountability, or survey indicators of personality factors in the noncognitive domain. It has not involved much comparison of alternative measures. Nor has it followed a purposeful program of development based on the external validity of any measures. With the intensive efforts currently under way to develop measures of achievement at the K-12 level, and the nascent efforts for higher education, a program that focused on the external validity of the measures could be extraordinarily productive. It could capitalize on these other development efforts, ones that emphasize internal validity and common standards and are developed almost entirely within existing schools, while adding a useful dimension to thinking in those developments.

The fact that school histories of specific schooling experiences and measured outcomes are now routinely included in administrative databases at both the K-12 and higher education levels suggests a variety of opportunities. These school histories are beginning to cover the full length of schooling experiences for individuals, allowing researchers to follow individuals as they progress through school, including moves across schools and programs. Related to this, many states now make it possible to link to experiences out of school, such as unemployment insurance records, Medicaid usage, juvenile justice involvement, military records, and more. One element of this is much larger and more accurate data on human capital development and outcomes than previously available. But, more than that, because of a variety of exogenous factors that impinge on individual schooling and career paths, it becomes much more feasible to identify causal influences on individual outcomes. These greatly expanded data dovetail nicely with the recent appreciation of the potential biases from incomplete identification of statistical models.

The recent explosion of work on the determinants of achievement (i.e., educational production functions) has followed the wider availability of state administrative databases, particularly in Florida, New York, North Carolina, and Texas. The movement toward developing longitudinal educational records across the nation and making them available to researchers has been proceeding rapidly. The U.S. Department of Education has already given $500 million in grants to states to develop their capacity. It is important that researchers become involved early in these state developments, because they will still be malleable for a number of years. Each existing database has some common elements, following students and their achievement over time, but then also has a variety of special elements and advantages. A concerted effort to develop the capacity for interstate comparisons and analyses could provide considerable new evidence about the operations and effectiveness of schools.

An additional element of developing analyses of the new human capital is the international dimension. There has been an effort to assess student achievement across countries, particularly in math and science, since the mid-1960s, but this effort has now picked up in terms of the number of participating countries, the frequency of the testing cycle, and the quality of the assessments.
The two major assessments, PISA and TIMSS, now cover all OECD countries and a large number of developing
countries. These assessments routinely include survey information on students and their schools. A natural extension of any U.S.-based research program would be to incorporate the international analyses and comparisons made possible by these assessments. This cross-country analysis, for example, offers the possibility of investigating issues such as national institutions that cannot be assessed within a single country, or how labor markets in different countries demand and reward skills.2 It also fits naturally in terms of understanding the importance of STEM education for economic growth and development.

The recent evolution of understanding the impacts of different skills has been led largely by economists. The work on educational production functions and the analysis of specific policies, such as the impacts of accountability or of charter schools, have been carried out by economists and by a broader social science community including sociologists and political scientists, although the largest expansion of new PhDs in the area has been in economics.

There are of course a series of challenges in this area. One that is apparent now is the need to maintain confidentiality of individual data. Currently much of the administrative data from schools is covered by federal law (the Family Educational Rights and Privacy Act, or FERPA). It will be increasingly important to develop research procedures and protocols that satisfy FERPA (and the underlying ideals). The concerns about confidentiality of data become particularly acute when one considers merging the administrative records with other survey and programmatic information. These issues will require separate attention.

Summary

The paradigm of individuals investing in skills that falls under the heading of human capital has proved to be very useful across theoretical and empirical research endeavors. But it has also developed in a constrained way, driven by data availability and by a longstanding set of research questions. Recent analyses have suggested that expanding research to investigate both the production and the impacts of a range of individual skills would yield large dividends. The availability of new, much richer longitudinal databases has attracted large numbers of new PhDs into the study of schooling and other aspects of skill production. As a result, the area is poised for dramatic expansion that could provide extraordinarily valuable research supporting a range of crucial policy issues facing the U.S.

2 The majority of such international studies has been produced within the last decade, and the flow of such work has now increased significantly. Much of this international research is currently being conducted by researchers outside of the U.S. See Hanushek and Woessmann (2010).


References

Hanushek, Eric A., and Ludger Woessmann. 2008. "The role of cognitive skills in economic development." Journal of Economic Literature 46, no. 3 (September): 607-668.

Hanushek, Eric A., and Ludger Woessmann. 2010. "The economics of international differences in educational achievement." In Handbook of the Economics of Education, Vol. 3, edited by Eric A. Hanushek, Stephen Machin, and Ludger Woessmann. Amsterdam: North Holland.

Heckman, James J., Jora Stixrud, and Sergio Urzua. 2006. "The Effects of Cognitive and Noncognitive Abilities on Labor Market Outcomes and Social Behavior." Journal of Labor Economics 24, no. 3: 411-482.


Making the Case for Contract Theory


Oliver Hart

Abstract: Economics has changed a great deal in the last thirty years and there is every reason to think that the changes in the next twenty to thirty years will be at least as great. Although theory may not be as prominent as it once was, it remains essential for understanding the (increasingly) complex world we live in. One cannot analyze the bewildering amount of data now available, or make sensible policy recommendations, without the organizing framework that theory provides. Contract theory is a good example of an area where great progress has been made in the last thirty years, and yet where much remains to be done. In this short essay I will discuss some of the major themes of contract theory and also issues that are still not well understood.

Economics has changed a great deal in the last thirty years and there is every reason to think that the changes in the next twenty to thirty years will be at least as great. In the 1970s and 80s theory was dominant. In the first part of the twenty-first century this is no longer the case: there has been a huge shift towards empirical work. Also, new fields have become established that were in their infancy in 1980: behavioral economics is the most obvious example. At the same time, although much has changed, some things stay the same. Although theory may not be as prominent as it once was, it remains essential for understanding the (increasingly) complex world we live in. One cannot analyze the bewildering amount of data now available without the organizing framework that theory provides. I would also suggest that one cannot understand the extraordinary events that we have recently witnessed, such as the financial crisis, or make sensible policy recommendations in response to these events, without the organizing framework of theory. Moreover, although exciting developments in other fields of economics understandably attract attention, basic research in theory remains vital. There is much that we still do not understand.

Contract theory is a good example of an area where great progress has been made over the last thirty years, and yet where much remains to be done. There is, of course, a sense in which contracts have always been basic in economics. Any trade, as a quid pro quo, must be mediated by some form of contract, whether explicit or implicit. However, much of traditional economics is concerned with spot trades, where the two sides of the transaction occur simultaneously, and where the contractual element is relatively trivial. In recent years economists have become much more interested in long-term relationships where a considerable amount of time elapses between the quid and the quo. In these circumstances a contract becomes an essential part of the trading relationship.

The basic philosophy behind contract theory is the idea that parties can design their relationship to be efficient and that a contract is the means to do this. In this respect there is significant overlap with the mechanism design literature. However, there are also important differences. In mechanism design theory it is usually assumed that there is an impartial planner who oversees the system, and may indeed design it. In contract theory the mechanism is designed by the parties themselves, and the only (possibly) impartial player is a judge who adjudicates disputes. Each literature has learned from the other, but they have developed independently.
The techniques of contract theory have permeated many areas of economics, including labor economics, industrial organization, macroeconomics, corporate finance, international trade, public finance, and development economics. Contract theory also draws on and contributes to ideas in law and


In this short essay I will discuss some of the major themes of contract theory and also issues that are still not well understood.

A classic topic of contract theory is the design of incentive schemes. Principal-agent theory studies how a principal, e.g., an employer, can motivate an agent, e.g., an employee, to act in her interest. A formal contract can tie the agent's compensation to the outcome of the agent's actions. The early literature emphasized the employee's desire to shirk as the main incentive problem, and the employee's risk aversion as the main reason why making compensation very sensitive to outcome (high-powered incentives) might not be a perfect solution.

The more recent literature has emphasized different issues. Suppose that the principals are the shareholders of a public company and the CEO is the agent. The problem may not be that the CEO does not want to work hard: rather, it may be that the CEO is an empire-builder, takes excessive risks, pays himself too much (or in the wrong sort of way), or is overconfident about his ability to run things. Or suppose that the principals are parents and the agent is the teacher of their children. The problem may be that it is hard to measure the true outcome of teaching. Performance on tests can be assessed, but this may be a very imperfect measure of what children should be learning. Paying a teacher according to test performance may encourage the teacher to focus on the wrong things: rote learning rather than more creative material. Also, educating a child is a team process, and, if a teacher is rewarded narrowly according to the test scores of children directly under her control, she may be discouraged from collaborating with other teachers.

The compensation of CEOs, teachers, and others is highly topical. There is no shortage of proposals for improving matters. Contract theory is enormously useful in clarifying the trade-offs and helping us to avoid the adoption of policies that may actually be counterproductive. Advances in technology make it possible to measure performance more finely, and in the future it will become feasible to pay people in increasingly subtle, and possibly high-powered, ways. But is such a trend desirable? Or might it interfere with the reason that the employees are under the umbrella of a single firm in the first place?
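To see the risk-incentives trade-off in its starkest form, consider the textbook linear contracting model (a standard benchmark in the spirit of Holmstrom and Milgrom, sketched here with illustrative parameter values rather than anything specific to this essay): the agent is paid a base salary plus a piece rate b on measured output, output equals effort plus noise, and the agent is risk averse.

# Linear principal-agent benchmark. With CARA risk aversion r, effort cost
# k*e**2/2, and output noise variance s2, the agent supplies effort e = b/k,
# and maximizing total surplus over the piece rate b gives
#     b* = 1 / (1 + r*k*s2):
# optimal incentives are low-powered when risk, risk aversion, or effort
# costs are high.

def optimal_piece_rate(r, k, s2):
    return 1.0 / (1.0 + r * k * s2)

for s2 in (0.0, 1.0, 4.0):
    print(f"noise variance {s2}: optimal piece rate {optimal_piece_rate(2.0, 1.0, s2):.2f}")

As measurement noise grows, the optimal piece rate falls from 1.00 toward 0.11: one formal counterpart of the warning that highly sensitive pay is often not a perfect solution.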
The question of what constitutes a firm, what's different about transactions inside and between firms, and what determines the boundaries of firms, is one that contract theorists have studied intensively. The early transaction cost literature on this topic, by Coase, Williamson, and others, was insightful but largely informal. In recent years, contract theorists have developed formal models to elucidate these issues. The starting point of this recent literature, known as the property rights approach, is the idea that if parties can anticipate all future eventualities and include these in a contract, then the boundaries of the firm are irrelevant: it is only if contracts are incomplete that boundaries matter. In practice contracts are incomplete, and a key question is who has residual rights of control, that is, the right to make decisions not covered by the contract. The property rights approach takes the view that the owner of an asset has residual control rights. In the simplest property rights model, parties can renegotiate an incomplete contract once an unforeseen contingency has occurred and, under symmetric information, they will reach an ex post efficient outcome. However, the division of surplus will depend on the assets they own. This division of surplus will in turn influence the incentives of parties to invest. An implication of the theory is that assets will be owned by those whose investments are important. To the extent that one can identify a firm with the assets it owns, this yields a theory of firm boundaries.
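The investment logic can be illustrated with a deliberately simple hold-up example (a stylized sketch in the spirit of the property rights approach; the square-root value function, the 50/50 Nash bargaining split, and the outside-option parameter are all assumptions made for illustration).

# Hold-up under incomplete contracts. A supplier sinks investment i at cost i;
# trade value is v(i) = 2*sqrt(i); ex post the parties split the gains from
# trade 50/50 by Nash bargaining. If the supplier owns the asset, her outside
# option is lam*v(i); if the buyer owns it, her outside option is zero. Her
# effective share of marginal value is then 0.5*(1 + lam), and she chooses
# i* = share**2. The first best (share = 1) is i = 1.

def investment(share):
    return share ** 2

for label, lam in [("buyer owns the asset   ", 0.0),
                   ("supplier owns the asset", 0.5)]:
    share = 0.5 * (1.0 + lam)
    print(f"{label}: i* = {investment(share):.2f} (first best 1.00)")

Both ownership structures produce underinvestment, but assigning the asset to the investing party raises her investment (0.56 versus 0.25 here): the formal sense in which assets should be owned by those whose investments matter most.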


As an example of how this more formal approach can be useful, consider the question of how improvements in information technology will affect firm boundaries. It is often argued that, because more information makes it easier to write good contracts, advances in information technology will favor independent contracting: carrying out transactions outside the firm. Indeed this is an implication of transaction cost economics. The property rights approach provides a more nuanced perspective. A reduction in contracting costs also makes it easier to carry out transactions inside a firm, and so firms may become bigger rather than smaller. Support for this possibility has been found in empirical work on the trucking industry by Baker and Hubbard (2004).

The property rights approach has been applied extensively in the recent international trade literature on the structure of multinational companies. Antras (2003) uses the approach to explain why U.S. companies are less likely to own foreign suppliers if the goods they import are labor intensive (in which case the human capital investment of the foreign firm is likely to be important) than if they are capital intensive (in which case the physical capital investment of the U.S. firm is likely to be important). Many other papers have extended this work.

One limitation of the property rights approach is that the standard model does not explain why transactions inside firms have a different character from those between firms: the theory supposes that parties will use monetary side-payments to bargain to an ex post efficient outcome whether the parties are in the same firm or in different firms. This does not square with an observation of Coase that inside firms the price mechanism is superseded. Recent work has argued that it is possible to explain Coase's observation if one is willing to step outside the standard framework and introduce some psychological considerations, including the idea that contracts are reference points for entitlements.

Psychological and behavioral elements can broaden the scope of contract theory in many interesting ways. Recent theoretical and experimental work has argued that explicit contracts can interfere with feelings of fairness and trust and that, as a consequence, extrinsic motivation can crowd out intrinsic motivation. Given this, informal and incomplete contracts may outperform formal and complete contracts even when the latter are feasible. This provides new insights into why high-powered incentives may be costly, and why parties may deliberately write incomplete contracts. Contracts may also be written by one party to take advantage of the cognitive limitations of another party. All this work is informed by experiments. It seems likely that in the future collaborations between contract theorists and experimentalists, both in the lab and in the field, will yield important new insights, and help contract theorists to refine the assumptions they make.

Another significant application of contract theory has been to understand firms' financing decisions. Consider an entrepreneur who has an idea for a firm or project but does not have the funds to finance it. The entrepreneur might borrow from an investor. But should the borrowing be short-term or long-term? How much collateral does the entrepreneur need to provide? Might it be better for the entrepreneur to issue equity rather than debt? Or might some sort of hybrid security be preferable to both? Many of these questions are, of course, studied in the standard corporate finance literature. The difference is that this literature tends to take the form of the securities a firm issues as given: equity or debt. In contrast, the financial contracting literature considers all possible contracts or securities and tries to explain why debt or equity may be optimal among these. This has yielded new insights.
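One classic insight from this literature can be reproduced in a few lines. In a costly-state-verification setting (the flavor of Townsend's 1979 analysis; the cash-flow distribution, audit cost, and face value below are invented for illustration), a profit-contingent, equity-like claim requires the investor to verify cash flows in every state, while debt triggers verification only in default.

import numpy as np

rng = np.random.default_rng(4)
cash = rng.uniform(0.0, 2.0, 100_000)  # assumed distribution of project cash flow
c, F = 0.1, 0.8                        # audit cost; face value of the debt

debt_cost = c * np.mean(cash < F)      # audit only in default states
equity_cost = c                        # profit-contingent pay needs audits in all states
print(f"expected audit cost: debt {debt_cost:.3f} vs equity {equity_cost:.3f}")

Debt economizes on verification costs (about 0.04 versus 0.10 here), one standard rationale for the optimality of debt among all feasible contracts.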
Economists are still grappling with the causes of the recent financial crisis. Although there is not yet consensus, most explanations are based on the idea that key institutions had excessive debt, that much of this debt was short-term, and that the failure of one institution triggered the failure of others. There is also a widely held view that banks and other financial institutions are different: they are more sensitive than regular industrial companies, and hence their failure is more serious.


But why? Economists do not have fully convincing answers to these questions. Did institutions write suboptimal contracts with their investors (or, for that matter, with their customers, e.g., homeowners), or were these contracts individually optimal but collectively suboptimal? What does a bank do that makes it different from other firms? How should large financial institutions be regulated to prevent the next financial crisis? The tools of modern contract theory seem indispensable if we are to make progress on these vital questions. But inevitably, answering these questions will require new thinking. Understanding the financial crisis requires putting contract theory into a general equilibrium perspective. Although Kiyotaki and Moore (1997), among others, have made a notable start in this direction, much remains to be done.

The next twenty years promise to be both challenging and exciting.

References

Antras, Pol (2003), "Firms, Contracts, and Trade Structure," Quarterly Journal of Economics, November, 1375-1418.

Baker, George and Thomas N. Hubbard (2004), "Contractibility and Asset Ownership: On-Board Computers and Governance in US Trucking," Quarterly Journal of Economics, November, 1443-1479.

Kiyotaki, Nobuhiro and John Moore (1997), "Credit Cycles," Journal of Political Economy, April, 211-248.


A Research Agenda For Understanding the Dynamics of Skill Formation


James J. Heckman
October 4, 2010

I

American Society is Becoming Polarized and Less Productive

In the past 30 years, American society has polarized. A greater percentage of children is attending and graduating college. At the same time, a greater percentage is dropping out of secondary school, producing a growing underclass, neither working nor going to school. 20% of the U.S. work force has such a low rate of literacy that it cannot understand the instructions on a vial of pills. The slowdown in the growth of the skills of the workforce is reducing U.S. productivity.

These problems are usually discussed in a piecemeal fashion. Analysts blame the public schools, rising tuition costs, or the failure of a number of other social institutions. This has produced an array of competing proposals that lack coherence or a firm grounding in science and social science. This position paper summarizes a body of research that articulates a coherent approach to addressing these problems that is rooted in the economics, psychology, and biology of human development.
James Heckman is the Henry Schultz Distinguished Service Professor of Economics at the University of Chicago and a senior fellow of the American Bar Foundation. This paper is based on Heckman [2008] and the references therein.


II

A Coherent Approach to Skill Policy

The current state of the literature can be summarized by eighteen points.

1. Many major economic and social problems such as crime, teenage pregnancy, obesity, high school dropout rates, and adverse health conditions can be traced to low levels of skill and ability in society.

2. In analyzing ability, society needs to recognize its multiple facets.

3. Current public policy discussions focus on promoting and measuring cognitive ability through IQ and achievement tests. For example, in the U.S. the accountability standards in the No Child Left Behind Act concentrate attention on achievement test scores, not evaluating a range of other factors that promote success in school and life.

4. Cognitive abilities are important determinants of socioeconomic success.

5. So are socioemotional abilities, soft skills, physical and mental health, perseverance, attention, motivation, and self-confidence.

6. They contribute to performance in society at large and even help determine scores on the very tests that are used to monitor cognitive achievement.

7. Ability gaps between the advantaged and disadvantaged open up early in the lives of children.

8. Family environments of young children are major predictors of cognitive and socioemotional abilities, as well as crime, health, and obesity.

9. More than genetics is at work.

10. The evidence that documents a powerful role of early family influence on adult outcomes is a source of concern because family environments in the U.S. and many other countries around the world have deteriorated over the past 40 years.


11. Experimental evidence on the effectiveness of early interventions in disadvantaged families is consistent with a large body of non-experimental evidence that adverse family environments, especially adverse parenting, substantially impair child outcomes.

12. If society intervenes early enough, it can raise the cognitive and socioemotional abilities and the health of disadvantaged children.

13. Early interventions reduce inequality by promoting schooling, reducing crime, and reducing teenage pregnancy.

14. They also foster workforce productivity.

15. These interventions have high benefit-cost ratios and rates of return.

16. Early interventions have much higher economic returns than later interventions such as reduced pupil-teacher ratios, public job training, convict rehabilitation programs, adult literacy programs, tuition subsidies, or expenditure on police.

17. Life cycle skill formation is dynamic in nature. Skill begets skill; motivation begets motivation. The less a child is motivated and stimulated to learn and engage early on in life, the more likely it is that, when the child becomes an adult, he or she will fail in social and economic life. The longer society waits to intervene in the life cycle of a disadvantaged child, the more costly it is to remediate disadvantage. Similar dynamics appear to be at work in creating child health and mental health. (A stylized simulation of these dynamics follows this list.)

18. A major refocus of policy is required to understand the life cycle of skill and health formation and the importance of the early years in creating inequality and opportunity and in producing skills for the workforce.

A fruitful direction for future research is to improve the core evidence on the dynamics of skill formation.
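The dynamics in point 17 can be made concrete with a stylized two-period simulation (a sketch only: the CES technology, the degree of complementarity, and all numbers are illustrative assumptions in the spirit of the technology-of-skill-formation literature, not estimates). It asks how much later investment is needed to reach the adult skill level produced by balanced investment in both periods.

# Skills evolve as theta' = (0.5*theta**phi + 0.5*invest**phi)**(1/phi) with
# phi = -0.5, so the existing skill stock and new investment are complements:
# "skill begets skill," and early deficits bottleneck later investment.

def ces(theta, invest, phi=-0.5):
    return (0.5 * theta**phi + 0.5 * invest**phi) ** (1.0 / phi)

def adult_skill(early, late, theta0=1.0):
    return ces(ces(theta0, early), late)

target = adult_skill(1.0, 1.0)  # benchmark: invest 1.0 in each period

def late_needed(early, hi=1e6):
    """Bisect for the later investment reaching the target; None if none does."""
    if adult_skill(early, hi) < target:
        return None
    lo = 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if adult_skill(early, mid) < target:
            lo = mid
        else:
            hi = mid
    return hi

for early in (1.0, 0.5, 0.25, 0.1):
    late = late_needed(early)
    print(early, "->", "infeasible" if late is None else round(late, 2))

Under these assumptions the required remediation grows rapidly (1.0, then 1.59, then 4.0), and for a sufficiently impoverished early environment no amount of later investment reaches the target: a numerical rendering of points 16 and 17.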


III

The Importance of Cognitive and Noncognitive Skills

Recent research has shown that earnings, employment, labor force participation, college attendance, teenage pregnancy, participation in risky activities, compliance with health protocols, and participation in crime strongly depend on cognitive abilities and noncognitive skills. By noncognitive abilities I mean socioemotional regulation, delay of gratification, personality factors, and the ability to work with others: what are sometimes called soft skills.

Much public policy discussion focuses on cognitive test scores or "smarts." The No Child Left Behind initiative in the US focuses on achievement on a test administered at certain grades to measure the success or failure of schools. Yet much evidence shows that, as is intuitively obvious and commonsensical, much more than smarts is required for success in a number of domains of life. Recent research documents the predictive power of motivation, sociability, the ability to work with others, attention, self-control, self-esteem, delay of gratification, and health in a variety of life outcomes. The importance of noncognitive skills tends to be underrated in current policy discussions because they are thought to be hard to measure. Yet they have been measured and have been shown to be predictive of success.

Cognitive and noncognitive ability are important determinants of schooling and socioeconomic success. In the U.S. and many countries around the world, schooling gaps across ethnic and income groups have more to do with ability deficits than family finances in the school-going years. Those with higher cognitive and noncognitive abilities are more likely to undertake schooling and company job training, and to participate in civic life. They are less likely to be obese and have greater physical and mental health. Cognitive and noncognitive skills are equally predictive of success in many aspects of life.


IV

Ability Gaps Are the Major Reason for the Schooling Achievement Gap

Controlling for ability measured at the school-going age, minorities in the U.S. are more likely to attend college than others, despite their lower family incomes. Deficits in college-going between minority and majority groups are not caused by high tuition costs or by family income at the age when children are deciding to go to college.

V

Ability Gaps Open Up at Early Ages

Gaps in the abilities that play such an important role in determining diverse adult labor market and health outcomes open up at early ages across socioeconomic groups. Schooling after the second grade plays only a minor role in alleviating these gaps. Schooling quality and school resources have relatively small effects on ability deficits and only marginally account for any divergence by age in test scores across children from different socioeconomic groups.

The evidence on the early emergence of gaps leaves open the question of which aspects of families are responsible for producing ability gaps. Is it due to genes? Family environments? Family investment decisions? The evidence from intervention studies suggests an important role for investments and family environments in determining adult capacities, above and beyond genes, and also in interaction with genes.

VI

Family Environments

The evidence that family environments matter greatly in producing abilities is a source of concern because a greater fraction of American children is being born into disadvantaged families. This trend is occurring in many countries around the world. Measured by the quality of its parenting, American family life is under challenge. A divide is opening up in early family environments. Those born into disadvantaged environments are receiving relatively less stimulation and child development resources than those from advantaged families.

The real source of child disadvantage is the quality of parenting. More educated women are working more but, at the same time, are spending more time in child development. Less educated women are also working more but are not increasing their child investments. Those born into disadvantaged environments are receiving relatively less stimulation and child development resources than those from advantaged families, and the gap is growing over time. This creates persistence of inequality across generations through the mechanism of differentials in parenting.

VII

Critical and Sensitive Periods

There is a large body of evidence on sensitive and critical periods in human development. Different types of abilities appear to be manipulable at different ages. IQ scores become stable by age 10 or so, suggesting a sensitive period for their formation below age 10. On average, the later remediation is given to a disadvantaged child, the less effective it is. A lot of evidence suggests that the returns to adolescent education for the most disadvantaged and less able are lower than the returns for the more advantaged. The available evidence suggests that for many skills and human capacities, later intervention for disadvantage may be possible, but that it is much more costly than early remediation to achieve a given level of adult performance.

VIII

Key Policy Issues

From the point of view of social policy, the key questions are: How easy is it to remediate the effect of early disadvantage? How costly is it to delay addressing the problems raised by early disadvantage? How critical is investment in the early years, and for what traits? What is the optimal timing for intervention to improve abilities?


IX

Enriched Early Environments Can Compensate In Part For Risk Features of Disadvantaged Environments

Experiments that enrich the early environments of disadvantaged children show that the effects of early environments on adolescent and adult outcomes are causal. Improvements in family environments enhance children's adult outcomes and operate primarily through improvements in noncognitive skills. Reliable data come from experiments that provide substantial enrichment of the early environments of children living in low-income families. Longitudinal studies of the experimental groups demonstrate substantial positive effects of early environmental enrichment on a range of cognitive and noncognitive skills, schooling achievement, job performance, and social behaviors, long after the interventions end.

Summary

Many current social problems have their roots in deficits in abilities. Ability deficits open up early in life and persist. They produce inequality and reduce productivity. Evidence from a variety of studies shows that there are critical and sensitive periods for development. Sensitive periods come earlier in life for cognitive traits. The age pattern is less pronounced for noncognitive traits. This pattern is associated with slower development of the prefrontal cortex. Noncognitive traits stimulate production of cognitive traits and are major contributors to human performance. The powerful role of noncognitive traits and the capacity of interventions to improve these traits is currently neglected in public policy discussions.

Later life investment is less productive if an adequate base has not been created in early life. The econometric evidence is consistent with the evidence from neuroscience. Later investment is more productive if early investment is made. A portfolio of childhood investment weighted toward the early years is optimal. Society currently ignores this pattern in its investment in disadvantaged children, devoting more resources to adolescent remediation than childhood prevention. Children from advantaged environments by and large receive substantial early investment. Children from disadvantaged environments typically do not.

The appropriate measure of disadvantage is the quality of parenting, not income per se. Quality of schools and tuition do not matter as much as is often thought. Late remediation is very costly. Interventions should be directed toward the malleable early years if society is to successfully reduce inequality and promote productivity. Making these arguments more precise and rooting them more firmly in data on biology and behavior will lay the groundwork for addressing the core problem of rising inequality in a rigorous and meaningful way.

Reference

Heckman, J. J. (2008, July). "Schools, Skills and Synapses." Economic Inquiry 46(3), 289-324.


SOME COMPELLING BROAD-GAUGED RESEARCH AGENDAS IN ECONOMICS

Glenn Hubbard, Columbia University

Since the financial crisis, many political leaders (and indeed social scientists in universities) have called for putting economics in its place and redirecting support to other disciplines. The concerns are that economists are too axiomatic, too doctrinaire, and too unwilling to learn from other disciplines. But in my view this is actually the time to increase support for broad-gauged economic research substantially. And there are scholars and agendas that could benefit from this support and quickly deliver results. To wit:

INTERDISCIPLINARY WORK ON ECONOMIC QUESTIONS

- Can we use failures in the financial crisis to discriminate between models of poor incentives and models of overconfidence?

- Can we use economic and behavioral insights to design and evaluate products in "consumer finance"? (We are doing this at Columbia in research in our Center for Decision Sciences and in a new MBA course entitled Consumer Finance.) Much research is done on topics in corporate finance, but consumer finance is at least as important conceptually and empirically.

- Can we discriminate between models of entrepreneurial risk-taking and innovation from economics and models based on overconfidence and/or tolerance for ambiguity from psychology?

THE FRONTIER OF RESEARCH ON ECONOMIC GROWTH

- How can we enrich our knowledge about the impact of management practices on firm-level productivity growth?

- Can we explain differences in recoveries from severe financial crises across countries and time periods?

- What more can we learn from historical episodes of major innovations about determinants of major changes and incremental innovations?

- How can we model and estimate the effects of major fiscal reforms (e.g., entitlement reform in the United States), as spending changes and tax changes, on economic growth?

ECONOMIC ANALYSIS OF MAJOR POLICY QUESTIONS

- How can we enrich our understanding of health policy choices on insurance and care arrangements (many insights still date to the old RAND study)?

- How can we encourage more systematic modeling and estimation of fiscal policy multipliers (outside of the heat of battle of individual policy debates)?


- How important are rising health care costs in explaining wage income stagnation for many Americans in the past decade?

- How effective are large-scale asset purchases by the Federal Reserve in altering the term structure or risk structure of interest rates?

- What kinds of financial contracts can best address risk-sharing for job loss, retirement, or disability?

Dan, there are many more topics one could cover, but I think this short list makes the point. There are big areas in which progress can be made and in which scholars are ready. Large-scale NSF support for new data or to support teams and research colloquia could have a very high payoff. I would be happy to discuss any of this with you, of course.


Challenges in Econometrics

Guido W. Imbens, Harvard University, September 2010

1. Introduction

To frame what are, in my view, the main challenges facing researchers in econometrics, let me set the stage by describing the current state of research. Much of the traditional research in econometrics can be divided into two branches, the first comprising cross-section and panel data econometrics and the second time series analysis. In the cross-section branch of econometrics researchers have data on a large number of units, often individuals, or groups of individuals, firms, or markets. For each unit there is information on a relatively small number of variables, sometimes measured at a single point in time, sometimes with repeated measures as in panel data. The units are viewed as exchangeable, or independent, in the sense that there is no interaction between the units: what happens to one unit does not affect other units. In time series analysis the typical setting is one with observations on a small number of variables, at many points in time, with relatively unrestricted dependencies between the different variables. For models designed for data configurations of these two types we have learned much in the last few decades. In fully parametric models, as well as in the more flexible semi- and non-parametric models, we have gained an impressive understanding of the appropriate ways of analyzing such data, and of the properties of many estimators and methods for inference.

In my view the biggest challenges faced by economists in terms of analyzing economic data concern fundamentally different configurations of the data, with complex, largely unknown, dependence patterns and a relatively large number of variables per unit. In such cases the current methods for approximate inference based on large sample results, which are specifically designed to exploit laws of large numbers and central limit theorems, are likely to be inadequate. Moreover, trying to fit these more complex data configurations into the old methods would be unlikely to lead to much progress. In some cases econometricians and statisticians have made some progress on such alternative data configurations, but for the most part these are unexplored areas for research.


2. Data Configurations

More and more data are becoming available to researchers that do not fit the standard mold. We may have information on units located in physical or economic spaces that exhibit strong, but complex and partly unknown, dependencies in economic behavior. These dependencies are likely to weaken as the distances between units increase, but the appropriate notion of distance is likely to be partly unknown. Correlations may be stronger in some parts of the population than in others. One branch of econometrics that has studied such questions is spatial econometrics, but this is still a relatively undeveloped part of the econometrics profession, relative to the number of questions. Much of the work in spatial econometrics relies heavily on methods imported from the statistics literature, where the focus was on different questions. For example, in the statistics literature the focus was often on predicting outcomes in particular locations given outcomes in nearby locations, e.g., the presence of natural resources at one location given measurements on measures of resources or proxy variables at nearby locations, with often strong prior beliefs about the appropriate distance measures. In economics it may be of more interest to understand how the spatial correlations generate effects of policies implemented in one location on outcomes in another nearby location. Such effects may operate with unknown lags, necessitating the combination of time series methods and spatial analysis. There may be little prior knowledge about the relative importance of different distance measures.
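As a concrete illustration of why such dependence undermines the usual large-sample reasoning, the following simulation (an illustrative sketch; the correlation-decaying-with-distance covariance on a line is an assumption, not a recommended model) compares the true sampling variability of a sample mean with the naive formula that assumes independence.

import numpy as np

rng = np.random.default_rng(0)
n = 200
# Correlation between units decays with their distance on a line.
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
chol = np.linalg.cholesky(0.9 ** dist)   # assumed spatial covariance structure

means = [(chol @ rng.standard_normal(n)).mean() for _ in range(5000)]
print("true sd of the sample mean:  ", round(float(np.std(means)), 3))
print("naive iid formula 1/sqrt(n): ", round(1 / np.sqrt(n), 3))

With positive spatial correlation the naive standard error understates the true sampling variability several-fold, so inference that ignores the dependence structure is badly misleading.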


Related to spatial statistics, but with a different set of challenges, the dependencies in economic behavior may arise from what are sometimes called peer effects, or social interactions. Here distances between units are often modeled as discrete, typically binary: individuals either influence each other in a constant way, or not at all. In an important paper, Manski (1993) studied identification questions in a special case where a population was divided into peer groups. Important is the fact that the peer groups in Manski's analysis partition the population. Behavior of units in different peer groups is not correlated. Within groups, correlations may arise from correlated backgrounds, from a shared environment, or from feedback in behavior. Often individuals within a peer group are viewed as exchangeable: all individuals influence each other to the same degree.

Many questions arise when the groups within which the dependencies are present are partly the result of choices made by individuals. Observed correlations may simply reflect the choices of individuals to team up with similarly minded individuals, rather than effects on peers' behavior. Controlling for shared background is also a difficult challenge. While there have been numerous empirical studies documenting correlations in outcomes for individuals in the same class, both in the short and in the long run, there is still a great deal of uncertainty about whether these arise from teacher effects or from interactions between students. Ultimately a key question is whether these social interactions can be exploited by policy makers to improve the distribution of outcomes in society, through, for example, tracking in educational settings. An interesting paper in this respect is Carrell, Sacerdote, and West (2010), who attempt to improve average test scores by optimally assigning incoming recruits at the Air Force Academy to different squadrons. If successful, the mechanism would be through the induced interactions associated with the squadron assignments.
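The identification problem in the linear-in-means setting can be seen directly by simulation (a stylized sketch of the reflection problem; the data-generating process and every parameter value are invented for illustration).

import numpy as np

rng = np.random.default_rng(1)
alpha, beta, gamma, delta = 0.0, 0.5, 1.0, 0.5   # beta: endogenous, delta: contextual
G, m = 500, 100                                  # number of groups, group size
x = rng.standard_normal((G, m))
xbar = x.mean(axis=1, keepdims=True)
eps = 0.1 * rng.standard_normal((G, m))
# Equilibrium group mean implied by y = alpha + beta*ybar + gamma*x + delta*xbar + eps:
ybar = (alpha + (gamma + delta) * xbar + eps.mean(axis=1, keepdims=True)) / (1 - beta)
y = alpha + beta * ybar + gamma * x + delta * xbar + eps

peer_x = np.repeat(xbar.ravel(), m)   # group mean of x, repeated per member
peer_y = np.repeat(ybar.ravel(), m)   # group mean of y, repeated per member
print("corr(group mean x, group mean y):",
      round(float(np.corrcoef(peer_x, peer_y)[0, 1]), 4))

The two group-level regressors are almost perfectly correlated (and exactly so as group size grows), so the endogenous peer effect beta and the contextual effect delta cannot be separately identified from a regression of outcomes on own characteristics and these group means: this is the reflection problem.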


The analyses get even more complicated when the peer groups do not simply partition the population. Some individuals may be connected to many others through self-chosen friendship links, and the effect of two different peers on the same individual may be different. Economic theorists have analyzed such network settings in considerable depth (e.g., Jackson, 2008), and empirical work has demonstrated the presence of correlations in behavior associated with such networks, but our understanding of the statistics and econometrics of these models is still in its infancy. For example, the literature almost exclusively deals with exogenously formed networks, with links between individuals either present or absent rather than of varying intensity, and with little attention to the dynamics of and feedback in the network formation processes. None of these are plausible assumptions, and there is little knowledge about the sensitivity of empirical results to violations of these assumptions. Theorists have focused on the difficulty of defining useful equilibrium concepts in the context of network formation. When taking account of the changing environment, the dynamics of the equilibrium may lead to even more problems. Questions of interest for economists include the effect of encouraging interactions by facilitating opportunities to form links, and the effects of interventions in some individuals on outcomes for those connected to them.

There are a number of specific challenges in analyzing such data sets. They arise from common features of such data. They often contain information on a large number of units, as well as detailed information per unit. Especially with some of these data sets drawn from internet communities, one may have information on a very large number of individuals, followed over a period of time during which they were subject to many stimuli from outside and during which many interactions with other individuals took place. With possible dependence in behavior for many individuals in such networks, the basis for conventional large sample results is unclear even for simple statistics such as sample averages.

A general question in this area concerns the presence of data sets with many variables relative to the number of units, sometimes with even more variables than units. For example, we may have for a moderate number of individuals extremely detailed information about their behavior, including all web sites visited, all social interactions experienced, or all purchases made during visits to a supermarket, or, in the biostatistics literature, we may have detailed genetic information on a small number of individuals. Using such data to infer patterns in behavior that can inform policy questions is fundamentally different from inferring parameters of parsimonious models in large samples. Simply following the standard approach of approximating the distribution of estimators by joint normal distributions is unlikely to be generally satisfactory in such settings with many parameters. A specific example of this is the study of regression models with more potential explanatory variables than individuals. Some methods have been developed for the covariate selection problem in the statistics literature (e.g., the Lasso and related methods), but these methods have not yet found many applications in economics.
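As a small illustration of such covariate-selection methods, the following sketch fits the Lasso with four times as many candidate regressors as observations (scikit-learn is used purely as an example of available tooling; the design, the sparse coefficient vector, and the penalty level are all invented for illustration).

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 50, 200                            # fewer observations than covariates
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = (2.0, -1.5, 1.0, -1.0, 0.5)    # only five covariates truly matter
y = X @ beta + 0.5 * rng.standard_normal(n)

fit = Lasso(alpha=0.1).fit(X, y)          # L1 penalty forces a sparse solution
print("covariates selected:", np.flatnonzero(fit.coef_))
print("estimates on the true signals:", fit.coef_[:5].round(2))

The L1 penalty delivers a sparse, estimable model in a setting where ordinary least squares is not even well defined; how to choose the penalty and how to conduct honest inference after selection remain active questions.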


There are also huge computational challenges in this literature. Most of the sophisticated modeling has been done in the context of very small data sets. Even in such settings the number of possible links and networks can quickly be very large. In practice even the number of units in the networks can be very large, leading to even greater computational problems.

Research related to these questions has been conducted in multiple disciplines and is a fertile area for interdisciplinary research. Sociologists have a long tradition of studying communities and social interactions, and have contributed many substantive questions to this area. They have also collected interesting data sets, as well as some statistical methodology. Statisticians have developed methodology for spatial data, although little specifically for network data (see the Holland and Leinhardt (1981) paper and the subsequent literature). Computer scientists have focused on properties of networks emerging from various network formation processes. Specifically, they have looked at models that generate a few large connected networks rather than many disconnected groups. None of these disciplines has focused much on the type of questions economists tend to be interested in, but many have made progress on related issues.

References

Carrell, S., B. Sacerdote, and J. West (2010), "Beware of Economists Bearing Reduced Forms? An Experiment in How Not To Improve Student Outcomes," Unpublished Working Paper.


Holland, P., and S. Leinhardt (1981), "An Exponential Family of Probability Distributions for Directed Graphs," Journal of the American Statistical Association, 76(373): 33-50.

Jackson, M. (2008), Social and Economic Networks, Princeton University Press.

Manski, C. (1993), "Identification of Endogenous Social Effects: The Reflection Problem," Review of Economic Studies, 60, 531-542.


Research Opportunities in the Study of Social and Economic Networks

Matthew O. Jackson, Stanford University, September 21, 2010
White paper prepared for the NSF/SBE

Abstract: Social network patterns of interaction influence many behaviors including consumption, career choice, employment, investment, voting, hobbies, criminal activity, risk sharing, and even participation in micro-finance. Networks of relationships among firms and political organizations also impact research and development, investment decisions and market activity, international trade patterns, and political alliances. The study of how network structure influences (and is influenced by) economic activity is becoming increasingly important because it is clear that many classical models that abstract away from patterns of interaction leave certain phenomena unexplained. For example, the fact that information about jobs is largely disseminated through social networks has significant implications for patterns of wages, unemployment, and education. Beyond the many economic settings where social structure is critical, the study of social and economic networks can also benefit from an economic perspective. Tools from decision theory and game theory can offer new insight into how behavior is influenced by network structure, and can also be used to analyze network formation. In addition, network analysis provides new opportunities and challenges for econometrics and for laboratory and field experiments, and these are beginning to shed new light on the impact of social interactions ranging from favor exchange to corruption and economic development.

Our beliefs, decisions and behaviors are influenced by the people with whom we interact. Examples of the effects of social networks on economic activity are abundant and pervasive, as social interaction plays a key role in the transmission of information about jobs, new products, technologies, and political opinions. Networks also serve as channels for informal insurance and risk sharing, and influence decisions regarding education, career, hobbies, criminal activity, and even participation in micro-finance. Beyond the role of social networks in determining various economic behaviors, there are also many business and political interactions that are networked. Networks of relationships among various firms and political organizations affect research and development, investment decisions, patent activity, trade patterns, and political alliances.


Given their importance, the study of social and economic networks is expanding rapidly and naturally cuts across many disciplines, including economics, sociology, anthropology, education, political science, applied mathematics, statistical physics, and computer science. It is an exciting area not only because of the explosion of "social networking" that has emerged with the internet and other advances in communication, but also because of the fundamental role that many varieties of social networks play in shaping human activity. Social network analysis has already taught us a great deal and it holds tremendous potential for future application, especially in economics.

The study of social networks has a rich history in sociology, with a variety of detailed case studies, theories of social structure, and a perspective of social structure as symbiotic with social behavior. The sociology literature includes the seminal references on studies of opinion leaders, homophily (the tendency of similar individuals to associate with each other), strength of ties, and many other things. Nonetheless, a substantial portion of the current explosion in the study of social networks comes from expansions outside of sociology. The other disciplines have much to contribute because they bring new perspectives on applications, as well as new tools for analyzing social interactions. As an example, it was noted early on in both the sociology and economics literatures that substantial amounts of information about job opportunities often come from friends and acquaintances. Despite this observation, there was limited study of the wage and employment implications of that fact. It was only in the last decade that it has been shown that incorporating network-based models of job information provides significant new insights into patterns of unemployment, time series of wages, and persistent racial wage gaps.

Let us briefly outline in turn these two important dimensions of economic studies of networks: supplying new perspectives on the role of networks in many applications, and providing new tools for analyzing social interactions and their relation to human behavior. These are complementary aspects of the study of networks, and they suggest abundant and pressing areas for research.

The study of how network structure relates to economic activity is becoming increasingly important because many classical economic models that abstract away from patterns of interaction are unable to provide insight into certain phenomena. For instance, stylized views of markets as anonymous systems miss details that are critical in understanding many empirical patterns of trade, prices, and resulting inefficiencies. As mentioned above, the role of social networks in disseminating job information affects wages and unemployment patterns, and has implications for inefficient investment.


Other important questions of how patterns of interaction affect economic outcomes include: How does price dispersion in markets depend on network structure? How do new market technologies (e.g., the internet) change interaction patterns, the efficiency of markets, and which goods are traded? How do the patterns of liabilities among financial intermediaries relate to the potential for financial contagion? How are education and other human capital decisions influenced by social network structure? More generally, when and how are consumption and voting patterns influenced by friendships and acquaintances, and what does this imply for efficiency in decision making? How do people learn and communicate by word of mouth? Will the networks of interactions that emerge in a society be the efficient ones in terms of their implications for economic growth and development?

As an encouraging example, the recent awakening of network research in development economics has provided exciting new insights into a diversity of important questions, such as how people choose production technologies in agriculture, how they share risk and exchange favors, and how they learn about new programs and opportunities.

A second important area of the study of networks from an economic perspective derives from the fact that economic tools and reasoning are very useful in analyzing both network formation and network influence, and these tools are quite complementary to those from other disciplines. That is, even beyond the implications of network structure for economic activity and welfare, economic reasoning provides important new insights regarding how people self-organize and why certain patterns will emerge. Particularly effective economic tools come from decision theory, behavioral economics, and game theory. These sorts of reasoning can be used to predict behavior along the lines discussed above: which choices people make and how choices depend on friends' choices; and such reasoning is also very useful in analyzing network formation. For example, explicit modeling of individual choices can help us to understand homophily: why people tend to associate with other people who are similar to them along a number of dimensions. It also provides new implications for resulting behavior, including how students' study habits and human capital investment decisions depend on their peers, and how the choice of their friendships relates to these choices. It also provides new insights into why the average social distance between people is so small even in very large societies and what this implies for the spread of information.
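To give a flavor of the small-distance point, here is a minimal computation on a random graph (the networkx library and the Erdos-Renyi random graph are conveniences assumed for illustration; real social networks are of course far more structured).

import math
import networkx as nx

n, avg_degree = 2000, 10
G = nx.erdos_renyi_graph(n, avg_degree / (n - 1), seed=3)
# Restrict to the largest connected component so path lengths are defined.
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("average distance:", round(nx.average_shortest_path_length(giant), 2))
print("ln(n)/ln(avg degree):", round(math.log(n) / math.log(avg_degree), 2))

Even with 2,000 nodes and roughly ten links each, typical distances are three to four steps; average distance grows roughly like the logarithm of population size, one reason social worlds remain "small" even in very large societies.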


In addition to the modeling tools that economics can provide, new econometric and statistical techniques are needed (and are starting to emerge) to analyze network data and improve our understanding of peer effects in many areas. Empirical analyses of social interactions provide many issues for research in applied econometrics, as well as opportunities in experimental economics, both in the lab and in the field. In particular, the endogeneity of social structure leads to a pervasive problem in analyzing behavior as a function of social structure. Do friends behave similarly because of their influence on each other, or are they friends because of their similar behavior, or even because of some latent trait that correlates with their behavior? Given this, and other related issues that lead to hurdles in determining causality, it is important to base empirical analyses of social and economic networks either on careful structural models that account for endogeneity, or on laboratory, field, or natural experiments that control for that endogeneity. One very positive aspect in this regard is that recently emerging research in networks exhibits natural and healthy interactions between theory, empirics, and econometrics.

In summary, there are many important and pressing areas for the study of social and economic networks. This derives from the facts that (i) there are many instances where the network patterns of interactions are fundamental to understanding emergent economic behaviors, (ii) economic reasoning can lead to new insights regarding social interaction patterns, and (iii) the endogeneity of social interaction presents challenging hurdles in interpreting data that require the use of structural models, new statistical tools, and various field and laboratory experiments. Thus, the study of social and economic networks provides many exciting opportunities.
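The selection-versus-influence problem can be illustrated with a few lines of simulation (a deliberately extreme sketch; the matching rule and all parameter values are invented for illustration): friends are matched on a latent trait that also drives behavior, and there is no causal peer influence at all.

import numpy as np

rng = np.random.default_rng(5)
n = 20000
trait = rng.standard_normal(n)
order = np.argsort(trait)                  # homophily: pair people with similar traits
i, j = order[0::2], order[1::2]            # friend pairs
y = trait + 0.5 * rng.standard_normal(n)   # behavior depends on the trait only

slope = np.polyfit(y[j], y[i], 1)[0]       # naive regression of own on friend's behavior
print("naive 'peer effect':", round(float(slope), 2))   # far from the true value of 0

The naive regression attributes a large "effect" to friends' behavior even though none exists, which is why structural models or experimental variation in who is linked to whom are needed to separate influence from selection.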

Three References to Relevant Readings: David Easley and Jon Kleinberg (2010) Networks, Crowds, and Markets: Reasoning about a Highly Connected World. Cambridge University Press: Cambridge UK. Matthew O. Jackson (2008) Social and Economic Networks, Princeton University Press: Princeton, NJ. Stanley Wasserman and Katherine Faust (1994) Social Network Analysis: Methods and Applications, Cambridge University Press: Cambridge UK.

This work has a Creative Commons Attribution Non-Commercial Share Alike license:
http://creativecommons.org/about/licenses/


HARVARD UNIVERSITY
DEPARTMENT OF ECONOMICS

Dale W. Jorgenson
Samuel W. Morris University Professor
122 Littauer Center
Cambridge, MA 02138-3001

PHONE: (617) 495-4661


FAX: (617) 495-4660
EMAIL: djorgenson@harvard.edu
WEB: http://post.economics.harvard.edu/faculty/jorgenson

A NEW ARCHITECTURE FOR THE U.S. NATIONAL ACCOUNTS

by Dale W. Jorgenson
September 20, 2010

Introduction. The purpose of this Grand Challenge is to accelerate the development of new economic data for the resolution of policy issues involving long-term growth. Significant examples include public and private provision for retirement income and the outlook for health care expenditures and public programs to cover health care costs. The public programs for retirement income and health care are critical components of the long-term development of the federal budget. Other important examples include broadening the concept of investment to include investment in human capital through health care and education, and investment in intangibles, such as research and development.

The first question to be addressed is: why do we need a new architecture for the U.S. national accounts? In this context "architecture" refers to the conceptual framework for the national accounts. An example of such a framework is the new seven-account system employed by the Bureau of Economic Analysis (BEA).1 A second example is the United Nations System of National Accounts 2008.2 Both provide elements of a complete accounting system, including production, income and expenditure, capital formation, and wealth accounts.
1 The BEA's seven-account system is summarized by Dale W. Jorgenson and J. Steven Landefeld, "Blueprint for Expanded and Integrated U.S. Accounts: Review, Assessment, and Next Steps," in Dale W. Jorgenson, J. Steven Landefeld, and William D. Nordhaus, eds., A New Architecture for the U.S. National Accounts, Chicago: University of Chicago Press, 2006, pp. 13-112. An electronic version of the Jorgenson and Landefeld chapter is also available.
2 United Nations, Commission of the European Communities, International Monetary Fund, Organisation for Economic Co-operation and Development, and the World Bank, 2009. System of National Accounts 2008. ST/ESA/STAT/SER.F/2/Rev.5, New York: United Nations.


The purpose of such a framework is to provide a strategy for developing the national accounts. The U.S. national accounts were originally constructed to deal with issues arising from the Great Depression of the 1930s, focusing on the current state of the economy. The basic architecture of the national accounts has not been substantially altered in fifty years. Recovery from the economic crisis of 2007-2009 has shifted the policy focus from economic stabilization to enhancing the U.S. economy's growth potential. In addition, the economy is confronted with new challenges arising from rapid changes in technology and globalization. Meeting these challenges will require a new architecture for the U.S. national accounts.

New Architecture. The key elements of the new architecture are outlined in a "Blueprint for Expanded and Integrated U.S. Accounts," by Jorgenson and Landefeld.3 They present a prototype system that integrates the national income and product accounts with productivity statistics generated by BLS and balance sheets produced by the Federal Reserve Board. The system features GDP, as do the National Income and Product Accounts; however, GDP and domestic income are generated along with productivity estimates in an internally consistent way. The balance sheet covers the U.S. economy as a whole and fills a gap in the existing Flow of Funds Accounts.

The prototype system of accounts developed by Jorgenson and Landefeld incorporates the cost of capital and the flow of capital services for all productive assets employed in the U.S. economy. This provides a unifying methodology for integrating the National Income and Product Accounts generated by BEA and the productivity statistics constructed by BLS. The parallel flow of labor services is broken down by age, sex, education, and class of employment. Hours worked for each category of labor services are weighted by total labor compensation per hour worked. The underlying source data on employment, hours worked, and labor compensation include public use data for individuals from the decennial Censuses of Population and the monthly Current Population Surveys generated by the Bureau of the Census.

The production account for the prototype system of accounts is based on gross domestic product (GDP) and gross domestic income (GDI) in current and constant prices. This production account has been disaggregated to the level of individual industries by Jorgenson, Ho, and Samuels (2010), "New Data on U.S. Productivity Growth by Industry."4 The methodology follows that of Jorgenson, Ho, and Stiroh (2005), Information Technology and the American Growth Resurgence, and conforms to the international standards presented in the OECD Productivity Manual (2001).5 The European Union (EU) has recently completed a project to develop systems of production accounts based on this methodology for the economies of all EU member states.6 This has been expanded to an initiative involving more than forty countries on all six continents.7
3 See Jorgenson and Landefeld, op. cit.
4 http://www.worldklems.net/conferences/worldklems2010_jorgenson.pdf
5 See Paul Schreyer, 2001. Productivity Manual: A Guide to the Measurement of Industry-Level and Aggregate Productivity Growth. Paris: Organisation for Economic Co-operation and Development, May.
6 For details on the EU project, see: www.euklems.net/.
7 http://www.worldklems.net/
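The aggregation of heterogeneous labor described above can be illustrated with a toy Törnqvist-style calculation (a sketch of the standard index-number approach used in this literature; the two worker categories, hours, and wages are invented for illustration).

import math

# Hours worked and hourly compensation for two categories of labor in two periods.
hours0 = {"high-school": 100.0, "college": 50.0}
hours1 = {"high-school": 95.0, "college": 60.0}
wage = {"high-school": 15.0, "college": 30.0}   # held fixed across periods here

def comp_shares(hours):
    total = sum(hours[k] * wage[k] for k in hours)
    return {k: hours[k] * wage[k] / total for k in hours}

s0, s1 = comp_shares(hours0), comp_shares(hours1)
# Tornqvist index: average-share-weighted growth of hours, category by category.
growth = sum(0.5 * (s0[k] + s1[k]) * math.log(hours1[k] / hours0[k]) for k in hours0)
raw = math.log(sum(hours1.values()) / sum(hours0.values()))
print(f"labor input growth (Tornqvist): {growth:.3%}")
print(f"raw hours growth:               {raw:.3%}")

The gap between the two growth rates is the composition, or "labor quality," effect: hours shift toward higher-paid college workers, so labor input grows faster than raw hours, which is exactly what weighting each category by its compensation share is designed to capture.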


The prototype system of Jorgenson and Landefeld begins with the NIPAs and generates the income and product accounts in constant prices as well as current prices. An important advantage of beginning with the NIPAs is that the impact of globalization on the U.S. economy is reflected in BEA's system of international accounts. This system includes the Foreign Transactions Current Account, which records imports and exports, as well as receipts from the Rest of the World, payments to the Rest of the World, and the Balance on Current Account. The international accounts also include the Foreign Transactions Capital Account, which registers Net Lending and Borrowing from the United States to the Rest of the World. BEA's international accounts are undergoing substantial improvements intended to enhance the quality of information available to policy makers dealing with globalization.8

Another important advantage of beginning with the NIPAs is that the existing U.S. national accounts could be incorporated without modification. Improvements in the NIPAs could be added as they become available. For example, the BEA is currently engaged in a major program to improve the existing system of industry accounts. This program integrates the NIPAs with the Annual Input-Output Accounts and the Benchmark Input-Output Accounts produced every five years. Improvements in the source data are an important component of this program, especially in measuring the output and intermediate inputs of services. The Census Bureau has generated important new source data on intermediate inputs of services, and BLS has devoted a major effort to improving the service price data essential for measuring output.9

Next Steps. The next step in unifying the National Income and Product Accounts with the productivity statistics is to develop a more detailed version of the production account. This would incorporate BEA's new system of official statistics on output, intermediate input, employment, investment, fixed assets, and imports and exports by industry. The system of industry production accounts would use the North American Industry Classification System (NAICS) employed in BEA's official statistics. The accounts would include capital and labor inputs for each industry, based on the methodology of Jorgenson, Ho, and Stiroh (2005). Industry outputs, as well as intermediate, capital, and labor inputs, would be presented in current and constant prices along with productivity, as in Jorgenson, Ho, and Samuels (2010).

The next step in integrating the NIPAs with the Flow of Funds Accounts would be to extend the national balance sheet for the U.S. economy generated by Jorgenson and Landefeld to incorporate balance sheets for the individual sectors identified in the Flow of Funds Accounts. The Integrated Macroeconomic Accounts for the U.S. produced by Teplin et al. have focused on the income and expenditure accounts, rather than balance sheets and the wealth accounts.10

8 See, for example, Ralph Kozlow, "Globalization, Offshoring, and Multinational Companies: What Are the Questions and How Well Are We Doing at Answering Them?," BEA Working Paper, January 6, 2006.
9 See the Panel Remarks by Thomas L. Mesenbourg of the Census Bureau and Kathleen P. Utgoff of BLS in Jorgenson, Landefeld, and Nordhaus, op. cit., pp. 611-625.
10 BEA national income and FRB flow of funds data on income and expenditure are combined by Albert M. Teplin, Rochelle Antoniewicz, Susan Hume McIntosh, Michael G. Palumbo, Genevieve Solomon, Charles Ian Mead, Karin Moses, and Brent Moulton, "Integrated Macroeconomic Accounts for the United States:


A comprehensive wealth account for the U.S. economy is currently unavailable. Such an account is essential for measuring the accumulation of wealth to meet future financial needs for both the public and private sectors, as well as for assessing the levels of domestic and national saving and their composition. The new architecture project would involve close collaboration with the statistical agencies.

The final question to be addressed is: why not leave this as a Grand Challenge to the statistical agencies? The answer is that no agency has responsibility for producing a new architecture for the national accounts. Each of the agencies has a well-defined scope of activities supported through the federal budget. These activities have been developed over many decades of experience of operating within the decentralized U.S. statistical system. The existing architecture of the U.S. national accounts was developed through collaboration between the statistical agencies and intellectual leaders in the private sector such as Simon Kuznets and Wassily Leontief, but this architecture has important gaps and inconsistencies and is now in need of major updating and extension. The initial steps described above were carried out through collaborations among the agencies and between private and public sector investigators.

Future Research. The creation of a new architecture for the U.S. national accounts will open new opportunities for development of our federal statistical system. The boundaries of the U.S. national accounts are defined by the market and near-market activities included in the gross domestic product. An example of a market-based activity is the rental of residential housing, while a near-market activity is the rental equivalent for owner-occupied housing. The new architecture project is not limited to these boundaries. Under the auspices of the National Research Council, the Committee on National Statistics has outlined a program for development of non-market accounts, covering areas such as health, education, household production, and the environment. 11

An example of future opportunities for development of federal statistics is the integration of rental values for housing, the asset value of the housing stock, and the level of investment in residential structures. All three have been the focus of intense media attention during the recent housing boom and bust, in part because of the importance of housing as a component of national wealth. Investment in housing also involves important long-term policy issues, such as the impact of federally subsidized mortgages, the effect of tax incentives for housing through income tax deductions for mortgage interest and state and local property taxes, and the role of investment in public housing. The value of the housing stock includes the value of residential structures, as well as the value of residential land. The value of land is included in the national wealth, but not in BEA's accounts for reproducible assets.

New accounts for health and education could make use of new data sources, such as the American Time Use Survey (ATUS), recently instituted by the Bureau of Labor Statistics. 12
11 The NRC report is summarized by Katharine G. Abraham and Christopher Mackie, "A Framework for Nonmarket Accounting," in Jorgenson and Landefeld, op. cit., pp. 161-192. The conceptual framework for non-market accounts is presented by Nordhaus, "Principles of National Accounting for Nonmarket Accounts," in Jorgenson and Landefeld, op. cit., pp. 143-160.


The ATUS provides detailed accounts of time use for the U.S. population. Jorgenson and Barbara M. Fraumeni have provided estimates of investment in human capital, including education. 13 An important part of investment in education is the value of time spent by students enrolled in educational programs. Since this time is not evaluated in the labor market, the value of investment in education is outside the boundary of the national accounts, but could be included in non-market accounts. The Jorgenson-Fraumeni estimates of education incorporate a detailed system of demographic accounts for the U.S. population, based on the work of Land and McMillen. 14 This includes a breakdown of the population by age, sex, education, and labor force status. Employed members of the labor force are included in the labor database that underlies the prototype system of accounts developed by Jorgenson and Landefeld. Time spent in labor market activities is also included in the labor database. Time spent in non-market activities, such as education, is included in the database employed by Jorgenson and Fraumeni. BEA has recently undertaken a project to update the Jorgenson-Fraumeni estimates of investment in education as part of a program to measure the output of public educational institutions.

The availability of data on time use would also facilitate the implementation of measures of well-being that incorporate social and psychological dimensions, as well as the economic dimension captured by the measure of income in constant prices employed by Jorgenson and Landefeld, following Paul Samuelson, William Nordhaus and James Tobin, and Martin Weitzman. 15 For example, a System of National Well-Being Accounts has been proposed by Daniel Kahneman and Alan Krueger. 16 This is based on the Day Reconstruction Method, in which time use is associated with domain-specific satisfaction. Measures of satisfaction can be compared over time and among groups of individuals to measure levels of well-being and their evolution over time.

Finally, the World KLEMS project is now generating industry-level production accounts, like those described above for the U.S., for the economies of EU members and fifteen other major U.S. trading partners such as Brazil, China, India, Japan, and Korea. These data will greatly facilitate international comparisons and research into the impact of globalization on the major industrialized economies and the future impact of globalization on the U.S. economy.
12 See the BLS website for details about ATUS: www.bls.gov/tus/.
13 See Dale W. Jorgenson, 1996. Postwar U.S. Economic Growth, Cambridge, The MIT Press. An overview of issues in measuring investment in education is presented by Katharine G. Abraham, "Accounting for Investments in Formal Education." The estimates of Jorgenson and Fraumeni have been updated by Michael S. Christian, "Human Capital Accounting in the United States: 1994 to 2006."
14 See Kenneth C. Land and Marilyn M. McMillen. 1981. "Demographic Accounts and the Study of Social Change, with Applications to Post-World War II United States." In F. Thomas Juster and Kenneth C. Land, eds., Social Accounting Systems. New York, Academic Press, pp. 242-306.
15 See Paul A. Samuelson, 1961. "The Evaluation of Social Income." In Friedrich A. Lutz and Douglass C. Hague, The Theory of Capital, London, Macmillan, pp. 32-57. William D. Nordhaus and James Tobin, 1973. "Is Growth Obsolete?" In Milton Moss, ed., The Measurement of Economic and Social Performance, New York, Columbia University Press, pp. 509-532. Martin Weitzman, 2003. Income, Wealth, and the Maximum Principle. Cambridge, Harvard University Press.
16 See Alan B. Krueger, ed., Measuring the Subjective Well-Being of Nations: National Accounts of Time Use and Well-Being, Chicago, University of Chicago Press, 2009.



October 8, 2010
This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Measurement and Experimentation in the Social Sciences


Arie Kapteyn RAND Abstract I propose to build an advanced data collection environment for the social sciences that maximizes opportunities for innovation, and is fast, cost effective, and easy for everyone in the scientific community to use. The core of this laboratory is a representative panel of households in the United States who have agreed to be available for regular interviews over the Internet. The Internet panel is representative in the sense that respondent recruitment is based on a probability sample. Internet access will not be a prerequisite for participation in the panel. If a respondent does not have Internet access at the time of recruitment into the panel, he or she will be provided with a laptop and broadband access. The laboratory will incorporate and pioneer new forms of data collection including, but not limited to, smartphones, self-administered measurement devices for the collection of biomarkers, experience sampling, web cameras, Global Positioning System (GPS) devices, accelerometers to measure physical activity, and eye tracking equipment.

The challenge of measurement


Social scientists use many sources of information to construct their models of human behavior in a social and societal context. These sources include introspection, participatory observation, surveys, physical measurements and biomarkers, administrative data, laboratory experiments, field experiments, and natural experiments.1 Every source of information has strengths and weaknesses. Which source is used may depend on the research question at hand, but in many instances it also reflects the personal preferences and skills of researchers and the size of research budgets. More importantly, measurement is typically limited to one domain, or at most a few. Thus numerous studies are conducted, each addressing one of a variety of domains of human life but mostly ignoring the relationships with other domains. This situation can partly be attributed to budgetary limitations: after all, who has money to study everything? Yet in fact, resources are wasted, because different studies often collect overlapping information while missing opportunities to capture a more complete understanding of human behavior in its various aspects.
1 This list is not an exhaustive typology, and several finer distinctions can be made, e.g. between experiments with real stakes and hypothetical experiments, panel surveys and cross-sectional surveys, etc.


In short, the empirical social sciences tend to be fragmented, not only because of disciplinary differences in approach and the communication issues associated with these differences, but also because the data themselves are fragmented. Admirable exceptions exist: the US Health and Retirement Study (HRS, http://hrsonline.isr.umich.edu/) is the most prominent example.2 Since 1992, the HRS has collected information biennially on individuals 50 and older about income, work, assets, pension plans, health insurance, disability, physical health and functioning, cognitive functioning, and health care expenditures. Additional off-year instruments include the Consumption and Activities Mail-Out survey (CAMS) and one-off surveys on topics such as Medicare Part D and time use. Recently (since 2006), HRS has started to collect biomarkers such as grip strength, breathing tests, saliva (to extract DNA), dried blood spots (for Hemoglobin A1c, total cholesterol and HDL cholesterol), etc. In addition, administrative data (e.g. social security earnings records) have been added, subject to consent by respondents. The HRS is such a successful scientific model that it has been reproduced in some 20 countries (England, several continental European countries, South Korea, Mexico, China, and India), with largely similar set-ups and comparable questionnaires. Despite the collection of additional information in off years and the growing breadth of information that is being collected (like biomarkers), there are obvious limitations to the amount of information that can be collected in any single domain and to the number of experiments one can do. Thus although the HRS has been revolutionary in its multidisciplinary approach and in its continuous incorporation of innovations, many factors continue to limit what we can learn (including the age requirement for respondents).

What would be a next step?


Imagine a survey like the HRS that would allow researchers to recontact study participants at any time, include all ages, combine conventional surveys with physical measures and biomarkers, use modern technology to monitor behavior, and allow for data collection across a broad swath of domains and over an extended time period. Naturally we would also want to link administrative data to individual records (subject to respondents' consent and with adequate data protection safeguards). This kind of survey would allow researchers across multiple disciplines to consider all relevant domains for empirical analysis, to quickly monitor the effects of major events (e.g. the financial crisis or the swine flu pandemic), and to design experiments that take advantage of a wealth of readily available background information. The format would also support in-depth studies on subsamples (e.g., conducting functional Magnetic Resonance Imaging experiments or qualitative interviews with a subset of study participants).

2 This is not to say that other surveys don't cover material from different disciplines. For instance, the PSID, which started primarily as a socio-economic panel, has added content over the years and now has a substantial health component.



What is possible today?


My proposal is not to build something totally new with unproven technology, but rather to build on what has been proven to work, using technology that currently exists or is right around the corner. I propose to build a virtual laboratory: an advanced data collection environment for the social sciences that maximizes opportunities for innovation, and that is fast, cost effective, and easy for everyone in the scientific community to use. The core of this laboratory is a representative panel of households who have agreed to be available for regular interviews over the Internet. The Internet panel is population representative in the sense that respondent recruitment is based on a probability sample. Internet access is not a prerequisite for panel participation: if a respondent lacks Internet access at the time of recruitment, he or she is provided with a laptop and broadband. The virtual laboratory will develop and test new modes of data collection and the collection of new types of data, including, but not limited to, self-administered measurement devices for the collection of biomarkers (e.g., infrared blood sugar monitors), web cams, accelerometers and heart rate monitors for measuring physical activity and physiological responses, devices for experience sampling, Day Reconstruction Methods, intensive methods to increase both unit and item response, preloading, and data quality checks. Current technology allows respondents to participate in surveys using their preferred hardware device, such as a netbook, desktop computer, iPhone or other smartphone, or any other device like the browser on a game console.

It is neither possible nor useful to describe the many kinds of data that might be collected. The idea is that the laboratory we propose will be able to follow new technological and scientific developments, without committing to one particular technology ex ante. A few examples will illustrate the point.

Example 1, GPS tracking: With more and more cell phones equipped with GPS, GPS tracking is becoming more sophisticated and yet more affordable. Software is now available or can be easily developed to track a respondent's GPS-enabled cell phone from the web and combine it with real-time location-based information. This combination would have many possible applications, such as allowing researchers to initiate a small survey by text messaging questions to a respondent when s/he leaves the gym or visits a tax consultancy office.

Example 2, Eye tracking: In recent years, the increased sophistication and accessibility of eye tracking technologies have generated a great deal of interest in the commercial sector. Most applications focus on web or software usability, presenting a target stimulus to a sample of consumers while an eye tracker is used to record the activity of the eye. By examining fixations, saccades, pupil dilation, blinks, and a variety of other behaviors, researchers can determine a great deal about the effectiveness of the web or software interface. This technology can be easily adopted and used to test alternative interviewing techniques and to examine respondent behavior during an interview, e.g. to gauge which information on a screen is actually taken into account when answering a question.


Example 3, Accelerometers to measure physical activity: An accelerometer is a device that measures proper acceleration, the acceleration experienced relative to free fall. Accelerometers are increasingly being incorporated into personal electronic devices like the iPhone and allow researchers access to objective measurements of physical activity. These measurements can be retrieved in real time or can be uploaded to a central location when the respondent has access to a computer, allowing for follow-up questions based on the measured activity.

Example 4, Telemetry: Telemetry is a technology that allows the remote measurement and reporting of information of interest to a central location for further analysis. Thus it can be used to link the output of all these new technologies.

Example 5, Integrating survey information with social network information: Having access to social networking sites like Facebook (only with a respondent's permission, of course) provides researchers with ample information about a respondent without actually asking questions. This technology can help reduce the respondent burden, gives the respondent more flexibility and a familiar interface, and allows for consistency checks based on the data retrieved from the social networking site.
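To make Example 1 concrete, here is a minimal Python sketch of a geofence-triggered survey; the coordinates, radius, and the send_sms callback are hypothetical placeholders rather than any existing system's API:

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two GPS fixes."""
        r = 6371000  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    GYM = (34.0522, -118.2437)  # tagged location of interest (hypothetical)
    RADIUS_M = 150              # geofence radius in meters (assumption)

    def on_gps_fix(lat, lon, was_inside, send_sms):
        """Process one incoming GPS fix; fire a short survey on geofence exit."""
        inside = haversine_m(lat, lon, *GYM) <= RADIUS_M
        if was_inside and not inside:
            send_sms("Quick survey: how was your visit to the gym? Reply 1-5.")
        return inside

In a full system the send_sms callback would hand the question off to whatever messaging service the panel uses; the sketch only shows the exit-event logic.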

What exists today?


MESS

An existing facility that comes closest to what I am proposing is the MESS project in The Netherlands (http://www.centerdata.nl/en/TopMenu/Projecten/MESS/index.html).3 The core element of the MESS project, currently about mid-way through its first seven years of funding, is the so-called LISS panel (Longitudinal Internet Studies for the Social Sciences). The LISS panel consists of approximately 5,000 households representing the Dutch-speaking population in the Netherlands. The panel is based on a probability sample drawn from the population registers. Households without prior Internet access are provided with broadband access (and a PC) to participate. The LISS panel has been fully operational since early 2008 and has now collected two years of data. Annual interview time is about 300 minutes: panel members complete relatively brief (30-minute) online questionnaires monthly and are paid for each completed questionnaire. Half of the interview time is reserved for the LISS core study. This core study is repeated yearly (spread out over several months) and borrows from various national and international surveys to facilitate comparison with other data sources. The core survey covers a much broader range of topics and approaches than would be possible with other surveys using more traditional interview methods. The remaining interview time is used for experiments and innovation: respondents can complete online questionnaires at any time during the month.

3 For the sake of full disclosure, I am one of the principal investigators of the MESS project. I am also the director of the American Life Panel, discussed below.


The application and review procedures for experiments are similar to those of TESS (see below), but there is no a priori restriction on the size or duration of the experiment that one can propose. In the first two years, about 40 proposals for experiments were accepted.

TESS

TESS (Time Sharing Experiments for the Social Sciences) is somewhat similar to the MESS project in its use of a standing Internet panel, Knowledge Networks. The panel is available at no charge to researchers who complete an application. The TESS website lists about 125 papers based on experiments conducted with the panel between 2003 and 2008. MESS and TESS do have some notable differences: TESS does not collect much core information about the panel members, except for basic demographics, and the number of items in a questionnaire as well as sample sizes are strictly limited (essentially, the more items, the smaller the sample size). Also, unlike LISS (MESS), TESS considers proposals only for experiments, not for regular surveys. Nevertheless, TESS services are clearly in demand.

American Life Panel

The RAND American Life Panel (ALP) is similar to the Knowledge Networks panel and the LISS panel in its reliance on a probability-based sample and its ability to include respondents without prior Internet access by offering a laptop and Internet subscription. The panel currently includes approximately 3000 US households, with firm plans to increase the number to 5000 (including a Spanish-language subpanel). Since 2007, some 120 experiments or surveys have been conducted. The HRS survey instrument has been programmed for the ALP and administered to the ALP respondents, so the full HRS core information on all panel members is available. Use of the panel for surveys or experiments is open to all researchers, but is not free.4 The ALP is used intensively (approximately three surveys or experiments per month), and one might worry about survey fatigue and hence increased attrition. In fact, annual attrition is between 5 and 6%. The low attrition rate may be partly due to the relatively generous incentives offered to respondents ($20 per half hour of interview time). Occasional comparisons with other surveys about similar topics show broad consistency. Data are disseminated through a website that allows free download of datasets and the construction of a custom-made dataset by combining variables from different waves and putting them in a shopping cart. One can download data at any time during or after the field period. (https://mmicdata.rand.org/alp/index.php/Main_Page)

4 A substantial part of the experiments and data collection is supported by grants from the National Institute on Aging. Other major funders are the Social Security Administration and several non-profit institutions. The pricing is $3 per interview minute for the first 500 respondents, $2.50 per interview minute for the next 500, and $2 per interview minute beyond 1,000 respondents.
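As a worked example of the tiered pricing just quoted, the following small Python sketch (the function name and the example survey size are mine, for illustration only) computes the fieldwork cost of a hypothetical module:

    def alp_cost(minutes, respondents):
        """Fieldwork cost in dollars under the quoted tiered pricing."""
        tier1 = min(respondents, 500)                # first 500 at $3.00/min
        tier2 = min(max(respondents - 500, 0), 500)  # next 500 at $2.50/min
        tier3 = max(respondents - 1000, 0)           # remainder at $2.00/min
        return minutes * (3.00 * tier1 + 2.50 * tier2 + 2.00 * tier3)

    # A 30-minute module fielded to all 3,000 current panel members:
    print(alp_cost(30, 3000))  # 202500.0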



Conclusion
A laboratory like the one proposed here will both dramatically expand opportunities for social science research and be highly cost-effective. The technology exists; we only have to put it together.


Implications of the Financial Crisis for the Grand Challenge Questions for the NSF/SBE

Randall S. Kroszner
October 2010

The recent crisis has highlighted areas and questions that would be extremely valuable to investigate in greater detail. I will touch on only a few of these topics in this limited space and will purposely range widely rather than try to develop each subset of ideas in depth.

The Role of Economic History and Comparative Economics: Perhaps the single most important piece of economic research that provided guidance to Federal Reserve Board members during the crisis was Milton Friedman and Anna Schwartz's A Monetary History of the United States, especially the sections related to the Great Contraction. In a crisis, policy-makers must act quickly and on limited information. Although the banking and financial markets have changed dramatically since the 1930s, the Friedman and Schwartz book provides a detailed exposition of a previous crisis and an analysis of the policy interventions, or lack thereof, that mitigated or exacerbated it. This style of analysis, motivated by economic theory but heavily focused on institutions and data to provide a broad, coherent view of how to think about and respond to a crisis, may not be fashionable today, but it can be extremely powerful. Understanding past crises, both domestic and international, is crucial to the advancement of macroeconomic and finance theories. Comparative and historical analyses are also fundamental to capacity building. Students will gain an enormous amount by having a greater appreciation for what has happened, and history can provide valuable examples, possibly even natural experiments, to advance our understanding of the fragilities of the macroeconomy and financial system. Gathering more systematic historical and comparative data sets on asset prices and the structure of markets is a necessary part of this capacity building.

The Expectations Formation Process and Learning: The process through which key economic agents develop and modify expectations is not well understood. There is much evidence, for example, that explicit inflation targets are correlated with lower and less volatile inflation. The usual explanation is that the articulation of a target leads expectations to be better anchored and thereby allows a central bank to achieve its goals more easily. While this may be true, we understand little about the learning process itself: what types of actions and/or communications cause economic actors to update their beliefs and to change their behavior? Once beliefs become anchored, in which circumstances can they become unanchored? Precisely how does this affect their behavior, as price-setters and price-takers? Work in psychology and sociology could be helpful in developing richer data sets and theories of learning and updating behavior. Such research would be extremely valuable in helping central banks to decide on the most effective communication strategy and in shaping how to respond to a crisis.


Similarly, we understand little about the process by which expectations develop and change in asset markets. In a wonderful paper titled "Noise," Fischer Black emphasized that the most we can hope for in pricing and valuing many classes of assets in markets is to be within two standard deviations of what our models might suggest are the fundamental values. In many cases, those standard deviations can be quite wide, so large movements in asset prices could be a normal part of the financial markets. Add to that the updating of beliefs about appropriate discount rates, future cash flows, etc., and the result could be large swings in market prices. Such volatility could in turn affect saving behavior and macroeconomic outcomes. Building a greater understanding of the formation and change of expectations at the individual level, and of the implications for market-wide and economy-wide behavior, perhaps by drawing on experimental methods and theories in other disciplines, is a great challenge for economics but one with potentially high pay-offs.

Interaction between and Effectiveness of Monetary and Fiscal Policy: The crisis has underscored the gaps in our knowledge of the effectiveness of, and interaction between, different aspects of government policy in mitigating (or exacerbating) economic and financial volatility. On the fiscal side, we have a paucity of systematic empirical research on fiscal multipliers: what types of spending seem to be most/least effective, and what types of tax changes seem to have the largest/smallest impacts in the short run? Much of the current debate has focused on the size of fiscal actions rather than on what types of changes in taxes and spending have the greatest impact on behavior and incentives in the short and intermediate/long runs. Historical and comparative data sets once again can be extremely helpful here in building capacity. Also, the interaction effects between fiscal and monetary policy have not been fleshed out. Can monetary policy simply accommodate fiscal policy changes and offset them? Is this symmetric; that is, if tighter monetary policy can offset looser fiscal policy in an expansion, is it also true that looser monetary policy can be effective in offsetting tighter fiscal policy in a downturn? Studying past combinations of monetary and fiscal actions, such as the tightening that occurred in the late 1930s in the US that led to a form of double dip, would be valuable. It will also be an important part of further capacity building to develop our theoretical understanding of monetary policy effectiveness when the interest rate falls towards the zero lower bound.

Interconnectedness and Too Big/Too Interconnected to Fail: The interconnectedness of financial markets and institutions, and the implications for macroeconomic outcomes, is another grand challenge for economics (see Kroszner 2010). Moral hazard problems arise from the existence of any private insurance or public safety-net scheme. Studying mechanisms to mitigate excess risk-taking behavior is a longstanding endeavor, but one that certainly needs increased attention going forward. In particular, there is relatively little systematic evidence on the size and types of distortions that arise from implicit or explicit safety nets. For example, does the potential for moral hazard distort incentives in financial innovation towards products with difficult-to-measure tail risks?
In addition, how does the interconnectedness of markets and institutions make the system more fragile, and does "too interconnected to fail" make this problem worse?


Improving the dialogue between financial economics and macroeconomics to better understand the sources of fragilities, propagation mechanisms, and macroeconomic implications of moral hazard in markets and institutions would be valuable. Making students aware of these interactions and unanswered questions, and encouraging the recruitment of faculty who do not easily fit into a macro or finance slot but can teach and research across these fields, would provide important capacity building.

References:

Black, Fischer. (1986) "Noise," Journal of Finance, Presidential Address to the American Finance Association.

Friedman, Milton and Anna Schwartz. (1963) A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press.

Kroszner, Randall. (2010) "Interconnectedness, Fragility, and the Financial Crisis," presented to the Financial Crisis Inquiry Commission, February, http://www.fcic.gov/hearings/pdfs/2010-0226-Kroszner.pdf

This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.



Virtual Model Validation for Economics


David K. Levine, www.dklevine.com, September 9, 2010. White Paper prepared for the National Science Foundation, released under a Creative Commons

Attribution-NonCommercial-ShareAlike license.

Abstract: How can economic policies lead us to greater wealth, welfare and happiness?
There is no bigger question in economics. The answer lies in correct economic theories that capture the causality linking policies to outcomes. Economic theories are a dime a dozen; we have more theories than we have human beings. The key to answering any economic question lies in our ability to validate theories. Do we live in an Austrian world? In a Keynesian world? A world of rational expectations? This White Paper proposes that major advances in simulating virtual economies are possible and would form the basis for rapid and accurate assessment of current and future economic models. I make general proposals for developing infrastructure, as well as presenting specific ideas about the nature of the models of sophisticated expectations that are needed to allow artificial agents to mimic the behavior of real human beings.

One of the most essential needs in developing better economic theories and policy prescriptions is improved methods of validating theories. Originally economics depended largely on field data, usually gathered from large or small scale surveys. The introduction of laboratory experiments added a new dimension to validation: a good theory ought to be able to predict outcomes in the artificial world of the laboratory. Modern economics has extended this in two directions: to field experiments, keeping many of the controls of laboratory experiments while conducting experiments in more natural environments, and to internet experiments, extending the size and scope of the populations used in experiments. The importance of these innovations is great, and they have been discussed in depth by List, among others. Not only is it easier, faster, and more practical to validate theories, reducing the time needed to improve and develop new theories, but through the greater control possible in the experimental setting, issues of causality that are difficult to analyze with field data can be addressed. On the other hand, laboratory, field and internet experiments all have important limitations. Even the largest internet experiment is orders of magnitude smaller than a small real economy: thousands of subjects rather than millions of real decision makers. Experiments are faster than waiting for new data to arrive, but are still time consuming, the more so with the National Institutes of Health trying to apply inappropriate medical ethics to harmless economics experiments. Subjects are expensive to pay, especially in large scale experiments. Finally, control in experiments is still and necessarily imperfect. In particular, it is not possible to control for either risk aversion or social preferences.

An alternative method of validating theories is through the use of entirely artificial economies. To give an example, imagine a virtual world (something like Second Life, say) populated by virtual robots designed to mimic human behavior. A good theory ought to be able to predict outcomes in such a virtual world. Moreover, such an environment would offer enormous advantages: complete control, for example over risk aversion and social preferences; independence from well-meant but irrelevant human subjects protections; and great speed in creating economies and validating theories.

If we were to look at the physical sciences, we would see the large computer models used in testing nuclear weapons as a possible analogy. In the economic setting, the great advantage of such artificial economies is the ability to deal with heterogeneity, with small frictions, and with expectations that are backward looking rather than determined in equilibrium. These are difficult or impractical to include in existing calibrations or Monte Carlo simulations.

The notion of virtual economies is not new: the general concept has become known as agent-based modeling. Yet, despite three decades of effort, agent-based models are largely limited to studying phenomena such as traffic patterns. In economics, the most influential work has been that of Nelson and Winter examining the evolution of growth and change. Yet this work has not had a substantial impact on our understanding of economics. The problematic aspect of agent-based modeling has been the focus on frameworks for agents interacting (the development of languages such as SWARM or Cybele) and the fact that agents are limited to following simple heuristic decision rules. Agent-based models are interesting from the perspective of modeling order arising from the interaction of many simple decision rules, along the lines of Becker's observation that demand curves slope downwards if people choose randomly along the budget line. These models are also useful in constructing examples to illustrate special points. However, existing agent-based models are too primitive to be used either for evaluating economic policies or for validating economic theories.

Although some behavioralists argue that people are very simple-minded and follow simple heuristic rules, the practical problem faced by the virtual methodology is that people are far better learners and vastly more sophisticated than the best existing computer models. Simple rules are not a good representation, for example, of how stock market traders operate. What is needed are agents who use sophisticated algorithms, along with the study of economies that are of interest to economists and policy makers. Existing agent-based models focus on the simple evolution of rules; real people in the laboratory and the field are able to recognize sophisticated patterns and anticipate future events. One of the simplest examples is the learning that takes place in the laboratory when subjects discover the idea of dominated strategies.

The key to developing useful virtual economies is modeling inferences about causality. A useful place to start thinking about the issues is with Sargent's The Conquest of American Inflation and the follow-on papers with Cogley. There the Federal Reserve is modeled as a sophisticated Bayesian learner, equipped with powerful econometric methods and sophisticated intertemporal preferences but limited to the data on hand. Dynamic Bayesian optimization, including the use of policy experiments, enables the Fed to learn the true relationship between unemployment and inflation, leading over time to superior monetary policy. The model is validated against the last 50 years of data on monetary policy, inflation and unemployment.

Notice that in the Sargent-Cogley world, the decision problem is relatively narrowly circumscribed: how best to choose the rate of monetary expansion. The incoming data is also narrowly circumscribed, and issues such as learning by analogy do not arise. Moreover, they assume that one of the underlying models is correct; in an environment where none of the underlying models is correct, Bayesian methods are not so useful.

A useful framework for thinking about this problem is the computer science problem that underlies boosting: the choice among experts. A carefully chosen randomization strategy giving greater weight to experts with better track records can do as well asymptotically as the best expert; this is true even when all the experts are wrong. (A minimal sketch of such a weighting rule appears just before the references below.) The framework can be extended to dynamic decision making by putting time into blocks, the technique often used in analyzing repeated games. If the block is long enough, the payoff is approximately the same as the infinite present value. While this may be a useful benchmark for learning about causality, it is a weak criterion. First, blocking periods means that the length of time taken to learn is enormous. While the evaluation of dynamic plans requires that those plans be maintained for some period of time, there is little point in sticking with an expert when it is clear that he is doing a poor job. Second, causality between periods is ignored. To take a simple example, imagine a repeated Prisoner's Dilemma game where your opponent plays tit-for-tat, starting by not cooperating. An expert who says your opponent will always cheat will lead you to cheat, and his forecasts will be correct. Of course an expert who says you should always cooperate and your opponent will cooperate after the first period is equally correct, and you will do much better following his advice.

If we accept the basic framework of replacing a prior over models with a probability of choice over experts, where an expert is a tractable rule for making forecasts and recommendations, it is possible to outline the issues that need to be resolved. Experts make recommendations that can be evaluated directly; the weak criterion for asymptotic success has already been described. They also provide suggestions of evidence that demonstrate their ability as experts. That evidence needs to be assessed on several dimensions:

1. Calibration: how accurate are the predictions?

2. Precision: are the predictions vague or are they sharp? Does the expert always say it might rain or shine with equal probability, or does he say half the time it will rain for sure and half the time it will shine for sure? The latter prediction is more precise.

Calibration and precision are the traditional criteria for model evaluation. In the economic setting there are additional considerations.

3. Relevance: A molecular biologist may be able to make very accurate forecasts about the formation of molecules, but why should that lead me to take his investment advice? Notice that there is surely heterogeneity among people in evaluating the relevance of forecasts: some will believe that a good molecular biologist can better forecast stock prices than a bad one; others will be more focused on their records as stock forecasters. Utility is directly related to relevance: two experts may both recommend that I not jump off a bridge. One may simply say that if I jump I will die, while the other may provide a more detailed and accurate evaluation of the consequences: the speed at which I will hit the water, how far my body parts will be flung, and so forth. But this additional information is of no use in decision making. Detailed information about inferior plans is not especially useful.


4. Scope: experts differ in the number of things they can forecast. It is natural to put more weight on the advice of an expert who can predict a great many things well over one who can predict only a few things well. Notice that this goes in the opposite direction from relevance.

5. Ease of implementation: Some advice may be difficult to follow in practice. Here models of impulsive behavior such as those of Fudenberg and Levine, and the empirical work of Cunha and Heckman, may play a useful role.

Notice that the expert approach gets at several tricky issues. One is the issue of generalization. An expert who makes forecasts in many domains implicitly provides a formula for generalizing results from one domain to another. For example, we may want to get at the idea that when someone learns the idea of dominated strategies they do not merely learn not to play a dominated strategy in a particular game, but learn not to play dominated strategies in any game. This can be done in the expert framework by providing an expert who advises against playing dominated strategies in all games. Second, the framework deals well with the transmission of ideas: experts can be communicated from one person to another, and unlike the sending of messages or the provision of data, there is no issue of the reliability of the information; the recipients can test the ideas implicit in the expert for themselves. However, it deals less well with the need to experiment with off-the-equilibrium-path behavior to determine the causal consequences, because it does not tell us what the option value of experimentation is.

A large part of the advancement of the science must be the development of this and other learning models, understanding which ones have the best theoretical properties, which ones work best in practice, and which ones are most descriptive of actual behavior. The validation against behavior may benefit from neuroeconomic experimental methods such as those of Glimcher and Rustichini. At the extreme, efforts such as the Blue Brain project can provide additional paths of validation.

The infrastructure requirements for this project are large. The development and validation of sophisticated agent models is only a part of the huge infrastructure required. To combine many agent models into a single economy requires reliable high-speed networking and substantial computer power at each end, as well as thoughtful and well-developed models of production, trade and consumption. Existing agent-based modeling frameworks may provide a starting point, but are not equipped to handle the kind of information flows or individual agent computations that simulating an artificial economy requires. At the human level, the infrastructure requires the collaboration of economic theorists and practitioners with computer scientists, psychologists and neuroscientists.
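As promised above, here is a minimal Python sketch of the standard multiplicative-weights scheme from the boosting literature, the kind of weighting rule the text refers to; the number of experts, their loss rates, and the learning rate eta are illustrative assumptions, not part of the proposal itself:

    import numpy as np

    rng = np.random.default_rng(0)
    n_experts, horizon, eta = 5, 2000, 0.1
    weights = np.ones(n_experts)

    # Each expert has some unknown expected loss in [0, 1]; even if every
    # expert is "wrong", the rule concentrates on the least-wrong one.
    true_loss = rng.uniform(0.2, 0.8, n_experts)

    follower_loss = 0
    for t in range(horizon):
        # Follow one expert at random, in proportion to current weights.
        probs = weights / weights.sum()
        followed = rng.choice(n_experts, p=probs)
        # Observe every expert's loss this period and re-weight them all.
        losses = rng.binomial(1, true_loss)
        follower_loss += losses[followed]
        weights = weights * np.exp(-eta * losses)

    print("follow probabilities:", np.round(weights / weights.sum(), 3))
    print("best expert:", int(np.argmin(true_loss)))
    print("average follower loss:", follower_loss / horizon)

The guarantee mentioned in the text is asymptotic: the follower's average loss approaches that of the best expert, which is exactly the weak criterion whose limitations (slow learning in blocks, ignored cross-period causality) the discussion above identifies.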

References
Fudenberg, D. and D. K. Levine [2009]: "Learning Theoretic Foundations for Equilibrium Analysis," Annual Review of Economics, 1.

Levitt, S. and J. List, "Field Experiments in Economics: The Past, The Present, and The Future," European Economic Review, forthcoming, 2009.


Nelson, R. and S. Winter [1982] An Evolutionary Theory of Economic Change. Cambridge: Harvard University Press.



SBE 2020: A Complete Theory of Human Behavior


Andrew W. Lo, September 30, 2010
Abstract
I propose the following grand challenge question for SBE 2020: can we develop a complete theory of human behavior that is predictive in all contexts? The motivation for this question is the fact that the different disciplines within SBE do have a common subject: Homo sapiens. Therefore, the psychological, sociological, neuroscientific, and economic implications of human behavior should be mutually consistent. When they contradict each other, as they have in the context of financial decisions, this signals important learning opportunities. By confronting and attempting to reconcile inconsistencies across disciplines, we develop a more complete understanding of human behavior than any single discipline can provide. The National Science Foundation can foster this process of consilience in at least four ways: (1) issuing RFPs around aspects of human behavior, not around disciplines; (2) holding annual conferences where PIs across NSF directorates present their latest research and their most challenging open questions; (3) organizing summer camps for NSF graduate fellowship recipients at the start of their graduate careers, where they are exposed to a broad array of research through introductory lectures by NSF PIs; and (4) broadening the NSF grant review process to include referees from multiple disciplines.

"If economists could manage to get themselves thought of as humble, competent people on a level with dentists, that would be splendid." (John Maynard Keynes, 1931)

Harris & Harris Group Professor, MIT Sloan School of Management, and Chairman and Chief Investment Strategist, AlphaSimplex Group, LLC. Please direct all correspondence to: Andrew W. Lo, MIT Sloan School of Management, 100 Main Street, E62-618, Cambridge, MA 02142. I would like to thank Dr. Myron Gutmann of the National Science Foundation's Social, Behavioral, and Economic Sciences Division for soliciting white papers on the grand challenge questions facing our respective disciplines, and Dr. Daniel Newlon for encouraging me to submit a response. By nature, such an exercise is meant to be speculative, not rigorous; nevertheless, I feel an obligation to apologize in advance to my academic colleagues for the informal and discursive nature of this essay. Research support from the National Science Foundation (SES-0624351) and the MIT Laboratory for Financial Engineering is gratefully acknowledged. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit: http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


I believe the most important grand challenge question facing the NSF's Social, Behavioral, and Economic Sciences Directorate is relatively easy to state, but extraordinarily difficult, if not impossible, to achieve by 2020. Can we develop a complete theory of human behavior that is predictive in all contexts? That this should be the grand challenge question for SBE 2020 is by no means clear. But before attempting to defend this proposal, let me explain more fully what the question asks. By "all contexts," I mean all situations in which humans may find themselves, including economic, social, cultural, political, and physical. By "predictive," I mean an empirically validated and repeatable cause-and-effect relation. And by "complete theory," I mean a theory that is consistent with all known facts of human behavior, and which is sufficient for making correct predictions of human behavior in novel contexts.

The motivation for seeking an answer to this ambitious question is the simple observation that the social, behavioral, and economic sciences have a single common focus: Homo sapiens. Because these disparate fields share the same object of study, their respective theories must be mutually consistent when there is any overlap in their implications. For example, anthropological theories of mating rituals must be consistent with the biology of human reproduction; otherwise flaws exist in one or both of these bodies of knowledge. Of course, in many cases, implications may not overlap. The particular mechanisms of genetic mutation have no direct bearing on the sources of time-varying stock market volatility, so checking for consistency between the former and the latter is unlikely to yield new insights. But because all SBE disciplines involve the study of the very same human behaviors and institutions, opportunities for consistency checks should arise often.

One of the most prominent inconsistencies among the SBE disciplines is that between the rational expectations paradigm of economics and the many behavioral biases documented by psychologists, behavioral economists, sociologists, and neuroscientists. Rational expectations, and its close cousin, the efficient markets hypothesis, have come under fire recently because of their apparent failure in predicting and explaining the current financial crisis. Some of this criticism is undoubtedly unwarranted populist reaction to the life-altering economic consequences of the national decline in U.S. residential real-estate prices from 2006 to 2009. In such an emotionally charged atmosphere, it is easy to forget the many genuine breakthroughs that have occurred in economics over the last half-century, such as general equilibrium theory, game theory, growth theory, econometrics, portfolio theory, and option-pricing models.


But any virtue can become a vice when taken to an extreme. The fact that the 2,319-page Dodd-Frank financial reform bill was signed into law on July 21, 2010, six months before the Financial Crisis Inquiry Commission is scheduled to report its findings, and well before economists have developed any consensus on the crisis, underscores the relatively minor scientific role that economics apparently plays in policymaking. Imagine the FDA approving a drug before its clinical trials are concluded, or the FAA adopting new regulations in response to an airplane crash before the NTSB has completed its accident investigation.

There are legitimate arguments that the rigorous and internally consistent economic models of rational self-interest, models used implicitly and explicitly by policymakers, central bankers, and regulators to formulate laws, manage leverage, and rein in risk-taking in the economy, have failed us in important ways over the past decade. Even the most sophisticated stochastic dynamic general equilibrium models did not account for the U.S. housing market boom and bust, nor were they rich enough to capture the consequences of securitization, credit default insurance, financial globalization, and the political pressures influencing Fannie Mae and Freddie Mac. But rather than discarding rationality altogether, a more productive response is to confront the inconsistencies between economic models of behavior and those from other disciplines, and attempt to reconcile them and improve our models in the process. While frustrating, contradictions often present opportunities for developing a deeper understanding of the phenomena in question.

Consider the example of probability matching: an experimenter asks a subject to guess the outcome of a coin toss, where, unknown to the subject, the coin is biased (75% heads and 25% tails), and the experimenter agrees to pay the subject $1 if she guesses correctly, but will expect the subject to pay $1 if she guesses incorrectly. This experiment is then repeated many times with the same subject and coin (and the tosses are statistically independent). After a sufficiently long sample of tosses, it should be possible for the subject to observe that the coin is biased toward heads, at which point the subject should always guess heads so as to maximize her cumulative expected winnings.
However, the vast majority of subjects do not follow this expected-wealth-maximizing strategy; instead, they appear to randomize, guessing heads 75% of the time and tails 25% of the time! This strange and well-known example of irrationality in human judgment may not be so irrational after all when viewed from the perspective of evolutionary biology (Lo and Brennan, 2009). To see why, consider the hypothetical case of an animal deciding whether to build its nest in a valley or on a plateau. If the weather is sunny, nesting in the valley will provide shade, leading to many offspring, whereas nesting on the plateau provides no cover from the sun, leading to no offspring. However, the opposite is true if the weather is rainy: the valley floods, hence any offspring will drown in their nests, but nests on the plateau survive, yielding many offspring. Now suppose the probability of sunshine is 75% and the probability of rain is 25%. The rational behavior for all individuals to follow is to build their nests in the valley, for this maximizes the expected number of each individual's offspring. Suppose the entire population exhibits such individually optimal behavior: the first time there is rain, the entire population will cease to reproduce, leading to extinction. Similarly, if the entire population behaves in the opposite manner, always choosing the plateau, the first time sunshine occurs, extinction also follows. Lo and Brennan (2009) show that the behavior that maximizes the growth of the population is for individuals to randomize their nesting choice, choosing the valley with probability 75% and the plateau with probability 25%. Matching probabilities confers an evolutionary advantage, not for the individual, but rather for the population as a whole. And since, by definition, the current population consists of the survivors, it will reflect such advantageous behavior disproportionately, to the extent that behavior is heritable. While probability matching is, indeed, irrational from the perspective of maximizing an individual's expected wealth, its evolutionary advantage is clear. This broader perspective suggests that the economic notion of rationality is not wrong, but simply incomplete: humans usually do maximize their expected wealth but, under certain circumstances, they may engage in other types of hard-wired behavior that are far more primitive. Probability matching is likely to be a vestigial evolutionary adaptation that may not increase the chances of survival in the current environment, but nevertheless is still part of our behavioral repertoire.
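A quick simulation makes the evolutionary argument concrete. The Python sketch below is mine, with illustrative parameters (two offspring per successful nester); it computes the expected log growth rate of a population whose members independently nest in the valley with probability q, and confirms that growth peaks at q = 0.75 rather than at the individually optimal q = 1:

    import numpy as np

    p_sun, offspring = 0.75, 2.0  # chance of sunshine; offspring per successful nester

    def log_growth(q):
        """Expected log growth rate when each animal picks the valley w.p. q."""
        # In a sunny generation a fraction q of the population reproduces;
        # in a rainy generation a fraction (1 - q) does.
        return p_sun * np.log(q * offspring) + (1 - p_sun) * np.log((1 - q) * offspring)

    qs = np.linspace(0.01, 0.99, 99)
    best = qs[np.argmax(log_growth(qs))]
    print(f"growth-maximizing valley probability: {best:.2f}")  # prints 0.75
    # q = 1, the individually "rational" choice, gives log(0) = -infinity:
    # the first rainy generation wipes out the entire population.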




Using a simple binary choice model, Brennan and Lo (2009) show that several commonly observed behaviors such as risk aversion, loss aversion, and randomization are adaptive traits that can emerge organically through evolution. The natural follow-on question, one that lies at the heart of the grand challenge question posed above, is why we choose one particular behavior from our repertoire for a given occasion and not another, and how that repertoire changes over time and across circumstances. The answer to this question has obvious consequences for virtually all economic models, yet the tools by which we will solve this challenge may come from other disciplines such as psychology, neuroscience, ecology, and evolutionary biology. Other examples of important questions about economic behavior that fall outside standard economics are: How do emotions affect the stability of preferences over time and circumstances? What role does memory play in economic decision-making? What do theory-of-mind experiments imply for strategic behavior? Can robust optimal control explain the regulatory challenges of fast-paced innovation? Does network analysis provide new insights for systemic risk in the financial system?

By reconciling the inconsistencies and contradictions between disciplines, we can develop a broader and deeper understanding of Homo sapiens. These examples illustrate the value of consilience, a term re-introduced into the popular lexicon by E. O. Wilson (1998), who attributes its first use to William Whewell's 1840 treatise The Philosophy of the Inductive Sciences, in which Whewell wrote, "The Consilience of Inductions takes place when an Induction, obtained from one class of facts, coincides with an Induction, obtained from another different class. This Consilience is a test of the truth of the Theory in which it occurs." In comparing the rate of progress in the medical vs. the social sciences, Wilson (1998, p. 182) makes a thought-provoking observation: "There is also progress in the social sciences, but it is much slower, and not at all animated by the same information flow and optimistic spirit... The crucial difference between the two domains is consilience: The medical sciences have it and the social sciences do not. Medical scientists build upon a coherent foundation of molecular and cell biology. They pursue elements of health and illness all the way down to the level of biophysical chemistry...




Social scientists by and large spurn the idea of the hierarchical ordering of knowledge that unites and drives the natural sciences. Split into independent cadres, they stress precision in words within their specialty but seldom speak the same technical language from one specialty to the next."

This is a bitter pill for economists to swallow, but it provides a clear directive for improving the status quo. Although economics occupies an enviable position among the social sciences because of its axiomatic consistency and uniformity, Homo economicus is a fiction that can no longer be maintained in light of mounting evidence to the contrary from allied fields in SBE. For disciplines in which controlled experimentation is possible, consilience may be less critical to progress, because inconsistencies can be generated and resolved within the discipline through clever experimental design. But for disciplines such as economics, in which controlled experimentation is more challenging, consilience is an essential means for moving the field forward. And even in fields where experiments are routine, consilience can speed up progress dramatically. The revolution in psychology that transformed the field from a loosely organized collection of interesting and suggestive experiments and hypotheses to a bona fide science occurred only within the last three decades, thanks to synergistic advances in neuroscience, medicine, computer science, and even evolutionary biology. This could be the future of economics.

The NSF's SBE Directorate has a unique opportunity to foster consilience in the Social, Behavioral, and Economic sciences by taking up the grand challenge question proposed at the start of this essay. Developing a complete theory of human behavior that is truly predictive in all contexts will require contributions from and collaborations between many disciplines: economics, engineering, sociology, anthropology, psychology, neuroscience, ecology, evolutionary biology, and computer science. However, unlike the usual inter-disciplinary grants, which are often as effective as arranged marriages, RFPs centered on particular aspects of human behavior rather than specific disciplines will naturally draw the relevant fields together in productive ways.

grants (which are often as effective as arranged marriages), RFPs centered on particular aspects of human behavior rather than specific disciplines will naturally draw the relevant fields together in productive ways. Beyond issuing new RFPs, the NSF can encourage consilience through other means. Holding annual conferences at NSF in which principal investigators from different disciplines


are invited to come together to share their latest research, as well as their frustrations and open challenges, would be a natural extension of the NSF's activities. Providing summer camps for NSF graduate fellowship recipients at the start of their graduate careers, where they are exposed to a broad array of NSF PIs, who would be asked to deliver overview lectures about the biggest challenges in their respective disciplines, is another way to seed the next generation of scholars. Finally, changing the very review process of NSF grants to be more cross-disciplinary may create greater diversity in the type of research conducted, increasing the likelihood of consilience in the SBE Directorate and across the entire NSF research portfolio.

References
Brennan, T. and A. Lo. 2009. "The Origin of Behavior." To appear in Quarterly Journal of Finance.
Lo, A. and M. Mueller. 2010. "WARNING: Physics Envy May Be Hazardous to Your Wealth." Journal of Investment Management 8, 13-63.
Wilson, E. 1998. Consilience. New York: Alfred A. Knopf.


Language and Interest in the Economy: A White Paper on Humanomics


Deirdre N. McCloskey 1
Economics ignores persuasion in the economy. The economics of asymmetric information or common knowledge over the past 40 years reduces to costs and benefits but bypasses persuasion, "sweet talk." Sweet talk accounts for a quarter of national income, and so is not mere "cheap talk." The research would direct economics and the numerous other social sciences influenced by economics back towards human meaning in speech: meaning which has, even in the most rigorously behaviorist experiments, been shown to matter greatly to the outcome. Sweet talk is deeply unpredictable, which connects it to the troubled economics of entrepreneurship, discovery, and innovation. The massive innovation leading to the Great Fact of modern economic growth since 1800 is an important case in point. Some economic historians are beginning to find that material causes of the Great Fact do not work, and that changes in rhetoric such as the Enlightenment or the Bourgeois Revaluation do. A new economic history emerges, using all the evidence for the scientific task: books as much as bonds, entrepreneurial courage and hope as much as managerial prudence and temperance.

A worrying feature of economics as presently constituted is that it ignores language working in the economy. To put it another way, economics has ignored the humanities and related social sciences, such as cultural anthropology, with their studies of human meaning. Adam Smith spoke often of the "faculty of speech," and considered meaning, but his followers gradually set them aside. Until the 1930s the setting aside was gentle and non-dogmatic, allowing for occasional intrusions of human meaning such as Keynes on "animal spirits" or Dennis Robertson on economized love. But in the shadow of 20th-century positivism, and under the influence of Lionel Robbins and Paul Samuelson and Gary Becker and others, the study of the economy was reduced strictly to behavior (yet oddly ignoring linguistic behavior). But what, an economist would ask, of studies by Marschak and Stigler and Akerlof and others on the transmittal of information? Yes: information is linguistically transmitted, and surely one of the main developments in economics since the 1970s has been the introduction of information and signaling. But the sort of language that can be treated by routine application of marginal benefit and marginal cost, which is the bed on which all studies of language in the
1 Distinguished Professor of Economics, History, English, and Communication, University of Illinois at Chicago

economy have been laid down, is merely the transmittal of information or commands: "I offer $4.15 for a bushel of corn"; "I accept your offer"; "You're fired."

The trouble is that a large part of economic talk is not merely informational or commanding but persuasive: "Your price is absurdly high"; "We need to work together if our company is to succeed"; "I have a brilliant idea for making cooling fans for automobiles, and you should invest in it"; "The new iPhone is lovely." Does it matter? Does persuasive economic talk have economic significance? Yes. One can show on the basis of detailed occupational statistics for the U.S. that about a quarter of income in a modern economy is earned by sweet talk: not lies or trickery always, but mainly the honest persuasion that a manager must exercise in a society of free workers, or that a teacher must exercise to persuade her students to read books, or that a lawyer must exercise if a society of laws is to be meaningful. The economy values sweet talk at one quarter of its total income, a gigantic and economically meaningful sum. If language in the economy were merely "cheap talk," as the non-cooperative game theorists put it, then ignoring it would not matter, and its share of economic value would drift towards zero: an economic agent would be no more valuable if she were sweet than if she were a mere pipe for transmitted bids and asks. The chattering character of people in markets and firms and households about their economic affairs would be like left-handedness or red hair: interesting for some purposes, doubtless, in the Department of English, but irrelevant to the tough, scientific matter of the economy.

But that is not the case. Formal maximum-utility economics cannot explain the sweet talk. The research would need to establish the fact beyond doubt, bringing together for example mathematical economists and rhetorical theorists. It can be treated mathematically by showing that cooperative equilibria (for example) cannot be achieved without a trust created by earnest talk. In a way it is the oldest and most obvious finding of game theory that games have of course always a context of rules and customs and relationships, all of them affected by language. But the main emphasis in a research program that would matter for the future of the social sciences would focus steadily on the facts of the matter, and not chiefly on the abstract theory (the abstract theory can yield any conclusion if permitted to choose any assumptions, as a matter of logic; the facts constrain the conclusions scientifically). At one level, sweet talk emerges as crucial to experiments and field studies, as Elinor Ostrom and her colleagues have shown. Indeed, experimental economics in the past twenty years has shown that allowing experimental subjects to establish (true) relationships through conversation radically changes the degree of cooperation. "The bonds of words," Hobbes declared, are "too weak to bridle men's ambition, avarice, anger, and other passions, without the fear of some coercive power." It appears that Hobbes was wrong. Businesses work with trust: good old trustworthy Max, not Max U, the maximizer of utility in a Samuelsonian way, who cannot be trusted at all.
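The occupational arithmetic behind the one-quarter figure can be sketched in a few lines. The categories, earnings shares, and persuasion weights below are illustrative placeholders rather than the actual tabulation behind the claim; the point is only that modest weights on talk-intensive occupations sum to something near a quarter of earnings.

    # A minimal sketch of the occupational accounting behind the claim
    # that sweet talk earns about a quarter of national income. All
    # numbers are ILLUSTRATIVE ASSUMPTIONS, not the research's figures.

    # occupation: (share of total earnings, fraction of the job that is persuasion)
    occupations = {
        "managers and supervisors":    (0.15, 0.75),
        "sales and marketing":         (0.10, 0.75),
        "lawyers, judges, mediators":  (0.01, 0.90),
        "teachers and professors":     (0.05, 0.50),
        "counselors, clergy, editors": (0.03, 0.75),
        "all other occupations":       (0.66, 0.05),
    }

    sweet_talk_share = sum(earnings * persuasion
                           for earnings, persuasion in occupations.values())
    print(f"Estimated sweet-talk share of earnings: {sweet_talk_share:.0%}")
    # -> roughly 28%, in the neighborhood of the one-quarter claim

The research itself would of course rest on detailed occupational statistics rather than invented weights; the sketch only shows why plausible weights land near a quarter.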

Maximizing utility is not human meaning, as one can see in mothers and suicide bombers. The framing of bargaining anyway depends on the stories people tell. The language, the trust, the sweet talk, the conversations, all depend on ethical commitments beyond "I'm all right, Jack." The literature bearing on the matter even in economics alone has become quite large, ranging from Vernon Smith to Herbert Gintis. In particular the Austrian economists such as Friedrich Hayek and Israel Kirzner have long recognized the importance of discovery and other human activities beyond maximization, but stop short of grasping the role of language. They point out that real discoveries, such as that a separate condenser makes a steam engine much more efficient or that treating the bourgeoisie with something other than contempt results in enormous economic growth, arise as it were by accident. They cannot be pursued methodically, or else they are known before they are known, a paradox. The research would show in empirical detail that conversation is the crux of discovery, and especially of the astounding series of discoveries that have made the modern world. Once a discovery is made by what Kirzner calls "alertness," it requires sweet talk to be brought to fruition. An idea is merely an idea until it has been brought into the conversation of humankind. And so the modern world has depended on sweet talk. The cooperative equilibria are gigantically important to the success of a modern economy.

The best way to persuade that a multi-disciplinary study of language in the economy and society might matter is to exhibit a possible sub-project, itself of great importance, on which a good deal of preliminary work has been done (Mokyr 2010, Goldstone 2009). Thus: what was the conversational context of invention and the sweet talk entailed by innovation in the era of the Industrial Revolution? The Great Fact of an enrichment by a factor of 20 or 30 or much, much more since 1800 is the most astounding economic change since the domestication of plants and animals. Historians, economists, and economic historians have been trying to explain it since Smith, and recently have come to concentrate on it, as in the work of the economic historians Joel Mokyr and Eric Jones, the historian Margaret Jacob, the historical sociologist Jack Goldstone, and the anthropologist Alan MacFarlane. The Great Fact has usually been explained by material causes, such as expanding trade or rising saving rates or the exploitation of the poor. The trouble is that such events happened earlier and in other places, and cannot therefore explain the Industrial Revolution and its astounding continuation. One can show in considerable detail, as in McCloskey 2010, that the material causes, alas, do not work. One can also show how attitudes towards the bourgeoisie began to change in the 17th century, first in Holland and then in an England with a new Dutch king and new Dutch institutions. What appears to be needed to explain the Great Fact is a "humanomics," that is, an economics and sociology and history that acknowledges humans as speakers of meaning.


Two things happened between 1600 and 1848, and the more so from 1848 to the present. For one thing, the material methods of production were transformed. For another, the social position of the Third Estate was raised. Whether the two were connected as mutual cause and effect through language remains to be seen. What appears to be the case (say many of the economic historians who have been looking into the question since the 1950s) is that foreign trade, domestic thrift, legal change, imperial extractions, changing psychology, and the like do not explain the onset of economic growth in northwestern Europe (while the Rest stagnated). Material causes do not appear to work. And so we must recur to non-material causes. Humanomics to the scientific rescue.

(1.) One hypothesis would go as follows: if the social position of the bourgeoisie had not been raised in the way people spoke of it, aristocrats and their governments would have crushed innovation, by regulation or by tax, as they had always done. And the bourgeois gentilhomme himself would not have turned inventor, but would have continued attempting to rise into the gentle classes. Yet if the material methods of production had not thereby been transformed, the social position of the bourgeoisie would not have continued to rise. One could put it shortly: without spoken honor to the bourgeoisie, no modern economic growth. (This last was in essence the late economist Milton Friedman's thesis.) And without modern economic growth, no spoken honor to the bourgeoisie. (This last is in essence the economist Benjamin Friedman's thesis.) The two Friedmans capture the essence of freed men, and women and slaves and colonial people and all the others freed by the development of bourgeois virtues. The causes, one might conclude (I repeat: it remains to be seen), were freedom, the scientific revolution (not, however, in its direct technological effects, which were postponed largely until the 20th century), and above all a change in the rhetoric of social conversations in Holland and then in England and Scotland and British North America about bourgeois virtue. Or perhaps not: that is the matter for research.

(2.) Another question is the ethical: can a businessperson be ethical without abandoning her business? What, then, was the role of ethical change in the Bourgeois Revaluation of 1600-1800 in the Industrial Revolution? One might reply that the seven primary virtues of any human life -- prudence, temperance, justice, courage, faith, hope, and love -- also run a business life. Businesspeople are people, too. "Bourgeois virtues" would therefore not be a contradiction in terms. On the contrary, capitalism works badly without the virtues, a fact long demonstrated by economic sociologists, and now admitted even by neo-institutional and behavioral economists. The virtues can be nourished in a conversation about the market, and often have been. You can see why the neologism "humanomics" is appropriate here: a serious inquiry into the ethical context of the Industrial Revolution (and of development in presently poor countries, too) would require collaboration between the social sciences as behavior and the

humanities of philosophy, anthropology, history, and even theology as meaning (as in Robert Nelson's books on economic theology).

(3.) One can ask how an explicit and persuasive bourgeois ideology emerged after 1700 from a highly aristocratic and Christian Europe, a Europe entirely hostile, as some of our clerisy still are, to the very idea of bourgeois virtues. In 1946 the great student of capitalism, Joseph Schumpeter, declared that "a society is called capitalist if it entrusts its economic process to the guidance of the private businessman" (Encyc. Brit. 1946). It is the best short definition of that essentially contested concept, "capitalism." "Entrusting" the economy to businesspeople, Schumpeter explained, entails private property, private profit, and private credit. (In such terms you can see the rockiness of the transition to capitalism in Russia, say, where agricultural land is still not private, and where private profit is still subject to prosecution by the state, the jailing of billionaires, the cutting down of tall poppies.) Yet what Schumpeter leaves aside in the definition, though his life's work embodied it, is that the society, or at any rate the people who run it, must admire businesspeople. That is, they must think the bourgeoisie capable of virtue. (It is this admiring of the bourgeois virtues that Russia lacks, and long has, whether ruled by boyars or tsars or commissars or by secret police.)

(4.) Attributing great historical events to ideas was not popular in professional history for a long time, 1890-1980. A hardnosed calculation of interest was supposed to explain all. Men and women of the left were supposed to believe in historical materialism, and many on the right were embarrassed to claim otherwise. But "the dream of objectivity," as the historian Peter Novick called it, hasn't worked out all that well. Actual interest, as against imagined and often enough fantasized interest, did not cause World War I. The Pals Brigades did not go over the top at the Somme because it was in their prudent interest to do so. Non-slave-holding whites did not constitute most of the Confederate armies for economic reasons. Nor did abolition become a motivating cause because it was good for capitalism. And on and on, back to Achilles and Abraham. We do well to watch for cognitive-moral revolutions, and not simply assume that Matter Rules, every time. A showing that ideas matter is not so unusual nowadays among historians, such as in works by Skinner or Israel. But it is another matter to show that the material base itself is determined by habits of the lip and mind; that conclusion evokes angry words among most people on the economistic side of the social sciences, and often enough from historical materialists in the humanities.

In short, the sub-project proposes to give a big example of the force of language in the economy -- its linguistic "embeddedness," as the sociologists would put it. The larger point, I repeat, is to demonstrate that in the economy the force of language is not to be ignored. (Or that it is to be ignored: if the research is genuine the possibility must be lively that the hypothesis turns out to be wrong.) Thus humanomics. Ignoring the burden of art and literature and

philosophy in thinking about the economy is strangely unscientific: it throws away, under orders from a largely unargued law of method, a good deal of the evidence of our human lives. I do not mean that findings are to be handed over from novels and philosophies like canapés at a cocktail party. I mean that the exploration of human meaning from the Greeks and Confucians down to Wittgenstein and Citizen Kane casts light on profane affairs, too. A human with a balanced set of virtues, beyond the monster of interest focusing on Prudence Only, characterizes our economies. And so (the hypothesis goes) economics without meaning is incapable of understanding economic growth, business cycles, or many other of our profane mysteries. The research extends, but also to some degree calls into question, modern economics, and the numerous other social sciences from law to sociology now influenced by an exclusively Max U economics.

References

Bowles, Samuel, and Herbert Gintis. Forthcoming. A Cooperative Species: Human Sociality and Its Evolution.
Goldstone, Jack A. 2009. Why Europe? The Rise of the West in World History, 1500-1850. New York: McGraw-Hill.
McCloskey, Deirdre N. 2010. Bourgeois Dignity: Why Economics Can't Explain the Modern World. Chicago: University of Chicago Press.
Mokyr, Joel. 2010. The Enlightened Economy: An Economic History of Britain 1700-1850. London: Penguin Press; New Haven: Yale University Press.

Creative Commons License:


This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


A New Household Panel in the U.S.

Robert A. Moffitt
Krieger-Eisenhower Professor of Economics, Johns Hopkins University
October 13, 2010

The NSF SBE has invited the research community to submit white papers outlining grand challenge questions that are both foundational and transformative. This white paper argues that a new long-term household panel would be foundational and transformative both to economics and related social sciences.

(1) The important questions are the major ones concerning economic and social dynamics in the U.S. What are the causes of economic and social disadvantage, and where in the life course do their origins lie? How have the dynamics of cohabitation, marriage, child-bearing, and divorce and remarriage changed over time, what are the reasons for these trends, and what are their implications for US society? How do US workers' earnings and labor market success or lack of success evolve over their lifetimes, how has this changed, and what policies might US society follow to address those challenges? What are the contributors to disadvantage during childhood, and how does childhood disadvantage affect later life outcomes? How important are schools, neighborhoods, and other social groupings to the evolution of economic and social wellbeing over the life course? Economics, sociology, psychology, and related disciplines have long studied these questions.

(2) Current understanding both of trends and of their causes and implications is poor, and one of the major reasons lies in limitations in the data infrastructure in the US for studying these issues. The major data set for studying long-term economic and social dynamics in the population as a whole is the Michigan Panel Study of Income Dynamics (PSID) which, while in some sense a national treasure because of its unique ability to study intragenerational and intergenerational dynamics over a 40-year period, has limitations which will increasingly prevent it from serving the necessary role in the future. A major investment in new data infrastructure is needed to provide the capability for new research and to inject new energy into social science research on economic and social dynamics. Such an investment would have enormous payoffs to the research community, including educators and students, as well as to policy-makers. In the remainder of this paper, these points are elaborated.

The U.S. has long been a world leader in the development of household panel surveys. The NSF-supported Panel Study of Income Dynamics (PSID), begun in 1968, was the first of the modern panels to follow a representative sample of the population over time. It is by now well understood that the dynamics of population, family, labor market, education, health, residential mobility, and other key features of US society cannot be properly understood without data that follow the same individuals over time. The research accomplishments of the PSID are


virtually incalculable, having accumulated over 40 years of data on both the initial sample and the children of the sample. The research output from the PSID is enormous, and there are certain areas of research (the long-term dynamics of poverty, intergenerational dynamics) which are essentially defined by the PSID, for it is the only extant panel which can study those issues.

However, despite its accomplishments, the PSID is suffering from problems of its age. It has had major cumulative attrition over time, attrition which has affected the representativeness of second and third generation PSID respondents in ways that are difficult to ascertain. While a number of studies have shown that the PSID has roughly maintained its cross-sectional representativeness, there are serious questions about its representativeness of dynamic patterns, and there is evidence that it is increasingly composed of individuals with more stable life trajectories. The original sample of the PSID was also quite small, only 5000 families, and less than that if the low-income oversample (which has since been largely dropped) is omitted. Weights in the PSID do not necessarily restore its representativeness. It necessarily omits immigrants to the US since 1968 (an attempt to bring them in was unsuccessful). Also, it is to some extent locked into its history by using what are now regarded as outdated methods of data collection in its first few years (e.g., not attempting to recontact attritors from earlier rounds).

While the NSF should continue its vital support for the PSID in the next and future renewal rounds, since it will continue to be the only data set in the country capable of examining medium-term and long-term economic and social dynamics, in the long run it should not be the only US national long-term household panel. There is no other panel in the US to fulfill this function. The closest substitute is the Survey of Income and Program Participation (SIPP), the representative Census Bureau survey. However, the SIPP is focused on short-term dynamics and no panel has lasted more than a few years, so the long-term dynamics that have been possible to study with the PSID cannot be examined with the SIPP. The SIPP has also never incorporated some of the innovative survey additions like biomeasures, which would probably not be allowed by the Census Bureau. The Department of Labor has conducted two panels of youth, one started in 1979 and one started in 1997, but a third (originally, the idea was for a new one every ten years) has not been started. These surveys are focused on labor market issues and are aimed only at examining specific birth cohorts. There are several panels of the aged, most prominently the Health and Retirement Study, but these necessarily are not useful for studying the non-aged population.

At the same time, other countries are starting new household panels that completely dominate the PSID or any other general-population survey in the U.S. The most prominent example is the UK panel USoc (Understanding Society), a new 20,000-household panel with an ethnic minority oversample, collection of biomeasures, capability for linkage to administrative data sets, and an additional 2,000-household panel for methodological experimentation (the Innovation Panel). All members of the household are interviewed, plus a self-completion instrument for 10-15 year olds. Fieldwork started in January 2009, and interviewing is annual.
Germany is copying the USoc design with a new survey of similar size. Canada is in the advanced stages of planning a new, large household panel. Other countries in Europe are discussing adopting these new, large, innovative panels.


With these developments, the US is in danger of falling behind other countries in its ability to analyze important economic and social issues for the society as a whole. The piecemeal approach in the US, where individual agencies fund special-population panels, cannot provide the same level of research potential as the panels being developed in other countries. With the myriad socioeconomic problems facing the US at the current time, it is essential that we have the research basis to address those problems.

The primary obstacle to starting a new household panel of such a large size has always been cost. A 20,000-household survey would cost, by some estimates, somewhere in the neighborhood of $20 million to $30 million on an annual basis. If the panel were followed for a sufficient number of years (say, long enough to generate a longer panel than the SIPP), the cumulative cost would be high indeed. It may be that the US does not have the resources, or political will, to support this cost even though several other countries are able to do so. Given this, other modes of data collection must be considered. One alternative widely discussed but never attempted on a very large scale is internet surveys. Such surveys have difficulty being representative and tend to underrepresent lower-income households, and the underrepresentation cannot be satisfactorily addressed simply by reweighting. Innovative alternatives, such as offering to hook up household TVs to allow households without computers to participate in the survey, offer one means of reaching more households in the country. However, research on internet surveys is still in its infancy, and we know relatively little about whether participating households are representative or about what nonresponse rates would look like over a longer-term panel. Another possibility is a mixed-mode survey which combines traditional in-person or telephone household surveys with internet surveys, with the mix designed to exploit the relative advantages of both and to generate a more representative sample. The different modes of collection might generate mode effects, but this has, again, not been studied.

The appropriate course of action at this point would be for NSF to sponsor a working group, a conference, or a series of working groups and conferences of experts to discuss the possibilities for a new household survey, alternative designs, and the costs of each. Experts at traditional surveys as well as experts in new modes of data collection should be involved, as well as economists and other social scientists who would be using the data for applications. Experts from survey firms who are capable of generating realistic cost estimates should also be brought into the discussion.

The NSF has invited the research community to submit ideas that would unlock a new cycle of research. Nothing would unlock a new cycle of research on the dynamics of economic and social behavior in the U.S. more than a fresh household panel adequate to investigate the key social issues. The number of new theories and ideas for exploring social dynamics has far outrun the available data, and no amount of new ideas for research can advance social knowledge without the data to test them on. This is the reason for the critical importance of a new U.S. panel.
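A small simulation can make concrete why neither panel attrition nor mode-based selection is fixed by reweighting alone, the worry raised twice above. The sketch below rests on purely illustrative assumptions (the drop-out and income-transition rates are invented, not PSID or internet-survey estimates): a panel whose cross-section survives selection while its measured dynamics do not.

    import random

    random.seed(0)
    N = 100_000
    # Each simulated respondent is "stable" or "unstable"; unstable people
    # switch income states (low/high) with probability 0.5 per wave,
    # stable people with probability 0.05. All rates are illustrative.
    people = [{"unstable": random.random() < 0.3, "low": random.random() < 0.5}
              for _ in range(N)]

    # Selection: unstable respondents are more likely to be lost, whether
    # by panel attrition or by non-participation in an internet mode.
    survivors = [p for p in people
                 if random.random() > (0.40 if p["unstable"] else 0.10)]

    def share_low(sample):
        return sum(p["low"] for p in sample) / len(sample)

    def mean_transition_rate(sample):
        return sum(0.5 if p["unstable"] else 0.05 for p in sample) / len(sample)

    print(f"share low-income, full sample: {share_low(people):.3f}")       # ~0.50
    print(f"share low-income, survivors:   {share_low(survivors):.3f}")    # ~0.50
    print(f"transition rate, full sample:  {mean_transition_rate(people):.3f}")    # ~0.185
    print(f"transition rate, survivors:    {mean_transition_rate(survivors):.3f}") # ~0.150

Because instability itself is unobserved, weights built from observed cross-sectional characteristics (here, the low/high income state) would leave the cross-section looking fine while the understated transition rate persists; this is the sense in which a panel can remain cross-sectionally representative yet misrepresent dynamics.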


This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Economics, Climate, and Values: An Integrated Approach

Julie A. Nelson
Department of Economics, University of Massachusetts Boston

Evelyn Fox Keller
Program in Science, Technology and Society, Mass. Inst. of Technology

Abstract

How can we integrate the role of values and ethics in economic analysis of climate change without sacrificing the positive aspirations of that science? Given the urgency of the measures required by climate change, economic analysis has never been as important as it is today. And given the necessity of value judgments in economic analyses of policy options, the tension between fact and value has never been more conspicuous. But while significant strides have recently been made in the understanding of both the inadequacy and impracticality of a fact/value dichotomy in scientific research, many in economics seem to continue to adhere to outmoded (and now clearly inappropriate) images of science. The net effect has been to undermine the usefulness of economic advice to policy makers. The ideal of objectivity to which economists aspire needs to be reframed and broadened in ways that take advantage of new resources from the philosophy of science, environmental philosophy, and other social sciences. Ultimately, changes in the education of young economists, as well as in patterns of support for practicing economists, will be necessary to effect a shift to an ideal of objectivity in which the role of values can be properly integrated.


1. The Need for a New Understanding of the Role of Values in Climate Economics

How can we integrate the role of values and ethics in economic analysis of climate change without sacrificing the positive aspirations of that science?

It is not hyperbole to say that this generation's major challenge is climate change. What we do about it will have larger consequences for future human well-being, or future human suffering, than our actions on any other issue. As the findings of climate science gather increasing scientific support, attention shifts to the question of how we can effect the transformations in our economies required by the expected changes in climate. For this, economic analysis is indispensable.

Unfortunately, however, the economics profession in the United States is in large part failing to meet its responsibilities in this area. Trapped in an outmoded view of science as an enterprise that must eschew discussion of values in order to preserve detachment, the analyses of our most prominent economists lend themselves to a critical undermining of responsible policy responses. This was acutely apparent in the response of economists such as William Nordhaus and Gary Yohe to the British Treasury's Stern Review on the Economics of Climate Change. The Stern Review's choice of a near-zero discount rate was, they claimed, evidence of unjustified moralizing. By contrast, claiming the high status of science and rationality for their own work, they ignore the morally preposterous implications of its results. (One might ask about the contrast between Europe and the US.)

Significant strides have recently been made in the understanding of both the inadequacy and impracticality of a fact/value dichotomy in scientific research, but many in economics seem to continue to adhere to outmoded (and now clearly inappropriate) images of science. The belief that mathematical formalization combined with rigorous empirics automatically provides value-free results remains a foundational assumption of the contemporary mainstream discipline. But as many have pointed out, such techniques give one only the assurance that someone else starting from the same assumptions and data will reach the same conclusions. Nordhaus's rationale for using a market rate of interest as a discount rate, for example, is based on the intuition that such a rate might in principle be observable by anyone. 1 Yet this way of attempting to achieve unbiased research actually leads to a pronounced bias -- a bias in favor of the status quo: evaluation of most meaningful changes requires the sort of explicit ethical reflection that is being avoided.

It is often supposed that any alternative to such methodology-based objectivity implies a rejection of science and a slide into relativism and unfounded emotion-based claims. Indeed, views such as Nordhaus's have given ammunition to those who argue that economic analysis is worse than useless, and should be entirely abandoned in favor of exercises in, for example, visioning and participatory methods. Both sides of that debate, however, remain entrapped by the same fact-value dichotomy.
1 The fact that there actually is no such single market rate on which economists agree, however, obviously weakens Nordhaus's argument.
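The stakes in the discount-rate dispute are easy to make concrete. The two rates below are only rough stand-ins for the contending positions (a near-zero rate of about 1.4% associated with the Stern Review, and a market-based rate of about 5.5%); the arithmetic, not the calibration, is the point.

    # Present value of $1 trillion of climate damages occurring 100 years
    # from now, under two discount rates. The rates are rough stand-ins
    # for the Stern Review's choice and a market-based alternative.
    damage = 1_000_000_000_000  # dollars
    years = 100

    for rate in (0.014, 0.055):
        pv = damage / (1 + rate) ** years
        print(f"discount rate {rate:.1%}: present value = ${pv / 1e9:,.1f} billion")

At 1.4% the future damage is worth roughly $250 billion today, enough to justify substantial abatement spending; at 5.5% it is worth roughly $5 billion, and almost nothing is justified. A seemingly technical parameter choice carries the entire ethical weight of how much future generations count.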


There is, however, another solution, which involves recognizing the inescapable intertwining of fact and value, while continuing the systematic search for reliable knowledge. Amartya Sen has called this "transpositional" objectivity. This (in fact more exacting) standard of objectivity requires that the viewpoints and values underlying the analysis be brought out into the open and subjected to scrutiny. Because viewpoints may be shaped by factors such as nationality, class, race, gender, status, generation, and habits peculiar to particular professions, this requires being able and willing to articulate the reasoning behind one's research in ways that can be understood by a larger community than the one composed of one's closest peers, and an openness to dialog with such larger communities. In the case of climate economics, the perspective of future generations, while it cannot actually be brought to the table, cannot be neglected. Adherence to methodological strictures alone cannot assure this.

2. How to Advance

Re-evaluating the role of ethics in economics challenges assumptions that are deep-seated in the mainstream of U.S. economics. Accordingly, improving economic analysis of climate change will require a multi-pronged effort.

The rising generation, given their energy and larger stake in the outcomes of climate change policy, should be a key part of this transformation. One prong should therefore seek to transform and revitalize economics education, from the undergraduate (or even K-12) level and on upwards. Both environmental concerns and questions of ethics are currently largely neglected in the core curriculum: students are generally taught that resource-blind growth models, which assume complete substitutability among different kinds of capital, reflect simply "the way the world works." Building the capacity for graduate students to think competently about the relation of ethical questions to their work would require special interventions such as summer institutes and innovative teaching materials, since in many cases current faculty are largely unprepared to take this on. A more ethically grounded approach may also appeal to some groups who currently may be disproportionately disaffected with economics, including women and minorities.

The climate change questions are of such urgency, however, that we cannot wait for the outflow from such a new pipeline of training. Creating an environment in which presently practicing economists could receive support, rather than censure, for ethically sophisticated and sustainability-promoting work hence must be another priority. To the extent that review boards that make decisions about funding and promotion remain dominated by those who confuse adherence to methodological conventions with objectivity, projects that hide important value judgments under a veneer of technical sophistication will continue to receive funding, while explicit discussions of values will be considered "soft" and "not economics." This is further amplified by systems of peer review in economics journals, when the group of peers is constrained to an overly small group of like-minded scholars.

The NSF could intervene in an important way in these professional systems by examining its own funding priorities. Funding individuals and institutes whose work exemplifies a healthy consideration of both facts and values, and promises productive


work on transformative economic change, could help shift the mainstream from its current course. Actual dollar awards would help persuade economists through extrinsic incentives, and the imprimatur of NSF approval of "strongly objective" research would reinforce investigators' intrinsic motivation to act in accord with our important values.

3. Relevant Research

Fortunately, there are rich resources that can inform a better understanding of the relationship of ethics and knowledge. And, while less prominent than some of the other voices in U.S. debate, there are also climate economists who are not afraid to make explicit their valuing of the future.

The fact/value dichotomy has been well explored -- and exploded -- by economist Amartya Sen and a number of those who work in his wake. Philosophers of science Evelyn Fox Keller (author) and Philip Kitcher, as well as philosophers Martha Nussbaum and Hilary Putnam (2003), give rich and convincing arguments on the subject. A number of critical or heterodox groups within economics, including Institutionalist, socio-, ecological, feminist, and evolutionary economists, have also developed analyses which challenge the fact/value distinction and pioneer innovative methodologies. Recent research in behavioral economics, cognitive psychology, and social psychology has also greatly advanced our understanding of topics including human motivation toward ethical action. Oddly, much of economics still retains the assumption that we economic investigators are ourselves untouched by emotional motivation, cognitive bias, or social mores. So, along with discussions from the philosophy of science, some of the results from these disciplines could be drawn on to enrich the discussion.

Within climate economics, a number of economists are pursuing economic analysis with an explicit goal of valuing human well-being. These include Frank Ackerman (2009), Paul Baer, Stephen DeCanio, Richard Howarth, Julie Nelson (author, 2008), Kristen Sheeran, and Elizabeth Stanton. Baer's work, for example, includes a proposal for "greenhouse development rights" which looks at equity issues affecting more- and less-affluent groups around the globe. As another example, one of Stanton's essays examines how regionally disaggregated Integrated Assessment Models are slanted to preserve rich-world privilege, under the cover of merely-technical-seeming "Negishi weights." Also worth mention, for their potential contribution to better economics, are the works of environmental philosophers including Stephen Gardiner, Dale Jamieson, and Karen Warren.

It is important to recognize that critiques of mainstream economics are widespread, and discrimination must be exercised. Not every contribution by critical, heterodox, philosophical or other thinkers outside the economics mainstream is, in our judgment, helpful. Some critics of mainstream economics seem merely to exchange an obsession with detachment, quantification, and technological progress for an equally one-sided emphasis on, for example, relationships, qualitative work, and pristine wild


environments. To be clear: this is not what we are advocating; we are urging improvements upon, rather than rejection of, existing modes of analysis.

We believe the NSF is well-positioned to help economists and other social scientists address the question we raise in this essay: How can we integrate the role of values and ethics in economic analysis of climate change without sacrificing the positive aspirations of that science?

4. Other Questions

Given the singular importance of climate change as an issue that must be faced by our generation, and the damage that is being done by ethically irresponsible research, we hope that the SBE program will keep two questions in mind when evaluating the results of this request for white papers.

First, do the suggestions deal with issues of importance for the well-being of humans (and other species)? While a great many projects may be intellectually fascinating, and some scholars may argue for a concentration on "basic" (meaning the most highly generalizable) science, it is a fundamental economic insight that devoting resources to any one project generally involves the opportunity cost of forgoing others. Choices have to be made. The issue of values is therefore at the very center of what science in society is about. We believe that consideration of the well-being of future generations, and the urgency of the need for effective climate policy, demands that the projects that hold hope for the mitigation of climate change be given priority.

Second, do the suggestions deal adequately with the issue of ethics and knowledge? Neither projects that pretend that ethical issues are irrelevant to research, nor projects that propose ethical reflection detached from theory, empirics, and policy, are likely to be helpful.

We applaud the SBE for launching this request for input on "grand challenge questions that are both foundational and transformative." The need to transform climate economics is a "next-generation research" challenge in more ways than one: it requires the creation of a new generation of economic analysis, to try, to the extent still possible, to create a livable environment for the generations to come.

References

Ackerman, Frank. 2009. Can We Afford the Future?: The Economics of a Warming World. NY: Zed Books.
Nelson, Julie A. 2008. "Economists, Value Judgments, and Climate Change." Ecological Economics 65(3): 441-447.
Putnam, Hilary. 2003. "For Ethics and Economics Without the Dichotomies." Review of Political Economy 15(3): 395-412.


This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Some Foundational and Transformative Grand Challenges for the Social and Behavioral Sciences: The Problem of Global Public Goods 1
William Nordhaus
October 3, 2010

In a letter to colleagues, Myron Gutmann, Assistant Director of the National Science Foundation, invited people to contribute white papers outlining grand challenge questions that are both foundational and transformative. This paper will address some foundational issues that cross the boundaries of many social and natural sciences: the issue of how to deal with global public goods.

The problem of global public goods

Many critical issues facing humanity today -- global warming and ozone depletion, banking crises and cyber warfare, oil-price shocks and nuclear proliferation -- are ones whose effects are global and resist the control of both markets and national governments. These are examples of global public goods, which are goods whose impacts are indivisibly spread around the entire globe. These are not new phenomena but are becoming more important because of rapid technological change.

Global public goods differ from other economic issues because there is no workable mechanism for resolving these issues efficiently and effectively. If a terrible storm destroys a significant fraction of America's corn crop, the reaction of prices and farmers will help equilibrate needs and availabilities. If scientists discover the lethal character of lead in the American air and soil, the government is likely, eventually and often haltingly, to undertake to issue the necessary regulations to reduce lead in gasoline and paint. But if problems arise for global public goods, such as global warming or nuclear proliferation, there is no market or government mechanism that contains both political means and appropriate incentives to implement an efficient outcome. Markets can work wonders, but they routinely fail to solve the problems caused by global public goods.

1 The author is Sterling Professor of Economics at Yale University. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences. He is also a member of the National Academy of Sciences Division of the Behavioral and Social Sciences and Education Committee. Email: william.nordhaus@yale.edu. This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA. I am grateful for the advice and comments of several readers of earlier drafts.


Global Public Goods and the Westphalian Dilemma

In theory, global public goods are well understood as the polar case of a Samuelsonian public good. In practice, they raise the most intractable issues for real-world resolution, primarily because of what has been called the Westphalian dilemma. Whenever we encounter a social, economic, or political problem, one of the first questions raised concerns the appropriate organizational level at which the problem should be addressed (this is called fiscal federalism in public finance). We expect households to deal with children's homework assignments and take out the trash; we expect local or regional governments to organize schools and collect the trash; we expect national governments to defend their borders and manage their currencies.

For the case of global public goods, there exist today no workable market or governmental mechanisms that are appropriate for the problems. With a few exceptions, there are no mechanisms by which global citizens can make binding collective decisions to slow global warming, to cure overfishing of blue-fin tuna, to form a world army to combat dangerous tyrants, or to rein in dangerous nuclear technologies.

National governments have the actual power and legal authority to establish laws and institutions within their territories; this includes the right to internalize externalities within their boundaries and provide for national public goods. By contrast, under international law as it has evolved in the West and then the world, there is at present no adequate legal mechanism by which disinterested majorities, or even supermajorities, can coerce reluctant free-riding countries into mechanisms that provide for global public goods. Participants in the Treaty of Westphalia recognized in 1648 the Staatensystem, or system of sovereign states, each of which was a political sovereign with power to govern its territory. As the system of sovereign states evolved, it led to the current system of international law under which international obligations may be imposed on a sovereign state only with its consent.

Because nations are deeply attached to their sovereignty, the Westphalian system leads to severe problems for global public goods. The de facto requirement for unanimity or broad consensus is in reality a recipe for inaction. Particularly where there are strong asymmetries in the costs and benefits (as is the case for nuclear non-proliferation or global warming), the requirement of reaching consensus means that it is extremely difficult to reach universal and binding international agreements. Not only does each nation face a powerful incentive to free-ride off the public-good efforts of other nations, but each is likely to perceive the costs and benefits of cooperation through a biased cognitive lens that justifies free-riding.

One answer to the political vacuum is to create international institutions. Such organizations generally work by unanimity, have few provisions that are binding on recalcitrant countries, and generally apply only to countries which have agreed to participate. Even for life and death issues such as nuclear weapons, if a state like North

Korea declines to participate in the Non-Proliferation Treaty, there is no provision for forcing its adherence. There are important examples where the international system has responded to this set of problems. Some are rules such as prohibitions on torture, slavery, genocide, piracy, and racial discrimination. Another area, particularly important for national security, resides in the power of the U.N. Security Council, although its actions require the consent of the five permanent members. The rules governing international trade have evolved toward multinational decision-making. In the environmental arena, treaties to reduce ozone-depleting chemicals have been an important contribution. But the exceptions are limited and do not cover many critical areas.

The central proposition of this White Paper is that global public goods are becoming more important, and will become increasingly important in the years ahead. The grand challenge for economics, political science, international relations, and associated social sciences is to devise mechanisms that overcome the bias toward the status quo and the voluntary nature of current international law in life-threatening issues. The Westphalian system is an increasingly dangerous vestige of a different world. Just as economists recognize that consumer sovereignty does not apply to children, our international institutions and analyses must come to grips with the fact that national sovereignty often cannot deal effectively with critical global public goods.

Challenges in Dealing with Global Public Goods

Dealing effectively with global public goods poses two intellectual grand challenges, which are both critical and complementary. The first challenge is the analytical one. This involves understanding the behavioral aspects that underlie the problems associated with global public goods. I have outlined the difficulties that are at the intersection of game theory, economics, political science, and international law. There exists a substantial core of work on cooperative games and public-goods mechanisms. One critical task, then, is to explore the perverse outcomes as well as possible mechanisms involved in addressing global public goods. It should be emphasized that the nature of the syndromes may differ depending on whether they are benign or harmful; on the structure of the production technologies, such as whether they are additive, best-shot, or weakest-link; on the distribution of gains and losses; and on the scale of the problem.

The second challenge consists of actual problems that pose dangers to human societies. Each of the problems mentioned in this White Paper (global warming, overfishing, and cyber warfare, as examples) has a specific structure and a local body of expertise. (In this context, local denotes intellectual as well as geographical proximity.) To take the example of global warming, the local expertise involves climate scientists, ecologists, marine biologists, energy specialists, and the like. But the local expertise is not sufficient to deal with global public goods. It is also

necessary to recognize the analytical issues involved, the nature of the externalities, and the mechanisms for reaching solutions -- and this is where the complementarity between the analysis in the first challenge and the local knowledge in the second challenge arises. To return to the example of global warming, those who have studied the history of international agreements will recognize that it is insufficient to tell countries that a terrible future awaits them if they do not act. It will be necessary to design systems in which affirmative national steps to contribute to global action serve a country's own national self-interests, particularly when the national costs of action are large and the national costs of inaction appear small.
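The free-rider logic at the core of the Westphalian dilemma can be written down as a simple N-country abatement game. The payoff numbers below are illustrative assumptions, not calibrated estimates; the structure, not the calibration, carries the point.

    # An N-country public-goods (abatement) game: a minimal sketch of
    # the free-rider problem described above. All payoffs illustrative.
    N = 10          # number of countries
    BENEFIT = 0.3   # benefit to EACH country per unit of abatement by ANY country
    COST = 1.0      # cost borne by the abating country alone

    def payoff(i_abates: bool, others_abating: int) -> float:
        total = others_abating + (1 if i_abates else 0)
        return BENEFIT * total - (COST if i_abates else 0.0)

    # Whatever the others do, abating changes a country's own payoff by
    # BENEFIT - COST = -0.7, so declining to abate is dominant:
    print(payoff(True, 5) - payoff(False, 5))               # -> -0.7
    # Yet universal abatement beats universal free-riding for everyone:
    print(payoff(True, N - 1), "versus", payoff(False, 0))  # -> 2.0 versus 0.0

Under a consent-based system each country plays its dominant strategy and the cooperative outcome is unreachable; a mechanism that ties a country's own payoff to its participation is what changes the sign of the unilateral calculation, which is exactly the design problem posed above.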

Strategy for Research for the National Science Foundation and Other Agencies

Finally, I address the strategy for the NSF and other agencies in addressing the programmatic study of global public goods. I do not recommend establishing a special program to deal with such issues. It is generally fruitless to attempt to establish programs in the social sciences to address specific challenges that spring up from time to time. Rather, I would suggest two complementary approaches.

First, for the NSF, I suggest that each program within the Directorate for the Social, Behavioral, and Economic Sciences make a special effort to solicit and recognize research that is targeted at aspects of global public goods. This would also encourage cross-disciplinary research programs (such as economics and international relations, or political science and behavioral psychology) that address specific issues that arise in the context of global public goods. This goal might be accomplished by establishing matching funds to provide incentives to programs. The definition of the analytical areas to be supported, as well as the specific problems that need examination, should be determined by a panel specifically asked to delineate the issues.

A second approach transcends the boundaries of the social sciences and includes the study of the actual problems raised by global public goods. The federal government directly and indirectly supports a wide variety of research programs on the substantive issues discussed here. Indeed, climate change, security issues, environmental research, terrorism, and public health are extensively studied in different parts of the federal government. However, these problems are often viewed in isolation as technical, scientific, or security issues. In fact, they are just as much social and political issues. Research and policy have sometimes foundered because they did not incorporate the relevant social-science insights from the very conception. It is essential to have a mechanism by which social-scientific analyses can be included in such research programs and for social scientists to be at the table when the scope of the problems and the research programs are defined. So the grandest challenge of all is to ensure that


research on global problems be seen in the social as well as the technical context when the substantive problems are considered.

What are the stakes?

I conclude with the warning from Rockström et al. on global environmental issues:

"Human activities increasingly influence the Earth's climate and ecosystems. The Earth has entered a new epoch, the Anthropocene, where humans constitute the dominant driver of change to the Earth System. The exponential growth of human activities is raising concern that further pressure on the Earth System could destabilize critical biophysical systems and trigger abrupt or irreversible environmental changes that would be deleterious or even catastrophic for human well-being. This is a profound dilemma because the predominant paradigm of social and economic development remains largely oblivious to the risk of human-induced environmental disasters at continental to planetary scales."

While this warning is only a hypothesis at this stage, it does indicate the stakes involved in the grand challenge of finding solutions for global public goods.

References

Samuelson, Paul A. 1954. "The Pure Theory of Public Expenditure." Review of Economics and Statistics 36: 387-389.
Nordhaus, William. 1994. Managing the Global Commons. Cambridge, MA: MIT Press.
Barrett, Scott. 2003. Environment and Statecraft. Oxford: Oxford University Press.
Rockström, Johan, et al. 2009. Ecology and Society 14(2).


244

Complexity in Social, Political, and Economic Systems


Scott E. Page
University of Michigan-Ann Arbor and Santa Fe Institute
We live in a time of rising complexity, both in the internal workings of our social, economic, and political systems and in the outcomes that those systems produce. Increasing complexity has implications for social science: it hinders our ability to predict, to explain, and to prevent large deleterious events. To make headway on the problems that animate social and behavioral scientists (economic inequality, health disparities, achievement gaps, segregation, climate change, terrorism, and polarization among voters), we must acknowledge their complexity and confront it with interdisciplinary teams. Harnessing complexity will require several changes: we must develop practical measures of social complexity that we can use to evaluate systems; we must learn how to identify combinations of interventions that improve systems; we must see variation and diversity not just as noise around the mean, but as sources of innovation and robustness; and finally, we must support methodologies like agent-based models that are better suited to capture complexity. These changes will improve our ability to predict outcomes, identify effective policy changes, design institutions, and, ultimately, transform society.


Introduction

Confronting and harnessing complexity will be among the greatest challenges facing social scientists over the coming decades. What, though, is complexity? Complexity can refer either to the attributes of a system or to the outputs a system produces: the social life of a city can be characterized as complex because it has diverse actors whose behaviors are interdependent, as can prices in the stock market, a non-stationary time series that features unpredictable booms and busts. No mere metaphor, complexity has been formally defined in dozens of ways. Some characterize output complexity as lying between ordered and random, others as being difficult to explain, to describe, or to predict.

According to any of these many definitions, the outcomes of our social, political, and economic systems have become more complex. So too have the inner workings of those systems. We're more interconnected and more adaptive than ever before. Technology has been a major cause. We are now much less geographically limited in our friendships, and lags in information have all but disappeared. For example, just a few decades ago businesses received quarterly inventory updates. Now it is said that when you purchase milk, Wal-Mart phones the cow.

To the extent that rising complexity means a lack of predictability, it limits the efficacy of social science. How do we predict the unpredictable? To the extent that it implies more large events, it has enormous implications for society. We need only consider the damage wrought by the home mortgage crisis and the resulting recession. And, to the extent that it means incomprehensibility, it means that we have little chance of designing effective policies. Take just one example: the problem of rising obesity. Social and behavioral science research reveals scores of causes, ranging from the economic (the low price of corn-syrup-infused Big Gulps) to the genetic (the tendency for some people to store fat). Individually, these causes are small in magnitude: they explain little of the variation, and they act on each person differently. We therefore expect that individual policy interventions, such as taxing sugary drinks, will have only modest effects. We also know that collectively, these scores of factors have a large effect. How then do we design interventions? How can we pull levers in combination to reverse the trend?

Implications of Increasing Complexity

We can see increasing complexity as creating problems, as making it hard for us to predict and design, and as making us more susceptible to large, deleterious events. We can also see it as an opportunity. The opportunity derives from the potential for complex systems to produce emergent functionalities, such as the consciousness and cognition that emerges from interacting neurons and


synapses (Holland 1999). The physicist Philip Anderson famously commented that "more is different," i.e., the whole can be more than its parts. Yet human society has only begun to learn to harness the potential of "more."

Current social science models cannot help us harness complexity because, for the most part, they rely on an equilibrium paradigm. Changes in outcomes are seen as movements in equilibria and not as natural progressions in a dynamic process. The relevance of complexity does not deny the value of equilibrium models. Equilibrium may well remain at the core of our disciplines. However, even the most casual observer recognizes that most markets, political systems, and social systems do not sit at rest but are constantly in flux. To account for this incommensurability between our models and reality, social scientists add in randomness in the form of shocks or uncertainty. Most, though not all, equilibrium models that toss in noise see the internal complexity of systems as disorganized. As Warren Weaver pointed out over sixty years ago, disorganized complexity cancels out, so it cannot add up to more than the parts. Yet the large unexpected outcomes produced in complex systems are anything but random. The economy and other social systems contain organized complexity, in which the whole not just exceeds but transcends the parts. For this reason, complex systems scholars often refer to social outcomes as generated from the bottom up. Hence, the term self-organization has become widespread within complexity research. Self-organized systems can produce cooperative, robust outcomes, but they can also spiral into chaos. We need to understand how to encourage the former and guard against the latter.

How Social Science Must Change to Include Complexity

The challenge and the opportunity for social science are to build in the robustness necessary to limit the damage of large events and to harness complexity to produce better social outcomes. To accomplish these tasks requires at least four changes in practice.

First, we must advance our methodologies for measuring and categorizing the complexity of social processes, something we currently make little or no effort to do. How complex is our welfare system or the international financial system? How complex is the U.S. tax code or our legal system? Why should we care about these questions? Organizational theorists have long claimed that if you can measure it, you can manage it. Managing the complexity of systems may be just as important as working to maintain their efficiency.
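What might such a practical metric look like? As a purely illustrative starting point (not a measure proposed in this paper), one can symbolize a social time series and count the distinct patterns it contains. The Python sketch below implements a simplified phrase-counting variant of Lempel-Ziv complexity on invented data:

import numpy as np

def lempel_ziv_complexity(symbols):
    # Count distinct phrases in a left-to-right parsing of the sequence:
    # each phrase is extended until it has not been seen before.
    s = "".join(map(str, symbols))
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        while j <= len(s) and s[i:j] in phrases:
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)

rng = np.random.default_rng(0)
ordered = [0, 1] * 500                       # perfectly periodic series
noisy = rng.integers(0, 2, 1000).tolist()    # i.i.d. random series
print(lempel_ziv_complexity(ordered))        # low: few distinct phrases
print(lempel_ziv_complexity(noisy))          # higher: many distinct phrases

A periodic series parses into far fewer distinct phrases than a random one of the same length; where real policy or institutional event streams would fall on that spectrum is exactly the kind of question such metrics could make answerable.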


In addition, once we know the complexity of a system, we have some idea about how predictable it is and how likely it is to produce large unexpected events. We can even treat complexity as a policy consideration in and of itself. We might ask whether a new policy will make a system more complex, and if so, whether its benefits are worth the costs of that added complexity. As mentioned, physical and computational measures of complexity exist in abundance. These can provide a starting point for creating social complexity metrics, but they need refinement for the simple reason that electrons don't think. Thus, it's relatively easy to understand how their behaviors aggregate. People, on the other hand, do think. We base our behaviors on mental models, belief systems, and passions. We can also copy others whom we perceive as being successful. This last observation, that we often mimic others, implies a positive feedback and a close link between social and evolutionary systems. Positive feedbacks, along with interdependencies, are a major driver of large events. Hence, social and evolutionary systems may be more prone to fluctuations than physical systems.

Second, we must promote interdisciplinary research on specific problems, such as improving education. Educational success depends on individual, family, peer, and community influences. Empirical studies of educational performance include psychological variables (IQ), social variables (crime rates), health variables (the presence of lead in the bloodstream, and obesity), and economic variables (family income). As in the aforementioned case of obesity, to explain academic success we can create a comprehensive model with lots of weak individual effects but strong collective effects. But if we break a complex system into disciplinary parts, we ignore the complex interactions that enable the whole to be more than its parts. To harness complexity, to borrow a term from Robert Axelrod and Michael Cohen, we must take a generative perspective and see social outcomes as produced by purposive actors responding to incentives, information, cultural norms, and psychological predispositions. We need interdisciplinary teams to unpack how those many forces interact.

A large part of taking a generative perspective will be rethinking variation and diversity, the third necessary change. Social and behavioral scientists must think more like ecologists, who see variation as central, and less like statisticians, who perceive deviations from average effects as noise or individual differences that average out. In complex systems, variation (differences within types) and diversity (differences in the number and distribution across types) drive innovation and contribute to system-level robustness. Robustness, or what some call resilience, refers to the ability of a system to maintain functionality in response to external shocks and


internal adaptations. Note that robustness differs from stability, the capacity of a perturbed system to return to the same equilibrium. Robust systems often maintain functionality by locating a new arrangement of their parts. Variation and diversity also provide the building blocks for emergent phenomena and for complexity itself. Thus, empirical studies that assume a single type of actor or behavior may be woefully inaccurate in their estimations if in fact the systems contain multiple types of actors.

Finally, we must advance computational agent-based modeling, even though this methodology is not, as some claim, a panacea. Agent-based models consist of a set of agents, situated in place and time, that follow and adapt rules of behavior. The modeler designs a system, sets the agents loose, and watches what transpires. The behaviors included in the models need not be ad hoc, mechanistic rules. They can be calibrated to actual behaviors revealed in the laboratory, identified in field studies, or discerned from empirical studies.

Many people conflate computational methods with complexity. This is a mistake. We must disconnect scientific methodologies from the properties of the systems that they are used to study. In point of fact, agent-based models produce aggregate outcomes that fall into one of four broad categories: static equilibria, periodic equilibria (patterns), random paths, or complex trajectories. Social systems exhibit all four of these behaviors as well. We see phenomena ranging from stable market prices, to random walks on Wall Street, to political cycles, to complex intra-industry dynamics. A goal of social science should be to explain why some processes produce outcomes that fall into one category and others fall into another.
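To make the generative, bottom-up logic concrete, here is a minimal sketch of a Schelling-style segregation model in Python. The grid size, tolerance threshold, and random-relocation rule are illustrative assumptions rather than parameters from any particular study:

import random

SIZE, THRESHOLD, STEPS, EMPTY_FRAC = 20, 0.4, 20_000, 0.1
random.seed(1)

# Populate a wrap-around grid with two agent types (1 and 2); 0 marks empty cells.
n_each = int(SIZE * SIZE * (1 - EMPTY_FRAC) / 2)
cells = [1] * n_each + [2] * n_each
cells += [0] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbor_stats(r, c):
    # Return (same-type neighbors, occupied neighbors) for the agent at (r, c).
    me, same, occupied = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
                if grid[nr][nc]:
                    occupied += 1
                    same += grid[nr][nc] == me
    return same, occupied

for _ in range(STEPS):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c]:
        same, occupied = neighbor_stats(r, c)
        if occupied and same / occupied < THRESHOLD:   # unhappy agent
            er, ec = random.randrange(SIZE), random.randrange(SIZE)
            if grid[er][ec] == 0:                      # relocate to an empty cell
                grid[er][ec], grid[r][c] = grid[r][c], 0

shares = []
for r in range(SIZE):
    for c in range(SIZE):
        if grid[r][c]:
            same, occupied = neighbor_stats(r, c)
            if occupied:
                shares.append(same / occupied)
print(f"mean like-type neighbor share: {sum(shares) / len(shares):.2f}")

Even though every agent here tolerates being a local minority (a 40% threshold), the mean like-type neighbor share typically climbs well above the roughly 50% of the initial random allocation: macro-level segregation emerges that no individual agent demands.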

Summary

On the positive side, increased engagement with complexity research can enable social scientists to better explain and predict what occurs in our increasingly complex world and to anticipate large events. On the normative side, a deeper engagement with complexity can help us to identify and pull levers within systems to effect change, to design rules, laws, and incentive structures that limit the prevalence of large deleterious events, and to leverage the potential for emergence to improve outcomes.

References

Holland, John. 1999. Emergence: From Chaos to Order. Basic Books.


Page, Scott. 2010. Diversity and Complexity. Princeton University Press.

U.S. House of Representatives, Committee on Science and Technology. 2010. Building a Science of Economics for the Real World, July 20.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Research Opportunities in Economics: Suggestions for the Coming Decade

James Poterba, MIT and NBER
September 2010

The remarkable events in the global economy in the last two years have drawn new attention to the central role that economic institutions and economic policies play in determining the well-being of virtually all participants in modern industrial societies, even those whose livelihoods are far removed from the financial sector that was the epicenter of the global economic crisis. The global financial crisis suggests a number of critical research opportunities for the next decade. While they have attracted less media attention, recent developments in energy markets, in environmental policy, in information technology, and in the availability of data on the economic activity of households are also opening new research vistas for economics. This short research prospectus begins by summarizing key research issues that have been raised by the global financial crisis, and then moves on to issues that are associated with energy and environmental policy and to the wealth of new data being created by the rise of electronic commerce and internet search.

Measuring and Modeling Interdependencies in Financial Markets. One of the central regulatory challenges that has been identified in the aftermath of the collapse of Lehman Brothers and the rescue of AIG is the need to develop metrics for evaluating whether a participant in the financial system, whether a bank, an investment firm, or a sovereign borrower, is "too big to fail." The conceptual challenge in answering this question is enormous because, as the events of October 2008 demonstrated, the importance of a single actor depends critically on that actor's links to other actors, and on the way those other actors will respond to signs of distress at the actor in question. This challenge needs to be addressed with empirical research directed at measurement and modeling issues, as well as with theoretical research directed at understanding market cascades and the interdependencies across markets.

A central question for regulatory policy is what information needs to be disclosed by each actor in the financial system, and to whom it needs to be disclosed. Put more simply, what should regulators regulate, and how should they measure it? We now recognize that there are substantial links between financial actors, and that regulatory authorities need to be able to judge the exposure of one systemically important financial institution to financial distress at another. Some proposals that have been discussed in the context of the newly-created Office of Financial Research call for financial institutions to disclose their holdings of securities, at varying levels of disaggregation, to a regulatory body that will search for patterns and excessive risk-loading on particular dimensions. Empirical research is needed to determine the value of different types of disclosures, to create risk-assessment models that offer maximum predictive power for financial distress, and to help guide the design of regulatory policy in this sphere.

Theoretical issues are at least as critical as the unresolved empirical questions. What criteria should be used to determine whether a particular institution is systemically important? Answering that question requires a framework for assessing the consequences of its failure, which in turn requires assessing the set of related institutions that might be affected by such a


failure. The modeling of such network effects in financial markets is just beginning. The transmission of financial shocks from one firm to another, or from one sovereign borrower to the broader capital market, clearly depends on the nature and timing of the policy response. This suggests that the research challenge is not simply to describe and quantify the nature of interrelationships, but to describe how the links across actors in the financial market will depend on the policy environment. The answers to these questions will surely build on several decades of theoretical research on incentives and contract structure, but they need to recognize the specific institutional features of modern global financial markets. Similarly, research on economic models of regulation, which advanced rapidly in the 1970s and helped create a wave of regulatory changes in that era, is vital to evaluating the trade-offs that will confront regulatory policy-makers. There are rich opportunities for collaborative studies by financial economists, who have the expertise to evaluate the risk attributes of specific financial products or transactions, and regulatory economists, who have analyzed the challenges of regulating risk-taking and firm or individual behavior that may generate externalities in a range of market settings.

Designing and Evaluating Fiscal Policy. For at least two decades, the near-consensus among academic researchers has been that fiscal policy, with its "long and variable lags," is poorly suited to stabilization of economic fluctuations, while monetary policy offered at least a potentially effective tool for high-frequency macroeconomic fine-tuning. There have been active debates about whether monetary policy should be used for such purposes, but relatively little attention to the macroeconomic effects of fiscal policy. The recent economic downturn has reignited interest in fiscal policy, and it reveals important gaps in our current understanding of this policy tool.

One critical set of issues concerns the stimulative effects of increases in government spending and reductions in taxes. The debate over the magnitude of the economic stimulus in early 2009 revealed widespread disagreement over the extent to which higher spending and lower taxes would raise aggregate economic activity. Moreover, while there was broad agreement that different types of spending would have different effects -- that outlays on infrastructure might have different employment consequences than extending unemployment benefits, and that cutting business taxes might have different effects than reducing payroll taxes -- there was a limited base of research available to refine estimates of these policy impacts. Analyzing the economic effects of fiscal policy, studying differences across U.S. states and across nations, and measuring how different fiscal policies affect economic activity under different macroeconomic circumstances are important opportunities for future research.

A related, but distinct, set of questions concerns long-term fiscal balance. The U.S. and many other developed nations are currently on unsustainable fiscal trajectories. Current payroll and income taxes are not sufficient to cover the cost of current-law spending programs. This means that it is inevitable that changes will need to be made in spending programs, in taxes, or in both. The devil will be in the details of programmatic reform, and it is essential to build a solid research base for evaluating different reform strategies.
The effect of changing reimbursement rules in the health care sector, for example, may affect R&D decisions by pharmaceutical firms, labor supply decisions by doctors and other health care providers, and the nature of treatment received by patients. Changes in Social Security benefit rules similarly could have differential


effects on the "oldest old" and on recent retirees, and they might influence household behaviors as diverse as savings decisions, retirement choices, and living arrangements. There is rich variation in past policies that can provide input on calibrating these behavioral responses, and this information must be brought to bear in policy design. Leaving specific programs aside, there are unanswered questions about the macroeconomic effect of a higher level of government debt, one of the inevitable consequences of the near-term continuation of an unsustainable fiscal stance. To evaluate policies that might bring the U.S. closer to a sustainable fiscal position, it is important to evaluate the costs of deviating from such a path. That challenge presents a distinct but very rich set of research opportunities.

Energy and the Environment. The extent to which human consumption of fossil fuels is associated with global warming has been one of the most studied scientific issues of the last three decades. As scientific advances narrow the uncertainties associated with this question, there will be greater opportunities to craft policies that will alter emissions of greenhouse gases and more generally alter patterns of energy consumption. The operation of energy markets and the design of policy interventions to achieve particular objectives are central economic issues that warrant further study. Economists have long understood the basic principles of optimal tax design that apply to "Pigouvian" corrective taxes: set the tax rate equal to the marginal social damage of a good's consumption. There are refinements, such as recognizing the effect of the corrective tax on the demand for other goods, that must be considered, but this broad principle is still a useful guide. Yet there are many unanswered questions associated with a potential transition to an economic environment in which the tax-inclusive prices of fossil fuels are substantially greater than those prices today. Two of the most important issues concern the long-run changes in consumer behavior that will flow from higher energy prices, and the response of firms to changes in the tax and regulatory environment.

First, consider consumer responses. Much of the existing research on energy demand focuses on relatively short-run adjustments to price changes, such as the price elasticity of demand for gasoline and the demand for vehicles with different miles-per-gallon ratings in different fuel cost regimes. Yet there are unresolved puzzles about energy consumption, and open issues about general equilibrium responses to higher energy prices. For example, there are widely documented inefficiencies in energy use in both the commercial and residential sectors: opportunities to reduce energy consumption with little or no change in the production or consumption opportunities of energy users. The underlying behavior that generates these patterns warrants investigation, both as a positive question and as policy-related input that may provide information on how behavior will adjust in response to higher taxes or other regulatory policies. It may be possible to carry out controlled experiments to learn how consumers respond to various types of incentives for energy conservation, and how the amount that can be saved with particular interventions affects the take-up of such interventions. Just as there are important opportunities to study the behavior of consumers, there are important needs and opportunities in modeling the behavior of firms.
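To give a concrete sense of the consumer-side estimation, short-run price elasticities of the kind mentioned above are often recovered from log-log demand regressions. The Python sketch below uses synthetic data; the "true" price elasticity of -0.25 and income elasticity of 0.5 are invented for illustration, and real applications must also confront the endogeneity of prices:

import numpy as np

# Simulate a log-log demand curve, then recover the elasticities by OLS.
rng = np.random.default_rng(42)
n = 200
log_price = rng.normal(1.0, 0.3, n)
log_income = rng.normal(10.0, 0.2, n)
log_qty = 2.0 - 0.25 * log_price + 0.5 * log_income + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), log_price, log_income])
beta, *_ = np.linalg.lstsq(X, log_qty, rcond=None)
print(f"estimated price elasticity:  {beta[1]:.3f}")   # close to -0.25
print(f"estimated income elasticity: {beta[2]:.3f}")   # close to 0.50

In a log-log specification the coefficients are themselves the elasticities, which is why this functional form is so common in the applied demand literature; identifying long-run and general equilibrium responses is far harder and is part of the research agenda sketched here.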
A small but growing literature has explored the economic effects of energy policies when firms are imperfect competitors. In these settings, some firms may have market power, or there may be


opportunities for firms to collude. The effects of various policies on energy production and on the prices facing consumers can be quite sensitive to the nature of firm competition. Exploring models of imperfect competition, and calibrating them for the energy-producing sector, is therefore an exciting direction for new research.

Beyond the detailed analysis of consumer demand and energy supply, there are open issues concerning the impact of publicly- and privately-provided infrastructure on energy use. To assess the impact of expanding light rail service in a metropolitan area on energy use, it is necessary to model how intermodal transportation choices will change, to consider how residential location decisions may evolve, and to carry out the analysis at a disaggregated level. Will the former drivers who switch to light rail be drawn disproportionately from the well-to-do groups that drive late-model and relatively fuel-efficient vehicles, or will they come from lower-income groups that drive less fuel-efficient cars? How does the pricing policy for public transit affect the relative responses of these groups?

Finally, there are important opportunities to study the operation of energy markets. While much energy-related research may be supported by other funders, the core research on the markets for various fossil fuels, for electricity, and for emissions associated with the combustion of fossil fuels falls squarely within the purview of social scientists. Insights from market design and auction theory have already played a central role in helping to create markets for trading emission rights, but there are many other potential applications in the energy and environmental sectors. Regulatory policies and tax policies have long been central to the markets for crude oil, natural gas, and nuclear power in the United States, and these policies are likely to play a key role in shaping these markets in the future. Supporting research on regulatory policy analysis and on the operation of energy markets is therefore of direct relevance for the policy design process.

Networking and Household Behavior. The advent of cellphones and handheld devices, coupled with social media such as Twitter and Facebook, has greatly expanded the frequency of interaction between individuals and the availability of information for a host of economic decisions. In deciding on a product purchase at a retail outlet, consumers can easily access a host of product reviews. When selecting a restaurant in an unfamiliar city, travelers can investigate ratings from previous diners in real time. There is an extraordinary range of transactions for which there have been similar expansions in information access. What are the implications for consumer choices? Will enhanced information access result in greater "herding" in product choices? How does the possibility of such herding alter strategy for firms? Does the need to establish an early product success change the way firms might develop new products and introduce them to the marketplace? How do the opportunities for networking between individuals affect job search, housing markets, and other economically significant sectors of the economy? This is an area in which there are important opportunities both for conceptual research, modeling network structure and the factors that might influence the strength of linkages across households, and for empirical work.
Moreover, there are a host of regulatory design issues about information sharing and privacy, and about the extent to which firms can share information and customize marketing efforts, for which economic analysis will be a key input. There are many issues in this research area that may be amenable to cross-disciplinary work,


including, for example, with sociologists, marketing scientists, computer scientists, and applied mathematicians with experience on networking problems.

At the same time that consumers are gaining access to extraordinary volumes of new information, firms are gathering much more information on their customers and on other market participants than ever before. Internet search firms that collect information on the queries of their users can build voluminous databases that provide new insights on the products that consumers are interested in, and on related topics that involve economic activity. Google, for example, has explored the use of the frequency of searches for "temporary help" or "unemployment benefits" as a way of gauging the state of macroeconomic activity in real time.

As large quantities of information on individual consumers, and on networks of households, become available, there will be not only exciting research opportunities but also important challenges. One will be finding secure ways for researchers outside of the firms that collect this information to analyze these data -- subject, of course, to corporate approval. This may involve designing new forms of data protection, or creating data warehouses in which data files that remove any potential identifiers for individuals can be stored for research access. Federal research support may be needed to create such data facilities and to "clean" the data that are collected by private firms. The same sort of infrastructure could be deployed to provide access to information on consumer financial records and related administrative record information, currently held by firms, that would enable researchers to move well beyond the limitations of existing household surveys. Creating model contracts for such data access, supporting the researchers who might work to extract such data from corporate files, and even supporting corporate partnerships to facilitate such data access are promising research directions.

These four issues represent promising opportunities for research in economics and allied fields. There is a fifth issue -- the implications of an aging U.S. and global population for economic institutions and economic performance -- that is also of great importance. It is widely understood that the decline in the birth rate beginning in the mid-1960s, and the coincident decline in old-age mortality rates, will lead to a gradual aging of the U.S. population in coming decades. These changes will pressure pay-as-you-go transfer programs for the elderly, such as Social Security and Medicare, and will raise a host of other questions about economic activity. How will the organization of work and the design of workplaces respond to an aging labor force? Will there be important effects on asset markets, housing markets, and on the structure of economic activity across sectors? How will the aging population affect the rate of technological progress, and how will the health care sector respond to the growing need for its services? Many of these important research topics are supported by the National Institute on Aging, and while the research issues are as important as those in the four areas outlined above, they may not command the same funding priority in light of the potential availability of other funding sources.


National Science Foundation white paper on future research in macroeconomics

Ricardo Reis, Columbia University

The economic crisis of the past two years has brought a tremendous amount of excitement to the field of macroeconomics. Students are flocking into macro classes, and PhD theses and seminar papers are becoming more creative and connected to the real world. It is a good bet that new approaches to thinking about macroeconomic phenomena will soon emerge. Harder to forecast is whether they will require a change in paradigm or a more intense use of existing ideas and models.

Either way, it is hard to think of a time in the evolution of economic science where funding research may get more bang for its buck. Applications to graduate programs are higher than ever, which will in a few years lead to more economists applying for grants than ever, and research is shifting towards the type of basic fundamental research that the NSF almost solely funds. In this report, I will present three areas where I currently see large outstanding questions, but where I also see active work.

Fiscal policy

From the end of 2007 to the end of 2009, government spending in the United States increased by 4.4% of GDP, the largest two-year increase in government spending since 1953. The motivation for this rise was the longest and deepest economic recession in the postwar era. Yet, there has been little economic research in the past decade on the aggregate impacts of fiscal policy on unemployment or output, or on its potential use to fight recessions.

In the last two years, macroeconomists were very responsive and adapted the tools that they had developed to study business cycles to look at the effects of government consumption. However, these studies barely addressed some of the biggest challenges in this literature. Empirical work faces the obstacle that fiscal programs go through multiple steps to be approved, and many provisions and interests get tacked on at each of these stages. Therefore, even measuring the fiscal change is hard, let alone identifying its effects. Moreover, the literature on monetary policy has taught us that the impact of a change in policy can be wildly different depending on what people expect will be the subsequent policy path.

Looking at this outstanding challenge, I am filled with optimism. Economists working on monetary policy faced similar seemingly overwhelming challenges 25 years ago. After 15 or 20 years of work, much of it funded by the NSF, a wealth of new empirical techniques emerged, together with clever exercises in data collection and empirical identification that today leave us with some confidence as to what is the effect of a monetary policy shock on inflation, GDP or employment. With attention now turned to fiscal policy, I expect that we will see progress in this field over the next 15 years as well.

In current theory, the mechanism by which government policy stimulates the economy in standard models is a caricature of reality at best: government spending is expansionary because it takes

resources from private hands, making households poorer, and inducing them to work harder to compensate for their lost wealth. It is easy to ridicule the models, but we cannot forget that they were built to understand the macroeconomic dynamics that follow either monetary policy shocks or technology shocks. The impact of fiscal shocks has simply not been the focus of much business-cycle research. Here I am again optimistic. The study of monetary policy went through a fundamental transformation in the 80s and 90s. Noticeably, there was a striking contrast between the confidence with which monetary policy was set by the Fed in response to the crisis vis-a-vis our ignorance about the Treasury and its fiscal expansion programs. Thinking harder about how it is that government consumption affects decisions to invest and work will require a transformation in the models, but the toolkit that macroeconomists use leaves much room for creativity.

There is a third, more foundational challenge. Of the increase in government spending in the past two years, only 25% has come from government consumption and investment, the two spending categories to which all of the research that I have described above applies. The remaining 75% was due to increases in social transfers. Yet, there is virtually no research on the impact of transfers on aggregate consumption or employment. In representative-agent models, lump-sum transfers from one group of agents to another, potentially with budget deficits in between, are neutral to economic activity. Distortionary transfers in turn may improve welfare, but they discourage work effort and investment, depressing, rather than expanding, economic activity.

An important challenge is therefore to build models where transfer programs have aggregate effects. How expanding Medicaid, tuition assistance, eligibility for disability, or early retirement might be effective at fighting recessions: these all seem like questions that could have a large impact on public policy and for which we currently do not have even the most basic research.

Limited attention in a world of limitless information

There is an old tradition in economic models that assumes that people have imperfect information about what surrounds them, and uses this insight to explain a wide variety of economic phenomena. However, these theories have always had a clear Achilles' heel: to justify why people happened to know some pieces of information, but not others. Many approaches developed, some more popular than others: whether people know everything (the model, what others know, and any relevant variable); whether they just don't know some variables, as in rational expectations models; whether they also don't know what others know, as in imperfect common knowledge models; and finally whether they do not know even the model itself, as in theories of learning and imperfect knowledge.

While debate over this topic raged, the world changed. Information on almost anything is cheap today. For the typical household, it would take a few minutes to discover most of the relevant information it needs to plan its savings for retirement. Yet, there has been no noticeable progress (or



even change) in the way people save, and the typical result from surveys is that most people lack even the basic knowledge that economic models assume would be so valuable to them.

The answer to this puzzle is likely that, while free information is today almost limitless, the human brain is not. In particular, our mental energy and attention span are very limited. How to model these limits is an outstanding challenge that could profoundly change economics. There is much active work in this area, and the scope for interdisciplinarity is great. Economists have just started using tools from neuroscience to measure brain activity, and have for a while drawn on philosophy and psychology to understand the limits of human knowledge.

If I were to bet on where the next revolution in economics will come from, I would say it is here. The challenge is great, but also well-defined and appreciated, usual prerequisites for any glimmer of progress. The new data from brain scans, as well as the amazing information from surveys and activities of people using the Internet, is allowing the discussion to move from ideological positions to testable propositions. One problem with cooperation across disciplines is that it is terribly expensive, and often frustratingly slow to yield progress. This has led, in my view, the growing field of behavioral economics to too often focus on "cute" results that are quicker to obtain and grab the attention of sponsors. If we want economics to deal better with information, funding for basic research is essential.

Measurement of macroeconomic aggregates and forecasting

The last great revolution in the measurement of economic activity is more than half a century old, with the introduction of price indices and the national income and product accounts. Since then, most research has revolved around improving the quality of the data, without changes to the fundamental concepts of what is being measured.

However, there are reasons to think that reformulating economic statistics will soon become an active area of research. A good example comes from our division of economic activity into three broad sectors: agriculture, manufacturing and services. With employment in the services sector getting very close to 90%, this classification is almost vacuous. If everything is in the old "services," it is time to come up with meaningful ways to group the activities within services. One useful insight came recently from Alan Blinder, who suggested that jobs could instead be divided into offshorable or not, a much more useful categorization when informing trade policy.

Another economic index in need of reassessment is the consumer price index. I was shocked by the recent work of Christian Broda and David Weinstein showing that, using the data on consumer prices from AC Nielsen, which has a sample a few times larger than that available to the Bureau of Labor Statistics, and just drawing from it to mimic the BLS sample, sampling error alone meant that 65% of the time it was impossible to say whether inflation was accelerating or decelerating from one quarter to the next. A second reason for concern is that the measures of inflation that models of optimal monetary policy say should guide monetary policy bear only the slightest resemblance to the CPI.
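The mechanism behind the Broda-Weinstein finding is easy to illustrate by simulation. In the Python sketch below, all magnitudes (true inflation rates, item-level noise, sample size) are invented for illustration; the point is only that, with noisy item-level price changes, a small sample frequently gets the direction of the change in inflation wrong:

import numpy as np

rng = np.random.default_rng(7)
TRUE_Q1, TRUE_Q2 = 0.006, 0.005      # true quarterly inflation: decelerating
ITEM_SD, N_ITEMS, N_SIMS = 0.04, 300, 10_000

wrong_sign = 0
for _ in range(N_SIMS):
    q1 = rng.normal(TRUE_Q1, ITEM_SD, N_ITEMS).mean()   # measured inflation, quarter 1
    q2 = rng.normal(TRUE_Q2, ITEM_SD, N_ITEMS).mean()   # measured inflation, quarter 2
    wrong_sign += (q2 > q1)    # sample says "accelerating" though the truth decelerates
print(f"share of samples with the wrong sign: {wrong_sign / N_SIMS:.2f}")

With these invented numbers, more than a third of the simulated samples misreport the direction of the change in inflation, which is the flavor of the 65% figure that Broda and Weinstein report for the actual BLS sample design.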



Finally, there has been great progress in the science of forecasting in recent years, in part driven by increases in computing power that allow us to use information from thousands of time series to inform the forecasts. The theory of dynamic principal components models applied to forecasting has taken great strides, and there are many new approaches to forecasting business cycles that are just now being applied. Much of this work has been left to do as macroeconomists have steered away from forecasting in the past decade to focus more on theory and structural modeling. The embarrassment of the past crisis, which almost no one forecasted, may provide the right stimulus to reignite interest in forecasting.

All of this work is painful, for any work that argues for measuring and forecasting economic activity in a different way has to come up with at least provisional estimates. This is time consuming, expensive, and requires assembling teams of research assistants, similar to the laboratories in the natural sciences but inconceivable for economists with access to only very limited funding. Yet, I am encouraged by the surprising success of the recent book by Reinhart and Rogoff on financial crises. It is almost entirely a data effort, and yet it has had as much impact on our understanding of the current financial crisis as any other work.

Conclusion

Most people being asked to write about the future of macroeconomic research today would probably discuss financial crises and the integration of finance and macroeconomics. I did not mention this so far, in part because I expect others will describe it in more detail to you. In other part, though, I fear that the excitement about finance will satiate itself within a few years and will involve applying the standard economic apparatus. The work I have seen in the last year has confirmed my prior. The three areas that I discussed above instead are ones that I think may have a larger impact on what economics is, and that will yield fruits for longer.

References

Oh, Hyunseung and Ricardo Reis (2010). "Targeted Transfers and the Great Recession." Columbia University manuscript.

Blinder, Alan S. and Jagdish Bhagwati (2009). Offshoring of American Jobs: What Response from U.S. Economic Policy? MIT Press.

Lusardi, Annamaria and Olivia S. Mitchell (2007). "Baby Boomer Retirement Security: The Roles of Planning, Financial Literacy, and Housing Wealth." Journal of Monetary Economics, 54, 205-224.



This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.



A RESEARCH AGENDA IN ECONOMIC DIAGNOSTICS

A Note Prepared for the National Science Foundation

Dani Rodrik, Harvard University
September 8, 2010

Economists work with models. That is our great strength, as the discipline imparted by specifying well-articulated cause-and-effect relationships checks our logic and prevents us from falling into incoherence. Economics as a science makes progress one model at a time. But models are also our great weakness, because each single model is necessarily false. A model is at best a gross simplification of reality. We can get overly enamored of a particular model that happens to be inappropriate to the circumstances at hand. So we can end up misunderstanding the world and making the wrong recommendations.

This is in fact not a bad characterization of what happened in economics in the run-up to the recent financial crisis. Economists put too much faith in particular financial and macro models at the expense of others -- not because these had better empirical validation, but because they were, to put it bluntly, in fashion. Many commentators, including some within mainstream economics, interpreted the failure of economists to recognize the housing bubble, emphasize the risks created by financial innovation, and agree on the solutions to be pursued once the crisis struck as evidence that economics had become bankrupt as a discipline.

My view is different. Without recourse to the economists' toolkit, we cannot even begin to make sense of the financial crisis. Why, for example, did China's decision to accumulate foreign reserves result in a mortgage lender in Ohio taking excessive risks? It is impossible to provide a coherent answer to this question without resorting to constructs from behavioral economics, agency theory, information economics, and international economics, among others. The fault lies less with economics than with how economists have used the tools at their disposal. The problem was that economists (and those who listen to them) became overconfident in their preferred models of the moment: markets are efficient, financial innovation transfers risk to those best able to bear it, self-regulation works best, and government intervention is ineffective and harmful. They forgot that there were many other models that led in radically different directions. Hubris creates blind spots.

Non-economists tend to think of economics as a discipline that idolizes markets and a narrow concept of (allocative) efficiency. If the only economics course one takes is the typical introductory survey, or if one is a journalist asking an economist for a quick


opinion on a policy issue, that is indeed what one encounters. But take a few more economics courses, or spend some time in advanced seminar rooms, and one gets a different picture. Labor economists focus not only on how trade unions can distort markets, but also on how, under certain conditions, they can enhance productivity. Trade economists study how globalization can exacerbate inequality within and across countries. Finance theorists have written reams on the consequences of the failure of the efficient markets hypothesis. Open-economy macroeconomists examine the instabilities of international finance. Advanced training in economics requires learning about market failures in detail, and about the myriad ways in which government intervention is required to help markets work better. Macroeconomics is perhaps the only applied field within economics today in which more training puts greater distance between the specialist and the real world, owing to its reliance on highly unrealistic models that sacrifice relevance to technical rigor. Sadly, in view of today's needs, macroeconomists have made little progress on policy since John Maynard Keynes explained how economies could get stuck in unemployment due to deficient aggregate demand. Some, like Brad DeLong and Paul Krugman, would say that the field has actually regressed.

Economics is really a toolkit with multiple models, each a different, stylized representation of some aspect of reality. One's skill as an economist depends on the ability to pick and choose the right model for the situation. The shocking thing about economics is that very little research is devoted to what might be called economic diagnostics: figuring out which among multiple plausible models actually applies in a particular setting. The profession places a large premium on developing new models that shed light on as yet unexplained phenomena; but no one gets brownie points for research that informs how appropriate models and remedies can be selected in specific contexts. With better diagnostic tools, perhaps economists would have been more skeptical of applying perfect-information, zero-agency-cost models to the U.S. prior to the financial crisis. And to give an example from an entirely different domain, development economics would not gravitate from one extreme to another, relying on market-led and state-led strategies in turn, and moving from one big idea to another (and sometimes repudiating all big ideas altogether). The reality is that different economies suffer from different constraints, and the appropriate models and remedies depend on the nature of the more binding constraints. Diagnostic research can help us figure out how to apply economics in different settings in an intelligent way. My colleagues and I have brought such ideas to bear on problems of growth policy in developing countries.1 But clearly this ought to be part of a much more general research agenda.
1 See Dani Rodrik, "Diagnostics Before Prescription," Journal of Economic Perspectives, Summer 2010, pp. 33-43.


The absence of serious research on choosing among models also results in graduate programs in economics producing PhDs who are woefully undertrained when it comes to applying their trade to the real world. A student of industrial organization, say, will be exposed to many different game-theoretic models of imperfect competition. But s/he will not be exposed to a systematic exposition on when it is appropriate to apply one of these models and not another. Over time, of course, good economists develop a knack for performing the needed diagnostics. Even then, the work is done instinctively and rarely becomes codified or expounded at any length. The trend towards empirical work in many economics subfields has pushed the problem only further back into the subconscious of the researcher. For every piece of empirical work requires a background theoretical model in order to be interpreted. Even in the best kind of empirical work, the kind that really homes in on the essentials of the problem at hand, the manner in which the homing in has been accomplished is typically left unspecified. More commonly, the model behind empirical work is selected in an ad hoc manner or for reasons of convenience.

Randomized policy evaluations that seem at first sight to be model-free are not immune from this criticism. Suppose the researcher finds that free distribution of bed nets reduces malaria incidence, or that cameras in the classroom deter teacher absenteeism. Because these experiments are necessarily carried out in highly specific locales and under very specific experimental conditions, one needs a well-articulated theory to be able to infer anything at all about the likely effects of similar policy interventions in different settings. In other words, extrapolation requires structure. Economics constrains that structure but does not provide a unique mapping. It all depends on the specific model we want to apply. And that in turn requires methods for making intelligent diagnostic decisions.

The promise of economics as a discipline is that it is an applied science. The "science" part in this definition does not imply that every successive model displaces previous ones. Instead, every new model enlarges the toolkit and makes us better able to deal with different and new circumstances. Approached as such, the discipline remains incomplete unless we develop better rules for navigating among the diverse models that it contains. A research program in economic diagnostics would help economists think systematically about how to choose among competing, necessarily simplified representations of reality. It would contribute expertise about which model to apply where. It would make researchers better applied economists and more useful policy advisers.


Three Challenges Facing Modern Macroeconomics

White paper submitted to the National Science Foundation

Kenneth Rogoff, Professor of Economics, Harvard University, September 21, 2010

Abstract: There are three great challenges facing researchers in modern macroeconomics today, all brought into sharp relief by the recent financial crisis. The first is to find more realistic, and yet tractable, ways to incorporate financial-market frictions into our canonical models for analyzing monetary policy. The second is to rethink the role of countercyclical fiscal policy, particularly in the response to a financial crisis where credit markets seize. A third great challenge is to achieve a better cost-benefit analysis of financial-market regulation.


There are three great challenges facing researchers in modern macroeconomics today, all brought into sharp relief by the recent financial crisis. The first is to find more realistic, and yet tractable, ways to incorporate financial-market frictions into our canonical models for analyzing monetary policy. The second is to rethink the role of countercyclical fiscal policy, particularly in the response to a financial crisis where credit markets seize. A third great challenge is to achieve a better cost-benefit analysis of financial-market regulation.

Prior to the financial crisis, the consensus monetary policy model assumed frictionless, perfect financial markets in every aspect of the economy. This was in contrast to product and labor markets, where transitory wage and price rigidities created the possibility that unemployment and capacity could temporarily deviate from equilibrium levels, both in response to shocks and, importantly, in response to monetary policy. The argument was that whereas financial markets might not be quite perfect, they were far more so than labor and goods markets, and any departures from idealized perfection were of only minor consequence. The perfect-financial-markets assumption may seem absurd to a layperson, but economists often chose it because it proved a huge simplifying assumption, allowing analysis to concentrate on, say, labor markets, where distortions and imperfections were thought (by many) to be much larger. Certainly, economists had developed sophisticated models of financial frictions and of debt repudiation.1 However, any departure from frictionless markets, where prices (including sophisticated futures and derivative prices) move to equate demand and supply, creates considerable complications. In addition, there was no consensus model of frictions, making it hard to know what direction to push.

Despite the canonical model's obviously strong assumptions, economists had been encouraged by the apparent success of their frameworks in modeling monetary policy, not just in the United States but around the world. The financial crisis, of course, deeply undercut that confidence. The models not only failed to predict the crisis itself, they failed to give meaningful warning signs of any kind. Perhaps most important, they continued to perform poorly in analyzing the aftermath of the crisis. Instead, using historical data to develop benchmark trajectories based on past deep financial crises around the world has proven to be a far more powerful tool both for predicting the crisis and for projecting the economy's post-crisis recovery path.2 With the benefit of hindsight, it has become apparent that the consensus models' success may be partly attributed to the relative ease of forecasting during tranquil periods. The failure of the consensus models is hardly a satisfactory state of affairs; policymakers need a more nuanced framework for analyzing their policy choices.

1 For financial-market frictions, see Bernanke and Gertler (1990). Obstfeld and Rogoff (1996) review analyses of sovereign default. These are both important examples of departures from perfect financial markets.

2 See Reinhart and Rogoff (2009).


The challenge facing macroeconomists is a daunting one and, in many ways, parallel to the challenge economists faced after the Great Depression of the 1930s. Before then, the canonical model not only assumed perfect financial markets (to the extent that concept was understood at the time), but also perfect markets for all non-financial transactions as well. But with a quarter of the population unemployed at the peak of the Depression, the notion that frictionless markets equate the supply and demand for labor appeared patently absurd. This observation was a central tenet of Lord John Maynard Keynes's seminal work. Keynes, however, while making some profoundly insightful empirical observations, did not really offer a clear approach to how to formally model labor-market frictions. To make a long story short, economists debated the right approach for more than half a century, and never found a completely satisfactory solution. On the eve of the financial crisis, the consensus monetary model incorporated price and wage rigidities in a way that seemed to capture empirical reality usefully, although the underlying rationale for the rigidities remained somewhat crude and mechanical. Nevertheless, even after the financial crisis, it is clear that New Keynesian and related models are a vast improvement not only over Keynes but over later new neoclassical and real business cycle models that essentially rejected all frictions entirely. (At least, the new models are an improvement for purposes of analyzing monetary policy, which would be virtually impotent in the absence of frictions.)

The challenge ahead is to now also incorporate financial frictions. Although many young economists are already working on the problem, there is no reason to presume that a consensus will arise any more quickly than after Keynes, and it might well take many decades before the dust settles. Nevertheless, until macroeconomics meets this challenge, the credibility of its models will remain deeply compromised.

A second great challenge is to develop a better understanding of how government fiscal and debt policy affects the economy. On top of all the issues confronting analysis of monetary policy (introducing frictions in financial, labor and product markets), there are several additional problems. In the case of government spending increases, it has to matter greatly what the government is spending money on. An increase in infrastructure spending presumably has very different effects than an increase in military spending. Also, deficits that are due to tax cuts arguably have a very different impact than deficits that are due to government spending increases. There is also a question of how private savings might be influenced by deficit spending and the prospect of higher taxes in the future, a problem famously emphasized by Harvard economist Robert Barro. A related question is how large a government debt burden an economy can sustain without risking a loss in market confidence. Future research needs to better incorporate the striking nonlinearities that historical analyses reveal in the data. Up to a point (a debt ceiling), countries seem to be able to borrow freely with little consequence for the interest rate they pay. But as debt rises, and especially if growth slows, interest rates on a country's debt can rise quite suddenly, prompting either default or a sharp and painful adjustment.i The inadequacy of economists' models of fiscal and debt policy was again brought to the fore by the financial crisis. The US government had to make profoundly difficult choices on how much fiscal stimulus to introduce on the back of disturbingly thin economic research. Fiscal and debt policy will of


The third great challenge is to develop a better cost-benefit analysis of financial market regulation. Most analyses of regulation take a microeconomic industry or firm-level perspective. But in the case of financial market regulation, there are important economy-wide risks. Remarkably, whereas economists have looked a great deal at how financial deepening fosters development, there is far less understanding of how to balance risks in a more sophisticated economy. How does one do a proper cost-benefit analysis of bank capital adequacy rules? Does high-frequency trading improve an economy's stability and growth, or is it more likely to be destabilizing? Again, these are issues that have always existed, but have now been given fresh urgency by the global financial crisis.

I have detailed three important challenges facing modern macroeconomic research. In concluding, I want to take up the issue of methodology in economics. My basic contention is that although macroeconomists should certainly give more attention to historical analysis and empirics, the profession still very much needs to continue deepening its mathematical and analytical frameworks, certainly along the lines of the three challenges outlined above.

A central thrust of modern economics, especially since World War II, has been to introduce greater mathematical rigor and discipline into analysis. Although this approach has been much criticized, mathematical rigor serves two essential roles. First, it makes it far easier to make the field cumulative, so that researchers can generalize, refine, advance and refute existing theories. Second, in conjunction with modern statistical methods, it has made it possible to formally parameterize and test specific theoretical models, greatly expanding their applicability.

As noted, the recent financial crisis has raised huge criticism of and discontent with the canonical approach to macroeconomics, some justified, some not. A fair criticism is that because academic researchers place great emphasis on internal consistency, there is a tendency to give far less rigorous attention to external consistency. As noted, the small number of economists who looked at long-term historical data on the history of financial crises were far better able to analyze and predict the economy's vulnerability to the financial crisis, as well as to project its likely aftermath.

But the current limitations of sophisticated mathematical and statistical models for real-world macroeconomic applications should not be viewed as a reason to reject modern technical economics. Over the very long term, as economics advances as a science, frameworks that are amenable to concrete mathematical and statistical methods are likely to continue to improve dramatically, especially as computational methods expand and databases become deeper and easier to manipulate. One can imagine that future developments will allow much more nuanced models of how large-scale markets work, and of the interconnection between financial variables, political and regulatory constraints and macroeconomic outcomes. Ultimately, success in meeting the three challenges detailed here must involve a deepening of research in technical economic methods, not abandonment.


References

Bernanke, Ben S. and Mark Gertler, "Financial Fragility and Economic Performance," The Quarterly Journal of Economics, Vol. 105, No. 1 (February 1990), pp. 87-114.

Obstfeld, Maurice and Kenneth S. Rogoff, Foundations of International Macroeconomics, Cambridge: MIT Press, 1996.

Reinhart, Carmen M. and Kenneth S. Rogoff, This Time Is Different: Eight Centuries of Financial Folly, Princeton: Princeton University Press, 2009.

i See Reinhart and Rogoff (2009) and Reinhart and Rogoff, American Economic Review, May 2010.


NSF white paper, 9/8/10

Market Design

Alvin E. Roth

Market Design: Understanding markets well enough to fix them when they're broken

Grand Challenge White Paper for Future Research in the Social, Behavioral & Economic Sciences

By Alvin E. Roth, Harvard University

Abstract

In the past fifteen years, the emerging field of Market Design has solved important practical problems, and clarified both what we know and what we don't yet know about how markets work. The challenge is to understand complex markets well enough to fix them when they're broken, and implement new markets and market-like mechanisms when needed. Among markets that economists have helped design are multi-unit auctions for complementary goods such as spectrum licenses; computerized clearinghouses such as the National Resident Matching Program, through which most American doctors get their first jobs; decentralized labor markets such as those for more advanced medical positions and for academic positions; school choice systems; and kidney exchange, which allows patients with incompatible living donors to exchange donor kidneys with other incompatible patient-donor pairs. These markets differ from markets for simple commodities, in which, once prices have been established, everyone can choose whatever they can afford. Most of these markets are matching markets, in which you can't just choose what you want; you also have to be chosen. One of the scientific challenges is to learn more about the workings of complex matching markets, such as labor markets for professionals, college admissions, and marriage.


1. Fundamental questions

Market design is the term used to refer to a growing body of work that might also be called microeconomic engineering, and to the theoretical and empirical research in economics, computer science and other disciplines that supports this effort and is motivated by it. At its heart is the fundamental question: How do markets work?

For competitive commodity markets, economists have a good grasp of some of the basic elements. When price discovery and adjustment operate smoothly, agents choose what they want at the prices they see. But many markets are more complicated than that; you can't simply choose what you want, even if you can afford it; you also have to be chosen. Examples of such matching markets abound: colleges don't select their entering classes by raising tuition until just enough students remain interested; rather, they set tuition so that lots of students would like to attend, and then they admit some fraction of those who apply. (And colleges also can't just choose their students; they have to woo them, since many students are admitted to multiple colleges.) Neither do employers of professionals reduce wages until just enough applicants remain to fill the positions; there's courtship on both sides (e.g. many new economics Ph.D.s would like to work for Stanford at the wages they offer, but Stanford receives many applications and makes only a few offers, and then has to compete with other top universities to actually hire those they make offers to). Particularly for entry-level professionals, wages are often rather impersonal (e.g. many new assistant professors of economics, or new associates at large law firms, earn around the same wage, just as many students are offered the same tuition packages). Prices seem to play a different role in clearing matching markets than in markets for commodities. Labor markets and college admissions are thus more than a little like the marriage market; each is a two-sided matching market that involves searching and courting on both sides.

Among markets that economists have helped design are multi-unit auctions for complementary goods such as spectrum licenses; computerized clearinghouses such as the National Resident Matching Program, through which most American doctors get their first jobs; decentralized labor markets such as those for more advanced medical positions, and for academic positions; the school choice systems used to assign children to big city schools; and kidney exchange, which allows patients with incompatible living donors to exchange donor kidneys with other incompatible patient-donor pairs. For surveys, see Milgrom (2004) and Roth (2002, 2008).

Auction design is perhaps the part of market design most closely connected to the traditional function of commodity markets: price discovery and efficient allocation. However, when multiple, heterogeneous goods are offered, and buyers may want to consume packages of complementary goods, matching bidders to packages becomes necessary, and recent research motivated by FCC auctions of radio spectrum has drawn close parallels between auctions and matching markets. Many open questions remain, on the interface of economics and computer science, about the design and conduct of auctions that will make it safe and simple for bidders to bid on packages of goods.


Matching markets sometimes suffer persistent market failures, and this has opened a door through which economists have engaged in market design. For example, a number of labor markets have lost thickness due to the unraveling of transaction dates: e.g. presently big law firms often hire new associates while they still have a year of law school remaining, and appellate judges hire law clerks via exploding offers that don't allow them to compare offers. Failures associated with phenomena like these caused the market for new doctors to explore various forms of centralized clearinghouse. In 1995 I was asked to direct the redesign of the clearinghouse for new doctors (the National Resident Matching Program), to address a number of issues, including the fact that there are a growing number of married couples in that labor market who seek two positions in the same vicinity. Each of these issues raises questions whose further answers will be important for understanding and designing complex markets:

How does the timing of transactions influence market clearing? In particular, what is needed to create a marketplace in which sufficiently many transactions are available at the same time to achieve the benefits of a thick market? (Economists have devoted great effort to understanding the price of transactions, but much less is known about other features of transactions.) The timing of transactions concerns not just when they are made, but also their duration, as e.g. in the case of exploding offers.

How does the growing number of two-career households influence the labor market? How does it influence the marriage market? How are these related (e.g. in migration to cities, in spousal hiring policies of firms such as universities located outside of cities, and in labor law involving what kinds of questions applicants can be asked about their marital status)? Some of these are questions that will involve collaboration among economists, demographers, and sociologists.

A marketplace that successfully becomes thick by attracting many participants may face a problem of congestion resulting from all the transactions that can potentially be considered, since in many markets such considerations take time (e.g. interviews in labor markets, time between offers and acceptances, etc.). Congestion was the problem that led to the redesign of the high school assignment process in New York City, and it has led to the redesign of a number of other markets, such as the market for clinical psychologists. Many open questions remain about the management of congestion.

Some markets fail to reach efficient outcomes because it isn't safe for market participants to reveal the necessary private information. This was what led to the redesign of the school choice system for Boston: the old Boston algorithm made it risky for families to reveal which schools they wished their children to attend, since a family that failed to get the choice it listed first would likely drop far down in the rankings. The new assignment mechanism makes it safe (a dominant strategy) for families to state their true preferences. However, in many cases it can be shown to be impossible to make safe participation a dominant strategy, and so many questions remain about how to make participation safe. Recent results in economics and computer science suggest that some of these problems may become more tractable as markets grow large.


Developing kidney exchange in the United States involved many people working to overcome each of the problems mentioned above. First, a thick marketplace had to be made possible by establishing databases of incompatible patient-donor pairs. Then, congestion had to be overcome, in the form of the number of operating rooms and surgical teams that could be assembled simultaneously to carry out larger exchanges. (The recent development of non-simultaneous chains has helped.) Presently, kidney exchange programs are grappling with the problem of how to make it safe for transplant centers to participate fully, by revealing all of their incompatible patient-donor pairs to the exchange. At each step of the process, there has been collaboration between economists and doctors, and lately also computer scientists (about which more in a moment). A characteristic of market design is that it is going to require a great deal of collaboration among all sorts of people to design appropriate markets, and get them adopted and implemented.

In addition, kidney exchange and the shortage of transplantable organs also make clear that not every kind of market transaction is welcomed: some kinds of market transactions are viewed as repugnant. In particular, it is against the law in the U.S. and in most developed nations to buy or sell organs for transplantation. More broadly, market solutions are not welcomed for a variety of transactions. Understanding the sociology and psychology of repugnant transactions and markets is a big task, which is likely to illuminate many aspects of markets and market design. At present, this is work that brings together economists, psychologists, sociologists, legal scholars, and philosophers.

Many of the market designs referred to above involve computer-assisted markets. Computers can assist markets in a number of ways, some of them more profound than others. Markets can be run on computers, so that transactions are recorded and processed in an orderly way. Markets can be accessed over the internet, so that many more people can participate than could at a marketplace in the physical world. Markets can use computers as trusted intermediaries, to accomplish something more, or more cheaply, than could be done without computers (for example, the computer can hold a reserve price without revealing it unnecessarily, or job applicants can send a certified number of signals, as in the signaling mechanism now used in the market for new economists). Finally, computers can add computational intelligence to the market; instead of just reporting bids and asks, market outcomes can be determined by (possibly computationally intensive) algorithms that process market information in ways that couldn't be done, or done quickly, without computers. In this latter category, note that finding optimal kidney exchanges of constrained size is an NP-hard problem solved by integer programming, while many labor market clearinghouses take as input rank order lists and use a deferred acceptance algorithm to find a stable matching (a minimal sketch appears below).

Note that market design is not just about computerized or even centralized marketplaces, but also about the rules, procedures and customs of decentralized markets, what might be called their market culture. For example, in helping repair an unraveled market for gastroenterologists, an essential feature was changing the rules about whether applicants could change their minds about offers received before a specified time.
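For readers unfamiliar with the deferred acceptance algorithm mentioned above, here is a minimal runnable sketch in Python. It is illustrative only: one-to-one matching with every applicant assumed acceptable to every employer, whereas a real clearinghouse like the NRMP handles many-to-one matching, couples, and other constraints. All names below are hypothetical.

```python
# Minimal sketch of applicant-proposing deferred acceptance (Gale-Shapley)
# for one-to-one matching. Illustrative only; assumes every applicant is
# acceptable to every employer and vice versa.
def deferred_acceptance(applicant_prefs, employer_prefs):
    """applicant_prefs[a] = list of employers, most preferred first;
    employer_prefs[e] = list of applicants, most preferred first.
    Returns a stable matching as a dict {employer: applicant}."""
    rank = {e: {a: i for i, a in enumerate(prefs)}
            for e, prefs in employer_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # next employer to propose to
    match = {}                                     # employer -> tentatively held applicant
    free = list(applicant_prefs)                   # applicants with no tentative match
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                               # a has been rejected everywhere
        e = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held = match.get(e)
        if held is None:
            match[e] = a                           # e tentatively holds a
        elif rank[e][a] < rank[e][held]:
            match[e] = a                           # e prefers a; held becomes free
            free.append(held)
        else:
            free.append(a)                         # e rejects a
    return match

# Tiny example with hypothetical names:
applicants = {"ann": ["mgh", "ucsf"], "bob": ["mgh", "ucsf"]}
employers = {"mgh": ["bob", "ann"], "ucsf": ["ann", "bob"]}
print(deferred_acceptance(applicants, employers))  # {'mgh': 'bob', 'ucsf': 'ann'}
```

The resulting matching is stable in the standard sense: no applicant and employer who are not matched to each other would both prefer to be.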


To summarize, the last fifteen years have increased our understanding of how markets fail and how they can sometimes be fixed. The theory and practice of market design are deeply intertwined, and each particular market design brings economists into close contact with experts in the particular domain, and in other academic disciplines. For economics as a discipline, market design provides a fresh source of theoretical problems and empirical data, about the most fundamental questions of economics, concerning how markets work, and how they can be fixed when they fail.
2. What are the implications for advancing the domain? For building capacity? And for providing infrastructure?

As we understand more about markets (and perhaps about repugnant transactions) we'll know more about where and in what ways better markets can improve welfare, and perhaps also more about where we might pause to look for alternatives before instituting simple or unregulated or monetary markets or relaxing the restrictions against them. As market design grows, it will become more like an engineering discipline, demanding both design knowledge and knowledge of particular domains of application. Right now, design papers are mostly judged by the journals as theory papers. But frontier design papers might not necessarily have the same focus on theory, or empirical work, that standard papers do; they might derive their value from how those things are combined in novel ways in some new domain of application. So, as market design develops, we'll have to nurture a market design literature that judges and recognizes frontier work in appropriate ways.
3. Who is doing provocative research?

Market designers are starting to be too numerous for a short list (here's a link to a very partial list of mostly economists and computer scientists), but there are big active groups in the Boston area and at Stanford. The Stanford group includes Milgrom, Bulow, Levin, Niederle, Ostrovsky, Hatfield, and Kojima, and the Boston group includes Roth, Athey, Parkes, Edelman, and Coles at Harvard; Pathak and Ashlagi at MIT; and Sonmez and Unver at Boston College. Other centers include Maryland (Ausubel and Cramton), Michigan (Chen, Resnick, and Leider), Chicago (Budish), and CMU (Sandholm).

References

Milgrom, Paul, Putting Auction Theory to Work, Cambridge: Cambridge University Press, 2004.

Roth, Alvin E., "The Economist as Engineer: Game Theory, Experimentation, and Computation as Tools for Design Economics," Econometrica, 70(4), July 2002, 1341-1378.

Roth, Alvin E., "What Have We Learned from Market Design?" Hahn Lecture, Economic Journal, 118 (March 2008), 285-310.


Creative Commons License: "Market Design" by Alvin E. Roth is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (http://creativecommons.org/licenses/by-nc-sa/3.0/).


Future Research in the Social, Behavioral and Economic Sciences

Larry Samuelson
Department of Economics
Yale University
30 Hillhouse Avenue
New Haven, CT 06525-8281
Larry.Samuelson@yale.edu

September 20, 2010

Abstract: This paper describes a research program organized around the theme of "What Makes Societies Work?" There are two stages. The first is a study of how context and institutions affect people's incentives. Why do people's preferences appear to be helpfully prosocial in some settings, and narrowly self-interested in others, and how can we design interactions to amplify the former? Why are institutions such as constitutions and courts effective in shaping social behavior in some settings but not others? Why are the relational incentives created by repeated interactions more effective in some settings than others? Addressing these questions will take a concerted effort on the part of economists and others from across the range of social sciences. Next, these insights are to be put to work in addressing questions of how we can influence or even design social outcomes. How do we achieve a consensus on using future interactions to create current incentives? What institutions can we design that will induce people to coordinate on contributions to the public good rather than hoarding private wealth as the route to status? Questions such as these are fundamental to making economics, and social science more generally, a useful part of our intellectual arsenal.

This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


I. Introduction: What Makes Societies Work?

Why are some societies more successful than others? This is perhaps the most fundamental of social science questions, from both a positive viewpoint (it is important to understand the patterns we see) and a normative viewpoint (it is important to draw lessons for how we should organize society). One might define success in many ways, with per capita income, growth rate, longevity, physical or mental health, happiness, and political stability being just a few of the possibilities. However, there is sufficient common ground in these measures that we can move beyond this potentially endless diversion to concentrate attention on the underlying mechanisms.

Economics has not produced a convincing answer. Early economic models, focused on physical capital accumulation, explained woefully little of the variation in economic performance across countries. Adding human capital to the analysis provides some improvement, but still leaves large gaps in our understanding. Recent appeals to social capital reflect a recognition that something is missing, but have not produced a precise idea as to what it might be. The other social sciences provide a collection of intriguing ideas, but have not advanced to careful quantitative evaluation of their models.

II. Incentives

An understanding of how a society works begins with an understanding of the incentives motivating its members. Let us say that the incentive for a person to do A rather than B is internal if the person prefers A to B, is contractual if the person chooses A in return for some explicit (typically immediate) benefit, and is relational if the person chooses A in return for some commonly understood but implicit (typically future) benefit. Like many concepts in economics, such as the short run and long run, the boundaries between these categories are blurred and context dependent, but the categories are conceptually useful.

Internal incentives are simply a matter of preferences. Economists typically have very little to say about preferences, taking them as fixed and beyond either explanation or influence. However, there is good reason to believe that internal incentives are quite sensitive to context, as well as quite important in shaping social behavior. The United States is known for the extent to which its citizens comply with its tax code, while many other countries struggle with tax collection. The difference appears to be not that audit probabilities are higher in the United States or penalties more severe, but rather that people in the US unilaterally supply, or prefer, a higher degree of compliance. This preference is fragile, however, in the sense that people express a willingness to comply only to the extent that they think others are also doing so.


The same is true of many other activities, from littering, waiting in line, and obeying traffic laws to life-style decisions that are reflected in the "broken windows" theory of neighborhood behavior.

Contractual incentives are the hallmark of economic exchange. Such incentives come into play whenever we make a purchase, accept employment, trade financial instruments, and so on. Contractual incentives mediate a relatively small portion of our interactions, with relational incentives playing the key role in the remainder. We expect to pay for a restaurant meal, but a typical response to a fine dinner at a friend's house is that one has acquired an obligation to reciprocate. We readily pay taxi fares, but wouldn't think of asking for compensation upon giving a colleague a ride home. The former interaction in each case is contractual, the latter relational. Far from being confined to seemingly minor matters of social etiquette, there is evidence that a great proportion of business-to-business interactions are governed not by explicit agreements, but by relational considerations of the form "we'll make this up next time." Interactions between firms and consumers similarly hinge on relational incentives: one makes no explicit promise to return to a provider who has given good service, yet everyone views the interaction differently if such return is impossible. Political interactions similarly hinge on relational incentives.

III. A First Round of Questions

These distinctions allow us to outline a research program in two stages. The first stage would address the following three questions:

1. What Shapes Internal Incentives? As noted by Arrow (1974), virtually every interaction between individuals requires a willingness to forego some individual advantage, and to trust that others will do so as well. The retail sector of our economy works as it does partly because customers fear arrest if they flee without paying, but to a greater extent because they would choose not to flee even if certain of impunity. People invest in private property partly because law enforcement resources help protect that property, but to a greater extent because they understand that most others will not try to seize it. In the language of this proposal, people have internal incentives to complete transactions and respect the property of others. It is clear that these internal incentives go beyond the narrow conception of self-interest that serves economics. What is the nature of these incentives? More importantly, what determines them?

It may appear at first glance as if we are appealing here simply for more behavioral economics. There are three important differences. First, much of behavioral economics has been concerned with arguing that people's preferences are not narrowly self-interested. It is important for the proposed research to move beyond this to investigate what determines the social aspects of preferences. Why are people willing to behave socially in some environments and not others? How do we design interactions to take advantage of these social aspects? Answering these questions will require insight from and collaboration with the other social sciences, most notably psychology, but also including sociology and anthropology.


Second, behavioral economics relies crucially on experimental methods. We are still far from having developed commonly accepted and workable standards for doing economic experiments. For example, there is virtually no emphasis on replication in experimental economics, while replication is commonly touted in the sciences as the essence of experimental inquiry. Existing econometric methods have been imported into experimental economics, in contrast to the extent to which other sciences view experimental design and the ability to collect additional data as a substitute for statistics. Perhaps most distressingly, psychologists have a long history of experimentation, but very little has been done to bring experimental methods from psychology into economics. The different questions stressed by the two disciplines clearly call for different methods, but it is unlikely that we have nothing to learn from decades of experimental research in psychology. It is imperative that the proposed research make progress in experimental method, drawing insight not only from economic theory but also from psychology and the other social sciences. Notice that we need not simply more experiments, but rather more work on how we should do experiments.

Third, much of experimental and behavioral economics has been concerned with showing that one can find behavior that cannot be explained by the simple economic models with which we work. This is interesting, but this alone tells us very little. Models are by design approximations of a hopelessly complex reality, and hence are deliberately constructed so as not to explain some behavior, in return for simplicity and transparency. Hence, finding behavior inconsistent with our current models is a useful contribution only if one can argue that elaborating our existing models to accommodate such behavior is worth the resulting erosion of simplicity and transparency. Unfortunately, we currently have no techniques for making such comparisons, and indeed no common language for discussing the issues. We need research in quest of the theoretical equivalent of an adjusted R-squared, allowing discoveries of behavior inconsistent with standard models to be accompanied by a meaningful discussion of whether such a finding warrants a more complicated model, or is simply another reminder that models are indeed models.

2. What Gives Rise to Contractual Incentives? At first glance, the answer to this question seems obvious. Contractual incentives are the bread-and-butter of economics. People work because they are paid, they produce because they can sell, and so on. These incentives are backed up by formal institutions that ensure these transactions can be made reliably. Constitutions protect private property, courts enforce contracts, markets are designed to facilitate trade, all in the shadow of mutually-embraced and officially-sanctioned coercion.

Upon closer examination, the link between the formal institutions and the resulting contractual incentives is complex and fragile. What does it mean to say that a constitution guarantees certain liberties or protects private property? The constitutions of Liberia and the United States are quite similar, but give rise to remarkably different outcomes. The constitution of the Soviet Union included a host of civil liberties, but produced a surprisingly different outcome.


What does it mean to say that courts enforce contracts? Law enforcement personnel or juries simply decline to enforce laws they find sufficiently unpalatable. Examples include cases in which juries in Victorian England simply failed to convict, in response to penalties they judged too severe, and the current concept of jury nullification. What does it mean to say that incentives are created by the potential for officially-sanctioned coercion? The propensity for Super Bowl celebrations to turn into riots is but one indication that law enforcement is effective only if most people comply voluntarily. In effect, formal institutions are simply cheap talk, suggesting (perhaps quite vividly) an equilibrium in the game of society, but with no power to do anything other than suggest. When are these suggestions effective, and when are they irrelevant? How do we design formal institutions to give rise to effective contractual incentives? Mailath, Morris and Postlewaite (2001) provide one intriguing attempt at examining this problem. Much more work is needed, drawing not only on economics, but also on history, psychology, sociology, and political science.

3. How Do We Harness Relational Incentives? The incentives in the vast bulk of our interactions arise neither internally nor out of the expectation of contractual reward, but out of the implications of current actions for future payoffs. We have a well-developed theory of repeated games to deal with such situations (see the worked illustration below). At the same time, this theory is missing an essential element, namely an understanding of which of the many equilibria in a repeated game is the relevant one. Schelling (1980) raised this problem long ago, noting that in many cases there appears to be an obvious equilibrium, despite being distinguished by nothing in the formal structure of the equilibrium. The intervening decades have provided many more examples and filled in many details, but have brought little progress on a general understanding of focal points. How do we structure relationships so that salutary equilibria not only exist, but are selected by the participants? This is perhaps the most important question, again calling for reinforcements from the other social sciences.

IV. Implications

The second stage of the proposed research will build on these foundations to ask the following types of questions. What is the optimal level of diversity in a society? Psychologists have stressed that heterogeneous groups of people often make better decisions, while effective relational incentives may require sufficiently homogeneous behavioral expectations. How do we balance these conflicting forces? How can we rewrite institutional economics to include not only formal institutions (banks, credit markets, legal systems, and so on) but also the internal and relational incentives that supplement these formal institutions? To what extent should development assistance concentrate on building such incentives? Should the study and perhaps creation of such incentives play a role in our educational system? Can we describe culture as a shared set of incentives, and if so, can we design culture to be more effective? These questions are the most speculative raised in this proposal, and accordingly are the most briefly described, but are also ultimately the most important. We have the tools for examining such questions, but are at the very beginning of formulating and understanding them.
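As a worked illustration of the relational-incentives logic in question 3 above (a textbook benchmark offered here for concreteness, not a result of the proposal itself):

```latex
% Textbook benchmark: an infinitely repeated prisoner's dilemma with
% stage payoffs T > R > P > S (temptation, reward, punishment, sucker)
% and discount factor \delta. Grim trigger (cooperate until anyone
% defects, then defect forever) sustains cooperation if and only if
\[
  \frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
  \quad\Longleftrightarrow\quad
  \delta \;\ge\; \frac{T-R}{T-P}.
\]
% The folk theorem delivers a vast set of such equilibria; nothing in
% the algebra says which one a society coordinates on, which is exactly
% the focal-point question posed above.
```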


Answering this research challenge will require elements from all of the social sciences, but promises tremendous rewards.

V. References

1. Kenneth J. Arrow (1974), The Limits of Organization (W. W. Norton & Company).
2. George J. Mailath, Stephen Morris, and Andrew Postlewaite (2001), "Laws and Authority," unpublished.
3. Thomas Schelling (1980), The Strategy of Conflict (Harvard University Press).


Some Research Priorities in Environmental Economics


Robert N. Stavins1
Albert Pratt Professor of Business and Government, John F. Kennedy School of Government, Harvard University
University Fellow, Resources for the Future
Research Associate, National Bureau of Economic Research

As the United States and other economies have grown, the carrying-capacity of the planet, in regard to both natural resources and environmental quality, has become a greater concern. This is particularly true for common-property and open-access resources. While small communities frequently provide modes of oversight and methods for policing their citizens, the scale of society has grown, and commons problems have spread across communities and even across nations. No over-arching authority can offer complete control, and so commons problems have become more commonplace and more severe. The stocks of a diverse variety of renewable natural resources, including water, forests, fisheries, and numerous other species of plant and animal, have been depleted below socially efficient levels, principally because of commons problems, that is, poorly-defined property-right regimes. Likewise, the same forces of open access, whether characterized as externalities, following Pigou, or public goods, following Coase, have led to the degradation of air and water quality, inappropriate disposal of hazardous waste, depletion of stratospheric ozone, and the atmospheric accumulation of greenhouse gases linked with global climate change.

Economics, as a discipline, has gradually come to focus more and more attention on these commons problems, first with regard to natural resources, and more recently with regard to environmental quality. Economic research within academia and think tanks has improved our understanding of the causes and consequences of excessive resource depletion and inefficient environmental degradation, and thereby has helped identify sensible policy solutions. Today, natural resource and environmental economics is a productive field of the discipline and one which shows considerable promise for the future.

In the environmental and resource sphere, the world surely faces some "grand challenges," and economic research is well-positioned to make increasingly important contributions. Clearly, the greatest challenge the world faces in this realm is the threat of global climate change linked with the accumulation of greenhouse gases, including carbon dioxide (CO2), in the atmosphere. The key challenges are associated presently not with better scientific understanding of the nature of the problem, although that surely is important, but rather with the fundamental economic and political barriers to policy action.

This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Climate change is a commons problem of unparalleled magnitude along two key dimensions: temporal and spatial. In the temporal domain, it is a stock, not a flow, problem, with greenhouse gases remaining in the atmosphere for decades to centuries. In the spatial domain, greenhouse gases uniformly mix in the atmosphere, and so the nature, magnitude, and location of damages are independent of the location of emissions. Hence, for any individual political jurisdiction, the direct benefits of taking action will inevitably be less than the costs, producing a classic free-rider problem, and thereby suggesting the importance of international, if not global, cooperation.

Despite the apparent necessity of international cooperation for the achievement of meaningful GHG targets, the key political unit of implementation (and decision-making) for any international climate policy will be the sovereign state, that is, the nations of the world. Therefore, before turning to the topic of international cooperation, it is important to ask what economics can say about the best instruments for national action. In both cases, I limit my attention to the means (the instruments) of climate policy, although economists have made and will continue to make important contributions to analyses of the ends (the goals) of climate policy.

There is widespread agreement among economists (and a diverse set of other policy analysts) that economy-wide carbon pricing will be an essential ingredient of any policy that can achieve meaningful reductions of CO2 emissions cost-effectively, at least in the United States and other industrialized countries. The ubiquitous nature of energy generation and use and the diversity of CO2 sources in a modern economy mean that conventional technology and performance standards would be infeasible and, in any event, excessively costly. There is considerably less agreement among economists regarding the choice of carbon-pricing policy instrument. It is fair to say that most academic economists have favored the use of carbon taxes, whereas a minority have endorsed the use of cap-and-trade mechanisms.

A carbon tax, if implemented upstream (at the point of fossil fuels entering the economy) and hence economy-wide, would appear to have some advantages over an equivalent upstream cap-and-trade system. First is the simplicity of the carbon tax system, in which firms would not need to manage and trade allowances, and the government would not need to track allowance transactions and ownership. Experience with previous cap-and-trade systems, however, indicates that the costs of trading institutions are not significant. Whether a policy as significant as a national carbon tax would turn out to be simple in its implementation is an open question. Second, a carbon tax would raise revenues that can be used for beneficial public purposes, such as cutting existing distortionary taxes, thereby lowering the social cost of the overall policy. Of course, an auction mechanism under a cap-and-trade system can do the same. Third, a tax approach eliminates the potential for price volatility that can exist under a cap-and-trade system. From an economic perspective, it makes sense to allow emissions (of a stock pollutant) to vary from year to year with economic conditions that affect aggregate abatement costs. This happens automatically with a carbon tax. With a cap-and-trade system, this temporal flexibility needs to be built in through provisions for banking and borrowing of allowances.


There is also a set of apparent disadvantages of carbon taxes relative to a cap-and-trade regime. First among these is the resistance to new taxes in many countries, including but not limited to the United States. In their simplest respective forms (a carbon tax without revenue recycling, and a cap-and-trade system without auctions), a carbon tax is more costly than a cap-and-trade system to the regulated sector, because with the former firms incur both abatement costs and the cost of tax payments to the government. Second, cap-and-trade approaches leave distributional issues up to politicians, and provide a straightforward means to compensate burdened sectors. Of course, the compensation associated with free distribution of allowances based on historical activities can be mimicked under a tax regime, but it is legislatively more complex. The cap-and-trade approach avoids likely battles over tax exemptions among vulnerable industries and sectors that would drive up the costs of the program, as more and more sources (emission-reduction opportunities) are exempted from the program, thereby simultaneously compromising environmental integrity. Instead, a cap-and-trade system leads to battles over the allowance allocation, but these do not raise the overall cost of the program nor affect its climate impacts. Some observers worry about the political process's propensity under a cap-and-trade system to compensate (with free allowance allocations) sectors that claim to be burdened. A carbon tax is sensitive to the same pressures, and may be expected to succumb in ways that are ultimately more harmful. This is the crucial political-economy distinction between the two approaches. Third, cap-and-trade systems generate a natural unit of exchange for international harmonization linkage: allowances denominated in units of carbon content of fossil fuels (or CO2 emissions). Hence, it is easier to harmonize with other countries' carbon mitigation programs, which are more likely to employ cap-and-trade than tax approaches. However, through appropriate mechanisms, international linkage can include carbon tax systems.

Despite these differences between carbon taxes and cap-and-trade, the two approaches have much in common. Therefore, it has been argued that the two key questions that should be used to decide between these two policy approaches are: which is more politically feasible, and which is more likely to be well-designed. To some degree, responses to these questions have been provided by the political revealed preference of individual countries. The world's largest cap-and-trade system is addressing Europe's CO2 emissions: the European Union Emission Trading Scheme (EU ETS). Although the system had its share of problems in its pilot phase, it has functioned as anticipated since then, despite the fact that the 2008-2009 recession has led to significantly lower allowance prices and hence fewer emission reductions. In addition, New Zealand has launched a CO2 cap-and-trade system, and Australia and Japan are planning to do likewise. Canada has indicated that it will launch a domestic system when and if the United States does so, but domestic U.S. politics slowed developments in 2010.

In June 2009, the U.S. House of Representatives passed an ambitious, economy-wide cap-and-trade regime as part of H.R. 2454, the American Clean Energy and Security Act of 2009 (otherwise known as the Waxman-Markey bill). That system, if enacted, would reduce U.S. CO2 emissions by 17 percent in 2020 and by 80 percent in 2050, relative to 2005, through the effects of price signals on energy efficiency, fuel switching (coal to natural gas), land-use changes, and technological change.

The best estimates of the costs are that they would be considerably less than 1 percent of GDP annually in the long term, thereby reducing the pace of economic growth such that 2050's expected economic output would be delayed by a few months. In 2010, the U.S. Senate opted to delay any and all climate legislation.

With political stalemate in Washington, attention may increasingly turn to sub-national policies intended to address climate change. The Regional Greenhouse Gas Initiative (RGGI) in the Northeast has created a cap-and-trade system among electricity generators, and California's Global Warming Solutions Act (Assembly Bill 32) will lead to the creation of an ambitious set of climate initiatives, including a statewide cap-and-trade system, unless it is stopped by ballot initiative or a new Governor. The California system will be linked with systems in seven other states and four Canadian provinces under the Western Climate Initiative. These sub-national policies will interact in a variety of ways with Federal policy when and if a Federal policy is enacted. Some of these interactions would be problematic, such as the interaction between a Federal cap-and-trade system and a more ambitious cap-and-trade system in California under AB 32, while other interactions would be benign, such as RGGI becoming irrelevant in the face of a Federal cap-and-trade system that was both more stringent and broader in scope.

Even as domestic climate policies move forward in some countries but not in others, it is clear that due to the global commons nature of the problem, international cooperation will eventually be necessary. The Kyoto Protocol (1997) to the United Nations Framework Convention on Climate Change (1992) will expire in 2012, and is, in any event, insufficient to the long-term task, due to the exclusion of developing countries from responsibility. Although the industrialized countries accounted for the majority of annual CO2 emissions until 2004, that is no longer the case. China has surpassed the United States as the world's largest emitter, and most growth in CO2 emissions in the coming decades will come from countries outside of the Organization for Economic Cooperation and Development (OECD), with emissions in nearly all OECD countries close to stable or falling.

A wide range of potential paths forward are possible, including top-down international agreements involving targets and timetables that involve more countries as they become more wealthy; harmonized national policies, such as domestic carbon taxes; and bottom-up, loosely coordinated national policies, such as the linkage of regional and national cap-and-trade systems through bilateral arrangements. The most promising alternatives can, in principle, achieve reasonable environmental performance cost-effectively by including not only the currently industrialized nations, but also the key emerging economies. Political feasibility, however, is another matter. Given the spatial and temporal nature of this global commons problem, political incentives around the world are to rely upon other nations to take action. Since sovereign nations cannot be compelled to act against their wishes, successful cooperation, whether in the form of international treaties or less formal mechanisms, must create internal incentives for compliance, along with external incentives for participation. Because no single approach guarantees a sure path to ultimate success, the best strategy to address this ultimate commons problem may be to pursue a variety of approaches simultaneously.


White Paper for NSF Grand Challenges

John Van Reenen, Professor of Economics at the London School of Economics and Director of the Centre for Economic Performance

September 6th 2010

Abstract

I discuss some developments in economics and what I think are Grand Challenges for the social sciences over the next 10-20 years. One recurrent theme is the importance of heterogeneity in performance between firms and how this links to management practices. Splitting economics from other NSF funding would also be desirable.

I. Introduction

I am writing this in response to a letter by Myron Guttman requesting Grand Challenges for the social sciences over the next two decades. This is a formidable task, akin to writing the music of the future. After all, if one could indeed write such lyrics, they would already have been penned. Nevertheless, I will take this opportunity to reveal my prejudices regarding key areas that would benefit from increased resources.

Two of the most important developments in economics in the last decade have been (i) the massive growth of micro-economic data and (ii) the methodological move towards credible identification. The growth of huge databases of firm-level information in the public and private sector has been driven by the phenomenal fall in the quality-adjusted price of information technology. This has made the storage, manipulation and analysis of data much easier and has led to a Golden Age of micro-econometric work. Liberalization of access to Census Bureau information has also helped, as have greater regulatory requirements for the disclosure of company accounts.

Alongside the flourishing of large-scale datasets has been a move towards more transparent methods of understanding the causal relations between variables. Researchers are now much more careful to seek to identify exogenous changes in the variable of interest (either from nature or policy-makers) or, if this is not possible, to design and implement their own (often randomized controlled) experiments. Although writing down a structural model and using an off-the-shelf secondary dataset (even if highly unsuited) still goes on, it is not in the dominant position that it once was. This is not to say that structural modelling has no place; it definitely does (see the sub-section on methodology below), but theory is no substitute for good empirical design.

II. Grand Challenges: Some main themes

Organizational Heterogeneity


In my view, one of the most profound facts uncovered about modern economies is the huge variation in performance between plants and firms in narrowly defined industries. For example, within a typical four-digit sector in US manufacturing, output per worker is four times as high for the plant at the 90th percentile as for the plant at the 10th percentile. And for total factor productivity the difference is still about double. Even wider distributions are evident in other nations.

Most economists' initial reaction to these performance differences was denial. First, it was said that these differences were purely transitory; they were not, they are relatively persistent. Second, the view was that inputs and outputs were badly mismeasured. This is true, but better measurement actually tended to make the differences larger. For example, plant-level price information has recently become available for some industries, and when this is used to correct the measure of output (which typically used industry deflators), productivity differences were even wider (as the more efficient firms tended to charge lower prices). Third, it was argued that the estimation of production function parameters was flawed. There has been significant methodological advance in this area (and still more is needed), but the bottom line is that the differences persist under a wide variety of estimation procedures.

Many papers suggest that the evolution of productivity differences through the creative destruction process of allocating more output to the most efficient firms and driving the less productive from the market is a key factor in the time-series aggregate growth of nations and in aggregate TFP differences between nations (about half of the US-India difference, for example). The key challenge, then, is: what is the cause of these between-plant productivity differences? (A minimal numerical sketch of the 90-10 comparison appears below.)
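Here is that sketch, using simulated rather than Census data; the lognormal dispersion parameter is a hypothetical value chosen only to reproduce roughly the four-fold ratio cited in the text:

```python
# Sketch of the 90-10 comparison using simulated plant data rather than
# Census microdata. The lognormal dispersion (sigma = 0.54) is a
# hypothetical value chosen to deliver roughly the 4x ratio in the text.
import numpy as np

rng = np.random.default_rng(0)
log_prod = rng.normal(loc=0.0, scale=0.54, size=100_000)
productivity = np.exp(log_prod)            # output per worker, one "industry"

p10, p90 = np.percentile(productivity, [10, 90])
print(f"90/10 labor-productivity ratio: {p90 / p10:.2f}")  # approximately 4.0
```

The substantive research question, of course, is not how to compute the ratio but why real dispersion is so large and persistent.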

Management Practices

One answer to the question on the causes of productivity heterogeneity is that the differences lie in technology. This is, of course, only a proximate answer, because the deeper responses need to rest on structural features of societies: product, financial and labor markets, culture, etc. Nevertheless, unravelling the first part of the puzzle would be a start. There has been a large and substantial literature looking at the various "hard" technological variables that influence productivity: R&D, patents, observable innovation measures, diffusion measures (especially information and communication technologies, ICT). This is valuable, but (1) a large residual remains after accounting for these observable indicators of technology; and (2) the impact of technologies on productivity is very heterogeneous and seems to depend in a substantive way on the management of firms. Bloom, Sadun and Van Reenen (2007), for example, find that the impact of ICT is much stronger for firms with better people management (i.e. careful hiring, pay and promotion based on effort/ability rather than just tenure, rigorous procedures for dealing with underperformers, etc.). This suggests that management is a key factor in understanding productivity.

There are three big challenges here. First, how to quantify management practices across different organizations in a comparable way. Second, is the correlation of management with productivity causal? And third, what are the theories that can account for the relationship? We discuss these three questions of measurement, identification and theory in turn.

On the measurement side, there have been some advances in recent years (see Bloom and Van Reenen, 2010, for a discussion), but the challenge is how to develop such methods further and how to integrate them into standard statistical series such as the Economic Census. This needs to be done internationally to obtain cross-country comparisons. Can the (high-skilled) labor-intensive methods of Bloom and Van Reenen (2007) be simplified so that they can be mainstreamed into statistical agencies' routine data collection?

On the identification side, how can we get at causal effects? The gold-standard approach here is, in my view, randomised control trials. Although these are expensive, it is difficult to see how the evidence can be made secure without this type of approach.

On the theory side, there are now a wide range of models that seek to account for the heterogeneity of management. Although some management styles are fads and fashions, mainstream modern economics correctly deems them part of the chosen organizational design of the firm. This design approach applies much of standard optimization and equilibrium concepts to the theory of the firm. Although powerful (e.g. personnel economics), the empirical basis for organizational economics rests too much on case studies and anecdote rather than solid data. Further, there is an element of management that is linked to productivity in a way that makes it more akin to a technology. This may be static and non-transferable (embodied in people as in Lucas, 1978, or in firms as in Melitz, 2003) or dynamic and transferred between firms like any other technology. This is still poorly understood and needs theoretical development.

Intangible Capital

Management is one part of the intangible capital of the firm. Increasingly, the core assets of firms are not easily captured on the company balance sheet, and the assets of a nation are barely tracked (human capital, intellectual property, brands and marketing, for example). There is a challenge to better measure and understand the accumulation of these intangible assets. Much more so than conventional forms of capital, intangible capital is beset by uncertainties, externalities and potential failures of financial markets.

Unlocking Business Data

Large amounts of data are collected by private sector firms and kept secret. This also used to be the case for governments, but increasingly these data are being opened. Firms tended to underutilize their data, but with the new abundance of information, firms are starting to use their data more systematically.


Just as with government-academic cooperation, there is a huge opportunity to make more business data available to tackle the questions of heterogeneity and the causal impact of business practices.

Macro-economics and Finance

The trends towards credible identification and deeper use of micro-data have penetrated some fields more than others. Macro-economics at some point seemed to turn its back on data and retreat into a focus on tightly specified models, with empirical data used loosely to calibrate the parameters of these models. Most macro-models share the unfortunate assumption of frictionless financial markets, an assumption that has fared extraordinarily badly over the financial crisis. Macro-economics needs a Perestroika moment where the imperfections of financial markets take pride of place. It also needs to re-discover respect for data and causal identification. Some of its problems are inescapable: a paucity of data on severe downturns, for example, and difficulty in running experiments. But a grand challenge for macro is to reflect the economic reality of frictions much more seriously.

III. Other Themes and Challenges

These themes are mainly obvious, so I will list them in a rather staccato way.

A richer conception of human capital. At the other end of the scale from macro, we need a richer concept of human capital. People's faculties rely not just on their physical and cognitive endowments, but also on their non-cognitive resiliency. Studies of the human brain and behavior have shown how important these non-cognitive aspects are in economic behavior. How can we model the accumulation of mental health? What policies best influence the development of human capital in this respect?

Methodology. The best methods combine credible identification with good theory. Encouragement for work that combines experimental and quasi-experimental evidence with theory (so that structural estimation is possible) is the ideal.

Climate change and its economic effects. What are the adaptation policies? How can policy be used to influence innovation to tackle climate change?

The growth of emerging powers, above all China. What effect will this have on the political economy of the world?

Demographics. What is the impact of aging and changing demographics?

Africa. Why has Africa stayed so poor? Is this going to change?


IV. Recommendations

Funding should focus on the areas identified above, especially those listed in Section III. Since many of my themes cross disciplines, there is ample scope for inter-disciplinary work. Yet my experience is that the best research is done in the discipline one knows, and setting up explicit inter-disciplinary funding leads only to tokenism. Economists have the best tools to tackle the questions I have identified, and I think it would be better to split NSF funding so that there was a distinct stream solely for economics, rather than mixing the funding stream with other disciplines.

References

Bloom, Nick and John Van Reenen (2007) "Measuring and Explaining Management Practices across Firms and Nations," Quarterly Journal of Economics 122(4), 1351-1408. http://cep.lse.ac.uk/pubs/download/dp0716.pdf

Bloom, Nick and John Van Reenen (2010) "Human Resource Management and Productivity," in Handbook of Labor Economics, Volume IV, edited by Orley Ashenfelter and David Card. http://cep.lse.ac.uk/pubs/download/dp0982.pdf

Bloom, Nick, Raffaella Sadun and John Van Reenen (2007) "Americans Do I.T. Better: US Multinationals and the Productivity Miracle," NBER Working Paper No. 13085. Forthcoming, American Economic Review. http://cep.lse.ac.uk/pubs/download/dp0788.pdf

j.vanreenen@lse.ac.uk



Grand Challenge for NSF-SBE in the Next Decade

Clinical Trials in Economics


Hal Varian
6 September 2010

Abstract. The gold standard for scientific research is reproducible controlled experiments. In the last two or three decades economics has made much progress in implementing experiments in both the laboratory and the field. I propose that the NSF set up a program to fund field experiments/clinical trials in a variety of areas in economics. These clinical trials should be designed to resolve fundamental debates in economics. Proposals for experimental designs should be submitted to a special program and be reviewed by referees and a panel of experts, as with current proposals. Unlike current proposals, we would expect some iteration with respect to the experimental design. When a consensus (or a significant majority) is reached about the experimental design, funding would be offered to the researchers. It would be helpful to involve researchers from public health and other fields who are familiar with the problems involved in large clinical trials.

Discussion. I am well aware that the NSF has been funding field experiments in a variety of areas, including welfare payments, educational issues, and development economics, to name just a few. My proposal is to create a special program in experimental design and analysis for these and other topics. It would be particularly helpful for researchers to be on the alert for natural experiments and to recruit subjects in both treatment and control groups to facilitate analysis of the experiments. (Think of rent control as an example.) I believe that there should be a bias towards policy-relevant experiments, but proposals on fundamental topics in human behavior should also be entertained. For example, over the last several decades economists have built up an impressive body of literature on game theory and strategic behavior, but there has been comparatively little empirical work, except in the relatively constrained environment of the laboratory. There should also be educational funding for summer courses in experimental design and related topics, to make sure that all economics graduates have some training in experimental design and analysis.

Long-running panels such as the PSID have been hugely helpful in understanding economic behavior at the individual and household level. My understanding is that interviews and surveys are still the basis for much of the analysis. I believe that the monitoring technology available today can offer substantial improvements on these traditional methods by making the gathering of data less onerous and more accurate. This is a ripe area for analysis. Such monitoring technology would also be very helpful for the shorter clinical trials I am describing here.

A substantial amount of money is being invested by the private sector in the design and funding of panels for purposes of marketing. In the marketing literature, these are referred to as single-source panels. Many of these are quite sophisticated, and there is a strong possibility for public-private research cooperation in this area. In many cases, the panel members can receive different experimental treatments using quite sophisticated techniques. The data from such studies could be hugely valuable to economists studying household behavior.


For example, Nielsen Homescan maintains a standing panel of consumers who scan every item they purchase on a weekly basis. This provides a wealth of data for marketing, but imagine what one could do by examining how purchases respond to changes in employment, family composition, taxes, consumer confidence, and the like. The marketing uses of the data require current data, but many of the other economic uses can be conducted with historical data, providing a natural way of addressing both scientific and commercial needs. This is only a single example; there are several other single-source panels that provide ongoing data for various commercial purposes. With only a small amount of effort, these panels could provide highly useful scientific data. Conversely, one might speculate that existing efforts such as the PSID could provide commercially useful data as well (on a non-proprietary, non-exclusive basis, of course).
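To give a flavor of the design arithmetic such a review program would iterate over, here is a minimal, textbook-style sample-size sketch in Python (the effect size and error rates are illustrative assumptions of mine, not figures from this proposal):

    import math

    # Two-arm trial, equal allocation, two-sided 5% test, 80% power.
    # delta is a hypothetical minimum effect of interest, in units of
    # the outcome's standard deviation (an assumed value).
    z_alpha = 1.96   # critical value for alpha = 0.05, two-sided
    z_beta = 0.84    # critical value for 80% power
    delta = 0.20

    n_per_arm = 2 * ((z_alpha + z_beta) / delta) ** 2
    print(math.ceil(n_per_arm))  # 392 subjects per arm

The iteration envisioned above would revisit exactly these inputs: the smallest effect worth detecting, the tolerable error rates, and hence the scale and cost of the trial.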

This work is licensed under the Creative Commons Attribution Non-Commercial Share Alike license.


This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

GRAND CHALLENGES FOR THE SCIENTIFIC STUDY OF AGING

David R. Weir
University of Michigan
October 15, 2010

The demographic transition that began at the end of the eighteenth century saw dramatic reductions in infant and child mortality, accompanied by declining fertility. Both trends produced populations whose stable dynamics imply much older populations. Moreover, the enormous success at defeating infant mortality has left very little room for further improvements there to have much quantitative effect on life expectancy. In the demographic transition of the recent past and long-term future, adult and especially older-age mortality and morbidity will be the main stage for large changes in population dynamics. These two features, population aging as a social-demographic fact and individual aging as a target for health improvement, pose the grand challenges for behavioral science I wish to address.

Integrate behavioral and biological sciences. The first grand challenge is to better integrate the behavioral and biological sciences to their mutual benefit. The Health and Retirement Study (HRS), a cooperative agreement between its sponsor, the National Institute on Aging (NIA), and the University of Michigan, which I now direct, is one of several population-based surveys that are integrating biological measures. We are now moving into a leadership role in the integration of genetics, with funding to genotype 20,000 respondents using a current state-of-the-art 2.5-million-SNP chip. This will open the door to investigation by many and varied multidisciplinary teams of scholars to better understand the behaviors and health conditions that either advance or delay the progression of aging.

In contrast to that first demographic transition, in which the prevention and curing of infectious diseases was central, this one involves chronic diseases that in general cannot be definitively prevented or cured, but rather postponed and managed to limit their impact on healthy functioning and mortality. Behavior and decision-making are important to chronic disease management, as they are to other aspects of healthy aging. Understanding the genetics of both chronic disease and the behaviors and decision propensities needed to manage it will be critical to further progress.

What, then, is the benefit to genetics or the biological sciences from integration with the behavioral sciences? One crucial link is gene expression. To use a common analogy, DNA is the blueprint and gene expression is the contractor who actually implements the design. Genes express their influence over the construction of proteins differently at different times, and this appears to be subject to social influences in at least some cases. That means that studying genetic associations without understanding the influence of social environments is inherently limited. Moreover, the fact that humans can both choose and manipulate their environments (e.g., choose a low-risk environment if one has genes that produce a low biological tolerance for risk) means that the links between genetics and outcomes can appear distorted.


The challenge to the institutions that support scientific research is to support the large sample sizes needed to make and replicate genetic inferences. The challenge to the scientists is to understand the genome and its expression well enough to focus the effort more economically on the right genes, the right proteins, and the right phenotypic traits.

Expand the life-course perspective. Another way in which this new demographic transition differs from the old is that the health of older persons depends on their own individual histories in ways that the health of a newborn simply cannot. Longitudinal studies are critical to the study of aging because aging is fundamentally about change and the pace of change in functioning (Hauser and Weir, forthcoming). But most longitudinal studies begin in middle age or later. Greater attention must be given to study designs that allow early-life exposures, experiences, and characteristics to be included in the analysis of outcomes in later life. Birth cohort studies, of which there are several good examples, can achieve this and deserve support now, even though their usefulness for aging research is several decades off. The more immediate priority is to fill in individual histories for persons currently in late adulthood. Retrospection has some obvious dangers if recall of earlier risk factors is influenced by the current presence of outcomes the respondent believes are due to those risk factors. That bias is not universal; it can be minimized by obtaining retrospective reports before the outcomes are present, and through better question design, as we have attempted to do in the HRS. The other approach is to exploit studies done many years ago that provide observations on early life. Finding such resources, and finding and studying their participants, should be a top scientific priority.

Promote international comparison. A persistent problem in behavioral research is endogeneity, or reverse causality, which clouds the association of two variables. Longitudinal data can be valuable in this regard, and longitudinal study designs are crucial to research on aging, but they are not always sufficient. International comparisons exploit the natural experiments of national histories to provide some truly exogenous variation. A recent example is the relationship of retirement to cognitive decline. Across individuals, it would be nearly impossible to determine whether cognitive decline caused early retirement or early retirement caused cognitive decline. But some countries have developed generous early retirement policies and others have not, resulting in fairly large differences in retirement ages across countries that are not likely due to cognition. Combining several studies based on the HRS model, Rohwedder and Willis (2010) showed that country variation in retirement age predicted country differences in the pace of cognitive decline. The ability to conduct international comparisons requires a high degree of cooperation or harmonization among scientists and studies. The HRS model has been successful at encouraging comparable designs elsewhere. Such cooperation can be further encouraged by the right incentives from research organizations.
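The logic of that cross-country design can be sketched in a few lines of Python (a purely synthetic simulation of my own that illustrates the identification idea, not the actual Rohwedder-Willis analysis; every coefficient is invented):

    import numpy as np

    rng = np.random.default_rng(2)
    n_countries, n_per = 12, 500

    policy = rng.uniform(0.0, 1.0, n_countries)   # country early-retirement generosity
    z = np.repeat(policy, n_per)                  # instrument: constant within country

    ability = rng.normal(0.0, 1.0, z.size)        # unobserved individual confounder
    # Low ability pushes people into earlier retirement AND predicts decline.
    retired = (0.3 * z - 0.2 * ability + rng.normal(0, 0.3, z.size)) > 0.1
    x = retired.astype(float)
    cognition = -0.5 * x + 1.0 * ability + rng.normal(0, 1.0, z.size)

    b_ols = np.cov(x, cognition)[0, 1] / np.var(x, ddof=1)   # confounded by ability
    b_iv = np.cov(z, cognition)[0, 1] / np.cov(z, x)[0, 1]   # policy-driven variation only
    print(f"true -0.50 | naive OLS {b_ols:+.2f} | IV {b_iv:+.2f}")

Because the country policy is unrelated to individual ability, the instrumental-variables ratio recovers the true effect in expectation, while the naive individual-level comparison overstates it; that is exactly the advantage of exploiting policy variation across countries.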


REFERENCES

Hauser, Robert M., and David R. Weir. Forthcoming. "Recent Developments in Longitudinal Studies of Aging." Demography.

Rohwedder, Susann, and Robert J. Willis. 2010. "Mental Retirement." Journal of Economic Perspectives 24(1): 119-38.



Sensitivity Analysis through Mixed Gini and OLS Regressions

Paper submitted to the NSF by

Shlomo Yitzhaki
The Government Statistician, Israel, and Professor Emeritus, The Hebrew University
Shlomo.yitzhaki@huji.ac.il

Abstract

About thirty years ago Edward Leamer criticized the credibility of empirical research in economics. Since then there have been huge improvements in research design, data collection and econometric methodology. On the other hand, the huge increase in computing power has increased the number of instruments available to the over-zealous researcher who wants to prove his point. I suggest developing the mixed Gini and Ordinary Least Squares regression. It enables unraveling, tracing and testing the role of several whimsical assumptions imposed on the data in regression analysis. Among those assumptions are the linearity assumption, the use of monotonic increasing transformations, and the symmetry between distributions that is imposed by the Pearson correlation coefficient. My conjecture is that the new technique will drastically reduce the number of results that are claimed to be supported by empirical "proofs".

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.


Sensitivity Analysis through Mixed Gini and OLS Regressions

1. What is the fundamental question?

A popular method for reaching quantitative conclusions in research is regression. The most popular variant is Ordinary Least Squares (OLS), which is based on the properties of the variance. To simplify the presentation, I will restrict my arguments to OLS, although they apply, with some modifications, to other methods. It is clear that this area of research suffers from a lack of credibility. This was pointed out by Edward Leamer, who states that "Hardly anyone takes data analysis seriously."1 Leamer traced the lack of credibility to a lack of robustness caused by sensitivity to key assumptions he called "whimsical". My interpretation of whimsical assumptions is that they are assumptions imposed on the data that affect the coefficients in a drastic way but are not supported by the data. In a recent paper, Angrist and Pischke responded by pointing out the huge improvements in research design, better data collection, better definitions of the research question, and more.2 I do not deny these improvements. However, as far as I can see, the methodology of estimation has not changed in a qualitative way. More computing power allows more complicated modeling and data mining. But some of the assumptions that are not supported by the data, and that may drive the results, are still there.

To see whether Leamer's criticism is still valid, let me ask the following question: is it possible that two investigators, using the same data and an identical model, can reach opposite conclusions concerning the partial effect of one variable on another? Golan and Yitzhaki supply a positive answer to this question. They show that if one investigator uses Gini regression and the other OLS regression, then the signs of some regression coefficients differ.3 Yet both regression methods rely on plausible properties of the regression model and can be described as innocent applications of the methodology.

1. E. Leamer, "Let's Take the Con Out of Econometrics," American Economic Review 73, no. 1 (1983): 31-43.

2. J. Angrist and J.S. Pischke, "The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics," Working Paper No. 15794, NBER, http://www.nber.org/papers/w15794

3. Y. Golan and S. Yitzhaki, "Who Does Not Respond in the Social Survey: An Exercise in OLS and Gini Regressions," Draft, presented at the 31st IARIW, http://www.iariw.org/c2010.php


My argument is that there are too many tools in the arsenal of the researcher for influencing the results of a regression. In some sense the aim of the over-zealous researcher is to prove his point, which in some cases translates into searching for the model that delivers the desired results. My aim is to provide a method that exposes the hidden and redundant assumptions that are responsible for the results. For this purpose, we need a methodology that "reveals more" (the term was coined by Lambert and Aronson). I believe that Leamer's critique can be answered by developing a better technique that incorporates the properties of OLS as a special case. Moreover, it can be used together with OLS to see how robust the conclusions derived from the regression are. The suggested methodology is based on the properties of Gini's Mean Difference (and the Extended Gini family), which has many properties similar to those of the variance (and which nests the variance), but reveals more about the critical underlying statistical assumptions. The "reveals more" includes the following:

The basic assumption in a regression is that there exists a linear model connecting the variables. Yitzhaki showed that both the OLS and the Gini methods result in a regression coefficient that is a weighted average of the slopes between adjacent points of the independent variable.4 The method of regression determines the weighting scheme. Hence, linearity is a whimsical assumption: if the underlying model is not linear, then the choice of regression method determines the estimate. To see the effect of this assumption, consider the following. Some variables are not related to each other in a monotonic way; age is an example. The association of many variables with age is a U-shaped (or inverse-U-shaped) relationship. In such a case, the composition of the sample, the way the age variable is introduced into the regression, monotonic increasing transformations, and the regression method may together determine the sign of the regression coefficient with respect to age. These factors may also determine the signs of the regression coefficients of other independent variables included together with age.
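A tiny numerical sketch in Python makes the point concrete (the data are invented; the Gini slope is written here in the simple form cov(y, F(x))/cov(x, F(x)), with ranks standing in for the empirical CDF F):

    import numpy as np

    # Made-up numbers: y falls and then rises in x, and x has one far-out value.
    x = np.array([1.0, 2.0, 3.0, 100.0])
    y = np.array([4.0, 1.0, 2.0, 3.0])

    def rank_scores(v):
        # Ranks 1, ..., n, standing in for the empirical CDF F(v)
        return np.argsort(np.argsort(v)) + 1.0

    b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)          # cov(x, y) / var(x)
    r = rank_scores(x)
    b_gini = np.cov(y, r)[0, 1] / np.cov(x, r)[0, 1]        # cov(y, F(x)) / cov(x, F(x))

    print(f"OLS slope:  {b_ols:+.4f}")   # positive: the distant x value dominates
    print(f"Gini slope: {b_gini:+.4f}")  # negative: rank weights emphasize the early decline

Both numbers are weighted averages of the same three adjacent-point slopes (-3, +1, and roughly +0.01); only the weighting schemes differ, and with a non-monotonic relationship they disagree even about the sign.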

4. S. Yitzhaki, "On Using Linear Regression in Welfare Economics," Journal of Business & Economic Statistics 14, no. 4 (October 1996): 478-86.


The Gini methodology enables the researcher and the reader to know whether non-monotonic relationships among the variables exist.5 The same problem exists if the relationship is monotonic but not linear. The reason is that the regression method or a monotonic transformation can change the magnitude of the regression coefficient and the magnitude of its correlation with other variables. This in turn can change the sign of a regression coefficient of another variable in the regression. The Gini method also enables the researcher to test for linearity. If linearity is rejected, then the model should be viewed as a linear approximation that is not useful for prediction.

In at least two areas of social science, economic theory calls for asymmetric treatment of the data. The asymmetry arises from the assumption of declining marginal utility of income made in the fields of decision-making under risk and income distribution. Yitzhaki shows that statistical theory may contradict economic theory if some whimsical assumption, such as linearity of a model that includes income as an independent variable, is imposed but not supported by the data.6 To illustrate: a researcher who estimates a linear expenditure system by OLS in order to design poverty-reducing subsidies ends up estimating the income elasticities of the commodities from the properties of the top deciles, totally ignoring the poor. The extended Gini allows the researcher to impose (and reveal to the reader) her social (or risk) attitude, and to impose on the analysis the statistical measure of variability that reflects that attitude. This way one first reveals one's social attitude and then imposes it on the analysis.

Monotonic transformations, which are a legitimate tool in modeling the relationship between variables, can change the sign of the relationship when the relationship is non-monotonic. Monotonic transformations include using a different functional form, restricting the sample from above and/or below, and binning (making a continuous variable discrete). Of course such transformations change the properties of the data, and therefore should be used sparingly and, when used, should be carefully documented and justified.
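A small sketch shows how the extended Gini lets the social attitude enter the estimate (synthetic data; under my reading of the extended-Gini regression, the slope is cov(y, -(1-F(x))^(nu-1)) / cov(x, -(1-F(x))^(nu-1)), where nu = 2 is the standard Gini and larger nu weights the poor more heavily; all parameters are invented):

    import numpy as np

    rng = np.random.default_rng(3)
    income = rng.lognormal(mean=10.0, sigma=0.8, size=50_000)       # synthetic incomes
    expend = np.sqrt(income) * 5 + rng.normal(0, 5, income.size)    # concave Engel curve

    F = np.argsort(np.argsort(income)) / (income.size - 1.0)        # empirical CDF values

    for nu in (2, 4, 8):
        w = -(1.0 - F) ** (nu - 1)                                  # extended-Gini rank variable
        b = np.cov(expend, w)[0, 1] / np.cov(income, w)[0, 1]
        print(f"nu = {nu}: slope {b:.5f}")
    # Larger nu shifts the implicit weights toward the poorer part of the
    # distribution, so with a concave Engel curve the estimated marginal
    # propensity rises with nu: the stated social attitude is visible in,
    # and imposed on, the estimate.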

5. E. Schechtman, S. Yitzhaki and T. Pudalov, "Gini's Multiple Regressions: Two Approaches and Their Interaction" (2010), http://ssrn.com.

6. Yitzhaki (1996), ibid.


The Extended Gini regression keeps the properties of the data intact, while applying the transformation to the weighting scheme instead.

Another instrument in the hands of an over-zealous or sophisticated researcher is the Pearson correlation coefficient. Its "official" range is between minus one and one. But if the underlying distributions of the variables are different, then its range can be limited. As an example, consider two lognormally distributed variables, for which the Pearson correlation coefficient is bounded from below by -0.36.7 That is, by applying the transformation e^x to two normally distributed variables, we change the attainable correlation coefficient from minus one to about minus 0.36. The problem is especially relevant and severe in the field of finance, where additive and multiplicative relationships are mixed. (A transfer of a dollar from one asset to another is additive, while interest over time is multiplicative, because compound interest is used.) Levy and Schwarz have shown that the Pearson correlation between the returns of two assets will converge to zero provided that one estimates it over a long enough period, no matter what the periodical correlation coefficient is.8
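A short simulation makes the bound visible (synthetic data; the Gini-type correlation below is written in the cov(x, F(y))/cov(x, F(x)) form, and the floor of -1/e, approximately -0.368 and consistent with the -0.36 cited above, is my calculation for unit-variance lognormals):

    import numpy as np

    rng = np.random.default_rng(1)
    z = rng.normal(size=200_000)

    u, v = z, -z                      # perfectly negatively dependent normals
    x, y = np.exp(u), np.exp(v)       # the same pair after a monotonic transform

    def ecdf(w):
        # Empirical CDF values (ranks scaled to [0, 1])
        return np.argsort(np.argsort(w)) / (len(w) - 1.0)

    print(np.corrcoef(u, v)[0, 1])    # -1.0
    print(np.corrcoef(x, y)[0, 1])    # about -0.37 (-1/e): the Pearson floor
    print(np.cov(x, ecdf(y))[0, 1] / np.cov(x, ecdf(x))[0, 1])  # -1.0

The rank-based Gini correlation still registers the perfect negative monotonic dependence that the Pearson coefficient, after the transformation, cannot.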
7. E. Schechtman and S. Yitzhaki, "On the Proper Bounds of the Gini Correlation," Economics Letters 63, no. 2 (May 1999): 133-138.

8. H. Levy and G. Schwarz, "Correlation and the Time Interval over which the Variables Are Measured," Journal of Econometrics 76 (1997): 341.

9. S. Yitzhaki, "Gini's Mean Difference: A Superior Measure of Variability for Non-Normal Distributions," Metron LXI, no. 2 (2003): 285-316.


The implications of advancing the domain. To answer these questions, we need to explain the difference between the OLS/variance methodology and the Gini-based methodology. Both are based on averaging the differences between all pairs of observations. The difference lies in the metric used to define the distance between observations: the variance is based on the Euclidean metric, the Gini on the "city block" metric (that is, one can move only east/west or north/south, as in Manhattan). It is not obvious a priori which metric is more appropriate for particular social-science applications. As Yitzhaki shows, the Gini is the only measure of variability that can be decomposed in a way that resembles the decomposition of the variance, but it reveals more.9 In this context "reveals more" means that the structure of the decomposition of the Gini of a linear combination of random variables can be identical to the structure of the decomposition of the variance, provided that certain properties of the underlying distributions hold. Therefore, by using the decomposition of the Gini we can learn about the implicit assumptions behind the use of the variance method. Among the implicit assumptions are the impositions of symmetry in correlations and of linearity. Under the Gini methodology there are two correlation coefficients between two variables; if they are equal, then one gets the same decomposition as under the variance. If they are not equal, then a symmetric structure is being imposed on an asymmetric relationship, leading to some of the results mentioned above. Thus using the Gini methodology expands the number of sensitivity tests one has to perform to demonstrate a robust relationship, which will reduce the quantity of results "proven" by regressions. The range of cases for which researchers will have to admit that no answer can be confidently claimed will likewise expand.

To see how this can be done, note that the Gini methodology allows for mixed regressions in the following sense: one can run an OLS regression as well as a Gini regression. If the signs of all regression coefficients are identical and the values do not differ very much, then we may conclude that the regression results are robust. On the other hand, if the signs are not equal, then the researcher should run a mixed regression, in which the investigator chooses which independent variables to treat according to OLS and which according to the Gini. This enables the researcher to move from one regression to the other in a step-wise way, so as to identify the variable(s) responsible for a change of sign in another independent variable. Should one detect such a change, one can then investigate the simple regression coefficients for a non-monotonic or non-linear relationship.10

Adopting the methodology will reduce the number of new "findings" in the social sciences, which in turn will increase the trust in empirical results, further supporting the credibility revolution championed by Angrist and Pischke.11 At this stage, one can replicate every textbook in econometrics. However, the Gini regression is much more complicated than OLS, friendly software has to be developed, and it is not clear that there are low-hanging fruits in every area.
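A sketch of what the first step of this diagnostic loop could look like in practice (my stylization: the multiple-Gini step is implemented through the moment conditions cov(residual, F(x_k)) = 0 for every regressor, i.e. an instrumental-variables-type solve with rank scores as instruments; function names and the demo data are invented):

    import numpy as np

    def ranks01(v):
        # Rank scores standing in for the empirical CDF F(v)
        return np.argsort(np.argsort(v)) / (len(v) - 1.0)

    def ols_coefs(X, y):
        X1 = np.column_stack([np.ones(len(y)), X])
        return np.linalg.solve(X1.T @ X1, X1.T @ y)[1:]

    def gini_coefs(X, y):
        # Solve cov(e, F(x_k)) = 0 for every regressor: an IV-type
        # estimator with rank scores as instruments.
        X1 = np.column_stack([np.ones(len(y)), X])
        R1 = np.column_stack([np.ones(len(y))]
                             + [ranks01(X[:, k]) for k in range(X.shape[1])])
        return np.linalg.solve(R1.T @ X1, R1.T @ y)[1:]

    def sign_check(X, y, names):
        # Flag coefficients whose sign depends on the OLS/Gini choice.
        for name, bo, bg in zip(names, ols_coefs(X, y), gini_coefs(X, y)):
            flag = "" if np.sign(bo) == np.sign(bg) else "  <-- investigate"
            print(f"{name:>8s}: OLS {bo:+8.4f}   Gini {bg:+8.4f}{flag}")

    # Tiny synthetic demo: x1 enters with a U shape whose trough sits above
    # the median of the skewed x1, so the two estimators need not even
    # agree about the sign of its coefficient.
    rng = np.random.default_rng(4)
    n = 5_000
    x1 = rng.gamma(2.0, 10.0, n)
    x2 = rng.normal(0.0, 1.0, n)
    y = (x1 - 28.0) ** 2 / 100.0 + 0.5 * x2 + rng.normal(0, 1, n)
    sign_check(np.column_stack([x1, x2]), y, ["x1", "x2"])

When a flag appears, the mixed step described above would swap one regressor at a time between the OLS and Gini treatments (replacing that column's instrument by the column itself) to locate the variable driving the reversal.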

10. Golan and Yitzhaki (2010), ibid.

11. Angrist and Pischke (2010), ibid.


For Further Reading:

Schechtman, E., S. Yitzhaki and T. Pudalov (2010). "Gini's Multiple Regressions: Two Approaches and Their Interaction." http://ssrn.com. Forthcoming, Metron.

Yitzhaki, S. (1996). "On Using Linear Regression in Welfare Economics." Journal of Business & Economic Statistics 14(4), October, 478-86.

Yitzhaki, S. (2003). "Gini's Mean Difference: A Superior Measure of Variability for Non-Normal Distributions." Metron LXI(2), 285-316.

