Archive: Working Papers 801-925

You can search the Boston College Economics Working Papers by author, title, keyword, JEL category, and abstract contents.

925. Ryan Chahrour (Boston College), and Robert Ulbricht (Toulouse School of Economics), "Robust Predictions for DSGE Models with Incomplete Information", (rev. 06/2021; PDF)

Abstract: We provide predictions for DSGE models with incomplete information that are robust across information structures. Our approach maps an incomplete-information model into a full-information economy with time-varying expectation wedges and provides conditions that ensure the wedges are rationalizable by some information structure. Using our approach, we quantify the potential importance of information as a source of business cycle fluctuations in an otherwise frictionless model. Our approach uncovers a central role for firm-specific demand shocks in supporting aggregate confidence fluctuations. Only if firms face unobserved local demand shocks can confidence fluctuations account for a significant portion of the US business cycle.

924. Yushan Hu (Boston College) and Ben Li (Boston College), "" (03/2017; PDF)

Abstract: The arrival of the internet age forces academic journals to adjust their output margins: journal length, article length, and number of published articles. Using data from 41 major economics journals spanning 21 years (1994-2014), we find that both journals and articles are getting longer, but the page share of an individual article within its journal is shrinking. This pattern is consistent with a monopolistic competition model that features within-firm (journal) specialization. As predicted by the model, the share of an individual article shrinks less in general-interest journals and better ranked journals, where expertise is less substitutable across topics. In this discipline that emphasizes the benefits of specialization, the expertise underpinning its publications is indeed divided in a specialized fashion.

923. Claudia Olivetti (Boston College and NBER), Erling Barth (Institute for Social Research, Oslo and NBER), and Sari Pekkala Kerr (Wellesley College), "" (02/2017; PDF)

Abstract: We use a unique match between the 2000 Decennial Census of the United States and the Longitudinal Employer Household Dynamics (LEHD) data to analyze how much of the increase in the gender earnings gap over the lifecycle comes from shifts in the sorting of men and women across high- and low-pay establishments and how much is due to differential earnings growth within establishments. We find that for the college educated the increase is substantial and, for the most part, due to differential earnings growth within establishments by gender. The between component is also important. Differential mobility between establishments by gender can explain approximately thirty percent of the overall gap widening for this group. For those with no college, the relatively small increase of the gender gap over the lifecycle can be fully explained by differential moves by gender across establishments. The evidence suggests that, for both education groups, the between-establishment component of the increasing wage gap is entirely driven by those who are married.
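
One standard way to formalize the between/within split described above (an illustrative decomposition, not necessarily the paper's exact notation) writes the gap at age a as

    Gap_a = \sum_e ( s^m_{e,a} - s^f_{e,a} ) \bar{w}^m_{e,a} + \sum_e s^f_{e,a} ( \bar{w}^m_{e,a} - \bar{w}^f_{e,a} ),

where s^g_{e,a} is the share of gender-g workers of age a employed at establishment e and \bar{w}^g_{e,a} their average earnings. The first term reflects differential sorting of men and women across high- and low-pay establishments (the between component), the second reflects pay differences within establishments, so differencing across ages splits the lifecycle widening of the gap into the two components discussed in the abstract.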

922. Hideo Konishi, Boston College, and Chen-Yu Pan, Wuhan University, "" (11/2016; PDF)

Abstract: This paper analyzes the optimal partisan and bipartisan gerrymandering policies in a model with electoral competition in policy positions and transfer promises. With complete freedom in redistricting, partisan gerrymandering policy generates the most one-sidedly biased district profile, while bipartisan gerrymandering generates the most polarized district profile. In contrast, with limited freedom in gerrymandering, both partisan and bipartisan gerrymandering tend to prescribe the same policy. Friedman and Holden (2009) find no significant empirical difference between bipartisan and partisan gerrymandering in explaining incumbent reelection rates. Our result suggests that gerrymanderers may not be as free in redistricting as popularly thought.

921. Michael T. Belongia, University of Mississippi, and Peter Ireland, Boston College, "" (11/2016; PDF)

Abstract: In the 1920s, Irving Fisher extended his previous work on the Quantity Theory to describe how, through an early version of the Phillips Curve, changes in the price level could affect both output and unemployment. At the same time, Holbrook Working designed a quantitative rule for achieving price stability through control of the money supply. This paper develops a structural vector autoregressive time series model that allows these "classical" channels of monetary transmission to operate alongside, or perhaps even instead of, the now-more-familiar interest rate channels of the canonical New Keynesian model. Even with Bayesian priors that intentionally favor the New Keynesian view, the United States data produce posterior distributions for the model's key parameters that are more consistent with the ideas of Fisher and Working. Changes in real money balances enter importantly into the model's aggregate demand relationship, while growth in Divisia M2 appears in the estimated monetary policy rule. Contractionary monetary policy shocks reveal themselves through persistent declines in nominal money growth instead of rising nominal interest rates. These results point to the need for new theoretical models that capture a wider range of channels through which monetary policy affects the economy and suggest that, even today, the monetary aggregates could play a useful role in the Federal Reserve's policymaking strategy.
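
A stylized sketch of the two "classical" channels described (illustrative notation, not the paper's estimated system): real balance growth can enter the aggregate demand relation, and money growth can replace the interest rate in the policy rule,

    x_t = E_t x_{t+1} - \sigma ( i_t - E_t \pi_{t+1} ) + \delta ( \Delta m_t - \pi_t ) + \varepsilon^d_t ,
    \Delta m_t = \rho_m \Delta m_{t-1} + \phi_\pi \pi_t + \phi_x x_t + \varepsilon^m_t ,

where x_t is the output gap and \Delta m_t is Divisia money growth; the canonical New Keynesian case corresponds to \delta = 0 with the rule written for i_t rather than \Delta m_t.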

920. Robert G. Murphy, Boston College, "" (04/2016; PDF)

Abstract: Standard Phillips curve models relating price inflation to measures of slack in the economy suggest that the United States should have experienced an episode of deflation during the Great Recession and the subsequent sluggish recovery. But although inflation reached very low levels, prices continued to rise rather than fall. More recently, many observers have argued that inflation should have increased as the unemployment rate declined and labor markets tightened, but inflation has remained below the Federal Reserve's policy target. This paper confirms that the slope of the Phillips curve has decreased over recent decades and is very close to zero today. I modify the Phillips curve to allow its slope to vary continuously through time, drawing on theories of price-setting behavior when prices are costly to adjust and when information is costly to obtain. I find that adapting the Phillips curve to allow for time-variation in its slope helps explain inflation before, during, and after the Great Recession.
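
A minimal sketch of the kind of specification described (illustrative, not the paper's exact equation) is an expectations-augmented Phillips curve whose slope is itself a time-varying parameter,

    \pi_t = \alpha + \beta \pi^e_t + \kappa_t x_t + u_t ,

where x_t is a measure of slack such as the unemployment gap and \kappa_t is allowed to drift continuously, for example as a function of the frequency of price adjustment or the cost of acquiring information; a value of \kappa_t near zero means that movements in slack translate into little movement in inflation.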

919. Lauren Hoehn Velasco, Boston College, "" (10/2016; PDF)

Abstract: This study estimates the impact of an American rural public health program on child mortality from 1908 to 1933. Due to the absence of sanitation and child-oriented health services outside of urban areas, public and private agencies sponsored county-level health departments (CHDs) throughout the US. Variation in the location and timing of the CHDs identifies improvements in population health, which are captured entirely by children. Mortality declines emerge in infancy and gradually decay through childhood. Adversely affected areas with either an ample population of nonwhites or greater levels of preexisting infectious disease undergo larger reductions in mortality.

918. Christopher F. Baum, Boston College and DIW Berlin, Atreya Chakraborty, University of Massachusetts-Boston, and Boyan Liu, Beihang University, "" (07/2016; PDF)

Abstract: In this paper we provide evidence on how firm-specific and macroeconomic uncertainty affects shareholders' valuation of a firm's cash holdings. This extends previous work on this issue by highlighting the importance of the source of uncertainty. Our findings indicate that increases in firm-specific risk generally increase the value of cash while increases in macroeconomic risk generally decrease the value of cash. These findings are robust to alternative definitions of the unexpected change in cash. We extend our analysis to financially constrained and unconstrained firms.

917. Ryan Chahrour, Boston College, and Kyle Jurado, Duke University, "News or Noise? The Missing Link" (rev. 11/2017; PDF)

Abstract: The literature on belief-driven business cycles treats news and noise as distinct representations of agents' beliefs. We prove they are empirically the same. Our result lets us isolate the importance of purely belief-driven fluctuations. Using three prominent estimated models, we show that existing research understates the importance of pure beliefs. We also explain how differences in both economic environment and information structure affect the estimated importance of pure beliefs.

916. Marek Pycia, UCLA, and M. Utku Unver, Boston College, "" (08/2016; PDF)

915. Dongho Song, Boston College, "" (07/2016; PDF)

Abstract: The paper estimates a model that allows for shifts in the aggressiveness of monetary policy and time variation in the distribution of macroeconomic shocks. These model features induce variations in the cyclical properties of inflation and the riskiness of bonds. The estimation identifies inflation as procyclical from the late 1990s, when the economy shifted toward aggressive monetary policy and experienced procyclical macroeconomic shocks. Since bonds hedge stock market risks when inflation is procyclical, the stock-bond return correlation turned negative in the late 1990s. The risks of encountering countercyclical inflation in the future could lead to an upward-sloping yield curve, as in the data.

914. José Alberto Molina, Departamento de Análisis Económico, Facultad de Economía y Empresa, Universidad de Zaragoza, Alberto Alcolea, Kampal Data Solutions S.L., Alfredo Ferrer, Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), Zaragoza, David Iñiguez, Fundación ARAID, Diputación General de Aragón, Zaragoza, Alejandro Rivero, Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), Zaragoza, Gonzalo Ruiz, Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), Zaragoza, and Alfonso Tarancón, Departamento de Física Teórica, Facultad de Ciencias, Universidad de Zaragoza, "" (06/2016; PDF)

Abstract: We explore the relationship between collaborations in writing papers and the academic productivity of economists and, particularly, we describe the magnitude and intensity of co-authorship among economists. To that end, we employ interaction maps from Complex Systems methods to study the global properties of specific networks. We use 8,253 JCR papers from ISI-WOK, published by 5,188 economists from Spanish institutions, and their co-authors, up to 8,202 researchers, from 2002 to 2014, to identify and determine the collaborative structure of economics research in Spain, with its primary communities and figures of influence. Our results indicate that centrality and productivity are correlated, particularly with respect to a local estimator of centrality (PageRank), and we provide certain recommendations, such as promoting interactions between highly productive authors who have few co-authors and other researchers in their environment, or recommending that authors who may be well-positioned but minimally productive strive to improve their productivity.
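
For readers who want to see the mechanics, a minimal sketch in Python of the kind of network exercise described (hypothetical toy data, networkx's PageRank as the local centrality estimator; this is not the authors' code or data):

    # Sketch: build a co-authorship network from a paper list and check whether
    # PageRank centrality correlates with productivity (paper counts).
    import itertools
    import networkx as nx
    from scipy.stats import spearmanr

    # Toy data: each paper is a list of author names (placeholders, not real records).
    papers = [
        ["Molina", "Ferrer", "Rivero"],
        ["Molina", "Iniguez"],
        ["Ferrer", "Tarancon"],
        ["Molina", "Ferrer"],
    ]

    # Co-authorship graph, weighting each edge by the number of joint papers.
    G = nx.Graph()
    for authors in papers:
        for a, b in itertools.combinations(authors, 2):
            w = G.get_edge_data(a, b, default={}).get("weight", 0)
            G.add_edge(a, b, weight=w + 1)

    # PageRank as a local centrality estimator; productivity as the paper count.
    centrality = nx.pagerank(G, weight="weight")
    productivity = {a: sum(a in p for p in papers) for a in G}

    names = sorted(G)
    rho, _ = spearmanr([centrality[n] for n in names],
                       [productivity[n] for n in names])
    print(f"Spearman rank correlation (centrality vs. productivity): {rho:.2f}")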

913. Michael T. Belongia, University of Mississippi, and Peter N. Ireland, Boston College, "" (05/2016; PDF)

Abstract: Unconventional policy actions, including quantitative easing and forward guidance, taken by the Federal Reserve during and since the financial crisis and Great Recession of 2007-2009, have been widely interpreted as attempts to influence long-term interest rates after the federal funds rate hit its zero lower bound. Alternatively, similar actions could have been directed at stabilizing the growth rate of a monetary aggregate, so as to maintain a more consistent level of policy accommodation in the face of severe disruptions to the financial sector and the economy at large. This paper bridges the gap between these two views, by developing a structural vector autoregression that uses information contained in both interest rates and a Divisia monetary aggregate to infer the stance of Federal Reserve policy and to gauge its effects on aggregate output and prices. Counterfactual simulations from the SVAR suggest that targeting money growth at the zero lower bound would not only have been feasible, but would also have supported a stronger and more rapid economic recovery since 2010.

912. Chaim Fershtman, Tel Aviv University, and Uzi Segal, Boston College, "" (05/2016; PDF)

Abstract: Interaction between decision makers may affect their preferences. We consider a setup in which each individual is characterized by two sets of preferences: his unchanged core preferences and his behavioral preferences. Each individual has a social influence function that determines his behavioral preferences given his core preferences and the behavioral preferences of other individuals in his group. Decisions are made according to behavioral preferences. The paper considers different properties of these social influence functions and their effect on equilibrium behavior. We illustrate the applicability of our model by considering decision making by a committee that has a deliberation stage prior to voting.

911. Michael T. Belongia, University of Mississippi, and Peter N. Ireland, Boston College, "" (05/2016; PDF)

Abstract: Discussions of monetary policy rules after the 2008-2009 recession highlight the potential impotence of a central bank's actions when the short-term interest rate under its control is limited by the zero lower bound. This perspective assumes, in a manner consistent with the canonical New Keynesian model, that the quantity of money has no role to play in transmitting a central bank's actions to economic activity. This paper examines the validity of this claim and investigates the properties of alternative monetary policy rules based on control of the monetary base or a monetary aggregate in lieu of a short-term interest rate. The results indicate that rules of this type have the potential to guide monetary policy decisions toward the achievement of a long run nominal goal without being constrained by the zero lower bound on a nominal interest rate. They suggest, in particular, that by exerting its influence over the monetary base or a broader aggregate, the Federal Reserve could more effectively stabilize nominal income around a long-run target path, even in a low or zero interest-rate environment.

910. Susanto Basu, Boston College, and Pierre De Leo, Boston College, "" (rev. 05/2017; PDF)

Abstract: Central banks nearly always state explicit or implicit inflation targets in terms of consumer price inflation. If there are nominal rigidities in the pricing of both consumption and investment goods and if the shocks to the two sectors are not identical, then monetary policy cannot stabilize inflation and output gaps in both sectors. Thus, the central bank faces a tradeoff between targeting consumption price inflation and investment price inflation. In this setting, ignoring investment prices typically leads to substantial welfare losses because the intertemporal elasticity of substitution in investment is much higher than in consumption. In our calibrated model, consumer price targeting leads to welfare losses that are at least three times the loss under optimal policy. A simple rule that puts equal weight on stabilizing consumption and investment price inflation comes close to replicating the optimal policy. Thus, GDP deflator targeting is not a good approximation to optimal policy. A shift in monetary policy to targeting a weighted average of consumer and investment price inflation may produce significant welfare gains, although this would constitute a major change in current central banking practice.
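
A minimal sketch of the class of rules compared in the paper (illustrative notation): an interest rate rule responding to a weighted average of the two inflation rates,

    i_t = \rho i_{t-1} + (1 - \rho) \phi_\pi [ \omega \pi^c_t + (1 - \omega) \pi^x_t ] ,

where \pi^c_t is consumption price inflation and \pi^x_t is investment price inflation; \omega = 1 corresponds to pure consumer price targeting, while the abstract reports that an equal-weight rule (\omega = 0.5) comes close to replicating the optimal policy.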

909. Ben Li, Boston College, and Penglong Zhang, Boston College, "" (02/2017; PDF)

Abstract: Since the Age of Discovery, the world has grown integrated economically while remaining disintegrated politically as a collection of nation-states. The nation-state system is robust because borders, as state dividers, interact with economic integration to absorb shocks. We build a tractable general equilibrium model of international trade and national borders in the world. Over a longer time horizon, declining trade costs alter trade volumes across states but also incentivize states to redraw borders, causing states to form, change, and dissolve. Our model offers significant implications for the global economy and politics, including trade patterns, political geography, its interplay with natural geography, state-size distribution, and the risks of militarized disputes. These implications are supported by modern and historical data.

908. Bogdan Genchev, Boston College, and Julie Holland Mortimer, Boston College, "" (02/2016; PDF)

Abstract: Conditional pricing practices allow the terms of sale between a producer and a downstream distributor to vary with the ability of the downstream firm to meet a set of conditions put forward by the producer. The conditions may require a downstream firm to accept minimum quantities or multiple products, to adhere to minimum market-share requirements, or even to deal exclusively with one producer. Payment from the producer to the downstream firm may take the form of a rebate, marketing support, or simply the willingness to supply inventory. The use of conditional pricing practices is widespread throughout many industries, and the variety of contractual forms used in these arrangements is nearly as extensive as the number of contracts.

907. Hong Luo, Harvard Business School, and Julie Holland Mortimer, Boston College, "" (02/2016; PDF)

Abstract: Effective dispute resolution is important for reducing private and social costs. We study how resolution responds to changes in price and communication using a new, extensive dataset of copyright infringement incidents by firms. The data cover two field experiments run by a large stock-photography agency. We find that substantially reducing the requested amount generates a small increase in the settlement rate. However, for the same reduced request, a message informing infringers of the price reduction and acknowledging possible unintentionality generates a large increase in settlement; including a deadline further increases the response. The small price effect, compared to the large message effect, can be explained by two countervailing effects of a lower price: an inducement to settle early, but a lower threat of escalation. Furthermore, acknowledging possible unintentionality may encourage settlement due to the typically inadvertent nature of these incidents. The resulting higher settlement rate prevents additional legal action and reduces social costs.

906. Umut Mert Dur, North Carolina State University, Parag A. Pathak, MIT, and Tayfun Sönmez, Boston College, "" (03/2016; PDF)

Abstract: Affirmative action schemes must confront the tension between admitting the highest scoring applicants and ensuring diversity. In Chicago's affirmative action system for exam schools, applicants are divided into one of four socioeconomic tiers based on the characteristics of their neighborhood. Applicants can be admitted to a school either through a slot reserved for their tier or through a merit slot. Equity considerations motivate equal percentage reserves for each tier, but there is a large debate on the total size of these reserve slots relative to merit slots. An issue that has received much less attention is the order in which slots are processed. Since the competition for merit slots is influenced directly by the allocation to tier slots, equal size reserves are not sufficient to eliminate explicit preferential treatment. We characterize processing rules that are tier-blind. While explicit preferential treatment is ruled out under tier-blind rules, it is still possible to favor certain tiers by exploiting the distribution of scores across tiers, a phenomenon we call statistical preferential treatment. We characterize the processing order that is optimal for the most disadvantaged tier assuming that these applicants systematically have lower scores. This policy processes merit slots prior to any slots reserved for tiers. Our main result implies that Chicago has been providing an additional boost to the disadvantaged tier beyond their reserved slots. Using data from Chicago, we show that the bias due to processing order for the disadvantaged tier is comparable to that from the 2012 decrease in the size of the merit reserve.

905. José Alberto Molina, University of Zaragoza, visiting Boston College, Alfredo Ferrer, University of Zaragoza, José Ignacio Gimenez-Nadal, University of Zaragoza, Carlos Gracia-Lazaro, University of Zaragoza, Yamir Moreno, University of Zaragoza, and Angel Sanchez, University of Zaragoza, "" (03/2016; PDF)

Abstract: In this paper, we analyze how kinship among family members affects intergenerational cooperation in a public good game. A total of 165 individuals from 55 families, comprising three generations (youths, parents, and grandparents), play a public good game in three different treatments: one in which three members of the same family play each other (family), a second with the youth and two non-family members but preserving the previous generational structure (intergenerational), and a third in which three randomly-selected players play each other (random). We find that players contribute more to the public good when they play with other family members than when they play with non-family members. This effect is present in all three generations, and is independent of the gender of the players. We also observe the significant result that older generations contribute more to the public good than their children.
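
For reference, the payoff structure of a standard linear public good game of the kind used in such experiments (a generic sketch; the paper's exact endowment and multiplier are not reproduced here): each of the n = 3 players receives an endowment e, chooses a contribution c_i in [0, e], and earns

    \pi_i = e - c_i + m \sum_{j=1}^{n} c_j ,  with  0 < m < 1 < m n ,

so contributing is socially efficient but individually costly, and higher contributions in the family treatment indicate stronger cooperation.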

904. Claudia Olivetti, Boston College, Eleonora Patacchini, Cornell University, and Yves Zenou, Monash University, "Mothers, Peers and Gender Identity" (rev. 08/2018; PDF)

Abstract: We study whether a woman's labor supply as a young adult is shaped by the work behavior of her adolescent peers' mothers. Using detailed information on a sample of U.S. teenagers who are followed over time, we find that labor force participation of high school peers' mothers affects adult women's labor force participation, above and beyond the effect of their own mothers. The analysis suggests that women who were exposed to a larger number of working mothers during adolescence are less likely to feel that work interferes with family responsibilities. This perception, in turn, is important for whether they work when they have children.

903. Claudia Olivetti, Boston College, M. Daniele Paserman, Boston University, and Laura Salisbury, York University, "" (01/2016; PDF)

Abstract: This paper estimates intergenerational elasticities across three generations in the United States in the late 19th and early 20th centuries. We extend the methodology in Olivetti and Paserman (2015) to explore the role of maternal and paternal grandfathers for the transmission of economic status to grandsons and granddaughters. We document three main findings. First, grandfathers matter for income transmission, above and beyond their effect on fathers' income. Second, the socio-economic status of grandsons is influenced more strongly by paternal grandfathers than by maternal grandfathers. Third, maternal grandfathers are more important for granddaughters than for grandsons, while the opposite is true for paternal grandfathers. We present a model of multi-trait matching and inheritance that can rationalize these findings.
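
As a sketch of the three-generation framework (illustrative notation, not the paper's exact specification), the elasticities can be read off a regression of the form

    \ln y^{GS}_i = \alpha + \beta_1 \ln y^{F}_i + \beta_2 \ln y^{PGF}_i + \beta_3 \ln y^{MGF}_i + \varepsilon_i ,

where y denotes economic status of the grandchild (GS), father (F), and paternal and maternal grandfathers (PGF, MGF); the first finding corresponds to positive grandfather coefficients conditional on the father's status, and the second to the paternal-grandfather coefficient exceeding the maternal one in the regression for grandsons.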

902. Marianne Bertrand, Booth School of Business, University of Chicago, Patricia Cortés, Questrom School of Business, Boston University, Claudia Olivetti, Boston College, and Jessica Pan, National University of Singapore, "" (02/2016; PDF)

Abstract: In most of the developed world, skilled women marry at a lower rate than unskilled women. We document heterogeneity across countries in how the marriage gap for skilled women has evolved over time. As labor market opportunities for women have improved, the marriage gap has been growing in some countries but shrinking in others. We discuss a theoretical model in which the (negative) social attitudes towards working women might contribute towards the lower marriage rate of skilled women, and might also induce a non-linear relationship between their labor market prospects and their marriage outcomes. The model is suited to understand the dynamics of the marriage gap for skilled women over time within a country with set social attitudes towards working women. The model also delivers predictions about how the marriage gap for skilled women should react to changes in their labor market opportunities across countries with more or less conservative attitudes towards working women. We test the key predictions of this model in a panel of 23 developed countries, as well as in a panel of US states.

901. Fabio Schiantarelli, Boston College, Massimiliano Stacchini, Bank of Italy, and Philip E. Strahan, Boston College and NBER, "" (08/2016; PDF)

Abstract: Exposure to liquidity risk makes banks vulnerable to runs from both depositors and from wholesale, short-term investors. This paper shows empirically that banks are also vulnerable to run-like behavior from borrowers who delay their loan repayments (default). Firms in Italy defaulted more against banks with high levels of past losses. We control for borrower fundamentals with firm-quarter fixed effects; thus, identification comes from a firm's choice to default against one bank versus another, depending upon their health. This 'selective' default increases where legal enforcement is weak. Poor enforcement thus can create a systematic loan risk by encouraging borrowers to default en masse once the continuation value of their bank relationships comes into doubt.

900. Tayfun Sönmez, Boston College, M. Utku Ünver, Boston College, and Özgür Yilmaz, Koç University, "How (Not) to Integrate Blood Subtyping Technology to Kidney Exchange" (01/2016; PDF)

Abstract: Even though kidney exchange became an important source of kidney transplants over the last decade with the introduction of market design techniques to organ transplantation, the shortage of kidneys for transplantation is greater than ever. Due to biological disadvantages, patient populations of blood types B/O are disproportionately hurt by this increasing shortage. The disadvantaged blood types are overrepresented among minorities in the US. In order to mitigate the disproportionate harm to these biologically disadvantaged groups, in 2014 the UNOS reformed the US deceased-donor kidney-allocation system, utilizing a technological advance in blood typing. The improved technology allows a certain fraction of blood type A kidneys, referred to as subtype A2 kidneys, to be transplanted to medically qualified patients of blood types B/O. The recent reform prioritizes subtype A2 deceased-donor kidneys for blood type B patients only. When restricted to the deceased-donor allocation system, this is merely a distributional reform with no adverse impact on the overall welfare of the patient population. In this paper we show that the current implementation of the reform has an unintended consequence, and it de facto extends the preferential allocation to kidney exchange as well. Ironically, this "spillover" not only reduces the number of living-donor transplants for the overall patient population, but also for the biologically disadvantaged groups who are the intended beneficiaries of the reform. We show that minor variations of the current policy do not suffer from this unintended consequence, and we make two easy-to-implement, welfare-increasing policy recommendations.

899. Stefan Hoderlein, Boston College, Liangjun Su, Singapore Management University, Halbert White, University of California-San Diego, and Thomas Tao Yang, Boston College, "" (02/2016; PDF)

Abstract: Monotonicity in a scalar unobservable is a common assumption when modeling heterogeneity in structural models. Among other things, it allows one to recover the underlying structural function from certain conditional quantiles of observables. Nevertheless, monotonicity is a strong assumption and in some economic applications unlikely to hold, e.g., random coefficient models. Its failure can have substantive adverse consequences, in particular inconsistency of any estimator that is based on it. Having a test for this hypothesis is hence desirable. This paper provides such a test for cross-section data. We show how to exploit an exclusion restriction together with a conditional independence assumption, which in the binary treatment literature is commonly called unconfoundedness, to construct a test. Our statistic is asymptotically normal under local alternatives and consistent against global alternatives. Monte Carlo experiments show that a suitable bootstrap procedure yields tests with reasonable level behavior and useful power. We apply our test to study the role of unobserved ability in determining Black-White wage differences and to study whether Engel curves are monotonically driven by a scalar unobservable.
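
The identification logic behind the monotonicity assumption can be stated in one line (standard notation, not specific to this paper): if Y = m(X, U) with a scalar unobservable U independent of X, m strictly increasing in U, and U normalized to be uniform on (0, 1), then

    m(x, \tau) = Q_{Y|X}(\tau | x)  for every quantile \tau in (0, 1) ,

so the structural function is recovered from conditional quantiles of observables; the test asks whether this restriction is consistent with the data once the exclusion restriction and unconfoundedness are imposed.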

898. Daniel Gutknecht, Mannheim University, Stefan Hoderlein, Boston College, and Michael Peters, Yale University, "" (02/2016; PDF)

Abstract: Do individuals use all information at their disposal when forming expectations about future events? In this paper we present an econometric framework to answer this question. We show how individual information sets can be characterized by simple nonparametric exclusion restrictions and provide a quantile based test for constrained information processing. In particular, our methodology does not require individuals' expectations to be rational, and we explicitly allow for individuals to have access to sources of information which the econometrician cannot observe. As an application, we use microdata on individual income expectations to study which information agents employ when forecasting future earnings. Consistent with models where information processing is limited, we find that individuals' information sets are coarse in that valuable information is discarded. We then quantify the utility costs of coarse information within a standard consumption life-cycle model. Consumers would be willing to pay 0.04% of their permanent income to incorporate the econometrician's information set in their forecasts.

897. Christoph Breunig, Humboldt-Universität zu Berlin, and Stefan Hoderlein, Boston College, "" (02/2016; PDF)

Abstract: In this paper, we suggest and analyze a new class of specification tests for random coefficient models. These tests allow one to assess the validity of central structural features of the model, in particular linearity in coefficients and generalizations of this notion like a known nonlinear functional relationship. They also allow one to test for degeneracy of the distribution of a random coefficient, i.e., whether a coefficient is fixed or random, including whether an associated variable can be omitted altogether. Our tests are nonparametric in nature, and use sieve estimators of the characteristic function. We analyze their power against both global and local alternatives in large samples and through a Monte Carlo simulation study. Finally, we apply our framework to analyze the specification in a heterogeneous random coefficients consumer demand model.

896. Stefan Hoderlein, Boston College, Hajo Holzmann, Marburg University, Maximilian Kasy, Harvard University, and Alexander Meister, University of Rostock, Erratum regarding "" (07/2015; PDF)

895. Stefan Hoderlein, Boston College, Lars Nesheim, University College London, and Anna Simoni, CNRS - CREST, "" (04/2015; PDF)

Abstract: This paper discusses nonparametric estimation of the distribution of random coefficients in a structural model that is nonlinear in the random coefficients. We establish that the problem of recovering the probability density function (pdf) of random parameters falls into the class of convexly-constrained inverse problems. The framework offers an estimation method that separates computational solution of the structural model from estimation. We first discuss nonparametric identification. Then, we propose two alternative estimation procedures to estimate the density and derive their asymptotic properties. Our general framework allows us to deal with unobservable nuisance variables, e.g., measurement error, but also covers the case when there are no such nuisance variables. Finally, Monte Carlo experiments for several structural models are provided which illustrate the performance of our estimation procedure.

894. Stefan Hoderlein, Boston College, Hajo Holzmann, Marburg University, and Alexander Meister, University of Rostock, "" (06/2015; PDF)

Abstract: The triangular model is a very popular way to capture endogeneity. In this model, an outcome is determined by an endogenous regressor, which in turn is caused by an instrument in a first stage. In this paper, we study the triangular model with random coefficients and exogenous regressors in both equations. We establish a profound non-identification result: the joint distribution of the random coefficients is not identified, implying that counterfactual outcomes are also not identified in general. This result continues to hold, if we confine ourselves to the joint distribution of coefficients in the outcome equation or any marginal, except the one on the endogenous regressor. Identification continues to fail, even if we focus on means of random coefficients (implying that IV is generally biased), or let the instrument enter the first stage in a monotonic fashion. Based on this insight, we derive bounds on the joint distribution of random parameters, and suggest an additional restriction that allows one to point-identify the distribution of random coefficients in the outcome equation. We extend this framework to cover the case where the regressors and instruments have limited support, and analyze semi- and nonparametric sample counterpart estimators in finite and large samples. Finally, we give an application of the framework to consumer demand.
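
For concreteness, a linear version of the triangular random coefficients system described above (generic notation, a sketch rather than the paper's full model) is

    Y = A + B X ,    X = C + D Z ,

with instrument Z and random coefficients (A, B, C, D) that may be correlated with one another; the non-identification result concerns the joint distribution of these coefficients, and the bounds and additional restrictions discussed in the abstract target the distribution of the coefficients in the outcome equation.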

893. Stefan Hoderlein, Boston College, and Anne Vanhems, University of Toulouse, "" (09/2013; PDF)

Abstract: This paper proposes a framework to model empirically the welfare effects associated with a price change in a population of heterogeneous consumers. The framework is similar to Hausman and Newey (1995), but allows for more general forms of heterogeneity. Individual demands are characterized by a general model which is nonparametric in the regressors, as well as monotonic in unobserved heterogeneity. In this setup, we first provide and discuss conditions under which the heterogeneous welfare effects are identified, and establish constructive identification. We then propose a sample counterpart estimator, and analyze its large sample properties. For both identification and estimation, we distinguish between the cases when regressors are exogenous and when they are endogenous. Finally, we apply all concepts to measuring the heterogeneous effect of a change in gasoline prices using US consumer data and find very substantial differences in individual effects across quantiles.

892. Liangjun Su, Singapore Management University, Stefan Hoderlein, Boston College, and Halbert White, University of California-San Diego, "" (04/2013; PDF)

Abstract: Monotonicity in a scalar unobservable is a crucial identifying assumption for an important class of nonparametric structural models accommodating unobserved heterogeneity. Tests for this monotonicity have previously been unavailable. This paper proposes and analyzes tests for scalar monotonicity using panel data for structures with and without time-varying unobservables, either partially or fully nonseparable between observables and unobservables. Our nonparametric tests are computationally straightforward, have well behaved limiting distributions under the null, are consistent against precisely specified alternatives, and have standard local power properties. We provide straightforward bootstrap methods for inference. Some Monte Carlo experiments show that, for empirically relevant sample sizes, these reasonably control the level of the test, and that our tests have useful power. We apply our tests to study asset returns and demand for ready-to-eat cereals.

891. Stefan Hoderlein, Boston College, and Matthew Shum, Caltech, "" (04/2014; PDF)

Abstract: In this paper, we propose and implement an estimator for price elasticities in demand models that makes use of panel data. Our underlying demand model is nonparametric, and accommodates general distributions of product-specific unobservables which can lead to endogeneity of price. Our approach allows these unobservables to vary over time while, at the same time, not requiring the availability of instruments which are orthogonal to these unobservables. Monte Carlo simulations demonstrate that our estimator works remarkably well, even with modest sample sizes. We provide an illustrative application to estimating the cross-price elasticity matrix for carbonated soft drinks.

890. José Ignacio Gimenez-Nadal, University of Zaragoza, José Alberto Molina, University of Zaragoza (visiting Boston College), and Jorge Velilla, University of Zaragoza, "" (02/2016; PDF)

Abstract: In this paper, we analyze the spatial distribution of US employment and earnings against an urban wage-efficiency background, where leisure and effort at work are complementary. Using data from the American Time Use Survey (ATUS) for the period 2003-2014, we analyze the spatial distribution of employment across metropolitan areas. We also empirically study the relationship between individual earnings and commuting and leisure. Our empirical results show that employment is mostly concentrated in metropolitan cores, and that earnings increase with "expected" commuting time, which gives empirical support to our urban wage-efficiency theory. Furthermore, we use Geographical Information System models to show that there is no common pattern of commuting and the employees-to-unemployed rate, although we find higher wages in comparatively crowded states, where average commuting times are also higher.

889. Claudia Olivetti, Boston College, and Barbara Petrongolo, Queen Mary University, "" (06/2016; PDF)

Abstract: Women in developed economies have made major inroads in labor markets throughout the past century, but remaining gender differences in pay and employment seem remarkably persistent. This paper documents long-run trends in female employment, working hours and relative wages for a wide cross-section of developed economies. It reviews existing work on the factors driving gender convergence, and novel perspectives on remaining gender gaps. The paper finally emphasizes the interplay between gender trends and the evolution of the industry structure. Based on a shift-share decomposition, it shows that the growth in the service share can explain at least half of the overall variation in female hours, both over time and across countries.
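
The shift-share logic can be sketched as follows (a standard two-way decomposition in illustrative notation, not necessarily the paper's exact formula): writing the aggregate female share of hours as F_t = \sum_j s_{jt} f_{jt}, with s_{jt} industry j's share of total hours and f_{jt} the female share of hours within industry j, the change over time decomposes exactly as

    \Delta F = \sum_j \bar{f}_j \Delta s_j + \sum_j \bar{s}_j \Delta f_j ,

where bars denote averages of the initial and final periods; the first (between-industry) term captures the expansion of service industries, which employ relatively more women, and it is this structural component that the paper finds accounts for at least half of the overall variation in female hours over time and across countries.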

888. Ryan Chahrour, Boston College, and Luminita Stevens, University of Maryland, "" (12/2015; PDF)

Abstract: We develop a model of equilibrium price dispersion via retailer search and show that the degree of market segmentation within and across countries cannot be separately identified by good-level price data alone. We augment a set of well-known empirical facts about the failure of the law of one price with data on aggregate intranational and international trade quantities, and calibrate the model to match price and quantity facts simultaneously. The calibrated model matches the data very well and implies that within-country markets are strongly segmented, while international borders contribute virtually no additional market segmentation.

887. Joseph Quinn, Boston College, and Kevin Cahill, Sloan Center on Aging & Work at Boston College, "" (12/2015; PDF)

Abstract: We have entered a new world of retirement income security in America, with older individuals more exposed to market risk and more vulnerable to financial insecurity than prior generations. This reflects an evolution that has altered the historical vision of a financially-secure retirement supported by Social Security, a defined-benefit pension plan, and individual savings. Today, two of these three retirement income sources – pensions and savings – are absent or of modest importance for many older Americans. Retirement income security now often requires earnings from continued work later in life, which exacerbates the economic vulnerability of certain segments of the population, including persons with disabilities, the oldest-old, single women, and individuals with intermittent work histories. Because of the unprecedented aging of our society, further changes to the retirement income landscape are inevitable, but policymakers do have options to help protect the financial stability of older Americans. We can begin by promoting savings at all (especially younger) ages and by removing barriers that discourage work later in life. For individuals already on the cusp of retirement, more needs to be done to educate the public about the value of delaying the receipt of Social Security benefits. Inaction now could mean a return to the days when old age and poverty were closely linked. The negative repercussions of this outcome would extend well beyond traditional economic measures, as physical and mental health outcomes are closely tied to financial security.

886. Haluk Ergin, University of California, Berkeley, Tayfun Sönmez, Boston College, and Utku Ünver, Boston College, "" (rev. 02/2017; PDF)

Abstract: Owing to the worldwide shortage of deceased-donor organs for transplantation, living donations have become a significant source of transplant organs. However, not all willing donors can donate to their intended recipients because of medical incompatibilities. These incompatibilities can be overcome by an exchange of donors between patients. For kidneys, such exchanges have become widespread in the last decade with the introduction of optimization and market design techniques to kidney exchange. A small but growing number of liver exchanges have also been conducted. Over the last two decades, a number of transplantation procedures emerged where organs from two living donors are transplanted to a single patient. Prominent examples include dual-graft liver transplantation, lobar lung transplantation, and simultaneous liver-kidney transplantation. Exchange, however, has been neither practiced nor introduced in this context. We introduce dual-donor organ exchange as a novel transplantation modality, and through simulations show that living-donor transplants can be significantly increased through such exchanges. We also provide a simple theoretical model for dual-donor organ exchange and introduce optimal exchange mechanisms under various logistical constraints.

885. Christopher F. Baum, Boston College, Hans Lööf, Royal Institute of Technology, Stockholm, and Pardis Nabavi, Royal Institute of Technology, Stockholm, "" (11/2015; PDF)

Abstract: This paper examines variations in productivity growth due to innovation within a given location and between different locations. Implementing a dynamic panel data approach on Swedish micro data, we test the separate and complementary effects of internal innovation efforts and spillovers from the local milieu. Measuring the potential knowledge spillover by access to knowledge-intensive services, the estimation results produce strong evidence of differences in the capacity to benefit from external knowledge among persistent innovators, temporary innovators and non-innovators. The results are consistent regardless of whether innovation efforts are measured in terms of the frequency of patent applications or the rate of R&D investment.

884. Sanjay Chugh, Boston College, Wolfgang Lechthaler, Kiel Institute for the World Economy, and Christian Merkl, Friedrich-Alexander-University Erlangen-Nuremberg, "" (10/2015; PDF)

Abstract: This paper characterizes long-run and short-run optimal fiscal policy in the labor selection framework. Quantitatively, the time-series volatility of the labor income tax rate is orders of magnitude larger than the "tax-smoothing" results based on Walrasian labor markets, but is a few times smaller than the results based on search and matching labor markets. To understand these results in terms of model primitives, we develop a welfare-relevant analytic concept of externalities for the selection model, which we label "tightness." This concept of tightness is the source of the decentralized economy's inefficient cross-sectional wage premia between the average newly-hired worker and the marginal newly-hired worker. Compared to the traditional concept of labor-market tightness in the search and matching literature, this new concept of tightness plays a highly similar role, and, like in the matching model, is crucial for understanding efficiency and optimal policy.

883. Rossella Calvi, Boston College, and Federico G. Mantovanelli, The Analysis Group, "" (08/2015; PDF)

Abstract: We study the long-term effect of access to health care on individuals' health status by investigating the relationship between the proximity to a Protestant medical mission in colonial India and current health outcomes. We use individuals' anthropometric indicators to measure health status and geocoding tools to calculate the distance between the location of individuals today and Protestant health facilities founded in the nineteenth century. We exploit variation in activities of missionary societies and use an instrumental variable approach to show that proximity to a Protestant medical mission has a causal effect on individuals' health status. We find that a 50 percent reduction in the distance from a historical medical facility increases current individuals' body mass index by 0.4. We investigate some potential transmission channels and we find that the long-run effect of access to health care is not driven by persistence of infrastructure, but by improvements in individuals' health potential and changes in hygiene and health habits.

882. Michael T. Belongia, University of Mississippi, and Peter N. Ireland, Boston College, "" (08/2015; PDF)

Abstract: This paper estimates a VAR with time-varying parameters to characterize the changes in Federal Reserve policy that occurred from 2000 through 2007 and assess how those changes affected the performance of the U.S. economy. The results point to a gradual shift in the Fed's emphasis over this period, away from stabilizing inflation and towards stabilizing output. A persistent deviation of the federal funds rate from the settings prescribed by the estimated monetary policy rule appears more important, however, in causing inflation to overshoot its target in the years leading up to the Great Recession.

881. Alexander Kurov, West Virginia University, Alessio Sancetta, University of London, Georg H. Strasser, European Central Bank, and Marketa Halova Wolfe, Skidmore College, "" (07/2015; PDF)

Abstract: We examine stock index and Treasury futures markets around releases of U.S. macroeconomic announcements from 2003 to 2014. Since 2008 seven out of 18 market-moving announcements show evidence of substantial informed trading before the official release time. Prices begin to move in the "correct" direction about 30 minutes before the release time. The pre-announcement price move accounts on average for about half of the total price adjustment. This pre-announcement price drift has not been documented before. We examine four possible explanations. The evidence points to leakage and proprietary data collection as the most likely causes of the new drift.

880. Kevin E. Cahill, Sloan Center on Aging & Work at Boston College, Michael D. Giandrea, U.S. Bureau of Labor Statistics, and Joseph F. Quinn, Boston College, "" (02/2014; PDF)

Abstract: To what extent does hours flexibility in career employment impact the retirement process? Workplace flexibility policies have the potential to improve both the welfare of employees and the business outcomes of employers. These policies, and hours flexibility in particular for older Americans, have also been touted as a way to reduce turnover. For older Americans, reductions in turnover could mean more years in career employment, fewer years in bridge employment, and little or no impact on the timing of retirement. Alternatively, hours flexibility in career employment could lead to longer working lives and delayed retirements. The distinction between the two outcomes is important if hours flexibility policies, such as phased retirement, are to be considered an option for alleviating the strains of an aging society. This paper describes how hours flexibility in career employment impacts the retirement patterns of older Americans. We use data on three cohorts of older Americans from the Health and Retirement Study (HRS), a large nationally-representative dataset that began in 1992. We explore the extent to which hours flexibility arrangements are available and utilized in career employment, and how such arrangements impact job transitions later in life. We find that bridge job prevalence is higher among those with access to hours flexibility in career employment compared to those without hours flexibility. Further, while we find mixed evidence that hours flexibility extends time in career employment, we do find that hours flexibility in career employment is associated with longer tenure on bridge jobs. Taken together, these results suggest that hours flexibility in career employment is associated with extended work lives, particularly in post-career employment.

879. Michael D. Grubb, Boston College, "" (05/2015; PDF)

Abstract: This paper provides a succinct overview of three primary branches of the industrial organization literature with behavioral consumers. The literature is organized according to whether consumers: (1) have non-standard preferences, (2) are overconfident or otherwise biased such that they systematically misweight different dimensions of price and other product attributes, or (3) fail to choose the best price due to suboptimal search, confusion comparing prices, or excessive inertia. The importance of consumer heterogeneity and equilibrium effects is also highlighted along with recent empirical work.

878. Michael D. Grubb, Boston College, "" (05/2015; PDF)

Abstract: Both the "law of one price" and Bertrand's (1883) prediction of marginal cost pricing for homogeneous goods rest on the assumption that consumers will choose the best price. In practice, consumers often fail to choose the best price because they search too little, become confused comparing prices, and then show excessive inertia through too little switching away from past choices or default options. This is particularly true when price is a vector rather than a scalar, and consumers have limited experience in the relevant market. All three mistakes may contribute to positive markups that fail to diminish as the number of competing sellers increases. Firms may have an incentive to exacerbate these problems by obfuscating prices, thereby using complexity to make price comparisons difficult and soften competition. Possible regulatory interventions include simplifying the choice environment, for instance by restricting price to be a scalar, advising consumers of their expected costs under each option, or choosing on behalf of consumers.

877. , "" (03/2015; PDF)

876. Christopher F. Baum (Boston College), Hans Lööf (Royal Institute of Technology, Stockholm), Pardis Nabavi (Royal Institute of Technology, Stockholm), and Andreas Stephan (Jönköping International Business School), "" (05/2015; PDF; forthcoming, Economics of Innovation and New Technology)

Abstract: We evaluate a Generalized Structural Equation Model (GSEM) approach to the estimation of the relationship between R&D, innovation and productivity that focuses on the potentially crucial heterogeneity across technology and knowledge levels. The model accounts for selectivity and handles the endogeneity of this relationship in a recursive framework. Employing a panel of Swedish firms observed in three consecutive Community Innovation Surveys, our maximum likelihood estimates show that many key channels of influence among the model's components differ meaningfully in their statistical significance and magnitude across sectors defined by different technology levels.

875. Scott Fulford, Ivan Petkov and Fabio Schiantarelli, "Does It Matter Where You Came From? Ancestry Composition and Economic Performance of U.S. Counties, 1850-2010" (rev. 04/2020; PDF) - Appendix (PDF).

Abstract: What impact on local development do immigrants and their descendants have in the short and long term? The answer depends on the attributes they bring with them, what they pass on to their children, and how they interact with other groups. We develop the first measures of the country-of-ancestry composition and of GDP per worker for US counties from 1850 to 2010. We show that changes in ancestry composition are associated with changes in local economic development. We use the long panel and several instrumental variables strategies in an effort to assess different ancestry groups' effect on county GDP per worker. Groups from countries with higher economic development, with cultural traits that favor cooperation, and with a long history of a centralized state have a greater positive impact on county GDP per worker. Ancestry diversity is positively related to county GDP per worker, while diversity in origin-country economic development or culture is negatively related.

874. Thomas Gilbert (University of Washington), Chiara Scotti (Federal Reserve System, Board of Governors), and Clara Vega (Federal Reserve System, Board of Governors), "" (02/2015; PDF)

Abstract: The literature documents a heterogeneous asset price response to macroeconomic news announcements: Some announcements have a strong impact on asset prices and others do not. In order to explain these differences, we estimate a novel measure of the intrinsic value of a macroeconomic announcement, which we define as the announcement's ability to nowcast GDP growth, inflation, and the Federal Funds Target Rate. Using the same nowcasting framework, we then decompose this intrinsic value into the announcement's characteristics: its relation to fundamentals, timing, and revision noise. We find that in the 1998-2013 period, a significant fraction of the variation in the announcements' price impact on the Treasury bond futures market can be explained by differences in intrinsic value. Furthermore, our novel measure of timing explains significantly more of this variation than the announcements' relation to fundamentals, reporting lag (which previous studies have used as a measure of timing), or revision noise.

873. Ryan Chahrour (Boston College) and Gaetano Gaballo (Banque de France, Monetary Policy Division), "" (02/2015; PDF)

Abstract: We show that non-trivial aggregate fluctuations may originate with vanishingly-small common shocks to either information or fundamentals. These "sentiment" fluctuations can be driven by self-fulfilling variation in either first-order beliefs (as in Benhabib et al., 2015) or higher-order beliefs (as in Angeletos and La'O, 2013), due to an endogenous signal structure. We analyze out-of-equilibrium best-response functions in the underlying coordination game to study whether sentiment equilibria are stable outcomes of a convergent process. We find that limiting sentiment equilibria are generally unattainable under both higher-order belief and adaptive learning dynamics, whereas equilibria without sentiment shocks show strong stability properties. Away from the limit case, however, multiple noisy rational expectations equilibria may be stable.

872. Scott Duke Kominers (Harvard University) and , "" (12/2014; PDF)

Abstract: We introduce a two-sided, many-to-one matching with contracts model in which agents with unit demand match to branches that may have multiple slots available to accept contracts. Each slot has its own linear priority order over contracts; a branch chooses contracts by filling its slots sequentially, according to an order of precedence. We demonstrate that in these matching markets with slot-specific priorities, branches' choice functions may not satisfy the substitutability conditions typically crucial for matching with contracts. Despite this complication, we are able to show that stable outcomes exist in this framework and can be found by a cumulative offer mechanism that is strategy-proof and respects unambiguous improvements in priority.

871. Nejat Anbarci (Durham University), Ching-Jen Sun (Deakin University) and M. Utku 脺nver, "Designing Practical and Fair Sequential Team Contests" (Rev: 04/2021; PDF)

Abstract: Economists have long recognized that the effect of the order of actions in sequential contests on performance of the contestants is far from negligible. We model the tiebreak mechanisms, known as penalty shootouts, which have sequential move order and are used in several team-sports contests, as a practical dynamic mechanism-design problem. We characterize all order-independent mechanisms; in such mechanisms two balanced teams have equal chances to win the shootout whenever the score is tied after equal numbers of attempts, and hence move order has no relevance for winning chances. Using additional desirable properties, we uniquely characterize practical mechanisms. In most sports, such as football and hockey, the order in which teams take the penalties is fixed, known as ABAB, and a few high-level football competitions recently adopted the alternating-order variant mechanism, ABBA. Our results imply that these two and all other exogenous-order mechanisms, with one exception, are order dependent in regular rounds. Although ABBA is order independent in sudden-death rounds, ABAB fails there, too. Our theory supports empirical studies linking ABAB to unfair outcomes and multiple equilibria in terms of winning chances of the first- vs. second-kicking teams in different football traditions.

870. Marek Pycia (UCLA) and , "" (09/2014; PDF)

Abstract: Random mechanisms have been used in real-life situations for reasons such as fairness. Voting and matching are two examples of such situations. We investigate whether desirable properties of a random mechanism survive decomposition of the mechanism as a lottery over deterministic mechanisms that also hold such properties. To this end, we represent properties of mechanisms--such as ordinal strategy-proofness or individual rationality--using linear constraints. Using the theory of totally unimodular matrices from combinatorial integer programming, we show that total unimodularity is a sufficient condition for the decomposability of linear constraints on random mechanisms. As two illustrative examples, we show that individual rationality is totally unimodular in general, and that strategy-proofness is totally unimodular in some individual choice models. However, strategy-proofness, unanimity, and feasibility together are not totally unimodular in collective choice environments in general. We thus introduce a direct constructive approach for such problems. Using this approach, we prove that feasibility, strategy-proofness, and unanimity, with and without anonymity, are decomposable on non-dictatorial single-peaked voting domains.
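
To fix ideas, total unimodularity requires every square submatrix of the constraint matrix to have determinant in {-1, 0, 1}. The Python sketch below is our own illustration, not code from the paper, and the function name is hypothetical; it checks the property by brute force, which is feasible only for small constraint matrices.

    # Illustrative brute-force check of total unimodularity: every square
    # submatrix must have determinant in {-1, 0, 1}.
    from itertools import combinations
    import numpy as np

    def is_totally_unimodular(A, tol=1e-9):
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        for k in range(1, min(m, n) + 1):
            for rows in combinations(range(m), k):
                for cols in combinations(range(n), k):
                    d = np.linalg.det(A[np.ix_(rows, cols)])
                    if min(abs(d), abs(d - 1), abs(d + 1)) > tol:
                        return False
        return True

    # A 0-1 matrix with consecutive ones in each row (an interval matrix) is TU:
    print(is_totally_unimodular(np.array([[1, 1, 0], [0, 1, 1]])))  # True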

869. , Bettina Siflinger (University of Mannheim) and Joachim Winter (University of Munich), "" (01/2015; PDF)

Abstract: Distortions in the elicitation of economic variables arise frequently. A common problem in household surveys is that reported values exhibit a significant degree of rounding. We interpret rounding as a filter that allows limited information about the relationship of interest to pass. We argue that rounding is an active decision of the survey respondent, and propose a general structural model that helps to explain some of the typical distortions that arise out of this active decision. Specifically, we assume that individuals have limited ability to acquire, process, and recall information, and that rational individuals aim at using the scarce resources they devote to a survey in an optimal fashion. This model implies selection and places some structure on the selection equation. We use the formal model to correct for some of the distorting effects of rounding on the relationship of interest, using all the data available. Finally, we show how the concepts developed in this paper can be applied in consumer demand analysis by exploiting a controlled survey experiment, and obtain plausible results.

868. and , "" (01/2015; PDF)

Abstract: Within the last decade kidney exchange has become a mainstream paradigm to increase the number of kidney transplants. However, compatible pairs do not participate, and the full benefit from exchange can be realized only if they do. In this paper, we propose a new incentive scheme that relies on incentivizing the participation of compatible pairs in exchange via insurance for the patient against a future renal failure. Efficiency and equity analyses of this scheme are conducted and compared with the efficiency and equity outcomes of live donation and living donor organ exchange. We also present the potential role of such an incentive scheme in strengthening the national kidney exchange system.

867. , Sanjay Chugh, Ohio State University, and Tristan Potter, Drexel University, "" (rev. 12/2016: PDF)

Abstract: We estimate a real business cycle economy with search frictions in the labor market in which the latent wage follows a non-structural ARMA process. The estimated model does an excellent job matching a broad set of quantity data and wage indicators. Under the estimated process, wages respond immediately to shocks but converge slowly to their long-run levels, inducing substantial variation in labor's share of surplus. These results are not consistent with either a rigid real wage or flexible Nash bargaining. Despite inducing a strong endogenous response of wages, neutral shocks to productivity account for the vast majority of aggregate fluctuations in the economy, including labor market variables.

866. Ozge Akinci (Board of Governors of the Federal Reserve System) and , "" (12/2014: PDF)

Abstract: We show that a model with imperfectly forecastable changes in future productivity and an occasionally-binding collateral constraint can match a set of stylized facts about Sudden Stop events. "Good" news about future productivity raises leverage during times of expansions, increasing the probability that the constraint binds, and a Sudden Stop occurs, in future periods. During the Sudden Stop, the nonlinear effects of the constraint induce output, consumption and investment to fall substantially below trend, as they do in the data. Also consistent with data, the economy exhibits a boom period prior to the Sudden Stop, with output, consumption, and investment all above trend.

865. Michael Pesko (Cornell University) and , "" (02/2016: PDF)

Abstract: We use single equation and simultaneous instrumental variable models to explore if individuals smoke during times of stress (i.e., motivation effect) and if they are successful in self-medicating short-term stress (i.e., self-medication effect). Short-term stress is a powerful motivator of smoking, and the decision to smoke could trigger biological feedback that immediately reduces short-term stress. This feedback confounds estimates of the relationship between stress and smoking. Omitted variables, such as genetic or social factors, could also suggest a spurious correlation. We use data on self-reported smoking and stress from 240,388 current and former smokers. We instrument stress with temporal distance from September 11, 2001 (using date of interview). We instrument smoking with cigarette accessibility variables of cigarette price changes and distance to state borders. In the absence of accounting for feedback and other forms of endogeneity, we find that smoking is associated with increases in short-term stress. This is the opposite of our theoretical prediction for self-medication. However, when we account for endogeneity we find no evidence of smoking affecting short-term stress. We do find a consistent positive effect of stress on smoking.

864. Christopher T. Conlon (Columbia University) and , "Empirical Properties of Diversion Ratios" (11/2013 - rev. 01/2019; PDF)

Abstract: The 2010 Department of Justice and Federal Trade Commission Horizontal Merger Guidelines lay out a new standard for assessing proposed mergers in markets with differentiated products. This new standard is based on a measure of "upward pricing pressure" (UPP), and the calculation of a "gross upward pricing pressure index" (GUPPI) in turn relies on a "diversion ratio," which measures the fraction of consumers of one product that switch to another product when the price of the first product increases. One way to calculate a diversion ratio is to estimate own- and cross-price elasticities. An alternative (and more direct) way to gain insight into diversion is to exogenously remove a product from the market and observe the set of products to which consumers actually switch. In the past, economists have rarely had the ability to experiment in this way, but more recently, the growth of digital and online markets, combined with enhanced IT, has improved our ability to conduct such experiments. In this paper, we analyze the snack food market, in which mergers and acquisitions have been especially active in recent years. We exogenously remove six top-selling products (either singly or in pairs) from vending machines and analyze subsequent changes in consumers' purchasing patterns, firm profits, diversion ratios, and upward pricing pressure. Using both nonparametric analyses and structural demand estimation, we find significant diversion to remaining products. Both diversion and the implied upward pricing pressure differ significantly across manufacturers, and we identify cases in which the GUPPI would imply increased regulatory scrutiny of a proposed merger.
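
For reference, the diversion ratio and GUPPI mentioned above are conventionally defined as follows (standard merger-screening notation; not necessarily the exact expressions used in the paper). With q_j the demand for product j, p_j its price, and c_k the marginal cost of product k,

    D_{jk} = \frac{\partial q_k / \partial p_j}{-\,\partial q_j / \partial p_j},
    \qquad
    \mathrm{GUPPI}_j = D_{jk}\,\frac{p_k - c_k}{p_j},

so the GUPPI for product j equals the diversion ratio to product k times k's margin, scaled by relative prices. The experimental removals described above estimate D_{jk} directly from observed switching rather than from estimated elasticities.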

863. Christopher T. Conlon (Columbia University) and , "Efficiency and Foreclosure Effects of Vertical Rebates: Empirical Evidence" (06/2014 - rev. 06/2017; PDF)

Abstract: Vertical rebates are prominently used across a wide range of industries. These contracts may induce greater retail effort, but may also prompt retailers to drop competing products. We study these offsetting efficiency and foreclosure effects empirically, using data from one retailer. Using a field experiment, we show how the rebate allocates the cost of effort between manufacturer and retailer. We estimate models of consumer choice and retailer behavior to quantify the rebate's effect on assortment and retailer effort. We find that the rebate increases industry profitability and consumer utility, but fails to maximize social surplus and leads to upstream foreclosure.

862. , "" (08/2014: PDF)

Abstract: While it is common to use income uncertainty to explain household saving decisions, there is much disagreement about the importance of precautionary saving. This paper suggests that income uncertainty is not an important motive for saving, although households do have other precautionary reasons to save. Using a question from the Survey of Consumer Finances that asks how much households want for precautionary purposes, this paper shows that expressed household preferences, and liquid savings, are much lower than predicted by standard modeling assumptions. Households rarely list unemployment as a reason to save. Perceived income uncertainty does not affect liquid savings or precautionary preferences. Neither does being in an occupation with higher income volatility. Instead, households seem very concerned with expenditure shocks.

861. Daniel Gutknecht (Oxford University), and Michael Peters (Yale University), "" (10/2014, pdf)

Abstract: Do individuals use all information at their disposal when forming expectations about future events? In this paper we present an econometric framework to answer this question. We show how individual information sets can be characterized by simple nonparametric exclusion restrictions and provide a quantile based test for costly information processing. In particular, our methodology does not require individuals' expectations to be rational, and we explicitly allow for individuals to have access to sources of information which the econometrician cannot observe. As an application, we use microdata on individual income expectations to study the information agents employ when forecasting future earnings. Consistent with models where information processing is costly, we find that individuals' information sets are coarse in that valuable information is discarded. To quantify the utility costs, we calibrate a standard consumption life-cycle model. Consumers would be willing to pay 0.04% of their permanent income to incorporate the econometrician's information set in their forecasts. This represents a lower bound on the costs of information processing.

860. and Paola Zerilli (University of York), "" (10/2014, pdf; forthcoming, Energy Economics)

Abstract: We evaluate alternative models of the volatility of commodity futures prices based on high-frequency intraday data from the crude oil futures markets for the October 2001-December 2012 period. These models are implemented with a simple GMM estimator that matches sample moments of the realized volatility to the corresponding population moments of the integrated volatility. Models incorporating both stochastic volatility and jumps in the returns series are compared on the basis of the overall fit of the data over the full sample period and subsamples. We also find that jumps in the returns series add to the accuracy of volatility forecasts.
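
As a minimal illustration of the data-side ingredient of this estimator (our sketch with hypothetical variable names; the model-implied moments and the GMM weighting are not reproduced), daily realized variance is the sum of squared intraday log returns:

    # Sketch: daily realized variance from intraday futures prices.
    # Assumes `px` is a pandas Series of intraday prices with a DatetimeIndex.
    import numpy as np
    import pandas as pd

    def realized_variance(px: pd.Series) -> pd.Series:
        r = np.log(px).diff().dropna()               # intraday log returns
        return (r ** 2).groupby(r.index.date).sum()  # one observation per day

    # Sample moments of the resulting daily series (mean, variance,
    # autocovariances) would then be matched by GMM to the population moments
    # of integrated variance implied by the candidate stochastic-volatility
    # and jump models.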

859. , Geoffrey Sanzenbacher (Boston College), Shannon Seitz (Analysis Group) and Meghan Skira (University of Georgia), "" (07/2014, pdf)

Abstract: Why do some men father children outside of marriage but not provide support? Why are some single women willing to have children outside of marriage when they receive little or no support from unmarried fathers? Why is this behavior especially common among blacks? To shed light on these questions, we develop and estimate a dynamic equilibrium model of marriage, employment, fertility, and child support. We consider the extent to which low earnings and a shortage of single men relative to single women among blacks can explain the prevalence of deadbeat dads and non-marital childbearing. We estimate the model by indirect inference using data from the National Longitudinal Survey of Youth 1979. We simulate three distinct counterfactual policy environments: perfect child support enforcement, eliminating the black-white earnings gap, and equalizing black-white population supplies (and therefore gender ratios). We find perfect enforcement reduces non-marital childbearing dramatically, particularly among blacks; over time it translates into many fewer couples living with children from past relationships, and therefore less deadbeat fatherhood. Eliminating the black-white earnings gap reduces the marriage rate difference between blacks and whites by 29 to 43 percent; black child poverty rates fall by nearly 40 percent. Finally equalizing gender ratios has little effect on racial differences in marriage and fertility.

858. Vincent W. Slaugh (Carnegie-Mellon University), Mustafa Akan (Carnegie-Mellon University), Onur Kesten (Carnegie-Mellon University) and , "" (08/2014, pdf)

Abstract: The Pennsylvania Adoption Exchange (PAE) helps case workers representing children in state custody by recommending prospective families for adoption. We describe PAE's operational challenges using case worker surveys and a regression analysis of data on child outcomes over multiple years. Using a discrete-event simulation of PAE, we justify the value of a statewide adoption network and demonstrate the importance of the quality of families' preference information for the percentage of children who successfully find adoptive placements. Finally, we detail a series of simple improvements implemented by PAE to increase the adoptive placement rate through collecting more valuable information, improving the family ranking algorithm, and aligning incentives for families to provide useful preference information.

857. and Adam Rohde (Charles River Associates), "" (rev. 08/2015, pdf)

Abstract: This paper argues that individuals may rationally weight price increases for food and energy products differently from their expenditure shares when forming expectations about price inflation. We develop a simple dynamic model of the economy with gradual price adjustment in the core sector and flexible prices in the food and energy sectors. Serial correlation of supply shocks to food and energy allows individuals to gain an understanding about future shocks, possibly making it optimal for individuals to place more weight on the movement of prices in these sectors. We use survey data on expected inflation to show that the weights implied by the model differ from the expenditure shares of food and energy prices in the CPI for the United States. We find food price inflation is weighted more heavily and energy price inflation is weighted less heavily. But importantly, we cannot reject the hypothesis that these weights reflect rational behavior in forming expectations about inflation.
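
Schematically (our notation, not the paper's exact specification), the weights in question enter an expectations equation of the form

    \pi^{e}_{t} = w_{c}\,\pi^{core}_{t} + w_{f}\,\pi^{food}_{t} + w_{e}\,\pi^{energy}_{t} + \varepsilon_{t},

with the estimated w_f and w_e compared against the CPI expenditure shares of food and energy; the finding above is that w_f exceeds its expenditure share while w_e falls short of its share.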

856. Xiaohuan Lan (Cheung Kong GSB) and , "" (08/2014; pdf)

Abstract: This paper provides an economic framework for examining how economic openness affects nationalism. Within a country, a region's level of nationalism varies according to its economic interests in its domestic market relative to its foreign market. A region's nationalism is strongest if the optimal size of its domestic market equals the size of its country. All else being equal, increasing a region's foreign trade reduces its economic interests in its domestic market and thus weakens its nationalism. This prediction holds both cross-sectionally and over time, as evidenced by our empirical study using the Chinese Political Compass data and the World Value Surveys. Our framework also applies to analysis of nationalism across countries and receives support from cross-country data.

855. Luis Castro (Universidad Privada Boliviana), , Keith Maskus (University of Colorado at Boulder) and Yiqing Xie (Fudan University), "" (08/2014; pdf)

Abstract: This paper provides a direct test of how fixed export costs and productivity jointly determine firm-level export behavior. Using Chilean data, we construct indices of fixed export costs for each industry-region-year triplet and match them to domestic firms. Our empirical results show that firms facing higher fixed export costs are less likely to export, while those with higher productivity export more. These outcomes are the foundation of the widely-used sorting mechanism in trade models with firm heterogeneity. A particularly novel finding is that high-productivity nonexporters face greater fixed export costs than low-productivity exporters. We also find that the substitution between fixed export costs and productivity in determining export decisions is weaker for firms with higher productivity. Finally, both larger fixed export costs and greater within-triplet productivity dispersion raise the export volume of the average exporter.

854. and Justin Svec (College of the Holy Cross), "" (04/2014: PDF; forthcoming, Journal of Macroeconomics)

Abstract: This paper analyzes the impact of consumer uncertainty on optimal fiscal policy in a model with capital. The consumers lack confidence about the probability model that characterizes the stochastic environment and so apply a max-min operator to their optimization problem. An altruistic fiscal authority does not face this Knightian uncertainty. We show analytically that, in responding to consumer uncertainty, the government no longer sets the expected capital tax rate exactly equal to zero, as is the case in the full-confidence benchmark model. Rather, our numerical results indicate that the government chooses to subsidize capital income, albeit at a modest rate. We also show that the government responds to consumer uncertainty by smoothing the labor tax across states and by making the labor tax persistent.

853. Francesco Giavazzi (Bocconi University), Ivan Petkov (Boston College) and , "Culture: Persistence and Evolution" (rev. 08/2016: PDF)

Abstract: This paper documents the speed of evolution (or lack thereof) of a range of values and beliefs of different generations of US immigrants, and interprets the evidence in the light of a model of socialization and identity choice. Convergence to the norm differs greatly across cultural attitudes. Moreover, results obtained studying higher generation immigrants differ from those found when the analysis is limited to the second generation and imply a lesser degree of persistence than previously thought. Persistence is also culture specific, in the sense that the country of origin of one's ancestors matters for the pattern of generational convergence.

852. , "" (02/2014: PDF)

Abstract: This paper develops an affine model of the term structure of interest rates in which bond yields are driven by observable and unobservable macroeconomic factors. It imposes restrictions to identify the effects of monetary policy and other structural disturbances on output, inflation, and interest rates and to decompose movements in long-term rates into terms attributable to changing expected future short rates versus risk premia. The estimated model highlights a broad range of channels through which monetary policy affects risk premia and the economy, risk premia affect monetary policy and the economy, and the economy affects monetary policy and risk premia.

851. Maria Arbatskaya (Emory University) and , "" (rev. 03/2016)

Abstract: In many industries, firms reward their customers for making referrals. We analyze a monopoly's optimal policy mix of price, advertising intensity, and referral fee when buyers choose to what extent to refer other consumers to the firm. When the referral fee can be optimally set by the firm, it will charge the standard monopoly price. The firm always advertises less when it uses referrals. We extend the analysis to the case where consumers can target their referrals. In particular, we show that referral targeting could be detrimental for consumers in a low-valuation group.

850. Maria Arbatskaya (Emory University) and , "" (01/2014; forthcoming, Review of Network Economics)

Abstract: We consider the optimal pricing and referral strategy of a monopoly that uses a simple consumer communication network (a chain) to spread product information. The first-best policy with fully discriminatory position-based referral fees involves standard monopoly pricing and referral fees that provide consumers with strictly positive referral incentives. Effective price discrimination among consumers based on their positions in the chain occurs in both the first-best solution and the second-best solution (with a common referral fee).

849. Taiji Furusawa (Hitotsubashi University) and , "" (10/2016)

Abstract: We propose a simple theory that shows a mechanism through which international trade entails wage and job polarization. We consider two countries in which individuals with different abilities work either as knowledge workers, who develop differentiated products, or as production workers, who engage in production. In equilibrium, ex ante symmetric firms attract knowledge workers with different abilities, which creates firm heterogeneity in product quality. Market integration disproportionately benefits firms that produce high-quality products. This winner-take-all trend of product markets causes a war for talent, which exacerbates income inequality within the countries and leads to labor-market polarization.

848. Katsuya Kobayashi (Hosei University) and , "" (rev. 06/2015)

Abstract: This paper proposes a model of two-party representative democracy on a single-dimensional political space, in which voters choose their parties in order to influence the parties' choices of representative. After two candidates are selected as the median of each party's support group, Nature determines the candidates' relative likeability (valence). Based on the candidates' political positions and relative likeability, voters vote for the preferable candidate without being tied to their party's choice. We show that (1) there exists a nontrivial equilibrium under natural conditions, and that (2) the equilibrium party border and the ex ante probabilities of the two parties' candidates winning are sensitive to the distribution of voters. In particular, we show that if a party has a more concentrated subgroup, then the party tends to alienate its centrally located voters, and the party's probability of winning the final election is reduced. Even if voter distribution is symmetric, an extremist party (from either side) can emerge as voters become more politically divided.

847. and Çağlar Yurtseven (Bahçeşehir University), "?" (10/2013; published, Japan and the World Economy, 29, 36-45, 2014)

Abstract: In the 1950s and 60s, Japanese and US antitrust authorities occasionally used the degree of concentration to regulate industries. Does regulating firms based on their market shares make theoretical sense? We set up a simple duopoly model with stochastic R&D activities to evaluate market share regulation policy. On the one hand, market share regulation discourages the larger company's R&D investment and causes economic inefficiency. On the other hand, it facilitates the smaller company's survival and prevents the larger company from monopolizing the market. We show that consumers tend to benefit from market share regulation. However, social welfare, which includes firms' profits, would be hurt if both firms are equally good at R&D innovation. Nonetheless, if the smaller firm can make innovations more efficiently, then protecting smaller firms through market share regulation can improve social welfare. We relate our analysis to a case study of Asahi Brewery's introduction of Asahi Super Dry, which made it the top market share company in the industry.

846. and Christian Merkl (Friedrich-Alexander-University Erlangen-Nuremberg), "" (04/2013: PDF; forthcoming, International Economic Review)

Abstract: We characterize efficient allocations and cyclical fluctuations in a labor selection model. Potential new hires are heterogeneous in the cross-section in their training costs. In a calibrated version of the model that identifies costly selection with micro-level data on training costs, efficient fluctuations feature highly volatile unemployment and hiring rates, in line with empirical evidence. We show analytically in a partial equilibrium version of the model that volatility arises from selection effects, rather than general equilibrium effects.

845. and Fabio Ghironi (University of Washington), "" (07/2012: PDF)

Abstract: We study Ramsey-optimal fiscal policy in an economy in which product varieties are the result of forward-looking investment decisions by firms. There are two main results. First, depending on the particular form of variety aggregation in preferences, firms' dividend payments may be either subsidized or taxed in the long run. This policy balances monopoly incentives for product creation with consumers' welfare benefit of product variety. In the most empirically relevant form of variety aggregation, socially efficient outcomes entail a substantial tax on dividend income, removing the incentive for over-accumulation of capital, which takes the form of variety. Second, optimal policy induces dramatically smaller, but efficient, fluctuations of both capital and labor markets than in a calibrated exogenous policy. Decentralization requires zero intertemporal distortions and constant static distortions over the cycle. The results relate to Ramsey theory, which we show by developing welfare-relevant concepts of efficiency that take into account product creation.

844. , "" (02/2013: PDF)

Abstract: I characterize cyclical fluctuations in the cross-sectional dispersion of firm-level productivity in the U.S. manufacturing sector. Using the estimated dispersion, or "risk," stochastic process as an input to a baseline DSGE financial accelerator model, I assess how well the model reproduces aggregate cyclical movements in the financial conditions of U.S. non-financial firms. In the model, risk shocks calibrated to micro data induce large and empirically-relevant fluctuations in leverage, a financial measure typically thought to be closely associated with real activity. In terms of aggregate quantities, however, pure risk shocks account for only a small share of GDP fluctuations in the model, less than one percent. Instead, it is standard aggregate productivity shocks that explain virtually all of the model's real fluctuations. These results reveal a dichotomy at the core of a popular class of DSGE financial frictions models: risk shocks induce large financial fluctuations, but have little effect on aggregate quantity fluctuations.

843. David Dillenberger (University of Pennsylvania) and , "" (07/2016: PDF)

Abstract: We study the attitude of decision makers to skewed noise. For a binary lottery that yields the better outcome with probability p, we identify noise around p with a compound lottery that induces a distribution over the exact value of the probability and has an average value p. We propose and characterize a new notion of skewed distributions, and use a recursive non-expected utility to provide conditions under which rejection of symmetric noise implies rejection of negatively skewed noise, yet does not preclude acceptance of some positively skewed noise, in agreement with recent experimental evidence. In the context of decision making under uncertainty, our model permits the co-existence of aversion to symmetric ambiguity (as in Ellsberg's paradox) and ambiguity seeking for low likelihood "good" events.

842. David Arseneau (Federal Reserve Board), , and Alan Finkelstein Shapiro (Universidad de los Andes), "" (11/2013, PDF; Journal of Money, Credit and Banking, 2015, 47: 617-672)

Abstract: This paper presents a model in which some goods trade in "customer markets." In these markets, advertising plays a critical role in facilitating long-lived relationships. We estimate both policy and non-policy parameters of the model (which includes New-Keynesian frictions) on U.S. data, including advertising expenditures. The estimated parameters imply a large congestion externality in the pricing of customer market goods. This pricing inefficiency motivates the analysis of optimal policy. When the planner has access to a complete set of taxes and chooses them optimally, fiscal policy eliminates the externalities with large adjustments in the tax rates that operate directly in customer markets; labor tax volatility remains low. If available policy instruments are constrained to the interest rate and labor tax, then the latter displays large and procyclical fluctuations, while the implications for monetary policy are largely unchanged from the model with no customer markets.

841. , Margarita Karpava (MediaCom London), Dorothea Schäfer (DIW Berlin) and Andreas Stephan (Jönköping International Business School), "" (rev. 01/2014, PDF)

Abstract: This paper studies the impact of credit rating agency (CRA) downgrade announcements on the value of the Euro and the yields of French, Italian, German and Spanish long-term sovereign bonds during the culmination of the Eurozone debt crisis in 2011-2012. The employed GARCH models show that CRA downgrade announcements negatively affected the value of the Euro currency and also increased its volatility. Downgrades increased the yields of French, Italian and Spanish bonds but lowered the German bond's yields, even though Germany's own rating was never downgraded by the CRAs. There is no evidence for Granger causality from bond yields to rating announcements. We infer from these findings that CRA announcements significantly influenced crisis-time capital allocation in the Eurozone. Their downgrades caused investors to rebalance their portfolios across member countries, out of ailing states' debt into more stable borrowers' securities.
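
A minimal sketch of a model in this family, assuming a hypothetical pandas Series `returns` of daily euro log returns (the paper's exact GARCH specifications, announcement regressors, and data are not reproduced here):

    # Baseline GARCH(1,1) with Student-t errors for daily euro log returns;
    # the paper's specifications additionally condition on CRA downgrade
    # announcement indicators, which are omitted from this sketch.
    from arch import arch_model

    res = arch_model(returns, vol='GARCH', p=1, q=1, dist='t').fit(disp='off')
    print(res.summary())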

840. and Yuya Sasaki (Johns Hopkins University), "" (08/2013; PDF)

Abstract: This paper introduces average treatment effects conditional on the outcome variable in an endogenous setup where outcome Y, treatment X and instrument Z are continuous. These objects allow us to refine well-studied treatment effects like the ATE and ATT in the case of continuous treatment (see Florens et al (2008)), by breaking them up according to the rank of the outcome distribution. For instance, in the returns to schooling case, the outcome-conditioned average treatment effect on the treated (ATTO) gives the average effect of a small increase in schooling on the subpopulation characterized by a certain treatment intensity, say 16 years of schooling, and a certain rank in the wage distribution. We show that IV type approaches are better suited to identify overall averages across the population like the average partial effect, or outcome-conditioned versions thereof, while selection type methods are better suited to identify the ATT or ATTO. Importantly, none of the identification relies on rectangular support of the errors in the identification equation. Finally, we apply all concepts to analyze the nonlinear heterogeneous effects of smoking during pregnancy on infant birth weight.

839. Xavier D'Haultfoeuille (CREST), and Yuya Sasaki (Johns Hopkins University), "" (08/2013; PDF)

Abstract: This paper studies the identification of nonseparable models with continuous, endogenous regressors, also called treatments, using repeated cross sections. We show that several treatment effect parameters are identified under two assumptions on the effect of time, namely a weak stationarity condition on the distribution of unobservables, and time variation in the distribution of endogenous regressors. Other treatment effect parameters are set identified under curvature conditions, but without any functional form restrictions. This result is related to the difference-in-differences idea, but imposes neither additive time effects nor exogenously defined control groups. Furthermore, we investigate two extrapolation strategies that allow us to point identify the entire model: using monotonicity of the error term, or imposing a linear correlated random coefficient structure. Finally, we illustrate our results by studying the effect of mother's age on infants' birth weight.

838. Eric Gautier and , "" (rev. 09/2015; PDF)

Abstract: This paper considers treatment effects under endogeneity with complex heterogeneity in the selection equation. We model the outcome of an endogenous treatment as a triangular system, where both the outcome and first-stage equations consist of a random coefficients model. The first-stage specifically allows for nonmonotone selection into treatment. We provide conditions under which marginal distributions of potential outcomes, average and quantile treatment effects, all conditional on first-stage random coefficients, are identified. Under the same conditions, we derive bounds on the (conditional) joint distributions of potential outcomes and gains from treatment, and provide additional conditions for their point identification. All conditional quantities yield unconditional effects (e.g., the average treatment effect) by weighted integration.

837. and Robert Sherman, "" (07/2012; PDF)

Abstract: We study identification and estimation in a binary response model with random coefficients B allowed to be correlated with regressors X. Our objective is to identify the mean of the distribution of B and estimate a trimmed mean of this distribution. Like Imbens and Newey (2009), we use instruments Z and a control vector V to make X independent of B given V. A consequent conditional median restriction helps identify the mean of B given V. Averaging over V identifies the mean of B. This leads to an analogous localize-then-average approach to estimation. We estimate conditional means with localized smooth maximum score estimators and average to obtain a root-n-consistent and asymptotically normal estimator of a trimmed mean of the distribution of B. Under the conditional median restrictions, the procedure can be adapted to produce a root-n-consistent and asymptotically normal estimator of the nonrandom regression coefficients in the models of Manski (1975,1985) and Horowitz (1992). We explore small sample performance through simulations, and present an application.

836. Holger Dette (University of Bochum), and Natalie Neumeyer (University of Hamburg), "" (09/2013; PDF)

Abstract: This paper is concerned with testing rationality restrictions using quantile regression methods. Specifically, we consider negative semidefiniteness of the Slutsky matrix, arguably the core restriction implied by utility maximization. We consider a heterogeneous population characterized by a system of nonseparable structural equations with infinite dimensional unobservable. To analyze this economic restriction, we employ quantile regression methods because they allow us to utilize the entire distribution of the data. Difficulties arise because the restriction involves several equations, while the quantile is a univariate concept. We establish that we may test the economic restriction by considering quantiles of linear combinations of the dependent variable. For this hypothesis we develop a new empirical process based test that applies kernel quantile estimators, and derive its large sample behavior. We investigate the performance of the test in a simulation study. Finally, we apply all concepts to Canadian microdata, and show that rationality is not rejected.
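
For reference, the restriction being tested is negative semidefiniteness of the Slutsky substitution matrix; in standard notation (ours, not necessarily the paper's), with Marshallian demands x(p, y),

    S_{ij}(p, y) = \frac{\partial x_i(p, y)}{\partial p_j} + x_j(p, y)\,\frac{\partial x_i(p, y)}{\partial y},
    \qquad
    v' S(p, y)\, v \le 0 \quad \text{for all } v,

and the test works with quantiles of linear combinations of the dependent variables, as described above.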

835. Fabian Dunker (University of Goettingen), and Hiroaki Kaido (Brown University), "" (03/2013; PDF)

Abstract: Individual players in a simultaneous equation binary choice model act differently in different environments in ways that are frequently not captured by observables and a simple additive random error. This paper proposes a random coefficient specification to capture this type of heterogeneity in behavior, and discusses nonparametric identification and estimation of the distribution of random coefficients. We establish nonparametric point identification of the joint distribution of all random coefficients, except those on the interaction effects, provided the players behave competitively in all markets. Moreover, we establish set identification of the density of the coefficients on the interaction effects, and provide additional conditions that allow us to point identify this density. Since our identification strategy is constructive throughout, it allows us to construct sample counterpart estimators. We analyze their asymptotic behavior, and illustrate their finite sample behavior in a numerical study. Finally, we discuss several extensions, like the semiparametric case, or correlated random coefficients.

834. , Alexander Kurov (West Virginia University), and Marketa Halova Wolfe (Skidmore College), "?" (rev. 06/2015: PDF; forthcoming, Journal of International Money and Finance)

Abstract: We examine the effect of scheduled macroeconomic announcements made by China on world financial and commodity futures markets. All announcements related to Chinese manufacturing and industrial output move stock markets, energy and industrial commodities as well as commodity currencies. News about Chinese domestic consumption leaves most markets unaffected, suggesting that market participants view the announcements primarily as a signal of the state of the global economy rather than merely of China's domestic demand. The market response to unexpectedly strong output announcements is not consistent with investors being concerned about tightening of Chinese macroeconomic policy; instead, the world markets view strong Chinese output as a rising tide that lifts all boats.

833. , "" (11/2013: PDF)

Abstract: This paper examines the changing distribution of where women and girls live in India at the smallest scale possible: India's nearly 600,000 villages. The village level variation in the proportion female is far larger than the variation across districts. Decomposing the variance, I show that village India is becoming more homogeneous in its preferences for boys even as that preference becomes more pronounced. A consequence is that 70% of girls grow up in villages where they are the distinct minority. Most Indian women move on marriage, yet marriage migration has almost no gender equalizing influence. Further, by linking all villages across censuses, I show that most changes in village infrastructure are not related to changes in child gender. Gaining primary schools and increases in female literacy decrease the proportion of girls. The results suggest that there are no easy policy solutions for addressing the increasing masculinization of Indian society.

832. Manoj Atolia (Florida State University) and , "" (09/2013: PDF)

Abstract: What do firms learn from their interactions in markets, and what are the implications for aggregate dynamics? We address this question in a multi-sector real-business cycle model with a sparse input-output structure. In each sector, firms observe their own productivity, along with the prices of their inputs and the price of their output. We show that general equilibrium market-clearing conditions place heavy constraints on average expectations, and characterize a set of cases where average expectations (and average dynamics) are exactly those of the full-information model. This "aggregate irrelevance" of information can occur even when sectoral expectations and dynamics are quite different under partial information, and despite the fact that each sector represents a non-negligible portion of the overall economy. In numerical examples, we show that even when the conditions for aggregate irrelevance of information are not met, aggregate dynamics remain nearly identical to the full-information model under reasonable calibrations.

831. and , "" (rev. 04/2014: PDF)

Abstract: Pricing-to-market (PTM), the practice of differentiating the price of a good across markets, is commonly attributed to differential distribution and border costs. In this paper we show that some of this price differentiation is sustained by manufacturers selling different versions of an otherwise identical good in different markets. We study price differences across countries in the European car market, using a rich data set which includes detailed technical information on each car model. Relative car prices show no sign of convergence during the period 2003-2011. PTM is pervasive in this market: model-specific real exchange rates for mechanically identical cars differ significantly from unity. They also vary significantly across countries and, within countries, across car manufacturers. We find strong evidence that car manufacturers price discriminate by manipulating the menu of included car options and features available in each country, e.g. by including air conditioning as a standard feature as opposed to pricing it separately. We find that such bundling decisions sustain cross-country price differences of 10% and more.

830. Michael Belongia (University of Mississippi) and , "" (08/2013: PDF)

Abstract: Fifty years ago, Friedman and Schwartz presented evidence of pro-cyclical movements in the money stock, exhibiting a lead over corresponding movements in output, found in historical monetary statistics for the United States. Very similar relationships appear in more recent data. To see them clearly, however, one must use Divisia monetary aggregates in place of the Federal Reserve's official, simple-sum measures. One must also split the data sample to focus, separately, on episodes before and after 1984 and on a new episode of instability beginning in 2000. A structural VAR draws tight links between Divisia money and output during each of these three periods.
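
A minimal reduced-form sketch of the last step (our illustration with hypothetical column names; the paper's structural identification scheme and sample splits are not reproduced here):

    # Sketch: reduced-form VAR in output growth, inflation, Divisia money
    # growth, and a short-term interest rate; `df` is a hypothetical pandas
    # DataFrame of time-series observations.
    from statsmodels.tsa.api import VAR

    endog = df[['output_growth', 'inflation', 'divisia_growth', 'interest_rate']]
    res = VAR(endog).fit(maxlags=8, ic='aic')
    irf = res.irf(20)
    irf.plot(impulse='divisia_growth', response='output_growth')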

829. and Matthew Osborne (U.S. Bureau of Economic Analysis), "" (02/2012; PDF)

Abstract: By April 2013, the FCC's recent bill-shock agreement with cellular carriers requires consumers be notified when exceeding usage allowances. Will the agreement help or hurt consumers? To answer this question, we estimate a model of consumer plan choice, usage, and learning using a panel of cellular bills. Our model predicts that the agreement will lower average consumer welfare by $2 per year because firms will respond by raising monthly fees. Our approach is based on novel evidence that consumers are inattentive to past usage (meaning that bill-shock alerts are informative) and advances structural modeling of demand in situations where multipart tariffs induce marginal-price uncertainty. Additionally, our model estimates show that an average consumer underestimates both the mean and variance of future calling. These biases cost consumers $42 per year at existing prices. Moreover, absent bias, the bill-shock agreement would have little to no effect.

828. , "" (07/2012; PDF)

Abstract: For many goods and services, such as cellular-phone service and debit-card transactions, the price of the next unit of service depends on past usage. As a result, consumers who are inattentive to their past usage but are aware of contract terms may remain uncertain about the price of the next unit. I develop a model of inattentive consumption, derive equilibrium pricing when consumers are inattentive, and evaluate bill-shock regulation requiring firms to disclose information that substitutes for attention. When inattentive consumers are heterogeneous and unbiased, bill-shock regulation reduces social welfare in fairly-competitive markets, which may be the effect of the FCC's recent bill-shock agreement. If inattentive consumers underestimate their demand, however, then bill-shock regulation can lower market prices and protect consumers from exploitation. Hence the Federal Reserve's new opt-in rule for debit-card overdraft protection may substantially benefit consumers.

827. , "" (07/2013; PDF)

Abstract: Lower fertility can translate into a more male-biased sex ratio if son preference is persistent and technology for sex-selection is easily accessible. This paper investigates whether financial incentives can overcome this trade-off in the context of an Indian scheme, Devirupak, that seeks to decrease both fertility and the sex ratio at birth. First, I construct a model where the effects of incentives are determined by the strength of son preference, the cost of children, and the cost of sex-selection, relative to the size of incentives. Second, I create a woman-year panel dataset from retrospective birth histories and use variation in the composition of pre-existing children as well as the state and the year of program implementation to estimate its causal effect. Devirupak successfully lowers the number of children by 0.9 percent, but mainly through a 1.9 percent decrease in the number of daughters. Faced with a choice between a son and only daughters, couples choose a son despite lower monetary benefits, and thus the sex ratio at birth unintentionally increases. A subsidy worth 10 months of average household consumption expenditure is insufficient to induce parents to give up sons entirely. Instead, Devirupak increases the proportion of one-boy couples by 5 percent. Only the most financially disadvantaged groups exhibit an increase in the proportion of one-girl couples.

826. Marketa Halova Wolfe (Washington State University) and , "" (rev. 04/2014: PDF; published, Journal of Economic Education, 2014, 45:3, 191-210)

Abstract: We describe experiences from integrating a semester-long economic analysis project into an intermediate macroeconomic theory course. Students work in teams of "economic advisors" to write a series of nested reports for a decision-maker, analyzing the current economic situation, evaluating and proposing policies while responding to events during the semester in real-time. The project simulates real-world policy consulting with an emphasis on applying economic theory and models. We describe the project setup and how to tailor its theme to current events, explain methods for keeping it manageable in larger classes, and document student learning outcomes by survey results and report summaries. Besides improving the learning experience, this project equips economics students to contribute their own views to policy debates and buttress them with tight macroeconomic reasoning.

825. and Thomas Tao Yang, "" (07/2013, PDF; available)

Abstract: Assume individuals are treated if a latent variable, containing a continuous instrument, lies between two thresholds. We place no functional form restrictions on the latent errors. Here unconfoundedness does not hold and identification at infinity is not possible. Yet we still show nonparametric point identification of the average treatment effect. We provide an associated root-n consistent estimator. We apply our model to reinvestigate the inverted-U relationship between competition and innovation, estimating the impact of moderate competitiveness on innovation without the distributional assumptions required by previous analyses. We find no evidence of an inverted-U in US data.

824. Pierluigi Balduzzi (CSOM, Boston College), Emanuele Brancati (University of Rome, Tor Vergata) and , "" (rev. 08/2016; PDF)

Abstract: We test whether adverse changes to banks' market valuations during the financial and sovereign debt crises affected firms' real decisions. Using new data linking over 5,000 non-financial Italian firms to their bank(s), we find that increases in banks' CDS spreads, and decreases in their equity valuations, resulted in lower investment, employment, and bank debt for younger and smaller firms. These effects dominate those of banks' balance-sheet variables. Moreover, CDS spreads matter more than equity valuations. Finally, higher CDS spreads led to lower aggregate investment and employment, and to less efficient resource allocations, especially during the sovereign debt crisis.

823. , "" (rev. 01/2014; PDF; published, Journal of Macroeconomics, 2014, 40, 228-244)

Abstract: This paper considers whether the Phillips curve can explain the recent behavior of inflation in the United States. Standard formulations of the model predict that the ongoing large shortfall in economic activity relative to full employment should have led to deflation over the past several years. I confirm previous findings that the slope of the Phillips curve has varied over time and probably is lower today than it was several decades ago. This implies that estimates using historical data will overstate the responsiveness of inflation to present-day economic conditions. I modify the traditional Phillips curve to explicitly account for time variation in its slope and show how this modified model can explain the recent behavior of inflation without relying on anchored expectations. Specifically, I explore reasons why the slope might vary over time, focusing on implications of the sticky-price and sticky-information approaches to price adjustment. These implications suggest that the inflation environment and uncertainty about regional economic conditions should influence the slope of the Phillips curve. I introduce proxies to account for these effects and find that a Phillips curve modified to allow its slope to vary with uncertainty about regional economic conditions can best explain the recent path of inflation.
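
One schematic way to write the modification described here (our notation; the paper's exact functional form may differ) is a Phillips curve whose slope depends on the inflation environment and on uncertainty about regional conditions:

    \pi_t = \alpha + \beta\,\pi^{e}_t + \kappa_t\,(u_t - u^{n}_t) + \varepsilon_t,
    \qquad
    \kappa_t = \kappa_0 + \kappa_1\,\bar{\pi}_t + \kappa_2\,\sigma^{region}_t,

where \bar{\pi}_t proxies the inflation environment and \sigma^{region}_t proxies uncertainty about regional economic conditions.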

822. , Mustafa Caglayan (Heriot-Watt University), Abdul Rashid (International Islamic University, Islamabad), "?" (rev. 08/2016: PDF; forthcoming, Empirical Economics)

Abstract: We show that risk plays an important role in estimating the adjustment of the firm's capital structure. We find that the adjustment process is asymmetric and depends on the type of risk, its magnitude, the firm's current leverage, and its financial status. We also show that firms with financial surpluses and above-target leverage adjust their leverage more rapidly when firm-specific risk is low and when macroeconomic risk is high. Firms with financial deficits and below-target leverage adjust their capital structure more quickly when both types of risk are low. Our investigation suggests that models without risk factors yield biased results.
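
Schematically (our notation, not the paper's), the estimates concern a partial-adjustment specification of the form

    Lev_{i,t} - Lev_{i,t-1} = \lambda_{i,t}\,\big(Lev^{*}_{i,t} - Lev_{i,t-1}\big) + \varepsilon_{i,t},

where Lev^{*}_{i,t} is target leverage and the adjustment speed \lambda_{i,t} is allowed to vary with firm-specific and macroeconomic risk and with the firm's financial status (surplus or deficit, above- or below-target leverage).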

821. Hans Gersbach (ETH Zurich), Hans Haller (Virginia Tech University), , "" (rev. 12/2014)

Abstract: We explore whether stable matchings and trade in commodities can coexist. For this purpose, we consider competitive markets for multiple commodities with endogenous formation of one- or two-person households. Within each two-person household, each individual obtains utility from his/her own private consumption, from discrete actions such as job choice, from the partner's observable characteristics such as appearance and hobbies, from some of the partner's consumption vectors, and from the partner's action choices. We investigate competitive market outcomes with an endogenous household structure in which no individual and no man/woman-pair can deviate profitably. We find a set of sufficient conditions under which a stable matching equilibrium exists. We further establish the first welfare theorem for this economy.

820. , "" (rev. 06/2015: PDF)

Abstract: Two thirds of all Indian women, around 300 million women, have migrated for marriage, but little is known about this vast migration. This paper provides a detailed accounting of the puzzlingly large migration of Indian women and evaluates its causes. Contrary to conventional wisdom, marriage migration does not contribute to risk sharing. Nor is it driven by sex ratio imbalances. Instead, I introduce a simple model in which parents must search for a spouse for their daughter geographically. By adding geographical search frictions, the model helps rationalize the large regional differences.

819. , "" (12/2012: PDF; published, World Development, 59, 434-450, 2014)

Abstract: Despite the evidence for high returns to education at an individual level, large increases in education across the developing world have brought disappointing returns in aggregate. This paper shows that the same pattern holds in India by building aggregates from micro-data so that the comparability and quality issues that plague cross-country analyses are not a problem. In India both men and women with more education live in households with greater consumption per capita. Yet in aggregate, comparing across age cohorts and states, better educated male cohorts consume only about 4% more than less well educated ones. Better educated female cohorts do not live in households with higher consumption. This result is robust to: (1) using econometric models that account for survey measurement error, (2) different measures of household consumption and composition, (3) allowing returns to differ by state, and (4) age mismeasurement. Comparing state returns to a measure of school quality, it does not seem that poor quality is responsible for the low returns.

818. and , "" (rev. 08/2013: PDF)

Abstract: Despite facing some of the same challenges as private insurance markets, little is known about the role of adverse selection in social insurance programs. This paper studies adverse selection in Social Security retirement choices using data from the Health and Retirement Study. We find robust evidence that people who live longer choose larger annuities by delaying the age they first claim benefits, a form of adverse selection. To quantify welfare consequences we develop and estimate a simple model of annuity choice. We exploit variation in longevity, the underlying source of private information, to identify the key structural parameters: the coefficient of relative risk aversion and the discount rate. We estimate that adverse selection reduces social welfare by 2.3-3.5 percent, and increases the costs to the Social Security Trust Fund by 2.1-2.5 percent, relative to the first best allocation. Counterfactual simulations suggest program adjustments could generate both economically significant decreases in costs and small increases in social welfare. We estimate an optimal non-linear accrual rate which would result in welfare gains of 1.4 percent, and cost reductions of 6.1 percent of current program costs.

817. , Xun Lu (Hong Kong University of Science and Technology) and Liangjun Su (Singapore Management University), "" (rev. 05/2013: PDF)

Abstract: Consider a nonseparable model Y=R(X,U) where Y and X are observed, while U is unobserved and conditionally independent of X. This paper provides the first nonparametric test of whether R takes the form of a transformation model, meaning that Y is monotonic in the sum of a function of X plus a function of U. Transformation models of this form are commonly assumed in economics, including, e.g., standard specifications of duration models and hedonic pricing models. Our test statistic is asymptotically normal under local alternatives and consistent against nonparametric alternatives. Monte Carlo experiments show that our test performs well in finite samples. We apply our results to test for specifications of generalized accelerated failure-time (GAFT) models of the duration of strikes and of marriages.
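
To make the null hypothesis concrete, the transformation-model form being tested can be written as below; the symbols are generic placeholders chosen here, not the paper's notation.

    H_0:  Y = \Lambda( h(X) + g(U) ),  with \Lambda(\cdot) strictly monotonic,
    H_1:  Y = R(X, U) admits no such representation.

A familiar special case is the accelerated failure-time model of duration data, \ln T = X'\beta + \epsilon, which takes this form with \Lambda = \exp, h(X) = X'\beta, and g(U) = \epsilon.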

816. Karim Chalak, "" (12/2012: PDF)

Abstract: This paper studies measuring the average effects of X on Y in a structural system with random coefficients and confounding. We do not require (conditionally) exogenous regressors or instruments. Using proxies W for the confounders U, we ask how the average direct effects of U on Y compare in magnitude and sign to those of U on W. Exogeneity and equi- or proportional confounding are limit cases yielding full identification. Alternatively, the elements of the average coefficient vector are partially identified in a sharp bounded interval if W is sufficiently sensitive to U, and sharp upper or lower bounds may obtain otherwise. We extend this analysis to accommodate conditioning on covariates and a semiparametric separable specification, as well as a panel structure and proxies included in the Y equation. After studying estimation and inference, we apply this method to study the financial return to education and the black-white wage gap.
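
A fixed-coefficient special case, written here purely for intuition (the paper's system has random coefficients and is more general), helps fix ideas; all symbols below are illustrative.

    Y = X'\beta + U'\gamma + \epsilon_Y,        W = U'\delta + \epsilon_W,

with U unobserved and possibly related to X. The limit cases in the abstract correspond to \gamma = 0 (exogeneity, no confounding) and \gamma proportional to \delta with a known ratio (proportional confounding), both of which point identify \beta. Assuming only that W is sufficiently sensitive to U, i.e., that the effects \gamma are bounded in magnitude relative to \delta, instead yields an interval of values for each element of \beta.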

815. Umut Mert Dur (University of Texas at Austin) and , "Two-Sided Matching via Balanced Exchange" (02/2018: PDF)

Abstract: We introduce a new matching model to mimic two-sided exchange programs, such as tuition and worker exchanges, in which export-import balance is required for the longevity of the program. These exchanges use decentralized markets, making balance difficult to achieve. We introduce the two-sided top-trading-cycles mechanism, the unique mechanism that is balanced-efficient, worker-strategy-proof, acceptable, and individually rational, and that respects priority bylaws regarding worker eligibility. Moreover, it encourages exchange, because full participation is a dominant strategy for firms. We extend it to dynamic settings permitting tolerable yearly imbalances and demonstrate that its regular and tolerable versions perform considerably better than models of current practice.
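
The two-sided top-trading-cycles mechanism itself is not described in the abstract. As a rough point of reference only, the sketch below implements the classic one-sided top-trading-cycles step that the mechanism's name alludes to: agents repeatedly point at the owner of their most-preferred remaining object and cycles are cleared. All names and data structures here are illustrative, and this is not the paper's balanced two-sided variant.

    def top_trading_cycles(preferences, endowment):
        """Classic (one-sided) top-trading-cycles sketch.

        preferences: dict agent -> list of objects, most preferred first
        endowment:   dict agent -> object the agent initially owns
        Returns a dict agent -> assigned object.
        """
        owner = {obj: agent for agent, obj in endowment.items()}
        remaining = set(preferences)  # agents still in the market
        assignment = {}
        while remaining:
            # Each remaining agent points at the owner of its top remaining object.
            points_to = {}
            for a in remaining:
                top = next(o for o in preferences[a] if owner.get(o) in remaining)
                points_to[a] = (owner[top], top)
            # Follow the pointers until an agent repeats; that closes a cycle.
            path, a = [], next(iter(remaining))
            while a not in path:
                path.append(a)
                a = points_to[a][0]
            cycle = path[path.index(a):]
            # Clear the cycle: each agent in it receives the object it points at.
            for a in cycle:
                assignment[a] = points_to[a][1]
                remaining.discard(a)
        return assignment

For example, with preferences {"a": ["h2", "h1"], "b": ["h1", "h2"]} and endowment {"a": "h1", "b": "h2"}, the function returns {"a": "h2", "b": "h1"}.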

814. , "" (09/2012: PDF)

Abstract: Do households use savings to buffer against income fluctuations? Despite its common use to understand household savings decisions, the evidence for the buffer-stock model is surprisingly weak and inconsistent. This paper develops new testable implications based on a property of the model: the assets that households target for precautionary reasons should encapsulate all preferences and risks, and the target should scale one-for-one with permanent income. I test these implications using the Survey of Consumer Finances in the United States. Households with incomes over $60,000 fit the model predictions very well, but below $60,000 households become increasingly precautionary. Income uncertainty is unrelated to the level of precaution. Moreover, households exhibit substantially weaker precautionary tendencies than standard models with yearly income shocks predict. Instead I propose and estimate a model of monthly disposable income shocks and a minimum subsistence level that can accommodate these findings.
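
The testable implication referred to in the abstract can be stated compactly in standard buffer-stock notation (the symbols here are generic, not the paper's): with permanent income P_t and liquid assets A_t, define the ratio a_t = A_t / P_t. The buffer-stock target a* is the ratio at which expected saving is zero,

    E_t[ a_{t+1} - a_t | a_t = a* ] = 0,

and a* depends only on preferences and on the distribution of income shocks. Target assets therefore scale one-for-one with permanent income, A*_t = a* P_t, so the targeted asset-to-permanent-income ratio should not vary with the level of permanent income.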

813. Peter Arcidiacono (Duke University), and Marjorie McElroy (Duke University), "" (rev. 06/2013: PDF)

Abstract: We develop a directed search model of relationship formation which can disentangle male and female preferences for types of partners and for different relationship terms using only a cross-section of observed matches. Individuals direct their search to a particular type of match on the basis of (i) the terms of the relationship, (ii) the type of partner, and (iii) the endogenously determined probability of matching. If men outnumber women, they tend to trade a low probability of a preferred match for a high probability of a less-preferred match; the analogous statement holds for women. Using data from the National Longitudinal Study of Adolescent Health, we estimate the equilibrium matching model on high school relationships. Variation in gender ratios is used to uncover male and female preferences. Estimates from the structural model match subjective data on whether sex would occur in one's ideal relationship. The equilibrium result shows that some women would ideally not have sex, but do so out of matching concerns; the reverse is true for men.

812. , "" (08/2012: PDF)

Abstract: How do abortion costs affect non-marital childbearing? While greater access to abortion has the first-order effect of reducing childbearing among pregnant women, it could nonetheless lead to unintended consequences via effects on marriage market norms. Single motherhood could rise if lower-cost abortion makes it easier for men to avoid marriage. We identify the effect of abortion costs on separation, cohabitation and marriage following a birth by exploiting the "miscarriage-as-a-natural-experiment" methodology in combination with changes in state abortion laws. Recent increases in abortion restrictions appear to have led to a sizable decrease in a woman's chances of being single and to have increased the chances of cohabitation. The result underscores the importance of the marriage market search behavior of men and women, and the positive and negative effects of abortion laws on bargaining power for women who abort and for women who give birth, respectively.

811. , "" (rev. 10/2013: PDF)

Abstract: The market for abortion in the U.S. has become increasingly concentrated in recent years, and many states have tightened abortion regulations aimed at providers. Using unique data on abortion providers, I estimate a dynamic model of entry, exit and service provision which captures the effect of regulation on provider behavior. High fixed costs explain the growth of large clinics, and estimates show regulation increased entry costs for small providers. A simulation removing all regulations increases entry by smaller providers into incumbent markets: competition increases, as does the number of abortions. Targeted entry subsidies, however, increase access while only slightly increasing the number of abortions.

810. , "" (09/2012: PDF)

Abstract: This chapter provides background for understanding and applying special regressor methods. It is intended for inclusion in the Handbook of Applied Nonparametric and Semiparametric Econometrics and Statistics, co-edited by Aman Ullah, Jeffrey Racine, and Liangjun Su, to be published by Oxford University Press.
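
For readers unfamiliar with the method, the canonical special regressor identification result can be summarized as follows (a simplified statement with notation chosen here, not taken from the chapter). Suppose

    D = 1( X'\beta + V + \epsilon > 0 ),

where the special regressor V is continuously distributed with large support, enters additively, and is conditionally independent of \epsilon given X. Defining the transformed variable

    T = [ D - 1(V >= 0) ] / f(V | X),

one obtains E[T | X] = X'\beta + E[\epsilon | X], so with E[\epsilon | X] = 0 the coefficients \beta can be recovered by a linear regression of T on X, and by linear instrumental variables when some regressors in X are endogenous.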

809. Laurens Cherchye (University of Leuven), Bram De Rock (Université Libre de Bruxelles), and Frederic Vermeulen (Tilburg University), "" (rev. 07/2013: PDF)

Abstract: We propose a method to identify bounds (i.e., set identification) on the sharing rule for a general collective household consumption model. It is well known that, unlike the effects of distribution factors, the level of the sharing rule cannot be uniquely identified without strong assumptions on preferences across households of different compositions. Our new results show that, though not point identified without these assumptions, bounds on the sharing rule can still be obtained. We get these bounds by applying revealed preference restrictions implied by the collective model to the household's continuous aggregate demand functions. We obtain informative bounds even if nothing is known about whether each good is public, private, or assignable within the household, though having such information tightens the bounds. An empirical application demonstrates the practical usefulness of our method.
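
As background on the object being bounded: in the simplest private-goods version of the collective model, efficiency implies a two-stage "sharing rule" decentralization (the statement below is the textbook version, not the paper's nonparametric revealed-preference formulation). Total expenditure y is split into \eta(p, y, z) for member A and y - \eta(p, y, z) for member B, where z are distribution factors, and each member m then chooses private consumption q_m to solve

    max_{q_m} u_m(q_m)   subject to   p'q_m = own share.

The paper's contribution is to derive bounds on this sharing rule \eta directly from revealed-preference restrictions on the household's aggregate demand, without point-identifying assumptions and without knowing which goods are public, private, or assignable.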

808. and Xun Tang (University of Pennsylvania), "" (rev. 03/2013: PDF)

Abstract: The existing literature on binary games with incomplete information assumes that either payoff functions or the distribution of private information are finitely parameterized in order to obtain point identification. In contrast, we show that, given excluded regressors, payoff functions and the distribution of private information can both be nonparametrically point identified. An excluded regressor for player i is a sufficiently varying state variable that does not affect other players' utility and is additively separable from the other components of i's payoff. We show how excluded regressors satisfying these conditions arise in contexts such as entry games between firms, as variation in observed components of fixed costs. Our identification proofs are constructive, so consistent nonparametric estimators can readily be based on them. For a semiparametric model with linear payoffs, we propose root-N consistent and asymptotically normal estimators for the parameters in players' payoffs. Finally, we extend our approach to accommodate the existence of multiple Bayesian Nash equilibria in the data-generating process without assuming equilibrium selection rules.
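
A stylized two-player entry game of incomplete information illustrates the excluded-regressor condition described in the abstract; the linear form and symbols below are illustrative assumptions, roughly in the spirit of the semiparametric case mentioned at the end. Player i enters (a_i = 1) if and only if

    X'\beta_i + W_i + \delta_i Pr( a_j = 1 | X, W_1, W_2 ) + \epsilon_i > 0,

where \delta_i captures the strategic interaction (typically negative in entry games) and W_i is player i's excluded regressor: an observed, sufficiently varying state variable, such as a component of i's fixed cost, that enters i's payoff additively and does not enter player j's payoff.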

807. Yingying Dong (California State University-Irvine) and , "" (06/2012: PDF)

Abstract: This paper provides a few variants of a simple estimator for binary choice models with endogenous or mismeasured regressors, or with heteroskedastic errors, or with panel fixed effects. Unlike control function methods, which are generally only valid when endogenous regressors are continuous, the estimators proposed here can be used with limited, censored, continuous, or discrete endogenous regressors, and they allow for latent errors having heteroskedasticity of unknown form, including random coefficients. The variants of special regressor based estimators we provide are numerically trivial to implement. We illustrate these methods with an empirical application estimating migration probabilities within the US.
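
To illustrate how numerically trivial such estimators can be, here is a minimal sketch of a baseline special regressor estimator. It assumes, for simplicity, that the special regressor V is independent of both X and the latent error and that X is exogenous; the paper's actual variants, which handle endogenous or mismeasured regressors, heteroskedasticity, and fixed effects, differ in details. All names here are illustrative.

    import numpy as np
    from scipy.stats import gaussian_kde

    def special_regressor_beta(D, V, X):
        """Baseline special-regressor estimate of binary choice coefficients.

        D: (n,) array of 0/1 outcomes
        V: (n,) array holding the special regressor
        X: (n, k) array of remaining regressors (assumed exogenous here)
        """
        f_V = gaussian_kde(V)(V)                       # kernel estimate of the density of V
        T = (D - (V >= 0).astype(float)) / f_V         # transformed dependent variable
        X1 = np.column_stack([np.ones(len(D)), X])     # add an intercept
        beta, *_ = np.linalg.lstsq(X1, T, rcond=None)  # OLS of T on X recovers the coefficients
        return beta

With endogenous regressors, the final ordinary least squares step would be replaced by linear instrumental variables, as discussed in the paper.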

806. Scott Duke Kominers (University of Chicago) and , "" (06/2012: PDF)

Abstract: To encourage diversity, branches may vary contracts' priorities across slots. The agents who match to branches, however, have preferences only over match partners and contractual terms. Ad hoc approaches to resolving agents' indifferences across slots in the Chicago and Boston school choice programs have introduced biases, which can be corrected with more careful market design. Slot-specific priorities can fail the substitutability condition typically crucial for outcome stability. Nevertheless, an embedding into a one-to-one agent-slot matching market shows that stable outcomes exist and can be found by a cumulative offer mechanism that is strategy-proof and respects unambiguous improvements in priority.

805. Orhan Aygün (Boston College) and , "" (06/2012: PDF)

Abstract: We show that Hatfield and Kojima (2010) inherits a critical ambiguity from its predecessor, Hatfield and Milgrom (2005), and that clearing up this ambiguity has strong implications for the paper. Of the two potential remedies, the first results in the failure of all theorems except one in the absence of an additional irrelevance of rejected contracts (IRC) condition, whereas the second remedy eliminates the transparency of the results, reduces the scope of the model, and contradicts the authors' interpretation of the nature of their contributions. Fortunately, all results are restored under the first remedy when IRC is explicitly assumed.
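
For reference, the irrelevance of rejected contracts (IRC) condition, as it is usually stated in the matching-with-contracts literature, requires that discarding a contract that would not be chosen anyway leaves the chosen set unchanged: for a choice function C, every set of contracts Y, and every contract z not in Y,

    z \notin C( Y \cup {z} )   implies   C(Y) = C( Y \cup {z} ).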

804. Orhan Aygün (Boston College) and , "" (05/2012: PDF)

Abstract: We show that an ambiguity in setting the primitives of the matching with contracts model of Hatfield and Milgrom (2005) has serious implications for the model. Of the two ways to clear up the ambiguity, the first (and, in our view, cleaner) remedy renders several of the paper's results invalid in the absence of an additional irrelevance of removed contracts condition implicitly assumed throughout the paper, whereas the second remedy results in a lack of transparency in the presentation of results while at the same time reducing the scope of the analysis, with no clear benefit.

803. , "" (07/2012: PDF; published, American Economic Journal: Macroeconomics, 6:3, 73-101, 2014)

Abstract: This paper models the tradeoff, perceived by central banks and other public actors, between providing the public with useful information and the risk of overwhelming it with excessive communication. An information authority chooses how many signals to provide regarding an aggregate state and agents respond by choosing how many signals to observe. When agents desire coordination, the number of signals they acquire may decrease in the number released. The optimal quantity of communication is positive, but does not maximize agents' acquisition of information. In contrast to a model without information choice, the authority always prefers to provide more precise signals.

802. Michael Belongia (University of Mississippi) and , "" (rev. 01/2013: PDF)

Abstract: Although a number of economists have tried to revive the idea of nominal GDP targeting since the financial market collapse of 2008, relatively little has been offered in terms of a specific framework for how this objective might be achieved in practice. In this paper we adopt a strategy outlined by Holbrook Working (1923) and employed, with only minor modifications, by Hallman et al. (1991) in the P-Star model. We then present a series of theoretical and empirical results to show that Divisia monetary aggregates can be controlled by the Federal Reserve and that the trend velocities of these aggregates, by virtue of the properties of superlative indexes, exhibit the stability required to make long-run targeting feasible.
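
As a reminder of the framework, the P-Star construction starts from the equation of exchange; in simplified notation (and with a Divisia aggregate in place of simple-sum money, as the paper proposes):

    M_t V_t = P_t Q_t              (equation of exchange)
    P*_t = M_t V*_t / Q*_t         (equilibrium price level, with V* the trend velocity
                                    and Q* potential output)

If the trend velocity V* of a Divisia aggregate is stable, then a path for the aggregate M_t translates into a predictable long-run path for nominal GDP, P_t Q_t = M_t V_t, which is the sense in which long-run targeting becomes feasible.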

801. Michael Belongia (University of Mississippi) and , "" (06/2012: PDF)

Abstract: Over the last twenty-five years, a set of influential studies has placed interest rates at the heart of analyses that interpret and evaluate monetary policies. In light of this work, the Federal Reserve's recent policy of "quantitative easing," with its goal of affecting the supply of liquid assets, appears as a radical break from standard practice. Superlative (Divisia) measures of money, however, often help in forecasting movements in key macroeconomic variables, and the statistical fit of a structural vector autoregression deteriorates significantly if such measures of money are excluded when identifying monetary policy shocks. These results cast doubt on the adequacy of conventional models that focus on interest rates alone. They also highlight that all monetary disturbances have an important "quantitative" component, which is captured by movements in a properly measured monetary aggregate.
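
For completeness, the Divisia (Törnqvist-Theil) monetary aggregates referred to here follow the standard Barnett construction: the growth rate of the aggregate is an expenditure-share-weighted average of the growth rates of its component assets, with shares based on user costs,

    \ln M^D_t - \ln M^D_{t-1} = \sum_i \bar{s}_{i,t} ( \ln m_{i,t} - \ln m_{i,t-1} ),
    \bar{s}_{i,t} = ( s_{i,t} + s_{i,t-1} ) / 2,
    s_{i,t} = \pi_{i,t} m_{i,t} / \sum_j \pi_{j,t} m_{j,t},
    \pi_{i,t} = ( R_t - r_{i,t} ) / ( 1 + R_t ),

where m_{i,t} is the quantity of monetary asset i, r_{i,t} its own rate of return, and R_t the benchmark rate. Unlike simple-sum aggregates, this weighting discounts assets that pay interest and are therefore held partly for non-monetary reasons.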