<?xml version="1.0" encoding="UTF-8"?>  <!-- encoding must be UTF-8 -->
        
<!-- Metadata for a tech report for use in Google Scholar. It is based on the -->
<!-- NLM Journal Publishing DTD (http://dtd.nlm.nih.gov/publishing/). The     -->
<!-- two changes are: (1) the <articles> element which allows information     -->
<!-- about multiple articles to be included in a single file and (2)          -->
<!-- additional values for the <article-type> element. This file provides     -->
<!-- an example for a technical report and describes the constraints if any   -->
<!-- on the fields. Fields in the NLM Journal Publishing DTD not mentioned    -->
<!-- in this example are ignored at this time.                                -->       

<!-- NBER has decided to use this XML standard instead of the journal-article -->
<!-- format because none of the papers we issue should be construed as        -->
<!-- published, and therefore we should not use the same format as a journal  -->
<!-- publisher.                                                               -->

<!-- The NBER Programs a paper is associated with are listed as -->
<!-- custom-meta tags. -->
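<!-- For reference, a condensed skeleton of the per-article record structure   -->
<!-- used throughout this file; the values shown (PAPER TITLE, t0000, etc.)    -->
<!-- are placeholders only:                                                    -->
<!--
  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group><article-title>PAPER TITLE</article-title></title-group>
        <article-id pub-id-type="publisher-id">t0000</article-id>
        <contrib-group>
          <contrib contrib-type="author">
            <name><surname>SURNAME</surname><given-names>GIVEN NAMES</given-names></name>
          </contrib>
        </contrib-group>
        <pub-date pub-type="pub"><month>01</month><year>1980</year></pub-date>
        <custom-meta-wrap>
          <custom-meta>
            <meta-name>NBER Program</meta-name>
            <meta-value>PROGRAM NAME</meta-value>
          </custom-meta>
        </custom-meta-wrap>
        <abstract><p>ABSTRACT TEXT</p></abstract>
        <self-uri xlink:href="http://www.nber.org/papers/t0000.pdf"></self-uri>
      </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
-->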

<!-- This file was generated by /etc/cvslocal/perlscript/all_xml_marc_ris.pl running on host backend at Sun Apr  5 01:35:04 2026 --> 
                   
<articles xmlns:xlink="http://www.w3.org/1999/xlink">

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Stochastic Approach to Disequilibrium Macroeconomics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0001</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Honkapohja</surname>
          <given-names>Seppo</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ito</surname>
          <given-names>Takatoshi</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1979</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper, our aim is to develop an alternative approach to analyzing a macroeconomic model where markets do not clear. Earlier approaches have had difficulties in interpreting effective demand, a key concept in disequilibrium macroeconomics. We propose a new definition of effective demand similar to that of Svensson, Gale, and Green. Given the states of the markets, there is in general uncertainty about the amount of trades individuals can complete. Considering this uncertainty, each individual has to make binding trade offers, i.e., effective demands, a fraction of which will actually be transacted. Using the newly defined effective demand, we define the rationing equilibrium as a fixed point of disequilibrium signals. We analyze various regimes of rationing equilibria. The most startling conclusion is the multiplicity of equilibria: (i) given wages and prices, there may exist more than one type of equilibrium, and (ii) even at Walrasian prices there may exist non-Walrasian equilibria, and these are usually stable with respect to a quantity-adjustment mechanism while the Walrasian equilibrium is unstable. The comparative-static properties of policy are also considered, and they are comparable to those of the earlier approach.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0001.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Issues in Controllability and the Theory of Economic Policy</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0002</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gersovitz</surname>
          <given-names>Mark</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper demonstrates that the concepts of dynamic controllability are useful for the theory of economic policy by establishing four propositions. First, dynamic controllability is a central concept in stabilization policy. Second, the ability to achieve a target state, even if it cannot be maintained, may be of economic interest. Third, dynamic controllability is of special interest for 'historical' models. Fourth, the conditions for any notion of dynamic controllability are distinct from and weaker than those for Tinbergen static controllability.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0002.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Multiple Shooting in Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0003</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lipton</surname>
          <given-names>David</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Poterba</surname>
          <given-names>James M</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sachs</surname>
          <given-names>Jeffrey D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Summers</surname>
          <given-names>Lawrence H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This note describes an algorithm for the solution of rational expectations models with saddlepoint stability properties. The algorithm is based on the method of multiple shooting, which is widely used to solve mathematically similar problems in the physical sciences. Potential applications to economics include models of capital accumulation and valuation, money and growth, exchange rate determination, and macroeconomic activity. In general, whenever an asset price incorporates information about the future path of key variables, solution algorithms of the type we consider are applicable.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0003.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Estimation of Distributed Lags in Short Panels</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0004</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Griliches</surname>
          <given-names>Zvi</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Pakes</surname>
          <given-names>Ariel</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1980</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper, we investigate the problem of estimating distributed lags in short panels. Estimates of the parameters of distributed lag relationships based on a single time series of observations have usually been rather imprecise. The promise of panel data in this context is in the N repetitions of the time series that it contains, which should allow one to estimate the identified lag parameters with greater precision. On the other hand, panels tend to track their observations only over a relatively short time interval. Thus, some assumptions will have to be made on the contributions of the unobserved presample x's to the current values of y before any lag parameters can be identified from such data. In this paper we suggest two such assumptions, both of which are, at least in part, testable, and outline appropriate estimation techniques. The first places reasonable restrictions on the relationship between the presample and insample x's, while the second imposes conventional functional form constraints on the lag coefficients associated with the presample x's.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0004.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Solution and Maximum Likelihood Estimation of Dynamic Nonlinear Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0005</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fair</surname>
          <given-names>Ray C</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Taylor</surname>
          <given-names>John B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1980</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A solution method and an estimation method for nonlinear rational expectations models are presented in this paper. The solution method can be used in forecasting and policy applications and can handle models with serial correlation and multiple viewpoint dates. When applied to linear models, the solution method yields the same results as those obtained from currently available methods that are designed specifically for linear models. It is, however, more flexible and general than these methods. For large nonlinear models the results in this paper indicate that the method works quite well. The estimation method is based on the maximum likelihood principle. It is, as far as we know, the only method available for obtaining maximum likelihood estimates for nonlinear rational expectations models. The method has the advantage of being applicable to a wide range of models, including, as a special case, linear models. The method can also handle different assumptions about the expectations of the exogenous variables, something which is not true of currently available approaches to linear models.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0005.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Role of Economic Policy After the New Classical Macroeconomics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0006</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1980</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper considers the implications of the rational expectations - New Classical Macroeconomics revolution for the "rules versus discretion" debate. The following issues are covered: 1) The ineffectiveness of anticipated stabilization policy, 2) Non-causal models and rational expectations, 3) Optimal control in non-causal models - the inconsistency of optimal plans. I establish the robustness of the proposition that contingent (closed-loop or feedback) rules dominate fixed (open-loop) rules. The optimal contingent rule in non-causal models - the innovation or disturbance-contingent feedback rule - is quite different from the state-contingent feedback rule derived by dynamic stochastic programming.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0006.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Disaggregated Structural Model of the Treasury Securities, Corporate Bond, and Equity Markets: Estimation and Simulation Results</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0007</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Roley</surname>
          <given-names>V. Vance</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1980</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The estimation and simulation results of a disaggregated structural model of U.S. security markets are presented in this paper. The model consists of estimated demands for corporate bonds, equities, and four distinct maturity classes of Treasury securities by 11 categories of investors. The model is closed with the addition of six market-clearing identities equating market demands with exogenous supplies. The empirical results provide support to the model's specification and indicate that the "within-sample forecasts" of the six endogenous security yields closely track historical data.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0007.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Multivariate Regression Models for Panel Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0008</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Chamberlain</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1980</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Under stationarity, the heterogeneous stochastic processes are the non-ergodic ones. We show that if a distributed lag is of finite order, then its coefficients are unconditional means of the underlying random coefficients. This result is applied to linear transformations of the process. The estimation framework is a multivariate wide-sense regression function. The identification analysis requires certain restrictions on the coefficients. The actual regression function is nonlinear, and so we provide a theory of inference for linear approximations. It rests on obtaining the asymptotic distribution of functions of sample moments. Restrictions are imposed by using a minimum distance estimator; it is generally more efficient than the conventional estimators.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0008.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Superiority of Contingent Rules over Fixed Rules in Models with Rational Expectations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0009</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper investigates the robustness of the proposition that in stochastic models contingent or feedback rules dominate fixed or open-loop rules. Four arguments in favour of fixed rules are considered. 1) The presence of an incompetent or malevolent policy maker. 2) A trade-off between flexibility and simplicity or credibility. 3) The New Classical proposition that only unanticipated (stabilization) policy has real effects. 4) The "time-inconsistency" of optimal plans in non-causal models, that is, models in which the current state of the economy depends on expectations of future states. The main conclusion is that the "rational expectations revolution", represented by arguments (3) and (4), does not affect the potential superiority of (time-inconsistent) closed-loop policies over (time-inconsistent) open-loop policies. The case against conditionality in the design of policy must therefore rest on argument (1) or (2), which predate the New Classical Macroeconomics.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0009.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Granger-Causality and Stabilization Policy</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0010</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper aims to provide a stochastic, rational expectations extension of Tobin's "Money and Income: Post Hoc Ergo Propter Hoc?". It is well-known that money may Granger-cause real variables even though the joint density function of the real variables is invariant under changes in the deterministic components of the monetary feedback rule. The paper shows that failure of money to Granger-cause real variables does not preclude a stabilization role for money. In a number of examples the conditional second moment of real output is a function of the deterministic components of the monetary feedback rule. Yet money fails to Granger-cause output ("in mean" and "in variance"). In all these models money is a pure stabilization instrument: superneutrality is assumed. If the analysis is extended to "structural" or "allocative" instruments such as fiscal instruments, the conclusion is even stronger. Failure of these policy instruments to Granger-cause real variables is consistent with changes in the deterministic parts of the policy feedback rules being associated with changes in the conditional means of the real variables. Granger-causality tests are tests of "incremental predictive content". They convey no information about the invariance of the joint density function of real variables under changes in the deterministic components of policy feedback rules.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0010.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Two-Step Two-Stage Least Squares Estimation in Models with Rational Expectations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0011</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Obstfeld</surname>
          <given-names>Maurice</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cumby</surname>
          <given-names>Robert E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Huizinga</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper introduces a limited-information two-step estimator for models with rational expectations and serially correlated disturbances. The estimator greatly extends the area of applicability of McCallum's (1976) instrumental variables approach to rational expectations models. Section I reviews McCallum's method and discusses in detail the problems surrounding its use in many empirical contexts. Section II presents the two-step two-stage least squares estimator (2S2SLS) and demonstrates its efficiency relative to that of McCallum (1979). Section III provides a comparison of several estimators for a two-equation macroeconomic model with rational expectations due to Taylor (1979).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0011.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Note on the Solution of A Two-Point Boundary Value Problem Frequently Encountered in Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0012</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper analyses a class of two-point boundary value problems for systems of linear differential equations with constant coefficients. The boundary conditions are expressed as linear restrictions on the state vector at an initial time and at a finite terminal time. This is applicable even if the terminal conditions involve the asymptotic convergence of the system to steady-state equilibrium, as is frequently the case in economic applications. It is also a suitable format for numerical applications using existing computer routines. The case in which there are more stable eigenvalues than predetermined state variables is also considered. An example involving a small open economy macroeconomic model is used to illustrate the analysis.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0012.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Macroeconometric Modelling for Policy Evaluation and Design</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0013</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper reviews recent developments in macroeconomic theory and their implications for econometric modelling and for policy design. The following issues are addressed. 1) The theoretical and practical problems of modelling sequence economies. 2) Problems of evaluating the role of money given the absence of reasonable microfoundations for monetary theory. 3) The implications of the view that macroeconomic models should be viewed as non-cooperative differential games. 4) Identification and estimation of the policy-invariant structure of rational expectations models. 5) Time inconsistency of optimal plans. 6) The welfare economics of stabilization policy and the need to pay much greater attention to market structure if a market failure-based justification for stabilization policy is to be developed.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0013.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asymptotic Properties of Quasi-Maximum Likelihood Estimators and Test Statistics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0014</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>MaCurdy</surname>
          <given-names>Thomas E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We examine the implications of arbitrage in a market with many assets. The absence of arbitrage opportunities implies that the linear functionals that give the mean and cost of a portfolio are continuous; hence there exist unique portfolios that represent these functionals. The mean-variance efficient set is a cone generated by these portfolios. Ross [16, 18] showed that if there is a factor structure, then the distance between the vector of mean returns and the space spanned by the factor loadings is bounded as the number of assets increases. We show that if the covariance matrix of asset returns has only K unbounded eigenvalues, then the corresponding K eigenvectors converge and play the role of factor loadings in Ross' result. Hence only a principal components analysis is needed to test the arbitrage pricing theory. Our eigenvalue condition can hold even though conventional measures of the approximation error in a K factor model are unbounded. We also resolve the question of when a market with many assets permits so much diversification that risk-free investment opportunities are available.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0014.pdf"></self-uri>
    <self-uri xlink:href="http://www.nber.org/papers/t0014.djvu"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Arbitrage and Mean-Variance Analysis on Large Asset Markets</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0015</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Chamberlain</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rothschild</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We examine the implications of arbitrage in a market with many assets. The absence of arbitrage opportunities implies that the linear functionals that give the mean and cost of a portfolio are continuous; hence there exist unique portfolios that represent these functionals. The mean-variance efficient set is a cone generated by these portfolios. Ross [16, 18] showed that if there is a factor structure, then the distance between the vector of mean returns and the space spanned by the factor loadings is bounded as the number of assets increases. We show that if the covariance matrix of asset returns has only K unbounded eigenvalues, then the corresponding K eigenvectors converge and play the role of factor loadings in Ross' result. Hence only a principal components analysis is needed to test the arbitrage pricing theory. Our eigenvalue condition can hold even though conventional measures of the approximation error in a K factor model are unbounded. We also resolve the question of when a market with many assets permits so much diversification that risk-free investment opportunities are available.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0015.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Welfare Analysis of Tax Reforms Using Household Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0016</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>King</surname>
          <given-names>Mervyn A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper discusses a methodology for calculating the distribution of gains and losses from a policy change using data for a large sample of households. Estimates are based on the equivalent income function, which is money metric utility defined over observable variables. This enables calculations to be standardised, and a computer program to compute the statistics presented in the paper is available for a general demand system. Equivalent income is related to measures of deadweight loss, and standard errors are computed for each of the welfare measures. An application to UK data for 5895 households is given which simulates a reform that involves eliminating housing subsidies.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0016.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Econometric Models for Count Data with an Application to the Patents-R&amp;D Relationship</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0017</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hausman</surname>
          <given-names>Jerry A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hall</surname>
          <given-names>Bronwyn H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Griliches</surname>
          <given-names>Zvi</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper focuses on developing and adapting statistical models of counts (non-negative integers) in the context of panel data and using them to analyze the relationship between patents and R&amp;D expenditures. The model used is an application and generalization of the Poisson distribution to allow for independent variables, persistent individual (fixed or random) effects, and "noise" or randomness in the Poisson probability function. We apply our models to a data set previously analyzed by Pakes and Griliches using observations on 128 firms for seven years, 1968-74. Our statistical results indicate clearly that to rationalize the data, we need both a disturbance in the conditional within dimension and a different one, with a different variance, in the marginal (between) dimension. Adding firm-specific variables, log book value and a scientific industry dummy, removes most of the positive correlation between the individual firm propensity to patent and its R&amp;D intensity. The other new finding is that there is an interactive negative trend in the patents-R&amp;D relationship; that is, firms are getting fewer patents from their more recent R&amp;D investments, implying a decline in the "effectiveness" or productivity of R&amp;D.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0017.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On the Estimation of Structural Hedonic Price Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0018</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Brown</surname>
          <given-names>James N</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rosen</surname>
          <given-names>Harvey S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1982</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Many commodities can be viewed as bundles of individual attributes for which no explicit markets exist. It is often of interest to estimate structural demand and supply functions for these attributes, but the absence of directly observable attribute prices poses a problem for such estimation. In an influential paper published several years ago, Rosen [3] proposed an estimation procedure to surmount this problem. This procedure has since been used in a number of applications (see, for example, Harrison and Rubinfeld [2] or Witte, et al. [4]). The purpose of this note is to point out certain pitfalls in Rosen's procedure, which, if ignored, could lead to major identification problems. In Section 2 we summarize briefly the key aspects of Rosen's method as it has been applied in the literature. Section 3 discusses the potential problems inherent in this procedure and provides an example. Section 4 concludes with a few suggestions for future research.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0018.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bliss Points in Mean-Variance Portfolio Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0019</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Jones</surname>
          <given-names>David S.</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Roley</surname>
          <given-names>V. Vance</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1981</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>When all financial assets have risky returns, the mean-variance portfolio model is potentially subject to two types of bliss points. One bliss point arises when a von Neumann-Morgenstern utility function displays negative marginal utility for sufficiently large end-of-period wealth, such as in quadratic utility. The second type of bliss point involves satiation in terms of beginning-of-period wealth and afflicts many commonly used mean-variance preference functions. This paper shows that the two types of bliss points are logically independent of one another and that the latter places the effective constraint on an investor's welfare. The paper also uses Samuelson's Fundamental Approximation Theorem to motivate a particular mean-variance portfolio choice model which is not affected by either type of bliss point.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0019.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Saddlepoint Problems in Continuous Time Rational Expectations Models: A General Method and Some Macroeconomic Examples</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0020</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper presents a general solution method for rational expectations models that can be represented by systems of deterministic first order linear differential equations with constant coefficients. It is the continuous time adaptation of the method of Blanchard and Kahn. To obtain a unique solution there must be as many linearly independent boundary conditions as there are linearly independent state variables. Three slightly different versions of a well-known small open economy macroeconomic model are used to illustrate three fairly general ways of specifying the required boundary conditions. The first represents the standard case in which the number of stable characteristic roots equals the number of predetermined variables. The second represents the case where the number of stable roots exceeds the number of predetermined variables but equals the number of predetermined variables plus the number of "backward-looking" but non-predetermined variables whose discontinuities are linear functions of the discontinuities in the forward-looking variables. The third represents the case where the number of unstable roots is less than the number of forward-looking state variables. For the last case, boundary conditions are suggested that involve linear restrictions on the values of the state variables at a future date. The method of this paper permits the numerical solution of models with large numbers of state variables. Any combination of anticipated or unanticipated, current or future, and permanent or transitory shocks can be analyzed.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0020.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Predetermined and Non-Predetermined Variables in Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0021</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The distinction between predetermined and non-predetermined variables is a crucial one in rational expectations models. I consider and reject two definitions, one proposed by Blanchard and Kahn and one by Chow. Both definitions lead to possible misclassifications. Instead I propose the following definition. A variable is non-predetermined if and only if its current value is a function of current anticipations of future values of endogenous and/or exogenous variables. This definition focuses on the essential economic property of non-predetermined variables: unlike predetermined variables they can respond instantaneously to changes in expectations due to "news." The new definition also fits the structure of solution algorithms for rational expectations models, such as the one proposed by Blanchard and Kahn.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0021.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Using Information on the Moments of Disturbances to Increase the Efficiency of Estimation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0022</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>MaCurdy</surname>
          <given-names>Thomas E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1982</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Econometric analyses of treatment response commonly use instrumental variable (IV) assumptions to identify treatment effects.  Yet the credibility of IV assumptions is often a matter of considerable disagreement, with much debate about whether some covariate is or is not a 'valid instrument' in an application of interest.  There is therefore good reason to consider weaker but more credible assumptions.  To this end, we introduce monotone instrumental variable (MIV) assumptions.  A particularly interesting special case of an MIV assumption is monotone treatment selection (MTS). IV and MIV assumptions may be imposed alone or in combination with other assumptions. We study the identifying power of MIV assumptions in three informational settings: MIV alone; MIV combined with the classical linear response assumption; MIV combined with the monotone treatment response (MTR) assumption.  We apply the results to the problem of inference on the returns to schooling.  We analyze wage data reported by white male respondents to the National Longitudinal Survey of Youth (NLSY) and use the respondent's AFQT score as an MIV.  We find that this MIV assumption has little identifying power when imposed alone.  However, combining the MIV assumption with the MTR and MTS assumptions yields fairly tight bounds on two distinct measures of the returns to schooling.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0022.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Stochastic Capital Theory I. Comparative Statics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0023</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Brock</surname>
          <given-names>William</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rothschild</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stiglitz</surname>
          <given-names>Joseph E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1982</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Introductory lectures on capital theory often begin by analyzing the following problem: I have a tree which will be worth X(t) if cut down at time t. If the discount rate is r, when should the tree be cut down? What is the present value of such a tree? The answers to these questions are straightforward. Since at time t a tree which I plan to cut down at time T is worth e^(rt)e^(-rT)X(T), I should choose the cutting date T* to maximize e^(-rT)X(T); at t &lt; T* a tree is worth e^(rt)e^(-rT*)X(T*). In this paper we analyze how the answers to these questions of timing and evaluation change when the tree's growth is stochastic rather than deterministic. Suppose a tree will be worth X(t,w) if cut down at time t when X(t,w) is a stochastic process. When should it be cut down? What is its present value? We study these questions for trees which grow according to both discrete and continuous stochastic processes. The approach to continuous time stochastic processes contrasts with much of the finance literature in two respects. First, we obtain sharp comparative statics results without restricting ourselves to particular stochastic specifications. Second, while the option pricing literature seems to imply that increases in variance always increase value, we show that an increase in the variance of a tree's growth has ambiguous effects on its value.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0023.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Identification in Dynamic Linear Models with Rational Expectations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0024</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Blanchard</surname>
          <given-names>Olivier J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1982</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper characterizes identification in dynamic linear models. It shows that identification restrictions are linear in the structural parameters and are therefore easy to use. Using these restrictions, it analyzes the role of exogenous variables in helping to achieve identification.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0024.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Smoothness Priors and Nonlinear Regression</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0025</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shiller</surname>
          <given-names>Robert J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1982</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>In applications, the linear multiple regression model is often modified to allow for nonlinearity in an independent variable. It is argued here that in practice it may often be desirable to specify a Bayesian prior that the unknown functional form is "simple" or "uncomplicated" rather than to parameterize the nonlinearity. "Discrete smoothness priors" and "continuous smoothness priors" are defined and it is shown how posterior mean estimates can easily be derived using ordinary multiple linear regression modified with dummy variables and dummy observations. Relationships with spline and polynomial interpolation are pointed out. Illustrative examples of cost function estimation are provided.</p>
</abstract>
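    <!-- A minimal Python sketch of the dummy-variable / dummy-observation idea the abstract
         refers to (Theil-Goldberger style mixed estimation). The bin width, prior tightness
         tau, and error scale sigma below are illustrative assumptions, not Shiller's
         specification.

           # Represent the unknown f(x) by coefficients on K bin dummies and append
           # pseudo-observations stating that second differences of those coefficients
           # are small, then run a single stacked OLS.
           import numpy as np

           rng = np.random.default_rng(1)
           n, K = 300, 10
           x = rng.uniform(0, 1, n)
           y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

           bins = np.minimum((x * K).astype(int), K - 1)
           D = np.zeros((n, K))
           D[np.arange(n), bins] = 1.0              # bin dummies for x

           S = np.zeros((K - 2, K))                 # second-difference operator
           for j in range(K - 2):
               S[j, j:j + 3] = [1.0, -2.0, 1.0]
           tau, sigma = 0.05, 0.3                   # prior tightness, error scale
           X_aug = np.vstack([D, (sigma / tau) * S])
           y_aug = np.concatenate([y, np.zeros(K - 2)])

           gamma, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
           print(np.round(gamma, 2))                # smoothed step-function estimate
    -->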
    <self-uri xlink:href="http://www.nber.org/papers/t0025.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Formulation and Estimation of Dynamic Factor Demand Equations Under Non-Static Expectations: A Finite Horizon Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0026</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Prucha</surname>
          <given-names>Ingmar</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nadiri</surname>
          <given-names>M. Ishaq</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1982</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper proposes a discrete model of investment behavior that incorporates general nonstatic expectations with a general cost of adjustment technology. The combination of these two features usually leads to a set of highly nonlinear first order conditions for the optimal input plan; the expectational variables enter, in addition, as shift parameters. Consequently, an explicit analytic solution for derived factor demand is in general difficult if not impossible to obtain. Simplifying assumptions on the technology and/or the form of the expectational process are therefore typically made in the literature. In this paper we develop an algorithm for the estimation of flexible forms of derived factor demand equations within the above general setting. By solving the first order conditions numerically at each iteration step, this algorithm avoids the need for an explicit analytic solution. In particular we consider a model with a finite planning horizon. The relationship between the optimal input plans of the finite and infinite planning horizon models is explored. Due to the discrete setting of the model, the forward-looking behavior of investment is brought out very clearly. As a byproduct, a consistent framework for the use of anticipation data on planned investment is developed.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0026.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Effect of Ignoring Heteroscedasticity on Estimates of the Tobit Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0027</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Brown</surname>
          <given-names>Charles C</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Moffitt</surname>
          <given-names>Robert A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>We consider the sensitivity of the Tobit estimator to heteroscedasticity. Our single independent variable is a dummy variable whose coefficient is a difference between group means, and the error variance differs between groups. Heteroscedasticity biases the Tobit estimate of the two means in opposite directions, so the bias in estimating their difference can be significant. This bias is not monotonically related to the true difference, and is greatly increased if the limit observations are not available. Perhaps surprisingly, the Tobit estimates are sometimes more severely biased than are OLS estimates.</p>
</abstract>
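    <!-- An illustrative Python simulation of the setting described above (not the authors'
         design): a homoscedastic Tobit fit by maximum likelihood to data whose error variance
         differs between the two groups indexed by a dummy regressor. Group means, variances,
         and sample size are arbitrary assumptions.

           import numpy as np
           from scipy import optimize, stats

           rng = np.random.default_rng(2)
           n = 4000
           d = (np.arange(n) >= n // 2).astype(float)   # group dummy
           mu0, diff = 0.5, 1.0                         # latent-index group means
           sigma = np.where(d == 0, 0.5, 2.0)           # heteroscedastic errors
           y_star = mu0 + diff * d + sigma * rng.standard_normal(n)
           y = np.maximum(y_star, 0.0)                  # censoring at zero

           def negloglik(theta):
               b0, b1, log_s = theta
               s = np.exp(log_s)
               xb = b0 + b1 * d
               ll = np.where(y > 0,
                             stats.norm.logpdf(y, loc=xb, scale=s),
                             stats.norm.logcdf(-xb / s))   # Pr(y* <= 0)
               return -ll.sum()

           res = optimize.minimize(negloglik, x0=np.zeros(3), method="BFGS")
           print("true difference:", diff, "Tobit estimate:", round(res.x[1], 3))
    -->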
    <self-uri xlink:href="http://www.nber.org/papers/t0027.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Methods of Solution and Simulation for Dynamic Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0028</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Blanchard</surname>
          <given-names>Olivier J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Many methods have been proposed for the solution and simulation of medium or large size models under the assumption of rational expectations. The purpose of this paper is to present these methods, and to show how and where each can be applied. The methods fall into two groups. Methods in the first can be used to solve for perfect foresight paths in non-linear models. Methods in the second can be used in linear models, to solve either for paths or processes followed by endogenous variables. All the methods described here have been used in empirical applications and computer algorithms are available for most.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0028.pdf"></self-uri>
    <self-uri xlink:href="http://www.nber.org/papers/t0028.djvu"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Optimal and Time-Consistent Policies in Continuous Time Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0029</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this note the method of Hamiltonian dynamics is used to characterize the time-consistent solution to the optimal control problem in a deterministic continuous time rational expectations model. A linear quadratic example based on the work of Miller and Salmon is used for simplicity. To derive the time-consistent rational expectations (or subgame-perfect) solution we first characterize the optimal solution made familiar e.g. through the work of Calvo. The time-consistent solution is then obtained by modifying the optimal solution through the requirement that the co-state variables (shadow prices) of the non-predetermined variables be zero at each instant. Existing solution methods and computational algorithms can be used to obtain the behaviour of the system under optimal policy and under time-consistent policy.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0029.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Pitfalls in the use of Time as an Explanatory Variable in Regression</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0030</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Charles</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kang</surname>
          <given-names>Heejoon</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Regression of a trendless random walk on time produces R-squared values around .44 regardless of sample length. The residuals from the regression exhibit only about 14 percent as much variation as the original series even though the underlying process has no functional dependence on time. The autocorrelation structure of these "detrended" random walks is pseudo-cyclical and purely artifactual. Conventional tests for trend are strongly biased towards finding a trend when none is present, and this effect is only partially mitigated by Cochrane-Orcutt correction for autocorrelation. The results are extended to show that pairs of detrended random walks exhibit spurious correlation.</p>
</abstract>
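    <!-- An illustrative Python Monte Carlo of the first result quoted above (sample length,
         replication count, and seed are arbitrary):

           # Regress a driftless random walk on a linear time trend and record R-squared,
           # which clusters near 0.44 even though the process has no dependence on time.
           import numpy as np

           rng = np.random.default_rng(0)
           T, reps = 200, 2000
           t = np.arange(1, T + 1, dtype=float)
           X = np.column_stack([np.ones(T), t])       # constant and time trend
           r2 = np.empty(reps)
           for i in range(reps):
               y = np.cumsum(rng.standard_normal(T))  # driftless random walk
               beta, *_ = np.linalg.lstsq(X, y, rcond=None)
               resid = y - X @ beta
               r2[i] = 1.0 - resid.var() / y.var()
           print("mean R-squared:", round(r2.mean(), 3))
    -->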
    <self-uri xlink:href="http://www.nber.org/papers/t0030.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Deep Structural Excavation?  A Critique of Euler Equation Methods</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0031</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Garber</surname>
          <given-names>Peter M</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>King</surname>
          <given-names>Robert G</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1983</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Rational expectations theory instructs empirical researchers to uncover the values of 'deep' structural parameters of preferences and technology rather than the parameters of decision rules that confound these structural parameters with those of forecasting equations. This paper reevaluates one method of identifying and estimating such deep parameters, recently advanced by Hansen and Singleton, that uses intertemporal efficiency expressions (Euler equations) and basic properties of expectations to produce orthogonality conditions that permit parameter estimation and hypothesis testing. These methods promise the applied researcher substantial freedom, as it is apparently not necessary to specify the details of dynamic general equilibrium to study the behavior of a particular market participant. In this paper, we demonstrate that this freedom is illusory. That is, if there are shifts in agents' objectives which are not directly observed by the econometrician, then Euler equation methods encounter serious identification and estimation difficulties. For these difficulties to be overcome the econometrician must have prior knowledge concerning variables that are exogenous to the agent under study, as in conventional simultaneous equations theory.</p>
</abstract>
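    <!-- The canonical example of the Euler-equation estimators discussed above (Hansen and
         Singleton's consumption-based model with CRRA utility) makes the argument concrete.
         With discount factor \beta, risk-aversion parameter \gamma, gross return R_{t+1}, and
         any instrument z_t in the agent's information set, the orthogonality conditions are

           E\!\left[\left(\beta\,(c_{t+1}/c_t)^{-\gamma} R_{t+1} - 1\right) z_t\right] = 0 .

         An unobserved shift in the agent's objective would enter this expression as well,
         creating the identification and estimation difficulties the abstract describes.
    -->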
    <self-uri xlink:href="http://www.nber.org/papers/t0031.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Autocorrelations in Fixed-Effects Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0032</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Solon</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper discusses the estimation of serial correlation in fixed effects models for longitudinal data. Like time series data, longitudinal data often contain serially correlated error terms, but the autocorrelation estimators commonly used for time series, which are consistent as the length of the time series goes to infinity, are not consistent for a short time series as the size of the cross-section goes to infinity. This form of inconsistency is of particular concern because a short time series of a large cross-section is the typical case in longitudinal data. This paper extends Nickell's method of correcting for the inconsistency of autocorrelation estimators by generalizing to higher than first-order autocorrelations and to error processes other than first-order autoregressions. The paper also presents statistical tables that facilitate the identification and estimation of autocorrelation processes in both the generalized Nickell method and an alternative method due to MaCurdy. Finally, the paper uses Monte Carlo methods to explore the finite-sample properties of both methods.</p>
</abstract>
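    <!-- An illustrative Python simulation of the inconsistency described above (not Solon's
         tables; the AR(1) coefficient, panel dimensions, and seed are arbitrary):

           # With a short panel, the naive first-order autocorrelation of within-transformed
           # residuals stays biased even as the cross-section N grows large.
           import numpy as np

           rng = np.random.default_rng(3)
           N, T, rho = 20000, 5, 0.5
           e = np.empty((N, T))
           e[:, 0] = rng.standard_normal(N) / np.sqrt(1 - rho**2)
           for t in range(1, T):
               e[:, t] = rho * e[:, t - 1] + rng.standard_normal(N)

           w = e - e.mean(axis=1, keepdims=True)      # within (demeaned) errors
           est = (w[:, 1:] * w[:, :-1]).sum() / (w[:, :-1] ** 2).sum()
           print("true rho:", rho, "naive within estimate:", round(est, 3))
    -->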
    <self-uri xlink:href="http://www.nber.org/papers/t0032.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Consistent Estimation Using Data From More Than One Sample</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0033</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Dickens</surname>
          <given-names>William</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ross</surname>
          <given-names>Brian A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the estimation of linear models when group average data from more than one sample are used. Conditions under which OLS coefficient estimates are consistent are identified. The standard OLS covariance estimate is shown to be inconsistent and a consistent estimator is proposed. Finally, since the conditions under which OLS is consistent are quite restrictive, several estimators which are consistent in many cases where OLS is not are developed. The large sample distribution properties and an estimator for the asymptotic covariance matrix for the most general of these alternative estimators are also presented. One important application of these findings is to estimating compensating wage differences. Past authors, beginning with Thaler and Rosen (1976), have argued that finer classification schemes would reduce errors-in-variables bias. The analysis presented here suggests that the opposite is true if finer classification results in fewer observations per classification. This could explain why authors using the broader (industry) classification schemes have found larger compensating differences and suggests that these estimates may be closer to the true values.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0033.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Policy Evaluation and Design for Continuous Time Linear Rational Expectations Models:  Some Recent Developments</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0034</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper surveys some recent developments in policy evaluation and design in continuous time linear rational expectations models. Much recent work in macroeconomics and open economy macroeconomics fits into this category. First, the continuous time analogue of the discrete time solution method of Blanchard and Kahn is reviewed. Some problems associated with this solution method are then discussed, including non-uniqueness and zero roots. Optimal (but in general time-inconsistent) and time-consistent (but in general suboptimal) solutions are derived to the general linear-quadratic optimal control problem, based on work by Calvo, Driffill, Miller and Salmon and the author. A numerical example is solved, involving optimal and time-consistent anti-inflationary policy design in a contract model.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0034.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Misperceptions, Moral Hazard, and Incentives in Groups</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0035</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gaynor</surname>
          <given-names>Martin</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1987</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economics of Health</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Recent work has shown that, in the presence of moral hazard, balanced budget Nash equilibria in groups are not Pareto-optimal. This paper shows that when agents misperceive the effects of their actions on the joint outcome, there exists a set of sharing rules which balance the budget and lead to a Pareto-optimal Nash equilibrium.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0035.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Conditional Projection by Means of Kalman Filtering</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0036</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Clarida</surname>
          <given-names>Richard H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Coyle</surname>
          <given-names>Diane</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We establish that the recursive, state-space methods of Kalman filtering and smoothing can be used to implement the Doan, Litterman, and Sims (1983) approach to econometric forecast and policy evaluation. Compared with the methods outlined in Doan, Litterman, and Sims, the Kalman algorithms are more easily programmed and modified to incorporate different linear constraints, avoid cumbersome matrix inversions, and provide estimates of the full variance covariance matrix of the constrained projection errors which can be used directly, under standard normality assumptions, to test statistically the likelihood and internal consistency of the forecast under study.</p>
</abstract>
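    <!-- A minimal Python sketch of the generic Kalman filtering recursions the abstract builds
         on; the state-space matrices and the toy data are made-up placeholders, not the
         Doan-Litterman-Sims prior or the authors' system.

           # state:       a_t = F a_{t-1} + w_t,  w_t ~ N(0, Q)
           # observation: y_t = H a_t + v_t,      v_t ~ N(0, R)
           import numpy as np

           def kalman_filter(y, F, H, Q, R, a0, P0):
               a, P, out = a0, P0, []
               for yt in y:
                   a = F @ a                          # prediction step
                   P = F @ P @ F.T + Q
                   S = H @ P @ H.T + R                # update step
                   K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
                   a = a + K @ (yt - H @ a)
                   P = P - K @ H @ P
                   out.append(a.copy())
               return np.array(out)

           # toy example: noisy observations fed through a scalar AR(1) state equation
           rng = np.random.default_rng(4)
           F = np.array([[0.9]]); H = np.array([[1.0]])
           Q = np.array([[0.1]]); R = np.array([[0.5]])
           states = kalman_filter(rng.standard_normal((50, 1)),
                                  F, H, Q, R, np.zeros(1), np.eye(1))
           print(states[-1])
    -->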
    <self-uri xlink:href="http://www.nber.org/papers/t0036.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Errors in Variables in Panel Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0037</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Griliches</surname>
          <given-names>Zvi</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hausman</surname>
          <given-names>Jerry A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Panel data based on various longitudinal surveys have become ubiquitous in economics in recent years. Estimation using the analysis of covariance approach allows for control of various "individual effects" by estimation of the relevant relationships from the "within" dimension of the data. Quite often, however, the "within" results are unsatisfactory, "too low" and insignificant. Errors of measurement in the independent variables whose relative importance gets magnified in the within dimension are often blamed for this outcome. However, the standard errors-in-variables model has not been applied widely, partly because in the usual micro data context it requires extraneous information to identify the parameters of interest. In the panel data context a variety of errors-in-variables models may be identifiable and estimable without the use of external instruments. We develop this idea and illustrate its application in a relatively simple but not uninteresting case: the estimation of "labor demand" relationships, also known as the "short run increasing returns to scale" puzzle.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0037.pdf"></self-uri>
    <self-uri xlink:href="http://www.nber.org/papers/t0037.djvu"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Correcting for Truncation Bias Caused by a Latent Truncation Variable</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0038</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bloom</surname>
          <given-names>David E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Killingsworth</surname>
          <given-names>Mark R</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We discuss estimation of the model Y[sub i] = X[sub i]b[sub Y] + e[sub Yi]  and  T[sub i] = X[sub i]b[sub T] + e[sub Ti] when data on the continuous dependent variable Y and on the independent variables X are observed if the "truncation variable" T > 0 and when T is latent. This case is distinct from both (i) the "censored sample" case, in which Y data are available if T > 0, T is latent and X data are available for all observations, and (ii) the "observed truncation variable" case, in which both Y and X are observed if T > 0 and in which the actual value of T is observed whenever T > 0. We derive a maximum-likelihood procedure for estimating this model and discuss identification and estimation.</p>
</abstract>
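    <!-- For such truncated-selection models, the likelihood typically takes the following
         general form (a sketch only; it assumes bivariate normal errors with correlation rho,
         normalizes Var(e[sub T]) = 1, and may differ from the paper's exact parameterization):

           L = \prod_i \frac{ \frac{1}{\sigma_Y}\,\phi\!\left(\frac{Y_i - X_i b_Y}{\sigma_Y}\right)\,
                              \Phi\!\left(\frac{X_i b_T + \rho (Y_i - X_i b_Y)/\sigma_Y}{\sqrt{1-\rho^2}}\right) }
                            { \Phi(X_i b_T) } ,

         i.e. the density of Y_i times the probability that T_i > 0 given Y_i, divided by the
         unconditional probability that the observation enters the sample at all.
    -->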
    <self-uri xlink:href="http://www.nber.org/papers/t0038.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Data Problems in Econometrics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0039</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Griliches</surname>
          <given-names>Zvi</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This review of data problems in econometrics has been prepared for the Handbook of Econometrics (Vol. 3, Chap. 25, forthcoming). It starts with a review of the ambivalent relationship between data and econometricians, emphasizing the largely second-hand nature of economic data and the consequences that flow from the distance between econometricians as users of data and its producers. Section II describes the major types of economic data while Section III reviews some of the problems that arise in trying to use such data to estimate model parameters and to test economic theories. Section IV reviews the classical errors in variables model and its applicability to micro-data, especially panel data. Section V discusses missing data models and methods and illustrates them with an empirical example. Section VI focuses on the problem of estimating models in the absence of a full history, suggests a possible range of solutions, and provides again an empirical example: using a short panel to investigate the weights to be used in constructing a correct "capital" measure. The chapter closes (Section VII) with some final remarks on the existential problem of econometrics: life with imperfect data and inadequate theories.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0039.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Flexible Functional Forms and Global Curvature Conditions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0040</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diewert</surname>
          <given-names>W. Erwin</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wales</surname>
          <given-names>T.J.</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Empirically estimated flexible functional forms frequently fail to satisfy the appropriate theoretical curvature conditions. Lau, and Gallant and Golub, have worked out methods for imposing the appropriate curvature conditions locally, but those local techniques frequently fail to yield satisfactory results. We develop two methods for imposing curvature conditions globally in the context of cost function estimation. The first method adapts Lau's technique to a generalization of a functional form first proposed by McFadden. Using this Generalized McFadden functional form, it turns out that imposing the appropriate curvature conditions at one data point imposes the conditions globally. The second method adopts a technique used by McFadden and Barnett, which is based on the fact that a non-negative sum of concave functions will be concave. Our various suggested techniques are illustrated using the U.S. manufacturing data utilized by Berndt and Khaled.</p>
</abstract>
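    <!-- A hedged sketch of the device commonly used in this literature to obtain global
         curvature of this kind (not necessarily the authors' exact parameterization): write
         the symmetric matrix B of quadratic-term coefficients as

           B = -\,A A' ,  with A lower triangular,

         so that B is negative semidefinite for every admissible value of the parameters in A,
         and the estimated cost function is concave in prices globally rather than only at the
         point where the conditions are checked.
    -->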
    <self-uri xlink:href="http://www.nber.org/papers/t0040.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Rational Expectations Models with a Continuum of Convergent Solutions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0041</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mussa</surname>
          <given-names>Michael L</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper examines five examples of rational expectations models with a continuum of convergent solutions and demonstrates serious difficulties in the economic interpretation of these solutions. The five examples are (1) a model of optimal capital accumulation with a negative rate of time preference; (2) Taylor's (1977) linear rational expectations model of macroeconomic equilibrium; (3) Calvo's (1984) model of contract setting and price dynamics; (4) Obstfeld's (1984) equilibrium model of monetary dynamics with individual optimizing agents; and (5) Calvo's (1978) life-cycle model of savings and asset valuation. In every case, when these models yield a continuum of convergent infinite horizon solutions, these solutions fail to exhibit economically appropriate, forward looking dependence of the endogenous variables on the paths of the exogenous forcing variables, a difficulty that does not arise under the circumstances where these models yield unique convergent infinite horizon solutions. Further, the three models that have natural finite horizon versions either lack finite horizon solutions or have solutions that do not converge to any of the infinite horizon solutions. Again, this difficulty arises only under the circumstances where these models have a continuum of infinite horizon solutions.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0041.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>New Econometric Techniques for Macroeconomic Policy Evaluation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0042</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Taylor</surname>
          <given-names>John B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1984</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper is an expository review of recently developed techniques that are designed to evaluate macroeconomic policy using econometric models. The exposition focuses on dynamic stochastic models with rational expectations and with discrete time. The method of undetermined coefficients is used to calculate the effects of anticipated, unanticipated, permanent, and temporary policy shocks; the same method is also used to calculate the effect of alternative policy rules on the stochastic equilibrium. This method provides a convenient unifying framework for comparing alternative solution methods for models with rational expectations. Estimation, testing and identification techniques are reviewed, as well as recent methods for solving large nonlinear models.</p>
</abstract>
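    <!-- A minimal illustration of the method of undetermined coefficients mentioned above
         (a textbook example, not a model from the paper): consider

           y_t = a\,E_t y_{t+1} + m_t , \qquad m_t = \rho\, m_{t-1} + \varepsilon_t , \quad |a\rho| < 1 .

         Guessing y_t = \phi\, m_t gives E_t y_{t+1} = \phi\rho\, m_t, so matching coefficients
         yields \phi = 1/(1 - a\rho); the same guess-and-match logic extends to the policy
         experiments listed in the abstract.
    -->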
    <self-uri xlink:href="http://www.nber.org/papers/t0042.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Error Components in Grouped Data:  Why It's Never Worth Weighting</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0043</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Dickens</surname>
          <given-names>William</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>When estimating linear models using grouped data, researchers typically weight each observation by the group size. Under the assumption that the regression errors for the underlying micro data have expected values of zero, are independent and are homoscedastic, this procedure produces best linear unbiased estimates. This note argues that for most applications in economics the assumption that errors are independent within groups is inappropriate. Since grouping is commonly done on the basis of common observed characteristics, it is inappropriate to assume that there are no unobserved characteristics in common. If group members have unobserved characteristics in common, individual errors will be correlated. If errors are correlated within groups and group sizes are large, then heteroscedasticity may be relatively unimportant and weighting by group size may exacerbate heteroscedasticity rather than eliminate it. Two examples presented here suggest that this may be the effect of weighting in most non-experimental applications. In many situations unweighted ordinary least squares may be a preferred alternative. For those cases where it is not, a maximum likelihood and an asymptotically efficient two-step generalized least squares estimator are proposed. An extension of the two-step estimator for grouped binary data is also presented.</p>
</abstract>
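    <!-- The variance argument above can be stated in one line. With a common group component
         in the errors (notation assumed here), say \varepsilon_{ig} = u_g + e_{ig}, the error
         attached to a group mean has

           \operatorname{Var}(\bar{\varepsilon}_g) = \sigma_u^2 + \sigma_e^2 / n_g ,

         which approaches \sigma_u^2 rather than zero as the group size n_g grows; weighting by
         n_g presumes the variance is \sigma_e^2 / n_g and can therefore worsen, rather than
         remove, the heteroscedasticity.
    -->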
    <self-uri xlink:href="http://www.nber.org/papers/t0043.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asset Pricing Theories</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0044</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rothschild</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This article compares two leading models of asset pricing: the capital asset pricing model (CAPM) and the arbitrage pricing theory (APT). I argue that while the APT is compatible with the data available for testing theories of asset pricing, the CAPM is not. In reaching this conclusion, emphasis is placed on the distinction between the unconditional (relatively incomplete) information which econometricians must use to estimate asset pricing models and the conditional (complete) information which investors use in making the portfolio decisions which determine asset prices. Empirical work to date suggests that it is unlikely that the APT will produce a simple equation which explains differences in risk premia well with a few parameters. If the CAPM were correct, it would provide such an equation.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0044.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Testing the Random Walk Hypothesis:  Power versus Frequency of Observation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0045</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shiller</surname>
          <given-names>Robert J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Perron</surname>
          <given-names>Pierre</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Power functions of tests of the random walk hypothesis versus stationary first order autoregressive alternatives are tabulated for samples of fixed span but various frequencies of observation.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0045.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Is There Chronic Excess Supply of Labor?  Designing a Statistical Test</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0046</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Quandt</surname>
          <given-names>Richard E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rosen</surname>
          <given-names>Harvey S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we present and implement a statistical test of the hypothesis that the labor market has chronic excess supply. The procedure is to estimate a disequilibrium labor market model, and construct a test statistic based on the unconditional probability that there is excess supply each period. We find that the data reject the hypothesis of chronic excess supply. Hence, one cannot assume that all observations lie on the demand curve.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0046.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Technical Progress in U.S. Manufacturing Sectors, 1948-1973:  An Application of Lie Groups</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0047</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sato</surname>
          <given-names>Ryuzo</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mitchell</surname>
          <given-names>Thomas M</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>The purpose of this paper is to apply the theory of Lie transformation groups as developed by the first author, and derive a testable model of production and technical change. The econometric model is then applied to data derived by F. Gollop and D. Jorgenson for U.S. manufacturing industries for the years 1948-1973. This is the first empirical work in economics to incorporate the theory of Lie transformation groups, so the results are not only new but also interesting. Using Zellner's seemingly unrelated regression equations method of generalized least squares produces an estimate of a model for the 21-industry system which has a high degree of explanatory power: the system's weighted R-squared is 0.9675 and all coefficients are statistically significant at the 5% level (on the basis of t-tests). While the "form" of technical change in a given industry of the model is probably new, it is easily characterized within the Lie group structure and the system estimate is statistically significant.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0047.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Implementing Causality Tests with Panel Data, with an Example from Local Public Finance</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0048</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Holtz-Eakin</surname>
          <given-names>Douglas</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Newey</surname>
          <given-names>Whitney</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rosen</surname>
          <given-names>Harvey S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers estimation and testing of vector autoregression coefficients in panel data, and applies the techniques to analyze the dynamic properties of revenues, expenditures, and grants in a sample of United States municipalities. The model allows for nonstationary individual effects, and is estimated by applying instrumental variables to the quasi-differenced autoregressive equations. Particular attention is paid to specifying lag lengths and forming convenient test statistics. The empirical results suggest that intertemporal linkages are important to the understanding of state and local behavior. Such linkages are ignored in conventional cross sectional regressions. Also, we present evidence that past grant revenues help to predict current expenditures, but that past expenditures do not help to predict current revenues.</p>
</abstract>
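    <!-- A sketch of the quasi-differencing step mentioned above (notation and lag structure
         assumed here; the paper's specification with exogenous variables is richer). With a
         time-varying loading \psi_t on the individual effect f_i,

           y_{it} = \alpha_t + \sum_{l} a_{lt}\, y_{i,t-l} + \psi_t f_i + u_{it} ,

         multiplying the equation dated t-1 by r_t = \psi_t/\psi_{t-1} and subtracting removes
         f_i, and suitably lagged values of y can then serve as instruments for the transformed
         equation.
    -->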
    <self-uri xlink:href="http://www.nber.org/papers/t0048.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Alternative Nonnested Specification Tests of Time Series Investment Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0049</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bernanke</surname>
          <given-names>Ben S</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bohn</surname>
          <given-names>Henning</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Reiss</surname>
          <given-names>Peter C</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops and compares nonnested hypothesis tests for linear regression models with first-order serially correlated errors. It extends the nonnested testing procedures of Pesaran, Fisher and McAleer, and Davidson and MacKinnon, and compares their performance on four conventional models of aggregate investment demand using quarterly U.S. investment data from 1951:I to 1983:IV. The data and the nonnested hypothesis tests initially indicate that no model is correctly specified, and that the tests are occasionally intransitive in their assessments. Before rejecting these conventional models of investment demand, we go on to investigate the small sample properties of these different nonnested test procedures through a series of Monte Carlo studies. These investigations demonstrate that when there is significant serial correlation, there are systematic finite sample biases in the nominal size and power of these test statistics. The direction of the bias is toward rejection of the null model, although it varies considerably by the type of test and estimation technique. After revising our critical levels for this finite sample bias, we conclude that the accelerator model of equipment investment cannot be rejected by any of the other alternatives.</p>
</abstract>
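    <!-- One concrete member of the family of tests extended above is the Davidson-MacKinnon
         J test (stated here in its basic form, before the serial-correlation corrections the
         paper studies). To test H_0: y = X\beta + u against the nonnested alternative
         H_1: y = Z\gamma + v, estimate the augmented regression

           y = X\beta + \alpha\, \hat{y}_Z + u , \qquad \hat{y}_Z = Z\hat{\gamma} ,

         and reject H_0 when the t statistic on \alpha is significantly different from zero.
    -->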
    <self-uri xlink:href="http://www.nber.org/papers/t0049.pdf"></self-uri>
    <self-uri xlink:href="http://www.nber.org/papers/t0049.djvu"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimation and Hypothesis Testing with Restricted Spectral Density Matrices:  An Application to Uncovered Interest Parity</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0050</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Quah</surname>
          <given-names>Danny</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ito</surname>
          <given-names>Takatoshi</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper explores an econometric estimation technique for dynamic linear models. The method combines the analytics of moving average solutions to dynamic models together with computational advantages of the Whittle likelihood. A hypothesis of interest to international and financial economists is represented in the form of cross-equation restrictions and tested under the technique. This paper employs data on Japanese yen- and U.S. dollar-denominated interest rates and yen/dollar exchange rates to examine the hypothesis of uncovered interest parity under rational expectations.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0050.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Do We Reject Too Often?  Small Sample Properties of Tests of Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0051</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mankiw</surname>
          <given-names>N. Gregory</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shapiro</surname>
          <given-names>Matthew D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We examine the small sample properties of tests of rational expectations models. We show using Monte Carlo experiments that the asymptotic distribution of test statistics can be extremely misleading when the time series examined are highly autoregressive. In particular, a practitioner relying on the asymptotic distribution will reject true models too frequently. We also show that this problem is especially severe with detrended data. We present correct small sample critical values for our canonical problem.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0051.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Fiscal Theory of Hyperdeflations?  Some Surprising Monetarist Arithmetic</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0052</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The note mines an unsuspected lode in the Sargent-Wallace "Unpleasant Monetarist Arithmetic" deposit. While that model is shown to be incapable of generating hyperinflations as a result of large monetized public sector deficits, it can generate hyperdeflations, or perhaps more accurately, the first stages of an unsustainable process of hyperdeflation. The drawing of policy conclusions is left as an exercise for the reader.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0052.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Microeconomic Approaches to the Theory of International Comparisons</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0053</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diewert</surname>
          <given-names>W. Erwin</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1985</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper considers alternative approaches to providing consistent multilateral indexes of real output, real input, real consumption or productivity across many regions, countries or industries at one point in time. The recommended approaches are based on aggregating up various bilateral indexes which in turn are based on the economic theory of index numbers, either in the producer or consumer theory context. In order to distinguish between various competing multilateral approaches, an axiomatic or test approach to multilateral comparisons is developed. This test approach indicates that the Geary-Khamis and Van Yzeren approaches to multilateral output comparisons are dominated by the (new) own share and the Elteto-Koves-Szulc methods.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0053.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Full Versus Limited Information Estimation of a Rational Expectations Model:  Some Numerical Comparisons</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0054</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper compares numerically the asymptotic distributions of parameter estimates and test statistics associated with two estimation techniques: (a) a limited information one, which uses instrumental variables to estimate a single equation (Hansen and Singleton (1982)), and (b) a full information one, which uses a procedure asymptotically equivalent to maximum likelihood to simultaneously estimate multiple equations (Hansen and Sargent (1980)). The paper compares the two with respect to both (1) asymptotic efficiency under the null hypothesis of no misspecification, and (2) asymptotic bias and power in the presence of certain local alternatives. It is found that: (1) full information standard errors are only moderately smaller than limited information standard errors; (2) when the model is misspecified, full information tests tend to be more powerful, and full information parameter estimates tend to be more biased. This suggests that, at least in the model considered here, the gains from the use of the less robust and computationally more complex full information technique are not particularly large.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0054.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0055</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Newey</surname>
          <given-names>Whitney</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper describes a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction. It also establishes consistency of the estimated covariance matrix under fairly general conditions.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0055.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Sequential Bargaining Under Asymmetric Information</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0056</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Grossman</surname>
          <given-names>Sanford J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Perry</surname>
          <given-names>Motty</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>We analyze an infinite stage, alternating offer bargaining game in which the buyer knows the gains from trade but the seller does not. Under weak assumptions the game has a unique candidate Perfect Sequential Equilibrium, and it can be solved by backward induction. Equilibrium involves the seller making an offer which is accepted by buyers with high gains from trade, while buyers with medium gains reject and make a counteroffer which the seller accepts. Buyers with low gains make an unacceptable offer, and then the whole process repeats itself. Numerical simulations demonstrate the effects of uncertainty on the length of bargaining.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0056.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Testing for Individual Effects in Dynamic Models Using Panel Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0057</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Holtz-Eakin</surname>
          <given-names>Douglas</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This note presents a simple, linear test for individual effects in dynamic models using panel data, building upon the techniques of Holtz-Eakin, Newey, and Rosen (HNR) [1985] for estimating vector autoregressions using panel data. While implementing estimators which are consistent in the presence of individual effects is straightforward, there is no guarantee that this form of heterogeneity is an important feature of the data. Moreover, there are advantages to avoiding an individual effects specification. Thus, it is useful to have a test for the existence of individual effects. The test focuses on sample moment conditions implied by the presence of individual effects and is particularly suited for dynamic models using panel data. The calculations follow directly from linear instrumental variable techniques which are computationally straightforward. Moreover, the test statistic follows directly from the estimation of autoregressive models.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0057.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bias in Longitudinal Estimation of Wage Gaps</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0058</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Solon</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Cross-sectional regression analyses of wage gaps may be biased by omission of unobserved worker characteristics. Recent studies therefore have used longitudinal data to "difference out" the effects of such variables. This paper, however, shows that self-selection of job changers may cause longitudinal estimation of wage gaps to be inconsistent.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0058.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Maximum Likelihood Estimation of Generalized Ito Processes with Discretely Sampled Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0059</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lo</surname>
          <given-names>Andrew W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper, we consider the parametric estimation problem for continuous time stochastic processes described by general first-order nonlinear stochastic differential equations of the Ito type. We characterize the likelihood function of a discretely-sampled set of observations as the solution to a functional partial differential equation. The consistency and asymptotic normality of the maximum likelihood estimators are explored, and several illustrative examples are provided.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0059.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Temporal Aggregation and Structural Inference in Macroeconomics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0060</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Christiano</surname>
          <given-names>Lawrence</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Eichenbaum</surname>
          <given-names>Martin S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper examines the quantitative importance of temporal aggregation bias in distorting parameter estimates and hypothesis tests. Our strategy is to consider two empirical examples in which temporal aggregation bias has the potential to account for results which are widely viewed as being anomalous from the perspective of particular economic models. Our first example investigates the possibility that temporal aggregation bias can lead to spurious Granger causality relationships. The quantitative importance of this possibility is examined in the context of Granger causal relations between the growth rates of money and various measures of aggregate output. Our second example investigates the possibility that temporal aggregation bias can account for the slow speeds of adjustment typically obtained with stock adjustment models. The quantitative importance of this possibility is examined in the context of a particular class of continuous and discrete time equilibrium models of inventories and sales. The different models are compared on the basis of the behavioral implications of the estimated values of the structural parameters which we obtain and their overall statistical performance. The empirical results from both examples provide support for the view that temporal aggregation bias can be quantitatively important in the sense of significantly distorting inference.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0060.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Granger-Causality and Policy Ineffectiveness:  A Rejoinder</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0061</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Buiter</surname>
          <given-names>Willem H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1986</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In an earlier paper, "Granger-Causality and Policy Effectiveness," Economica [1984], I showed that for a policy instrument x to Granger-cause some target variable y is not necessary for x to be useful in controlling y. (The argument that it is not sufficient was already familiar, e.g. from the work of Sargent.) Using a linear rational expectations model I showed that x would fail to Granger-cause y (while y did, in some cases, Granger-cause x) if x were set by a variety of optimal, time-consistent or ad hoc policy feedback rules. Yet in all the examples, x was an effective policy instrument. In response to some comments by Professor Granger, I now show that my earlier results are unaffected when the following three concessions to "realism" are made: 1. Controllers do not have perfect control of the instruments (this was already allowed for in my earlier paper). 2. Governments may use a different information set to determine instruments than that used by the public. 3. The controller may not have perfect specifications and estimates of models of the economy. The analysis confirms that Granger-causality tests are uninformative about the presence, absence, degree or kind of policy (in)effectiveness.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0061.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Consistent Covariance Matrix Estimation with Cross-Sectional Dependence and Heteroskedasticity in Cross-Sectional Financial Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0062</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Froot</surname>
          <given-names>Kenneth A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1990</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper provides a simple method to account for heteroskedasticity and cross-sectional dependence in samples with large cross sections and relatively few time series observations. The estimators we derive are motivated by cross-sectional regression studies in finance and accounting. Simulation evidence suggests that the estimators are dependable in small samples and may be useful when generalized least squares is infeasible, unreliable, or computationally too burdensome. The approach allows a relatively small number of time series observations to yield a rich characterization of cross-sectional correlations. We also consider efficiency issues and show that in principle asymptotic efficiency can be improved using a technique due to Cragg (1983).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0062.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Spurious Trend and Cycle in the State Space Decomposition of a Time Series with a Unit Root</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0063</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Charles</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1987</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Recent research has proposed the state space (SS) framework for decomposition of GNP and other economic time series into trend and cycle components, using the Kalman filter. This paper reviews the empirical evidence and suggests that the resulting decomposition may be spurious, just as detrending by linear regression is known to generate spurious trends and cycles in nonstationary time series. A Monte Carlo experiment confirms that when data is generated by a random walk, the SS model tends to indicate (incorrectly) that the series consists of cyclical variations around a smooth trend. The improvement in fit over the true model will typically appear to be statistically significant. These results suggest that caution should be exercised in drawing inferences about the nature of economic processes from the SS decomposition.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0063.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Exchange-Rate Dynamics and Optimal Asset Accumulation Revisited</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0064</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Obstfeld</surname>
          <given-names>Maurice</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>It has recently been observed that when equations of motion for state variables are nonautonomous, optimal control problems involving Uzawa's endogenous rate of time preference cannot be solved using the change-of-variables method common in the literature. Instead, the problem must be solved by explicitly adding an additional state variable that measures the motion of time preference over time. This note reassesses earlier work of my own on exchange rate dynamics, which was based on a change-of-variables solution procedure. When the correct two-state-variable solution procedure is used, the model's qualitative predictions are unchanged. In addition, the analysis yields an intuitive interpretation of the extra co-state variable that arises in solving the individual's maximization problem.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0064.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asset Pricing with a Factor Arch Covariance Structure:  Empirical Estimates for Treasury Bills</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0065</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Engle</surname>
          <given-names>Robert F</given-names>
          <suffix>III</suffix>
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ng</surname>
          <given-names>Victor</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rothschild</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Asset pricing relations are developed for a vector of assets with a time varying covariance structure. Assuming that the eigenvectors are constant but the eigenvalues are changing, both the Capital Asset Pricing Model and the Arbitrage Pricing Theory suggest the same testable implication: the time varying part of risk premia is proportional to the time varying eigenvalues. Specifying the eigenvalues as general ARCH processes, the model is a multivariate Factor ARCH model. Univariate portfolios corresponding to the eigenvectors will have (time varying) risk premia proportional to their own (time varying) variance and can be estimated using the GARCH-M model. This structure is applied to monthly treasury bills from two to twelve months maturity and the value weighted NYSE returns index. The bills appear to have a single factor in the variance process and this factor is influenced or "caused in variance" by the stock returns.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0065.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Size and Power of the Variance Ratio Test in Finite Samples:  A Monte Carlo Investigation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0066</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lo</surname>
          <given-names>Andrew W</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>MacKinlay</surname>
          <given-names>A. Craig</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We examine the finite sample properties of the variance ratio test of the random walk hypothesis via Monte Carlo simulations under two null and three alternative hypotheses. These results are compared to the performance of the Dickey-Fuller t and the Box-Pierce Q statistics. Under the null hypothesis of a random walk with independent and identically distributed Gaussian increments, the empirical sizes of all three tests are comparable. Under a heteroscedastic random walk null, the variance ratio test is more reliable than either the Dickey-Fuller or Box-Pierce tests. We compute the power of these three tests against three alternatives of recent empirical interest: a stationary AR(1), the sum of this AR(1) and a random walk, and an integrated AR(1). By choosing the sampling frequency appropriately, the variance ratio test is shown to be as powerful as the Dickey-Fuller and Box-Pierce tests against the stationary alternative, and is more powerful than either of the two tests against the two unit-root alternatives.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0066.pdf"></self-uri>
    <self-uri xlink:href="http://www.nber.org/papers/t0066.djvu"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Dividend Ratio Model and Small Sample Bias:  A Monte Carlo Study</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0067</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Campbell</surname>
          <given-names>John Y</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shiller</surname>
          <given-names>Robert J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Small sample properties of parameter estimates and test statistics in the vector autoregressive dividend ratio model (Campbell and Shiller [1988a, b]) are derived by stochastic simulation. The data generating processes are cointegrated vector autoregressive models, estimated subject to restrictions implied by the dividend ratio model, or altered to show a unit root.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0067.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Some Further Results on the Exact Small Sample Properties of the Instrumental Variable Estimator</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0068</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Charles</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Startz</surname>
          <given-names>Richard</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>New results on the exact small sample distribution of the instrumental variable estimator are presented by studying an important special case. The exact closed forms for the probability density and cumulative distribution functions are given. There are a number of surprising findings. The small sample distribution is bimodal, with a point of zero probability mass. As the asymptotic variance grows large, the true distribution becomes concentrated around this point of zero mass. The central tendency of the estimator may be closer to the biased least squares estimator than it is to the true parameter value. The first and second moments of the IV estimator are both infinite. In the case in which least squares is biased upwards, and most of the mass of the IV estimator lies to the right of the true parameter, the mean of the IV estimator is infinitely negative. The difference between the true distribution and the normal asymptotic approximation depends on the ratio of the asymptotic variance to a parameter related to the correlation between the regressor and the regression error. In particular, when the instrument is poorly correlated with the regressor, the asymptotic approximation to the distribution of the instrumental variable estimator will not be very accurate.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0068.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Distribution of the Instrumental Variables Estimator and Its t-Ratio When the Instrument is a Poor One</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0069</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Charles</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Startz</surname>
          <given-names>Richard</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>When the instrumental variable is a poor one, in the sense of being weakly correlated with the variable it proxies, the small sample distribution of the IV estimator is concentrated around a value that is inversely related to the feedback in the system and which is often further from the true value than is the plim of OLS. The sample variance of residuals similarly becomes concentrated around a value which reflects feedback and not the variance of the disturbance. The distribution of the t-ratio reflects both of these effects, stronger feedback producing larger t-ratios. Thus, in situations where OLS is badly biased, a poor instrument will lead to spurious inferences under IV estimation with high probability, and generally perform worse than OLS.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0069.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Time-Varying-Parameter Model as an Alternative to ARCH for Modeling Changing Conditional Variance:  The Case of Lucas Hypothesis</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0070</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Charles</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kim</surname>
          <given-names>Chang-Jin</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The main econometric issue in testing the Lucas hypothesis (1973) in a time series context is the estimation of the variance conditional on past information. The ARCH model, proposed by Engle (1982), is one way of specifying the conditional variance. But the assumption underlying the ARCH specification is ad hoc. The existence of ARCH can sometimes be interpreted as evidence of misspecification. Under the assumption that a monetary policy regime is continuously changing, a time-varying-parameter (TVP) model is proposed for the monetary growth function. Based on Kalman filtering estimation of recursive forecast errors and their conditional variances, the Lucas hypothesis is tested for the U.S. economy (1964.1 - 1985.4) using monetary growth as an aggregate demand variable. The Lucas hypothesis is rejected in favor of Friedman's (1977) hypothesis: the conditional variance of monetary growth affects real output directly, not through the coefficients on the forecast error term in the Lucas-type output equation.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0070.pdf"></self-uri>
    <self-uri xlink:href="http://www.nber.org/papers/t0070.djvu"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Smart Money, Noise Trading and Stock Price Behavior</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0071</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Campbell</surname>
          <given-names>John Y</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kyle</surname>
          <given-names>Albert S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper derives and estimates an equilibrium model of stock price behavior in which exogenous "noise traders" interact with risk-averse "smart money" investors. The model assumes that changes in exponentially detrended dividends and prices are normally distributed, and that smart money investors have constant absolute risk aversion. In equilibrium, the stock price is the present value of expected dividends, discounted at the riskless interest rate, less a constant risk premium, plus a term which is due to noise trading. The model expresses both stock prices and dividends as sums of unobserved components in continuous time. The model is able to explain the volatility and predictability of U.S. stock returns in the period 1871-1986 in either of two ways. Either the discount rate is 4% or below, and the constant risk premium is large; or the discount rate is 5% or above, and noise trading, correlated with fundamentals, increases the volatility of stock prices. The data are not well able to distinguish between these explanations.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0071.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The R&amp;D Master File Documentation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0072</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hall</surname>
          <given-names>Bronwyn H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cummins</surname>
          <given-names>Clint</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Laderman</surname>
          <given-names>Elizabeth S</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mundy</surname>
          <given-names>Joy</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This document describes the panel of publicly traded United States manufacturing firms which was created and updated at the National Bureau of Economic Research from 1978 through 1988 within the Productivity Program. The panel consists of about 2600 large manufacturing firms with three to twenty-seven years of data each; the period covered by the sampling frame was 1976 through 1985, with data back to 1959 where possible. There are approximately 70 variables for each firm-year of data, consisting of income statement and balance sheet variables and the corresponding common stock data. The technological data available for these firms consist of R&amp;D expenditures and patents granted, both by date of application and by granting date. The patents data are available only through about 1981, due to the limitations of our sources and budget. The firms on the file are identified both by their CUSIP number and by name, making it feasible to match this data to other sources.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0072.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Tests For Unit Roots:  A Monte Carlo Investigation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0073</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Schwert</surname>
          <given-names>G. William</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1988</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Recent work by Said and Dickey (1984, 1985), Phillips (1987), and Phillips and Perron (1988) examines tests for unit roots in the autoregressive part of mixed autoregressive-integrated-moving average (ARIMA) models (tests for stationarity). Monte Carlo experiments show that these unit root tests have different finite sample distributions than the unit root tests developed by Fuller (1976) and Dickey and Fuller (1979, 1981) for autoregressive processes. In particular, the tests developed by Phillips (1987) and Phillips and Perron (1988) seem more sensitive to model misspecification than the high order autoregressive approximation suggested by Said and Dickey (1984).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0073.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Endogenous Output in an Aggregate Model of the Labor Market</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0074</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Quandt</surname>
          <given-names>Richard E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rosen</surname>
          <given-names>Harvey S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A common feature to most aggregative studies of the labor market is a marginal productivity expression in which the quantity of labor appears on the left hand side of the equation, and the right hand side includes the real wage and output. A number of researchers have cautioned that if the output variable is treated as exogenous, serious econometric difficulties may result. However, the assumption that output is exogenous has not been tested. In this paper, we estimate an equilibrium model of the labor market, and use it to test the assumption of output exogeneity. We find that the assumption that output is exogenous cannot be rejected by the data.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0074.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Delivery of Market Timing Services:  Newsletters Versus Market Timing Funds</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0075</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kane</surname>
          <given-names>Alex</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper examines the dissemination of market timing information (signals on the overall performance of risky assets relative to the risk free rate). We consider two delivery systems. Under the newsletter delivery system, market timing information is disseminated solely through newsletters. Under the fund delivery system, timers set up timing funds in which investors can invest. In the absence of market imperfections we find that both systems produce the same result. With restrictions on borrowing or with other nonlinearities we find the newsletter system to be superior. This is one possible explanation for the plethora of market timing newsletters and the paucity of market timing funds.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0075.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Kolmogorov-Smirnov Tests For Distribution Function Similarity With Applications To Portfolios of Common Stock</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0076</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Meyer</surname>
          <given-names>Jack</given-names>
          
        </name>
      </contrib>
    
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>If the elements of the choice set in a decision model involving randomness are not arbitrary, but restricted appropriately, an expected utility ordering of them can be represented by a mean standard deviation ranking function. These restrictions can apply to the form of, or can specify relationships among, the distribution functions. A particularly useful restriction is one which requires that elements in the choice set, when normalized to have a zero mean and unit variance, be identically distributed. No restriction is placed on the form of any individual distribution function. This research empirically tests for this and other useful restrictions on the relationships among the elements of a set of random variables. Observations from the random variables are used to test whether or not they have distribution functions which are appropriately related to one another. The tests are applied to rate of return data for portfolios of common stock. The tests indicate that one cannot reject the hypothesis that the distribution functions of these portfolios are sufficiently similar to imply that the efficient set of portfolios for any risk averse expected utility maximizer is contained in the mean-standard deviation efficient set.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0076.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Super Contact and Related Optimality Conditions: A Supplement to Avinash Dixit's "A Simplified Exposition of Some Results Concerning Regulated Brownian Motion"</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0077</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Dumas</surname>
          <given-names>Bernard</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Dixit (1988) observed that the mathematical construct of "regulated Brownian motion" developed by Harrison (1985) had proved useful in economic models of decision-making under uncertainty. In a recent note he provided a number of methods for calculating expected discounted payoff functions based on such processes. The purpose of this supplement is twofold: (i) to determine to what extent the first-order conditions reached by Dixit (his equations (12) and (13) or (12') and (13')) are simply a consequence of the definition of the expected discounted payoff, or to what extent they can be interpreted as first-order conditions of some optimization problem, as has been suggested in Dumas (1988); and (ii) to extend Dixit's treatment to the case where there are fixed costs of regulation, as in Grossman-Laroque (1987).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0077.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Full Information Estimation and Stochastic Simulation of Models with Rational Expectations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0078</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fair</surname>
          <given-names>Ray C</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Taylor</surname>
          <given-names>John B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A computationally feasible method for the full information maximum likelihood estimation of models with rational expectations is described in this paper. The stochastic simulation of such models is also described. The methods discussed in this paper should open the way for many more tests of the rational expectations hypothesis within macroeconometric models.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0078.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimation of Polynomial Distributed Lags and Leads with End Point Constraints</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0079</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Andrews</surname>
          <given-names>Donald</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fair</surname>
          <given-names>Ray C</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the use of the polynomial distributed lag (PDL) technique when the lag length is estimated rather than fixed. We focus on the case where the degree of the polynomial is fixed, the polynomial is constrained to be zero at a certain lag length q, and q is estimated along with the other parameters. We extend the traditional PDL setup by allowing q to be real-valued rather than integer-valued, and we derive the asymptotic covariance matrix of all the parameter estimates, including the estimate of q. The paper also considers the estimation of distributed leads rather than lags, a case that can arise if expectations are assumed to be rational.</p>
</abstract>
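    <!-- Minimal sketch under simplifying assumptions: a polynomial distributed lag of fixed
         degree constrained to equal zero at a FIXED integer lag length q, estimated by OLS
         on transformed regressors. The paper's contribution (treating q as real-valued and
         estimating it jointly, with the associated covariance matrix) is not reproduced;
         all data and parameter values are hypothetical. Python with numpy is assumed.

         import numpy as np

         rng = np.random.default_rng(1)
         T, q, degree = 200, 8, 3
         x = rng.normal(size=T + q)
         true_beta = np.array([(q - i) * (1 + 0.2 * i) for i in range(q)])  # lag weights, zero at i = q
         y = sum(true_beta[i] * x[q - i: T + q - i] for i in range(q)) + rng.normal(scale=0.5, size=T)

         # Basis beta_i = (q - i) * sum_k c_k * i**k enforces the end point constraint beta_q = 0.
         lags = np.column_stack([x[q - i: T + q - i] for i in range(q)])
         basis = np.column_stack([(q - np.arange(q)) * np.arange(q) ** k for k in range(degree)])
         Z = lags @ basis                                   # transformed regressors
         c_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)      # OLS on the reduced parameter vector
         beta_hat = basis @ c_hat                           # implied lag coefficients
         print(np.round(beta_hat, 2))
    -->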
    <self-uri xlink:href="http://www.nber.org/papers/t0079.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Simple, Consistent Estimator for Disturbance Components in Financial Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0080</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Levinsohn</surname>
          <given-names>James A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>MacKie-Mason</surname>
          <given-names>Jeffrey K</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Many recent papers have estimated components of the disturbance term in the "market model" of equity returns. In particular, several studies of regulatory changes and other policy events have decomposed the event effects in order to allow for heterogeneity across firms. In this paper we demonstrate that the econometric method applied in some papers yields biased and inconsistent estimates of the model parameters. We demonstrate the consistency of a simple and easily-implemented alternative method.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0080.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Influence of Probability on Risky Choice: A Parametric Examination</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0081</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lattimore</surname>
          <given-names>Pamela K</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Baker</surname>
          <given-names>Joanna R</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Witte</surname>
          <given-names>Ann Dryden</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The appeal of expected utility theory (EUT) as a basis for a descriptive model of risky decision making has diminished as a result of empirical evidence which suggests that individuals do not behave in a manner consistent with the prescriptive tenets of EUT. In this paper, we explore the influence of probability on risky choice by proposing and estimating a parametric model of risky decision making. Our results suggest that models which provide for probability transformations are most appropriate for the majority of subjects. Further, we find that the transformation differs for most subjects depending upon whether the risky outcomes are gains or losses. Most subjects are considerably less sensitive to changes in mid-range probability than is proposed by the expected utility model, and risk-seeking behavior over "long-shot" odds is common.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0081.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Positive Economics of Methodology</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0082</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kahn</surname>
          <given-names>James</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Landsburg</surname>
          <given-names>Steve</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stockman</surname>
          <given-names>Alan C</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Does an observation constitute stronger evidence for a theory if it was made after rather than before the theory was formulated, when it may have influenced the theory's construction? Philosophers have discussed this question (of "novel confirmation") but have lacked a formal model of scientific research and incentives. The question applies to all types of research. One example in economics involves evaluating models constructed on the basis of VARs (where a researcher looks at evidence and then constructs a theory) versus structural models with formal econometric tests (where a model is constructed before some of the evidence on it is obtained). This paper develops a simple model of scientific research. It discusses the issues that affect the answer to this question about the timing of theory construction and observation or experimentation. We also address issues of social versus private incentives in the choice of research strategies, and of socially optimal rewards for researchers in the presence of information and incentive constraints.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0082.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Simple MLE of Cointegrating Vectors in Higher Order Integrated Systems</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0083</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1989</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>An MLE of the unknown parameters of cointegrating vectors is presented for systems in which some variables exhibit higher orders of integration, in which there might be deterministic components, and in which the cointegrating vector itself might involve variables of differing orders of integration. The estimator is simple to compute: it can be calculated by running GLS for standard regression equations with serially correlated errors. Alternatively, an asymptotically equivalent estimator can be computed using OLS. Usual Wald test statistics based on these MLEs (constructed using an autocorrelation robust covariance matrix in the case of the OLS estimator) have asymptotic chi-squared distributions.</p>
</abstract>
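    <!-- Hedged sketch of the asymptotically equivalent OLS route mentioned in the abstract,
         for the simplest I(1) case: regress y_t on x_t plus leads and lags of the first
         difference of x_t. The GLS version, higher orders of integration, deterministic
         components, and the Wald statistics are not reproduced; data are simulated and
         Python with numpy is assumed.

         import numpy as np

         rng = np.random.default_rng(2)
         T, theta, k = 400, 2.0, 2                      # k leads and lags of dx
         x = np.cumsum(rng.normal(size=T))              # I(1) regressor
         y = theta * x + rng.normal(size=T)             # cointegrated with vector (1, -theta)

         dx = np.diff(x, prepend=x[0])
         t = np.arange(k, T - k)
         X = np.column_stack([np.ones(len(t)), x[t]] + [dx[t + j] for j in range(-k, k + 1)])
         coef, *_ = np.linalg.lstsq(X, y[t], rcond=None)
         print(f"estimated cointegrating coefficient: {coef[1]:.3f}")
    -->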
    <self-uri xlink:href="http://www.nber.org/papers/t0083.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Ramsey Problem for Congestible Facilities</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0084</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Arnott</surname>
          <given-names>Richard J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kraus</surname>
          <given-names>Marvin</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In recent years, a new set of models drawing on Vickrey [1969] has been developed to analyze the economics of congestible facilities. These models are structural in that they derive the cost function from consumers' time-of-use decisions and the congestion technology. Standard models, in contrast, simply assume the general form of the cost function. We apply the new approach to analyze the Ramsey problem for a congestible facility, and show that the solution generally entails cost inefficiency. Standard models have failed to reveal this result because they treat the cost function as completely determined by technology.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0084.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On the Formulation of Uniform Laws of Large Numbers: A Truncation Approach</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0085</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Potscher</surname>
          <given-names>Benedikt M</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Prucha</surname>
          <given-names>Ingmar</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper develops a general framework for the formulation of generic uniform laws of large numbers. In particular, we introduce a basic generic uniform law of large numbers that contains recent uniform laws of large numbers by Andrews [2] and Hoadley [7] as special cases. We also develop a truncation approach that makes it possible to obtain uniform laws of large numbers for the functions under consideration from uniform laws of large numbers for truncated versions of those functions. The point of the truncation approach is that uniform laws of large numbers for the truncated versions are typically easier to obtain. By combining the basic uniform law of large numbers and the truncation approach we also derive generalizations of recent uniform laws of large numbers introduced in Potscher and Prucha [13, 15].</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0085.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Efficient Estimation of Linear Asset Pricing Models with Moving-Average Errors</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0086</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hansen</surname>
          <given-names>Lars P</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Singleton</surname>
          <given-names>Kenneth J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper explores in depth the nature of the conditional moment restrictions implied by log-linear intertemporal capital asset pricing models (ICAPMs) and shows that the generalized instrumental variables (GMM) estimators of these models (as typically implemented in practice) are inefficient. The moment conditions in the presence of temporally aggregated consumption are derived for two log-linear ICAPMs. The first is a continuous time model in which agents maximize expected utility. In the context of this model, we show that there are important asymmetries between the implied moment conditions for infinitely and finitely-lived securities. The second model assumes that agents maximize non-expected utility, and leads to a very similar econometric relation for the return on the wealth portfolio. Then we describe the efficiency bound (greatest lower bound for the asymptotic variances) of the GMM estimators of the preference parameters in these models. In addition, we calculate the efficient GMM estimators that attain this bound. Finally, we assess the gains in precision from using this optimal GMM estimator relative to the commonly used inefficient GMM estimators.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0086.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Simulated Moments Estimation of Markov Models of Asset Prices</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0087</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Duffie</surname>
          <given-names>Darrell</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Singleton</surname>
          <given-names>Kenneth J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1990</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper provides a simulated moments estimator (SME) of the parameters of dynamic models in which the state vector follows a time-homogeneous Markov process. Conditions are provided for both weak and strong consistency as well as asymptotic normality. Various tradeoffs among the regularity conditions underlying the large sample properties of the SME are discussed in the context of an asset pricing model.</p>
</abstract>
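    <!-- Illustrative sketch only (not the paper's asset pricing application): a simulated
         moments estimator for a scalar time-homogeneous Markov process, here an AR(1),
         matching a few sample moments of the data to the same moments of a long simulated
         path with the simulation draws held fixed. Weighting matrices and the regularity
         conditions discussed in the paper are omitted. Python with numpy and scipy is assumed.

         import numpy as np
         from scipy.optimize import minimize_scalar

         rng = np.random.default_rng(3)

         def simulate(rho, T, shocks):
             x = np.zeros(T)
             for t in range(1, T):
                 x[t] = rho * x[t - 1] + shocks[t]
             return x

         def moments(x):
             return np.array([x.mean(), x.var(), np.mean(x[1:] * x[:-1])])

         data = simulate(0.7, 500, rng.normal(size=500))   # "observed" series, true rho = 0.7
         sim_shocks = rng.normal(size=5000)                # fixed draws, reused for every candidate rho

         def objective(rho):
             diff = moments(data) - moments(simulate(rho, 5000, sim_shocks))
             return diff @ diff                            # identity weighting matrix

         res = minimize_scalar(objective, bounds=(-0.99, 0.99), method="bounded")
         print(f"SME estimate of rho: {res.x:.3f}")
    -->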
    <self-uri xlink:href="http://www.nber.org/papers/t0087.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Does Correcting for Heteroskedasticity Help?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0088</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mishkin</surname>
          <given-names>Frederic S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <self-uri xlink:href="http://www.nber.org/papers/t0088.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Implications of Security Market Data for Models of Dynamic Economies</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0089</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hansen</surname>
          <given-names>Lars P</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Jagannathan</surname>
          <given-names>Ravi</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1990</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We show how to use security market data to restrict the admissible region for means and standard deviations of intertemporal marginal rates of substitution (IMRS's) of consumers. Our approach is (i) nonparametric and applies to a rich class of models of dynamic economies; (ii) characterizes the duality between the mean-standard deviation frontier for IMRS's and the familiar mean-standard deviation frontier for asset returns; and (iii) exploits the restriction that IMRS's are positive random variables. The region provides a convenient summary of the sense in which asset market data are anomalous from the vantage point of intertemporal asset pricing theory.</p>
</abstract>
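    <!-- Hedged sketch: the admissible mean/standard-deviation region for an intertemporal
         marginal rate of substitution m implied by the pricing condition E[mR] = 1, computed
         here without the positivity restriction the abstract emphasizes. Returns are
         simulated for illustration only; Python with numpy is assumed.

         import numpy as np

         rng = np.random.default_rng(4)
         R = 1.0 + rng.multivariate_normal([0.06, 0.02], [[0.04, 0.004], [0.004, 0.01]], size=1000)
         mu, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)

         def hj_lower_bound(v):
             # Minimum standard deviation of any m with mean v that prices the returns.
             e = np.ones(len(mu)) - v * mu
             return float(np.sqrt(e @ np.linalg.solve(Sigma, e)))

         for v in (0.90, 0.95, 1.00):
             print(f"E[m] = {v:.2f}:  sigma(m) at least {hj_lower_bound(v):.3f}")
    -->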
    <self-uri xlink:href="http://www.nber.org/papers/t0089.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Spectral Based Testing of the Martingale Hypothesis</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0090</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Durlauf</surname>
          <given-names>Steven N</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper proposes a method of testing whether a time series is a martingale. The procedure develops an asymptotic theory for the shape of the spectral distribution function of the first differences. Under the null hypothesis, this shape should be a diagonal line. Several tests are developed which determine whether the deviation of the sample spectral distribution function from a diagonal line, when treated as an element of a function space, is too erratic to be attributable to sampling error. These tests are consistent against all moving average alternatives. The testing procedure possesses the additional advantage that it eliminates discretion in choosing a particular H1 by the researcher and therefore guards against data mining. The tests may further be adjusted to analyze subsets of frequencies in isolation, which can enhance power against particular alternatives. Application of the test to stock prices finds some evidence against the random walk theory.</p>
</abstract>
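    <!-- Illustrative sketch: the normalized cumulative periodogram of the first differences,
         whose deviation from the diagonal line is what the paper's tests formalize. The
         functional limit theory and critical values derived in the paper are not reproduced.
         Python with numpy is assumed and the series is a simulated random walk.

         import numpy as np

         rng = np.random.default_rng(5)
         p = np.cumsum(rng.normal(size=1024))            # random walk: first differences are white noise
         d = np.diff(p)

         f = np.fft.rfft(d - d.mean())
         periodogram = (np.abs(f) ** 2)[1:]              # drop the zero frequency
         U = np.cumsum(periodogram) / periodogram.sum()  # sample spectral distribution function
         line = np.arange(1, len(U) + 1) / len(U)        # diagonal line under the martingale null
         print(f"max deviation from the diagonal: {np.abs(U - line).max():.4f}")
    -->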
    <self-uri xlink:href="http://www.nber.org/papers/t0090.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Testing For Common Features</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0091</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Engle</surname>
          <given-names>Robert F</given-names>
          <suffix>III</suffix>
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kozicki</surname>
          <given-names>Sharon</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1990</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper introduces a class of statistical tests for the hypothesis that some feature of a data set is common to several variables. A feature is detected in a single series by a hypothesis test where the null is that it is absent, and the alternative is that it is present. Examples are serial correlation, trends, seasonality, heteroskedasticity, ARCH, excess kurtosis and many others. A feature is common to a multivariate data set if a linear combination of the series no longer has the feature. A test for common features can be based on the minimized value of the feature test over all linear combinations of the data. A bound on the distribution for such a test is developed in the paper. For many important cases, an exact asymptotic critical value can be obtained which is simply a test of overidentifying restrictions in an instrumental variable regression.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0091.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Testing the Autocorrelation Structure of Disturbances in Ordinary Least Squares and Instrumental Variables Regressions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0092</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cumby</surname>
          <given-names>Robert E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Huizinga</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1990</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper derives the asymptotic distribution for a vector of sample autocorrelations of regression residuals from a quite general linear model. The asymptotic distribution forms the basis for a test of the null hypothesis that the regression error follows a moving average of order q ≥ 0 against the general alternative that autocorrelations of the regression error are non-zero at lags greater than q. By allowing for endogenous, predetermined and/or exogenous regressors, for estimation by either ordinary least squares or a number of instrumental variables techniques, for the case q>0, and for a conditionally heteroscedastic error term, the test described here is applicable in a variety of situations where such popular tests as the Box-Pierce (1970) test, Durbin's (1970) h test, and Godfrey's (1978b) Lagrange multiplier test are not applicable. The finite sample properties of the test are examined in Monte Carlo simulations where, with sample sizes of 50 and 100 observations, the test appears to be quite reliable.</p>
</abstract>
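    <!-- Minimal sketch: sample autocorrelations of OLS residuals at lags beyond q, compared
         with the crude 1/sqrt(T) band. The paper's contribution (an asymptotic covariance
         matrix valid with instrumental variables estimation, q > 0, and conditional
         heteroscedasticity) is not reproduced here. Python with numpy is assumed; the data
         are simulated with an MA(1) error, so autocorrelations beyond lag 1 should be small.

         import numpy as np

         rng = np.random.default_rng(6)
         T, q = 200, 1
         x = rng.normal(size=T)
         u = rng.normal(size=T + 1)
         e = u[1:] + 0.5 * u[:-1]                        # MA(1) regression error
         y = 1.0 + 2.0 * x + e

         X = np.column_stack([np.ones(T), x])
         resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

         def autocorr(r, lag):
             return np.corrcoef(r[lag:], r[:-lag])[0, 1]

         for lag in range(q + 1, q + 4):
             print(f"lag {lag}: rho = {autocorr(resid, lag):+.3f}  (crude band: {2 / np.sqrt(T):.3f})")
    -->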
    <self-uri xlink:href="http://www.nber.org/papers/t0092.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Sorting Out the Differences Between Signaling and Screening Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0093</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stiglitz</surname>
          <given-names>Joseph E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Weiss</surname>
          <given-names>Andrew</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1990</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we analyze games in which there is trade between informed and uninformed players. The informed know the value of the trade (for instance, the value of their productivity in a labor market example); the uninformed only know the distribution of attributes among the informed. The informed choose actions (education levels in the Spence model); the uninformed choose prices (wages or interest rates). We refer to games in which the informed move first as signaling games - they choose actions to signal their type. Games in which the uninformed move first are referred to as screening games. We show that in sequential equilibria of screening games some contracts can generate positive profits and others negative profits, while in signaling games all contracts break even. However, if the indifference curves of the informed agents satisfy what roughly would amount to a single crossing property in two dimensions, and some technical conditions hold, then all contracts in the screening game break even, and the set of outcomes of the screening game is a subset of the outcomes of the corresponding signaling game. In the postscript we take a broad view of the strengths and weaknesses of the approach taken in this and other papers to problems of asymmetric information, and present recommendations for how future research should proceed in this field.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0093.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Heteroscedasticity Diagnostics Based on "Corrected" Standard Errors</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0094</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Leamer</surname>
          <given-names>Edward E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Weights are found for weighted least squares estimates such that a selected coefficient (a) changes by one standard deviation or (b) changes in sign. The length of the vector of weight changes is equal to the usual OLS standard error divided by the White-corrected standard errors. Thus the White-corrected standard errors can help decide if it is necessary to adjust the location of the confidence sets to correct for heteroscedasticity. The vector of weight changes is similar to the effect of omitting observations, one at a time. The sensitivity diagnostics of Belsley, Kuh and Welsch are therefore linked with heteroscedasticity issues.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0094.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Effect of Insider Trading on Insiders' Reaction to Opportunities to "Waste" Corporate Value</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0095</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bebchuk</surname>
          <given-names>Lucian A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fershtman</surname>
          <given-names>Chaim</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper analyzes certain effects of insider trading on the principal-agent problem in corporations. Specifically, we focus on those managerial choices that confront managers with the need to decide between options that produce different corporate value but do not differ in the managerial effort involved. In the absence of insider trading, and as long as managers' salaries are positively correlated with their firms' results, managers will make such choices efficiently, and consequently such choices have previously received little attention. We show that, in the presence of insider trading, managers may make such choices inefficiently. With such trading, managers might elect to have a lower corporate value -- that is, they may 'waste' corporate value -- because having such a value might enable them to make greater trading profits. We analyze the conditions under which the problem we identify is likely to arise and the factors that determine its severity. We also identify those restrictions on insider trading that can eliminate this problem.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0095.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Effects of Insider Trading on Insiders' Choice Among Risky Investment Projects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0096</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bebchuk</surname>
          <given-names>Lucian A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fershtman</surname>
          <given-names>Chaim</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper studies certain effects of insider trading on the principal-agent problem in corporations. Specifically, we focus on insiders' choice among investment projects. Other things equal, insider trading leads insiders to choose riskier investment projects, because increased volatility of results enables insiders to make greater trading profits if they learn these results in advance of the market. This effect might or might not be beneficial, however, because insiders' risk-aversion pulls them toward a conservative investment policy. We identify and compare insiders' choices of projects with insider trading and those without such trading. We also study the optimal contract design with insider trading and without such trading, thus identifying the effects that allowing such trading has on other elements of insiders' compensation. Using these results, we identify the conditions under which insider trading increases or decreases corporate value by affecting the choice of projects with uncertain returns.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0096.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bargaining and the Division of Value in Corporate Reorganization</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0097</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bebchuk</surname>
          <given-names>Lucian A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Chang</surname>
          <given-names>Howard F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops a sequential bargaining model of the negotiations in corporate reorganizations under Chapter 11. We identify the expected outcome of the bargaining process and examine the effects of the legal rules that shape the bargaining. We determine how much value equity holders and debt holders receive under the Chapter 11 process, and compare the value obtained by each class with the 'contractual right' of that class. We identify and analyze three reasons that the equity holders can expect to obtain some value even when the debt holders are not paid in full. Finally, we show how the features of the reorganization process and of the company filing under Chapter 11 affect the division of value, and in this way we provide several testable predictions.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0097.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Do Short-Term Managerial Objectives Lead to Under- or Over-Investment in Long-Term Projects?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0098</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bebchuk</surname>
          <given-names>Lucian A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stole</surname>
          <given-names>Lars</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper studies managerial decisions about investment in long-run projects in the presence of imperfect information (the market knows less about such investments than the firm's managers) and short-term managerial objectives (the managers are concerned about the short-term stock price as well as the long-term stock price). Prior work has suggested that imperfect information and short-term managerial objectives induce managers to underinvest in long-run projects. We show that either underinvestment or overinvestment is possible, and we identify the connection between the type of informational imperfection present and the direction of the distortion. When investors cannot observe the level of investment in long-run projects, suboptimal investment will be induced. When investors can observe investment but not its productivity, however, an excessive level of investment will be induced.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0098.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Standard Risk Aversion</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0099</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kimball</surname>
          <given-names>Miles S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper introduces the concept of standard risk aversion. A von Neumann-Morgenstern utility function has standard risk aversion if any risk that makes a small reduction in wealth more painful (in the sense of an increased reduction in expected utility) also makes any undesirable, independent risk more painful. It is shown that, given monotonicity and concavity, the combination of decreasing absolute risk aversion and decreasing absolute prudence is necessary and sufficient for standard risk aversion. Standard risk aversion is shown to imply not only Pratt and Zeckhauser's "proper risk aversion" (individually undesirable, independent risks always being jointly undesirable), but also that being forced to face an undesirable risk reduces the optimal investment in a risky security with an independent return. Similar results are established for the effect of a broad class of increases in one risk on the desirability of (or optimal investment in) a second, independent risk.</p>
</abstract>
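    <!-- Hedged numerical illustration (an example, not the paper's proof): constant relative
         risk aversion utility u(w) = w**(1 - g) / (1 - g) has absolute risk aversion g/w and
         absolute prudence (g + 1)/w, both decreasing in wealth, so it satisfies the
         sufficient condition for standard risk aversion stated in the abstract. Python with
         numpy is assumed.

         import numpy as np

         g = 3.0
         w = np.linspace(1.0, 10.0, 5)
         abs_risk_aversion = g / w          # equals -u''(w)/u'(w) for CRRA utility
         abs_prudence = (g + 1.0) / w       # equals -u'''(w)/u''(w) for CRRA utility
         print(np.all(np.diff(abs_risk_aversion) < 0), np.all(np.diff(abs_prudence) < 0))
    -->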
    <self-uri xlink:href="http://www.nber.org/papers/t0099.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Pitfalls and Opportunities: What Macroeconomists Should Know About Unit Roots</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0100</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Campbell</surname>
          <given-names>John Y</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Perron</surname>
          <given-names>Pierre</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper is an introduction to unit root econometrics as applied in macroeconomics. The paper first discusses univariate time series analysis, emphasizing the following topics: alternative representations of unit root processes, unit root testing procedures, the power of unit root tests, and the interpretation of unit root econometrics in finite samples. A second part of the paper tackles similar issues in a multivariate context where cointegration is now the central concept. The paper reviews representation, testing, and estimation of multivariate time series models with some unit roots. Two important themes of this paper are first, the importance of correctly specifying deterministic components of the series; and second, the usefulness of unit root tests not as methods to uncover some "true relation" but as practical devices that can be used to impose reasonable restrictions on the data and to suggest what asymptotic distribution theory gives the best approximation to the finite-sample distribution of coefficient estimates and test statistics.</p>
</abstract>
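    <!-- Illustrative sketch of the kind of procedure the survey discusses: an augmented
         Dickey-Fuller unit root test applied to a simulated random walk. The choice of
         deterministic components (constant versus constant plus trend) is exactly the
         specification issue the paper emphasizes. Python with numpy and statsmodels is
         assumed.

         import numpy as np
         from statsmodels.tsa.stattools import adfuller

         rng = np.random.default_rng(7)
         y = np.cumsum(rng.normal(size=300))   # random walk: the unit root should not be rejected

         stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression="c", autolag="AIC")
         print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}, 5% critical value = {crit['5%']:.2f}")
    -->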
    <self-uri xlink:href="http://www.nber.org/papers/t0100.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On the Optimality of Reserve Requirements</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0101</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cothren</surname>
          <given-names>Richard D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Waud</surname>
          <given-names>Roger N</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>An implicit rationale for a bank reserve requirement is that a central monetary authority is in a unique position (as "social planner") to impose a "socially superior" outcome to that yielded by a free banking system. We illustrate how this can be true in the context of a simple economy modeled to mimic certain basic characteristics of a monetary economy with banks and agents who trade with one another. Banks exist in our model because by pooling liquidation risks they provide liquidity otherwise unavailable to depositors, which, in turn, provides the incentive for using deposit claims as the medium of exchange.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0101.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Measures of Fit for Calibrated Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0102</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops a new procedure for assessing how well a given dynamic economic model describes a set of economic time series. To answer the question, the variables in the model are augmented with just enough error so that the model can exactly mimic the second moment properties of the actual data. The properties of this error provide a useful diagnostic for the economic model, since they show the dimensions in which the model fits the data relatively well and the dimensions in which it fits the data relatively poorly.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0102.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Theory of Workouts and the Effects of Reorganization Law</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0103</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gertner</surname>
          <given-names>Robert</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Scharfstein</surname>
          <given-names>David S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We present a model of a financially distressed firm with outstanding bank debt and public debt. Coordination problems among public debtholders introduce investment inefficiencies in the workout process. In most cases, these inefficiencies are not mitigated by the ability of firms to buy back their public debt with cash and other securities--the only feasible way that firms can restructure their public debt. We show that Chapter 11 reorganization law increases investment and we characterize the types of corporate financial structures for which this increased investment enhances efficiency.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0103.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Financial Intermediation and Monetary Policies in the World Economy</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0104</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Grilli</surname>
          <given-names>Vittorio U</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Roubini</surname>
          <given-names>Nouriel</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we investigate the role of credit institutions in transmitting monetary shocks to the domestic economy and to output in the rest of the world. In modeling the monetary and financial sector of the economy we distinguish between monetary injections via lump-sum transfers to individuals and those via increased credit to the commercial banking sector in the form of discount window operations. Appropriately, we distinguish between the discount rate of the central bank and the lending and borrowing interest rates of commercial banks, which, we assume, are also subject to reserve requirements. We find that a steady state increase in monetary injections via increases in domestic credit leads to an increase in domestic output. On the other hand, we find that an increase in the steady state level of monetary transfers reduces the level of output.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0104.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Confidence Intervals for the Largest Autoregressive Root in U.S. Macroeconomic Time Series</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0105</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper provides asymptotic confidence intervals for the largest autoregressive root of a time series when this root is close to one. The intervals are readily constructed either graphically or using tables in the Appendix. When applied to the Nelson-Plosser (1982) data set, the main conclusion is that the confidence intervals typically are wide. The conventional emphasis on testing for whether the largest root equals one fails to convey the substantial sampling variability associated with this measure of persistence.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0105.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Relative Importance of Permanent and Transitory Components: Identification and Some Theoretical Bounds</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0106</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Quah</surname>
          <given-names>Danny</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Much macroeconometric discussion has recently emphasized the economic significance of the size of the permanent component in GNP. Consequently, a large literature has developed that tries to estimate this magnitude measured, essentially, as the spectral density of increments in GNP at frequency zero. This paper shows that unless the permanent component is a random walk this attention has been misplaced: in general, that quantity does not identify the magnitude of the permanent component. Further, by developing bounds on reasonable measures of this magnitude, the paper shows that a random walk specification is biased towards establishing the permanent component as important.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0106.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Randomization and Social Policy Evaluation Revisited</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0107</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the evidence on the effectiveness and limitations of randomized controlled trials in economics. I revisit my previous paper "Randomization and Social Policy Evaluation" and update its message. I present a brief history of randomization in economics and identify two waves of enthusiasm for the method as "Two Awakenings" because of the near-religious zeal associated with both waves. I briefly summarize the lessons of the first wave and forecast the same lessons will be learned in the second wave.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0107.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Dividend Yields and Expected Stock Returns:  Alternative Procedures for Inference and Measurement</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0108</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hodrick</surname>
          <given-names>Robert J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Alternative ways of conducting inference and measurement for long-horizon forecasting are explored with an application to dividend yields as predictors of stock returns. Monte Carlo analysis indicates that the Hansen and Hodrick (1980) procedure is biased at long horizons, but the alternatives perform better. These include an estimator derived under the null hypothesis as in Richardson and Smith (1989), a reformulation of the regression as in Jegadeesh (1990), and a vector autoregression (VAR) as in Campbell and Shiller (1988), Kandel and Stambaugh (1988), and Campbell (1991). The statistical properties of long-horizon statistics generated from the VAR indicate interesting patterns in expected stock returns.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0108.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Independence Axiom and Asset Returns</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0109</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Epstein</surname>
          <given-names>Larry</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Zin</surname>
          <given-names>Stanley E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper integrates models of atemporal risk preference that relax the independence axiom into a recursive intertemporal asset-pricing framework. The resulting models are amenable to empirical analysis using market data and standard Euler equation methods. We are thereby able to provide the first non-laboratory-based evidence regarding the usefulness of several new theories of risk preference for addressing standard problems in dynamic economics. Using both stock and bond returns data, we find that a model incorporating risk preferences that exhibit first-order risk aversion accounts for significantly more of the mean and autocorrelation properties of the data than models that exhibit only second-order risk aversion. Unlike the latter class of models, which require parameter estimates that are outside of the admissible parameter space (e.g., negative rates of time preference), the model with first-order risk aversion generates point estimates that are economically meaningful. We also examine the relationship between first-order risk aversion and models that employ exogenous stochastic switching processes for consumption growth.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0109.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Optimality of Nominal Contracts</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0110</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Freeman</surname>
          <given-names>Scott</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Tabellini</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Why do we see nominal contracts in the presence of price level risk? To answer this question, this paper studies an overlapping generations model in which the equilibrium contract form is optimal, given the contracts elsewhere in the economy. Nominal contracts turn out to be optimal in the presence of aggregate price level risk under two circumstances. First, they are optimal if individuals have the same constant degree of relative risk aversion; the reason is that in this case nominal contracts (possibly coupled with equity contracts) lead to optimal risk sharing. Second, nominal contracts can be optimal, even if the first condition is not met, if the repayment of contracts is subject to a binding cash-in-advance constraint. The reason is that a contingent contract, while reducing purchasing power risk, also increases cash flow risk. Under a binding cash-in-advance constraint on the repayment of contracts, this second risk is costly, and it is minimized by a nominal contract. Finally, the paper also identifies some symmetry conditions under which nominal contracts are optimal even in the presence of relative price risk.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0110.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Event Probabilities from Macroeconomic Models Using Stochastic Simulation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0111</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fair</surname>
          <given-names>Ray C</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper shows how probability questions can be answered within the context of macroeconometric models by using stochastic simulation. One can estimate, for example, the probability of a recession occurring within some fixed period in the future. Probability estimates are presented for two recessionary events and one inflationary event. An advantage of the present procedure is that the probabilities estimated from the stochastic simulation are objective in the sense that they are based on the use of estimated distributions. They are consistent with the probability structure of the model. This paper also shows that estimated probabilities can be used in the evaluation of a model, and an example of this type of evaluation is presented.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0111.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Rational Frenzies and Crashes</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0112</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bulow</surname>
          <given-names>Jeremy I</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Klemperer</surname>
          <given-names>Paul D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Most markets clear through a sequence of sales rather than through a Walrasian auctioneer. Because buyers can decide between buying now or later, rather than only now or never, buyers' current "willingness to pay" is much more sensitive to price than is the demand curve. A consequence is that markets will be extremely sensitive to new information, leading to both "frenzies," where demand feeds upon itself, and "crashes," where price drops discontinuously. Although no buyer's independent reservation value reveals much about overall demand, a small increase in one such value can cause a large increase or decrease in average price.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0112.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Workings of a City: Location, Education, and Production</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0113</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bénabou</surname>
          <given-names>Roland</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We examine the implications of local externalities in human capital investment for the size and composition of the productive labor force. The model links residential choice, skills acquisition, and production in a city composed of several communities. Peer effects induce self-segregation by occupation, whereas efficiency may require identical communities. Even when some asymmetry is optimal, equilibrium segregation can cause entire "ghettos" to drop out of the labor force. Underemployment is more extensive, the easier it is for high-skill workers to isolate themselves from others. When perfect segregation is feasible, individual incentives to pursue it are self-defeating, and lead instead to a shutdown of the productive sector.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0113.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Eastern Data and Western Attitudes</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0114</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Leamer</surname>
          <given-names>Edward E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Trade and Investment</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Most studies of the economies of Eastern Europe by Western analysts depend substantially on Western data and Western attitudes. Usually this dependence is implicit and concealed. An explicit and transparent treatment may yield better results, both for the individual analyst and for the profession overall. This article proposes and illustrates an econometric method for pooling Western and Eastern data. The pooled estimates depend on doubt about the Western attitudes, on the degree of experimental contamination in Western and Eastern data and on the similarity of Western and Eastern structures. The method is illustrated by a study of the determinants of the growth rates of developed and developing countries.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0114.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrumental Variables Estimation of Average Treatment Effects in Econometrics and Epidemiology</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0115</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The average effect of intervention or treatment is a parameter of interest in both epidemiology and econometrics. A key difference between applications in the two fields is that epidemiologic research is more likely to involve qualitative outcomes and nonlinear models. An example is the recent use of the Vietnam era draft lottery to construct estimates of the effect of Vietnam era military service on civilian mortality. In this paper, I present necessary and sufficient conditions for linear instrumental variables techniques to consistently estimate average treatment effects in qualitative or other nonlinear models. Most latent index models commonly applied to qualitative outcomes in econometrics fail to satisfy these conditions, and Monte Carlo evidence on the bias of instrumental variables estimates of the average treatment effect in a bivariate probit model is presented. The evidence suggests that linear instrumental variables estimators perform nearly as well as the correctly specified maximum likelihood estimator, especially in large samples. Linear instrumental variables and the normal maximum likelihood estimator are also remarkably robust to non-normality.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0115.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Note on the Time-Elimination Method For Solving Recursive Dynamic Economic Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0116</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mulligan</surname>
          <given-names>Casey B</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sala-i-Martin</surname>
          <given-names>Xavier</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The Time-Elimination Method for solving recursive dynamic economic models is described. By defining control-like and state-like variables, one can transform the equations of motion describing the economy's evolution through time into a system of differential equations that are independent of time. Unlike the transversality conditions, the boundary conditions for the system in the state-like variable are not asymptotic boundary conditions. In theory, this reformulation of the problem greatly facilitates numerical analysis. In practice, problems which were impossible to solve with a popular algorithm - shooting - can be solved in short order. The reader of this paper need not have any knowledge of numerical mathematics or dynamic programming or be able to draw high-dimensional phase diagrams; only a familiarity with the first order conditions of the 'Hamiltonian' method for solving dynamic optimization problems is required. The most natural application of Time-Elimination is to growth models. The method is applied here to three growth models: the Ramsey/Cass/Koopmans one-sector model, Jones &amp; Manuelli's (1990) variant of the Ramsey model, and a two-sector growth model in the spirit of Lucas (1988). A very simple - but complete - computer program for numerically solving the Ramsey model is provided.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0116.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Sources of Identifying Information in Evaluation Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0117</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1991</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The average effect of social programs on outcomes such as earnings is a parameter of primary interest in econometric evaluation studies. New results on using exclusion restrictions to identify and estimate average treatment effects are presented. Identification is achieved given a minimum of parametric assumptions, initially without reference to a latent index framework. Most econometric analyses of evaluation models motivate identifying assumptions using models of individual behavior. Our technical conditions do not fit easily into a conventional discrete choice framework; rather, they fit into a framework where the source of identifying information is institutional knowledge regarding program administration. This framework also suggests an attractive experimental design for research using human subjects, in which eligible participants need not be denied treatment. We present a simple instrumental variables estimator for the average effect of treatment on program participants, and show that the estimator attains Chamberlain's semi-parametric efficiency bound. The bias of estimators that satisfy only exclusion restrictions is also considered.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0117.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Identification and Estimation of Local Average Treatment Effects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0118</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We investigate conditions sufficient for identification of average treatment effects using instrumental variables. First we show that the existence of valid instruments is not sufficient to identify any meaningful average treatment effect. We then establish that the combination of an instrument and a condition on the relation between the instrument and the participation status is sufficient for identification of a local average treatment effect for those who can be induced to change their participation status by changing the value of the instrument. Finally we derive the probability limit of the standard IV estimator under these conditions. It is seen to be a weighted average of local average treatment effects.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0118.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Computing Markov Perfect Nash Equilibria: Numerical Implications of a Dynamic Differentiated Product Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0119</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Pakes</surname>
          <given-names>Ariel</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>McGuire</surname>
          <given-names>Paul</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper provides an algorithm for computing Markov Perfect Nash Equilibria (Maskin and Tirole, 1988a and b) for dynamic models that allow for heterogeneity among firms and idiosyncratic (or firm specific) sources of uncertainty. It has two purposes: to illustrate the ability of such models to reproduce important aspects of reality, and to provide a tool which can be used for both descriptive and policy analysis in a framework rich enough to capture many of the features of firm level data sets (thereby enabling it to be integrated with the empirical detail in those data sets). We illustrate by computing the policy functions, and simulating the industry structures, generated by a class of dynamic differentiated product models in which the idiosyncratic uncertainty is due to the random outcomes of each firm's research process (we also allow for an autonomous aggregate demand process). The illustration focuses on comparing the effects of different regulatory and institutional arrangements on market structure and on welfare for one particular set of parameter values. The simulation results are of independent interest and can be read without delving into the technical detail of the computational algorithm. The last part of the paper begins with an explicit consideration of the computational burden of the algorithm, and then introduces approximation techniques designed to make computation easier. This section provides some analytic results which dramatically reduce the computational burden of computing equilibria for industries in which a large number of firms are typically active.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0119.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Evaluating Risky Consumption Paths:  The Role of Intertemporal Substitutability</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0120</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Obstfeld</surname>
          <given-names>Maurice</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In dynamic stochastic welfare comparisons, a failure clearly to distinguish between risk aversion and intertemporal substitutability can result in misleading assessments of the impact of risk aversion on the welfare costs of consumption-risk changes. The problem arises in any setting in which uncertainty is propagated over time, notably, but not exclusively, in economies with stochastic consumption trends. Regardless of the preference setup adopted, an increase in risk aversion amplifies the per-period costs of risks. The weights consumers use to cumulate the per-period costs of risks with persistent effects should, however, depend on intertemporal substitutability as well as on risk aversion. Under time-separable expected-utility preferences, an increase in the period utility function's curvature therefore alters the welfare effect of risk for reasons that in part are unrelated to risk aversion.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0120.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Deciding Between I(1) and I(0)</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0121</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper proposes a class of procedures that consistently classify the stochastic component of a time series as being integrated either of order zero (I(0)) or one (I(1)) for general I(0) and I(1) processes. These procedures entail the evaluation of the asymptotic likelihoods of certain statistics under the I(0) and I(1) hypotheses. These likelihoods do not depend on nuisance parameters describing short-run dynamics and diverge asymptotically, so their ratio provides a consistent basis for classifying a process as I(1) or I(0). Bayesian inference can be performed by placing prior mass only on the point hypotheses "I(0)" and "I(1)" without needing to specify parametric priors within the classes of I(0) and I(1) processes; the result is posterior odds ratios for the I(0) and I(1) hypotheses. These procedures are developed for general polynomial and piecewise linear detrending. When applied to the Nelson-Plosser data with linear detrending, they largely support the original Nelson-Plosser inferences. With piecewise-linear detrending these data are typically uninformative, producing Bayes factors that are close to one.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0121.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Inference in Time Series Regression When the Order of Integration of a Regressor is Unknown</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0122</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Elliott</surname>
          <given-names>Graham</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>It is well known that the distribution of statistics testing restrictions on the coefficients in time series regressions can depend on the order of integration of the regressors. In practice the order of integration is rarely known. This paper examines two conventional approaches to this problem, finds them unsatisfactory, and proposes a new procedure. The two conventional approaches - simply to ignore unit root problems or to use unit root pretests to determine the critical values for second-stage inference - both often induce substantial size distortions. In the case of unit root pretests, this arises because type I and II pretest errors produce incorrect second-stage critical values and because, in many empirically plausible situations, the first stage test (the unit root test) and the second stage test (the exclusion restriction test) are dependent. Monte Carlo simulations reveal size distortions even if the regressor is stationary but has a large autoregressive root, a case that might arise for example in a regression of excess stock returns against the dividend yield. In the proposed alternative procedure, the second-stage test is conditional on a first-stage "unit root" statistic developed in Stock (1992); the second-stage critical values vary continuously with the value of the first-stage statistic. The procedure is shown to have the correct size asymptotically and to have good local asymptotic power against Granger-causality alternatives.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0122.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Specification Testing in Panel Data With Instrumental Variables</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0123</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Metcalf</surname>
          <given-names>Gilbert E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper shows a convenient way to test whether instrumental variables are correlated with individual effects in a panel data set. It shows that the correlated fixed effects specification tests developed by Hausman and Taylor (1981) extend in an analogous way to panel data sets with endogenous right hand side variables. In the panel data context, different sets of instrumental variables can be used to construct the test. Asymptotically, I show that the test in many cases is more efficient if an incomplete set of instruments is used. However, in small samples one is likely to do better using the complete set of instruments. Monte Carlo results demonstrate the likely gains for different assumptions about the degree of variance in the data across observations relative to variation across time.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0123.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Testing Volatility Restrictions on Intertemporal Marginal Rates of Substitution Implied by Euler Equations and Asset Returns</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0124</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cecchetti</surname>
          <given-names>Stephen G</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lam</surname>
          <given-names>Pok-Sang</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mark</surname>
          <given-names>Nelson</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The Euler equations derived from a broad range of intertemporal asset pricing models, together with the first two unconditional moments of asset returns, imply a lower bound on the volatility of the intertemporal marginal rate of substitution. We develop and implement statistical tests of these lower bound restrictions. We conclude that the availability of relatively short time series of consumption data undermines the ability of tests that use the restrictions implied by the volatility bound to discriminate among different utility functions.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0124.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The "Window Problem" in Studies of Children's Attainments:  A Methodological Exploration</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0125</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wolfe</surname>
          <given-names>Barbara L</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Haveman</surname>
          <given-names>Robert</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Genther</surname>
          <given-names>Donna</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>An</surname>
          <given-names>Chong-Bum</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economics of Health</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Numerous studies of the determinants of children's attainments rely on observations of circumstances and events at age 14 as proxies for information over the entire childhood period. Using 21 years of panel data from the Michigan PSID on 825 children who were 14-16 years old in 1979, we evaluate the effects of using truncated or "window" (e.g., age 14) information in models of the determinants of attainments (e.g., education, nonmarital fertility) of young adults. Correlations between truncated and full-childhood variables are presented, along with 5 tests of the reliability of estimates based on "window" measurements. The tests are designed to evaluate the differential effects of data accuracy, multiple occurrence of events, duration of circumstances, and the timing of events or circumstances during childhood between "window" and full childhood information. We conclude that most of the standard truncated variables serve as weak proxies for multi-year information in such models, and draw the implications of these findings for future data-collection and research.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0125.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Seasonal Unit Roots in Aggregate U.S. Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0126</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Beaulieu</surname>
          <given-names>Joseph</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Miron</surname>
          <given-names>Jeffrey A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we provide evidence on the presence of seasonal unit roots in aggregate U.S. data. The analysis is conducted using the approach developed by Hylleberg, Engle, Granger and Yoo (1990). We first derive the mechanics and asymptotics of the HEGY procedure for monthly data and use Monte Carlo methods to compute the finite sample critical values of the associated test statistics. We then apply quarterly and monthly HEGY procedures to aggregate U.S. data. The data reject the presence of unit roots at most seasonal frequencies in a large fraction of the series considered.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0126.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Average Causal Response with Variable Treatment Intensity</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0127</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In evaluation research, an average causal effect is usually defined as the expected difference between the outcomes of the treated, and what these outcomes would have been in the absence of treatment. This definition of causal effects makes sense for binary treatments only. In this paper, we extend the definition of average causal effects to the case of variable treatments such as drug dosage, hours of exam preparation, cigarette smoking, and years of schooling. We show that given mild regularity assumptions, instrumental variables independence assumptions identify a weighted average of per-unit causal effects along the length of an appropriately defined causal response function. Conventional instrumental variables and Two-Stage Least Squares procedures can be interpreted as estimating the average causal response to a variable treatment.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0127.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Utility Based Comparison of Some Models of Exchange Rate Volatility</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0128</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Edison</surname>
          <given-names>Hali</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cho</surname>
          <given-names>Dongchul</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>When estimates of variances are used to make asset allocation decisions, underestimates of population variances lead to lower expected utility than equivalent overestimates: a utility based criterion is asymmetric, unlike standard criteria such as mean squared error. To illustrate how to estimate a utility based criterion, we use five bilateral weekly dollar exchange rates, 1973-1989, and the corresponding pair of Eurodeposit rates. Of homoskedastic, GARCH, autoregressive and nonparametric models for the conditional variance of each exchange rate, GARCH models tend to produce the highest utility, on average. A mean squared error criterion also favors GARCH, but not as sharply.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0128.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asymptotic Filtering Theory for Univariate ARCH Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0129</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Daniel B</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Foster</surname>
          <given-names>Dean P</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper builds on earlier work by deriving the asymptotic distribution of the measurement error. This allows us to approximate the measurement accuracy of ARCH conditional variance estimates and compare the efficiency achieved by different ARCH models. We are also able to characterize the relative importance of different kinds of misspecification; for example, we show that misspecifying conditional means adds only trivially (at least asymptotically) to measurement error, while other factors (for example, capturing the "leverage effect," accommodating thick tailed residuals, and correctly modelling the variability of the conditional variance process) are potentially much more important. Third, we are able to characterize a class of asymptotically optimal ARCH conditional variance estimates.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0129.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Efficient Tests for an Autoregressive Unit Root</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0130</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Elliott</surname>
          <given-names>Graham</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rothenberg</surname>
          <given-names>Thomas J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper derives the asymptotic power envelope for tests of a unit autoregressive root for various trend specifications and stationary Gaussian autoregressive disturbances. A family of tests is proposed, members of which are asymptotically similar under a general I(1) null (allowing nonnormality and general dependence) and which achieve the Gaussian power envelope. One of these tests, which is asymptotically point optimal at a power of 50%, is found (numerically) to be approximately uniformly most powerful (UMP) in the case of a constant deterministic term, and approximately uniformly most powerful invariant (UMPI) in the case of a linear trend, although strictly no UMP or UMPI test exists. We also examine a modification, suggested by the expression for the power envelope, of the Dickey-Fuller (1979) t-statistic; this test is also found to be approximately UMP (constant deterministic term case) and UMPI (time trend case). The power improvement of both new tests is large: in the demeaned case, the Pitman efficiency of the proposed tests relative to the standard Dickey-Fuller t-test is 1.9 at a power of 50%. A Monte Carlo experiment indicates that both proposed tests, particularly the modified Dickey-Fuller t-test, exhibit good power and small size distortions in finite samples with dependent errors.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0130.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Measuring Asset Values for Cash Settlement in Derivative Markets: Hedonic Repeated Measures Indices and Perpetual Futures</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0131</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shiller</surname>
          <given-names>Robert J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Two proposals are made that may facilitate the creation of derivative market instruments, such as futures contracts, cash-settled based on economic indices. The first proposal concerns index number construction: indices based on infrequent measurements of nonstandardized items may control for quality change by using a hedonic repeated measures method, an index number construction method that follows individual assets or subjects through time and also takes account of measured quality variables. The second proposal is to establish markets for perpetual claims on cash flows matching indices of dividends or rents. Such markets may help us to measure the prices of the assets generating these dividends or rents even when the underlying asset prices are difficult or impossible to observe directly. A perpetual futures contract is proposed that would cash settle every day in terms of both the change in the futures price and the dividend or rent index for that day.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0131.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Filtering and Forecasting with Misspecified Arch Models II: Making the Right Forecast with the Wrong Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0132</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Daniel B</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Foster</surname>
          <given-names>Dean P</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1992</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A companion paper (Nelson (1992)) showed that in data observed at high frequencies, an ARCH model may do a good job at estimating conditional variances, even when the ARCH model is severely misspecified. While such models may perform reasonably well at filtering (i.e., at estimating unobserved instantaneous conditional variances) they may perform disastrously at medium and long term forecasting. In this paper, we develop conditions under which a misspecified ARCH model successfully performs both tasks, filtering and forecasting. The key requirement (in addition to the conditions for consistent filtering) is that the ARCH model correctly specifies the functional form of the first two conditional moments of all state variables. We apply these results to a diffusion model employed in the options pricing literature, the stochastic volatility model of Hull and White (1987), Scott (1987), and Wiggins (1987).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0132.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Long-memory Inflation Uncertainty:  Evidence from the Term Structure of Interest Rates</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0133</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Backus</surname>
          <given-names>David</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Zin</surname>
          <given-names>Stanley E</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We use a fractional difference model to reconcile two features of yields on US government bonds with modern asset pricing theory: the persistence of the short rate and the variability of the long end of the yield curve. We suggest that this process might arise from the response of heterogeneous agents to changes in monetary policy.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0133.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bayesian Inference and Portfolio Efficiency</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0134</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kandel</surname>
          <given-names>Shmuel</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>McCulloch</surname>
          <given-names>Robert</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stambaugh</surname>
          <given-names>Robert F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A Bayesian approach is used to investigate a sample's information about a portfolio's degree of inefficiency. With standard diffuse priors, posterior distributions for measures of portfolio inefficiency can concentrate well away from values consistent with efficiency, even when the portfolio is exactly efficient in the sample. The data indicate that the NYSE-AMEX market portfolio is rather inefficient in the presence of a riskless asset, although this conclusion is justified only after an analysis using informative priors. Including a riskless asset significantly reduces any sample's ability to produce posterior distributions supporting small degrees of inefficiency.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0134.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On Inflation and Output with Costly Price Changes: A Simple Unifying Result</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0135</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bénabou</surname>
          <given-names>Roland</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Konieczny</surname>
          <given-names>Jerzy</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We analyze the effect of inflation on the average output of monopolistic firms facing a small fixed cost of changing nominal prices. Using Taylor expansions, we derive a general closed-form solution for the slope of the long-run Phillips curve. This very simple, unifying formula allows us to evaluate and clarify the role of three key factors: the asymmetry of the profit function, the convexity of the demand function, and the discount rate. These partial equilibrium effects remain important components of any general equilibrium model with (s,S) pricing.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0135.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Identification of Causal Effects Using Instrumental Variables</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0136</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rubin</surname>
          <given-names>Donald B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <self-uri xlink:href="http://www.nber.org/papers/t0136.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Cure Can Be Worse than the Disease: A Cautionary Tale Regarding Instrumental Variables</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0137</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bound</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Jaeger</surname>
          <given-names>David A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Baker</surname>
          <given-names>Regina</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we draw attention to two problems associated with the use of instrumental variables (IV) whose importance for empirical work has not been fully appreciated. First, using potential instruments that explain little of the variation in the endogenous explanatory variables can lead to large inconsistencies in the IV estimates even if only a weak relationship exists between the instruments and the error in the structural equation. Second, in finite samples, IV estimates are biased in the same direction as ordinary least squares (OLS) estimates. The magnitude of the bias of IV estimates approaches that of OLS estimates as the R-squared between the instruments and the potentially endogenous explanatory variable approaches 0. To illustrate these problems with IV estimation, we reexamine the results of the recent provocative paper by Angrist and Krueger, "Does Compulsory School Attendance Affect Schooling and Earnings?", and find evidence that their IV estimates of the effects of educational attainment on earnings may be both inconsistent and subject to finite sample bias. To gauge the severity of both problems, we suggest that the partial R-squared and the F statistic on the excluded instruments from the first-stage estimation be reported as approximate guides to the quality of the IV estimates when IV is used.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0137.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>From Each According to His Surplus: Equi-Proportionate Sharing of Commodity Tax Burdens</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0138</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hines</surname>
          <given-names>James R</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hlinko</surname>
          <given-names>John C</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lubke</surname>
          <given-names>Theodore J.F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper examines the incidence of commodity taxes, finding that, when demand and marginal cost schedules are linear, the burden of commodity taxation is distributed between buyers and sellers so that each suffers the same percentage reduction in pre-tax surplus. This equiproportionate reduction in surplus is the outcome of commodity taxes set at any rate, and is unaffected by relative demand and supply elasticities. Hence, when demand and marginal cost schedules are linear, commodity taxes resemble flat-rate taxes imposed on market surplus. Similar results apply to nonlinear schedules over certain ranges.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0138.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Some Evidence on Finite Sample Behavior of an Instrumental Variables Estimator of the Linear Quadratic Inventory Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0139</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wilcox</surname>
          <given-names>David W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We evaluate some aspects of the finite sample distribution of an instrumental variables estimator of a first order condition of the Holt et al. (1960) linear quadratic inventory model. We find that for some but not all empirically relevant data generating processes and sample sizes, asymptotic theory predicts a wide dispersion of parameter estimates, with a substantial finite sample probability of estimates with incorrect signs. For such data generating processes, simulation evidence suggests that different choices of left hand side variables often produce parameter estimates of an opposite sign. More generally, while the asymptotic theory often provides a good approximation to the finite sample distribution, sometimes it does not.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0139.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Conditional Expectations when Volatility Fluctuates</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0140</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stambaugh</surname>
          <given-names>Robert F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Asymptotic variances of estimated parameters in models of conditional expectations are calculated analytically assuming a GARCH process for conditional volatility.  Under such heteroskedasticity, OLS estimators of parameters in single-period models can possess substantially larger asymptotic variances than GMM estimators employing additional multiperiod moment conditions - an approach yielding no efficiency gain under homoskedasticity.  In estimating models of long-horizon expectations, the VAR approach provides an efficiency advantage over long-horizon regressions under homoskedasticity, but that ordering can reverse under heteroskedasticity, especially when the conditional mean and variance are both persistent.  In such cases, the VAR approach maintains a slight efficiency advantage if the OLS estimator is replaced by an alternative GMM estimator. Heteroskedasticity can dramatically increase the apparent asymptotic power advantages of long-horizon regressions to reject constant expectations against persistent alternatives.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0140.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Back to the Future: Generating Moment Implications for Continuous-Time  Markov Processes</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0141</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hansen</surname>
          <given-names>Lars P</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Scheinkman</surname>
          <given-names>José A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Continuous-time Markov processes can be characterized conveniently by their infinitesimal generators.  For such processes there exist forward and reverse-time generators.  We show how to use these generators to construct moment conditions implied by stationary Markov processes.  Generalized method of moments estimators and tests can be constructed using these moment conditions.  The resulting econometric methods are designed to be applied to discrete-time data obtained by sampling continuous-time Markov processes.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0141.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Why Long Horizons: A Study of Power Against Persistent Alternatives</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0142</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Campbell</surname>
          <given-names>John Y</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper studies tests of predictability in regressions with a given AR(1) regressor and an asset return dependent variable measured over a short or long horizon.  The paper shows that when there is a persistent predictable component in the return, an increase in the horizon may increase the R2 statistic of the regression and the approximate slope of a predictability test.  Monte Carlo experiments show that long-horizon regression tests have serious size distortions when asymptotic critical values are used, but some versions of such tests have power advantages remaining after size is corrected.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0142.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Inventory Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0143</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Econometric aspects of recent research on inventory models are surveyed.  The discussion emphasizes issues relevant to instrumental variables estimation of a first order condition of the Holt et al. (1960) linear quadratic inventory model, including choice of instruments, covariance matrix estimation, methods for testing, and implications of unit root nonstationarity.  The paper also briefly discusses estimation of a decision rule implied by the model, and, finally, the implications for inventory models of some stylized facts about inventories.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0143.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Automatic Lag Selection in Covariance Matrix Estimation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0144</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Newey</surname>
          <given-names>Whitney</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We propose a nonparametric method for automatically selecting the number of autocovariances to use in computing a heteroskedasticity and autocorrelation consistent covariance matrix.  For a given kernel for weighting the autocovariances, we prove that our procedure is asymptotically equivalent to one that is optimal under a mean squared error loss function.  Monte Carlo simulations suggest that our procedure performs tolerably well, although it does result in size distortions.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0144.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Econometric Evaluation of Asset Pricing Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0145</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hansen</surname>
          <given-names>Lars P</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heaton</surname>
          <given-names>John C</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Luttmer</surname>
          <given-names>Erzo G.J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we provide econometric tools for the evaluation of intertemporal asset pricing models using specification-error and volatility bounds.  We formulate analog estimators of these bounds, give conditions for consistency and derive the limiting distribution of these estimators.  The analysis incorporates market frictions such as short-sale constraints and proportional transactions costs.  Among several applications we show how to use the methods to assess specific asset pricing models and to provide nonparametric characterizations of asset pricing anomalies.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0145.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Two-Stage Estimator for Probit Models with Structural Group Effects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0146</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Borjas</surname>
          <given-names>George J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sueyoshi</surname>
          <given-names>Glenn T</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper outlines a two-stage technique for estimation and inference in probit models with structural group effects.  The structural group specification belongs to a broader class of random components models.  In particular, individuals in a given group share a common component in the specification of the conditional mean of a latent variable.  For a number of computational reasons, existing random-effects models are impractical for estimation and inference in this type of problem.  Our two-stage estimator provides an easily estimable alternative to the random effect specification.  In addition, we conduct a Monte Carlo simulation comparing the performance of alternative estimators, and find that the two-stage estimator is superior -- both in terms of estimation and inference -- to traditional estimators.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0146.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Econometric Methods for Fractional Response Variables with an Application to 401(k) Plan Participation Rates</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0147</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Papke</surname>
          <given-names>Leslie E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wooldridge</surname>
          <given-names>Jeffrey</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We offer simple quasi-likelihood methods for estimating regression models with a fractional dependent variable and for performing asymptotically valid inference.  Compared with log-odds type procedures, there is no difficulty in recovering the regression function for the fractional variable, and there is no need to use ad hoc transformations to handle data at the extreme values of zero and one.  We also offer some new, simple specification tests by nesting the logit or probit function in a more general functional form.  We apply these methods to a data set of employee participation rates in 401(k) pension plans.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0147.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Mixing Problem in Program Evaluation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0148</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Manski</surname>
          <given-names>Charles F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1993</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A common concern of evaluation studies is to learn the distribution of outcomes when a specified treatment policy or assignment rule determines the treatment received by each member of a specified population.  Recent studies have emphasized evaluation of policies providing the same treatment to all members of the population.  In particular, experiments with randomized treatments have this objective.  Policies mandating homogeneous treatment of the population are of interest, but so are ones that permit treatment to vary across the population.  This paper examines the use of empirical evidence on programs with homogeneous treatments to infer the outcomes that would occur if treatment were to vary across the population. Experimental evidence from the Perry Preschool Project is used to illustrate the inferential problem and the main findings of the analysis.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0148.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Making the Most Out Of Social Experiments: Reducing the Intrinsic Uncertainty in Evidence from Randomized Trials with an Application to the JTPA Experiment</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0149</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Clements</surname>
          <given-names>Nancy</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Smith</surname>
          <given-names>Jeffrey A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper demonstrates that even under ideal conditions, social experiments in general only uniquely determine the mean impacts of programs but not the median or the distribution of program impacts. The conventional common parameter evaluation model widely used in econometrics is one case where experiments uniquely determine the joint distribution of program impacts.  That model assumes that everyone responds to a social program in the same way.  Allowing for heterogeneous responses to programs, the data from social experiments are consistent with a wide variety of alternative impact distributions. We discuss why it is interesting to know the distribution of program impacts.  We propose and implement a variety of different ways of incorporating prior information to reduce the wide variability intrinsic in experimental data.  Robust Bayesian methods and deconvolution methods are developed and applied.  We analyze earnings and employment data on adult women from a recent social experiment. In order to produce plausible impact distributions, it is necessary to impose strong positive dependence between outcomes in the treatment and in the control distributions.  Such dependence is an outcome of certain optimizing models of the program participation decision.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0149.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Split Sample Instrumental Variables</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0150</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Krueger</surname>
          <given-names>Alan B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Instrumental Variables (IV) estimates tend to be biased in the same direction as Ordinary Least Squares (OLS) in finite samples if the instruments are weak.  To address this problem we propose a new IV estimator which we call Split Sample Instrumental Variables (SSIV). SSIV works as follows: we randomly split the sample in half, and use one half of the sample to estimate parameters of the first-stage equation.  We then use these estimated first-stage parameters to construct fitted values and second-stage parameter estimates using data from the other half sample.  SSIV is biased toward zero, rather than toward the plim of the OLS estimate.  However, an unbiased estimate of the attenuation bias of SSIV can be calculated.  We use this estimate of the attenuation bias to derive an estimator that is asymptotically unbiased as the number of instruments tends to infinity, holding the number of observations per instrument fixed.  We label this new estimator Unbiased Split Sample Instrumental Variables (USSIV).  We apply SSIV and USSIV to the data used by Angrist and Krueger (1991) to estimate the payoff to education.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0150.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrumental Variables Regression with Weak Instruments</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0151</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Staiger</surname>
          <given-names>Douglas O</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops asymptotic distribution theory for instrumental variable regression when the partial correlation between the instruments and a single included endogenous variable is weak, here modeled as local to zero.  Asymptotic representations are provided for various instrumental variable statistics, including the two-stage least squares (TSLS) and limited information maximum likelihood (LIML) estimators and their t-statistics.  The asymptotic distributions are found to provide good approximations to sampling distributions with just 20 observations per instrument.  Even in large samples, TSLS can be badly biased, but LIML is, in many cases, approximately median unbiased.  The theory suggests concrete quantitative guidelines for applied work.  These guidelines help to interpret Angrist and Krueger's (1991) estimates of the returns to education: whereas TSLS estimates with many instruments approach the OLS estimate of 6%, the more reliable LIML and TSLS estimates with fewer instruments fall between 8% and 10%, with a typical confidence interval of (6%, 14%).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0151.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Predictive Ability of Several Models of Exchange Rate Volatility</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0152</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cho</surname>
          <given-names>Dongchul</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We compare the out-of-sample forecasting performance of univariate homoskedastic, GARCH, autoregressive and nonparametric models for conditional variances, using five bilateral weekly exchange rates for the dollar, 1973-1989.  For a one week horizon, GARCH models tend to make slightly more accurate forecasts.  For longer horizons, it is difficult to find grounds for choosing between the various models.  None of the models perform well in a conventional test of forecast efficiency.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0152.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Assessing Specification Errors in Stochastic Discount Factor Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0153</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hansen</surname>
          <given-names>Lars P</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Jagannathan</surname>
          <given-names>Ravi</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we develop alternative ways to compare asset pricing models when it is understood that their implied stochastic discount factors do not price all portfolios correctly.  Unlike comparisons based on chi-square statistics associated with the null hypothesis that models are correct, our measures of model performance do not reward variability of discount factor proxies.  One of our measures is designed to exploit fully the implications of arbitrage-free pricing of derivative claims.  We demonstrate empirically the usefulness of our methods in assessing some alternative stochastic discount factor models that have been proposed in the asset pricing literature.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0153.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>When Are Anonymous Congestion Charges Consistent with Marginal Cost Pricing?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0154</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Arnott</surname>
          <given-names>Richard J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kraus</surname>
          <given-names>Marvin</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>There are constraints on pricing congestible facilities.  First, if heterogeneous users are observationally indistinguishable, then congestion charges must be anonymous.  Second, the time variation of congestion charges may be constrained.  Do these constraints undermine the feasibility of marginal cost pricing, and hence the applicability of the first-best theory of congestible facilities?  We show that if heterogeneous users behave identically when using the congestible facility and if the time variation of congestion charges is unconstrained, then marginal cost pricing is feasible with anonymous congestion charges.  If, however, the time variation of congestion charges is constrained, optimal pricing with anonymous congestion charges entails Ramsey pricing.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0154.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Small Sample Properties of Generalized Method of Moments Based Wald Tests</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0155</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Burnside</surname>
          <given-names>Craig</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Eichenbaum</surname>
          <given-names>Martin S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper assesses the small sample properties of Generalized Method of Moments (GMM) based Wald statistics.  The analysis is conducted assuming that the data generating process corresponds to (i) a simple vector white noise process and (ii) an equilibrium business cycle model.  Our key findings are that the small sample size of the Wald tests exceeds their asymptotic size, and that their size increases uniformly with the dimensionality of joint hypotheses.  For tests involving even moderate numbers of moment restrictions, the small sample size of the tests greatly exceeds their asymptotic size.  Relying on asymptotic distribution theory leads one to reject joint hypothesis tests far too often.  We argue that the source of the problem is the difficulty of estimating the spectral density matrix of the GMM residuals, which is needed to conduct inference in a GMM environment.  Imposing restrictions implied by the underlying economic model being investigated or the null hypothesis being tested on this spectral density matrix can lead to substantial improvements in the small sample properties of the Wald tests.</p>
</abstract>
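<!--
  The Wald statistics studied above take the usual quadratic form in the
  restrictions, with the long-run (spectral density) covariance of the GMM
  moment conditions feeding into the parameter covariance matrix.  A minimal
  Python sketch, assuming the parameter estimate, its covariance, and the
  moment-condition series are already in hand; the Bartlett-kernel estimator
  and the function names here are generic illustrations, not the paper's
  procedure.

  import numpy as np
  from scipy import stats

  def newey_west(u, n_lags):
      """Bartlett-kernel estimate of the long-run covariance (spectral density
      at frequency zero) of the moment conditions u, a T x k array."""
      T, k = u.shape
      u = u - u.mean(axis=0)
      S = u.T @ u / T
      for lag in range(1, n_lags + 1):
          w = 1.0 - lag / (n_lags + 1.0)
          gamma = u[lag:].T @ u[:-lag] / T
          S += w * (gamma + gamma.T)
      return S

  def wald_test(theta_hat, V_hat, R, r):
      """Wald statistic and chi-square p-value for H0: R theta = r, where
      V_hat is the estimated covariance matrix of theta_hat."""
      diff = R @ theta_hat - r
      W = diff @ np.linalg.solve(R @ V_hat @ R.T, diff)
      return W, 1.0 - stats.chi2.cdf(W, df=R.shape[0])
-->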
    <self-uri xlink:href="http://www.nber.org/papers/t0155.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Small Sample Bias in GMM Estimation of Covariance Structures</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0156</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Altonji</surname>
          <given-names>Joseph</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Segal</surname>
          <given-names>Lewis M</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We examine the small sample properties of the GMM estimator for models of covariance structures, where the technique is often referred to as the optimal minimum distance (OMD) estimator.  We present a variety of Monte Carlo experiments based on simulated data and on the data used by Abowd and Card (1987, 1990) in an examination of the covariance structure of hours and earnings changes.  Our main finding is that OMD is seriously biased in small samples for many distributions and in relatively large samples for poorly behaved distributions.  The bias is almost always downward in absolute value. It arises because sampling errors in the second moments are correlated with sampling errors in the weighting matrix used by OMD. Furthermore, OMD usually has a larger root mean square error and median absolute error than equally weighted minimum distance (EWMD). We also propose and investigate an alternative estimator, which we call independently weighted optimal minimum distance (IWOMD). IWOMD is a split sample estimator using separate groups of observations to estimate the moments and the weights.  IWOMD has identical large sample properties to the OMD estimator but is unbiased regardless of sample size.  However, the Monte Carlo evidence indicates that IWOMD is usually dominated by EWMD.</p>
</abstract>
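<!--
  A minimal sketch of the minimum distance setup discussed above, assuming a
  vector of sample second moments m and a model f(theta): EWMD minimizes the
  distance with an identity weighting matrix, while OMD weights by the inverse
  of an estimate of the sampling covariance of m computed from the same data,
  which is the source of the small-sample bias described in the abstract.  The
  toy one-parameter model and all names are illustrative.

  import numpy as np
  from scipy.optimize import minimize

  def md_estimate(m, f, theta0, weight):
      """Minimize (m - f(theta))' W (m - f(theta)) for a given weight matrix W."""
      obj = lambda th: (m - f(th)) @ weight @ (m - f(th))
      return minimize(obj, theta0, method="Nelder-Mead").x

  rng = np.random.default_rng(0)
  data = rng.normal(size=(200, 3))                    # panel residuals, T x 3
  m = np.array([np.cov(data[:, i], data[:, j])[0, 1]
                for i in range(3) for j in range(i, 3)])
  f = lambda th: th[0] * np.ones_like(m)              # toy covariance structure
  ewmd = md_estimate(m, f, np.array([0.1]), np.eye(len(m)))
  # OMD replaces np.eye(len(m)) with the inverse of an estimated covariance of
  # m; estimating that matrix from the same observations induces the bias.
-->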
    <self-uri xlink:href="http://www.nber.org/papers/t0156.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Econometric Mixture Models and More General Models for Unobservables in Duration Analysis</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0157</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Taber</surname>
          <given-names>Christopher R</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers models for unobservables in duration models. It demonstrates how cross-section and time-series variation in regressors facilitates identification of single-spell, competing risks and multiple spell duration models.  We also demonstrate the limited value of traditional identification studies by considering a case in which a model is identified in the conventional sense but cannot be consistently estimated.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0157.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Biases in Twin Estimates of the Return to Schooling: A Note on Recent Research</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0158</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Neumark</surname>
          <given-names>David</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Ashenfelter and Krueger's (1993) within-twin, measurement-error-corrected estimate of the return to schooling is about 13-16 percent. If their estimate is unbiased, then their results imply considerable downward measurement error bias in uncorrected within-twin estimates of the return to schooling, and considerable downward omitted ability bias in cross-section estimates.  This note points out that if there are ability differences among twins, then AK's IV estimator exacerbates the omitted ability bias in the within-twin estimate. Thus, upward omitted ability bias in within-twin estimates may provide an alternative explanation of the surprisingly high estimates of the return to schooling that AK obtain, and permit their results to be reconciled with upward, rather than downward, omitted ability bias in cross-section estimates.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0158.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Interpreting Tests of the Convergence Hypothesis</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0159</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bernard</surname>
          <given-names>Andrew B</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Durlauf</surname>
          <given-names>Steven N</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper provides a framework for understanding the cross-section and time series approaches which have been used to test the convergence hypothesis.  First, we present two definitions of convergence which capture the implications of the neoclassical growth model for the relationship between current and future cross-country output differences.  Second, we identify how the cross-section and time series approaches relate to these definitions.  Cross-section tests are shown to be associated with a weaker notion of convergence than time series tests.  Third, we show how these alternative approaches make different assumptions on whether the data are well characterized by a limiting distribution.  As a result, the choice of an appropriate testing framework is shown to depend on both the specific null and alternative hypotheses under consideration as well as on the initial conditions characterizing the data being studied.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0159.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Reported Income in the NLSY: Consistency Checks and Methods for Cleaning the Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0160</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cole</surname>
          <given-names>Nancy</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Currie</surname>
          <given-names>Janet</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The National Longitudinal Survey of Youth collects information about over 20 separate components of respondent income.  These disaggregated income components provide many opportunities to verify the consistency of the data.  This note outlines procedures we have used to identify and `clean' measurement error in the disaggregated income variables.  After cleaning the income data at the disaggregated level, we reconstruct the measure of `family income' and re-evaluate poverty status.  While people may not agree with all of our methods, we hope that they will be of some use to other researchers.  A second purpose of this note is to highlight the value of the disaggregated data, since without it, it would be impossible to improve on the reported totals.  Finally, we hope that with the advent of computerized interviewing technology, checks on the internal consistency of the data of the kind that we propose may eventually be built into interviewing software, thereby improving the quality of the data collected.</p>
</abstract>
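<!--
  A tiny sketch of the kind of internal-consistency check described above,
  assuming a data frame with disaggregated income components and a reported
  family-income total; the column names and tolerance are placeholders, not
  NLSY variable names.

  import pandas as pd

  def flag_income_inconsistencies(df, components, total_col, tol=0.10):
      """Rebuild total income from its components and flag records whose
      reported total differs from the reconstructed sum by more than tol."""
      rebuilt = df[components].fillna(0).sum(axis=1)
      rel_gap = (df[total_col] - rebuilt).abs() / rebuilt.clip(lower=1)
      return df.assign(rebuilt_income=rebuilt, flagged=rel_gap > tol)
-->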
    <self-uri xlink:href="http://www.nber.org/papers/t0160.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asymptotically Optimal Smoothing with ARCH Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0161</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Daniel B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Suppose an observed time series is generated by a stochastic volatility model, i.e., there is an unobservable state variable controlling the volatility of the innovations in the series.  As shown by Nelson (1992), and Nelson and Foster (1994), a misspecified ARCH model will often be able to consistently (as a continuous time limit is approached) estimate the unobserved volatility process, using information in the lagged residuals.  This paper shows how to more efficiently estimate such a volatility process using information in both lagged and led residuals.  In particular, this paper expands the optimal filtering results of Nelson and Foster (1994) and Nelson (1994) to smoothing.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0161.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asymptotic Filtering Theory for Multivariate ARCH Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0162</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Daniel B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>ARCH models are widely used to estimate conditional variances and covariances in financial time series models.  How successfully can ARCH models carry out this estimation when they are misspecified?  How can ARCH models be optimally constructed?  Nelson and Foster (1994) employed continuous record asymptotics to answer these questions in the univariate case.  This paper considers the general multivariate case.  Our results allow us, for example, to construct an asymptotically optimal ARCH model for estimating the conditional variance or conditional beta of a stock return given lagged returns on the stock, volume, market returns, implicit volatility from options contracts, and other relevant data.  We also allow for time-varying shapes of conditional densities (e.g., `heteroskewticity' and `heterokurticity').  Examples are provided.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0162.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Continuous Record Asymptotics for Rolling Sample Variance Estimators</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0163</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Foster</surname>
          <given-names>Dean P</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nelson</surname>
          <given-names>Daniel B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>It is widely known that conditional covariances of asset returns change over time.  Researchers adopt many strategies to accommodate conditional heteroskedasticity.  Among the most popular are: (a) chopping the data into short blocks of time and assuming homoskedasticity within the blocks, (b) performing one-sided rolling regressions, in which only data from, say, the preceding five-year period is used to estimate the conditional covariance of returns at a given date, and (c) two-sided rolling regressions which use, say, five years of leads and five years of lags.  GARCH amounts to a one-sided rolling regression with exponentially declining weights.  We derive asymptotically optimal window lengths for standard rolling regressions and optimal weights for weighted rolling regressions.  An empirical model of the S&amp;P 500 stock index provides an example.</p>
</abstract>
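<!--
  A small sketch of the three schemes described above, assuming a series of
  returns: a flat one-sided window, a flat two-sided window, and one-sided
  exponentially declining weights (the GARCH-like case).  The window length
  and decay rate are placeholders, not the optimal values derived in the paper.

  import numpy as np

  def rolling_var(r, window, two_sided=False):
      """Rolling sample variance of returns r with a flat window."""
      T = len(r)
      out = np.full(T, np.nan)
      for t in range(T):
          lo = max(0, t - window) if two_sided else max(0, t - window + 1)
          hi = min(T, t + window + 1) if two_sided else t + 1
          out[t] = np.var(r[lo:hi])
      return out

  def ewma_var(r, lam=0.94):
      """One-sided rolling variance with exponentially declining weights."""
      out = np.empty(len(r))
      out[0] = r[0] ** 2
      for t in range(1, len(r)):
          out[t] = lam * out[t - 1] + (1.0 - lam) * r[t] ** 2
      return out

  rng = np.random.default_rng(1)
  returns = 0.01 * rng.standard_t(df=5, size=1000)
  one_sided = rolling_var(returns, window=60)
  two_sided = rolling_var(returns, window=60, two_sided=True)
  garch_like = ewma_var(returns)
-->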
    <self-uri xlink:href="http://www.nber.org/papers/t0163.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Evidence on Structural Instability in Macroeconomic Time Series Relations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0164</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>An experiment is performed to assess the prevalence of instability in univariate and bivariate macroeconomic time series relations and to ascertain whether various adaptive forecasting techniques successfully handle any such instability.  Formal tests for instability and out-of-sample forecasts from sixteen different models are computed using a sample of 76 representative U.S. monthly postwar macroeconomic time series, constituting 5700 bivariate forecasting relations.  The tests indicate widespread instability in univariate and bivariate autoregressive models.  However, adaptive forecasting models, in particular time varying parameter models, have limited success in exploiting this instability to improve upon fixed-parameter or recursive autoregressive forecasts.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0164.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Deterministic Trends in the Presence of Serially Correlated Errors</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0165</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Canjels</surname>
          <given-names>Eugene</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper studies the problems of estimation and inference in the linear trend model y_t = α + βt + u_t, where u_t follows an autoregressive process with largest root ρ, and β is the parameter of interest.  We contrast asymptotic results for the cases |ρ| < 1 and ρ = 1, and argue that the most useful asymptotic approximations obtain from modeling ρ as local-to-unity.  Asymptotic distributions are derived for the OLS, first-difference, infeasible GLS and three feasible GLS estimators.  These distributions depend on the local-to-unity parameter and a parameter, κ, that governs the variance of the initial error term.  The feasible Cochrane-Orcutt estimator has poor properties, and the feasible Prais-Winsten estimator is the preferred estimator unless the researcher has sharp a priori knowledge about ρ and κ.  The paper develops methods for constructing confidence intervals for β that account for uncertainty in ρ and κ.  We use these results to estimate growth rates for real per capita GDP in 128 countries.</p>
</abstract>
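<!--
  A minimal sketch of the feasible GLS estimators compared above for the trend
  model y_t = alpha + beta*t + u_t with AR(1) errors: rho is estimated from OLS
  residuals and the data are quasi-differenced, with Prais-Winsten keeping a
  rescaled first observation and Cochrane-Orcutt dropping it.  This is the
  textbook recipe only, not the paper's local-to-unity analysis.

  import numpy as np

  def trend_fgls(y, prais_winsten=True):
      """Feasible GLS estimates of (alpha, beta) in y_t = alpha + beta*t + u_t."""
      T = len(y)
      X = np.column_stack([np.ones(T), np.arange(1, T + 1)])
      b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
      u = y - X @ b_ols
      rho = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])     # AR(1) coefficient
      ys = y[1:] - rho * y[:-1]                      # quasi-differenced data
      Xs = X[1:] - rho * X[:-1]
      if prais_winsten:                              # rescale first observation
          scale = np.sqrt(1.0 - rho ** 2)
          ys = np.concatenate([[scale * y[0]], ys])
          Xs = np.vstack([scale * X[0], Xs])
      return np.linalg.lstsq(Xs, ys, rcond=None)[0]  # [alpha_hat, beta_hat]
-->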
    <self-uri xlink:href="http://www.nber.org/papers/t0165.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Accounting for Dropouts in Evaluations of Social Experiments</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0166</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Smith</surname>
          <given-names>Jeffrey A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Taber</surname>
          <given-names>Christopher R</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the statistical and economic justification for one widely-used method, due to Bloom (1984), of adjusting data from social experiments to account for dropping-out behavior.  We generalize the method to apply to distributions, not just means, and present tests of the key identifying assumption in this context.  A reanalysis of the National JTPA experiment vindicates application of Bloom's method in this context.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0166.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Optimal Prediction Under Asymmetric Loss</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0167</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Christoffersen</surname>
          <given-names>Peter</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Corporate Finance</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Prediction problems involving asymmetric loss functions arise routinely in many fields, yet the theory of optimal prediction under asymmetric loss is not well developed.  We study the optimal prediction problem under general loss structures and characterize the optimal predictor.  We compute the optimal predictor analytically in two leading cases.  Analytic solutions for the optimal predictor are not available in more complicated cases, so we develop numerical procedures for computing it.  We illustrate the results by forecasting the GARCH(1,1) process which, although white noise, is non-trivially forecastable under asymmetric loss.</p>
</abstract>
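<!--
  A sketch of the numerical idea described above: given a conditional
  predictive density for the next observation, search for the point forecast
  that minimizes conditional expected loss.  Here the density is normal with a
  given conditional mean and variance (in the GARCH(1,1) application these
  would come from the fitted model) and the loss is asymmetric piecewise
  linear, in which case the minimizer is a known conditional quantile that the
  numerical search should reproduce.  All parameter values are illustrative.

  import numpy as np
  from scipy import optimize, stats

  a, b = 2.0, 1.0                      # cost per unit of under/over prediction
  linlin = lambda e: np.where(e > 0, a * e, -b * e)       # e = y - forecast

  mu, sigma = 0.0, 1.5                 # conditional mean and std. deviation
  draws = stats.norm.rvs(loc=mu, scale=sigma, size=100_000, random_state=0)

  expected_loss = lambda f: linlin(draws - f).mean()
  best = optimize.minimize_scalar(expected_loss, bounds=(-10, 10),
                                  method="bounded")
  quantile = stats.norm.ppf(a / (a + b), loc=mu, scale=sigma)
  # best.x and quantile should be close: under this loss the optimal forecast
  # is the a/(a+b) conditional quantile rather than the conditional mean.
-->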
    <self-uri xlink:href="http://www.nber.org/papers/t0167.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Multiple-Discrete Choice Models: An Application to Computerization Returns</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0168</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hendel</surname>
          <given-names>Igal</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops a multiple-discrete choice model for the analysis of demand for differentiated products.  Users maximize profits by choosing the number of units of each brand they purchase. Multiple-unit as well as multiple-brand purchases are allowed.  These two features distinguish this model from classical discrete choice models which consider only a single choice among mutually exclusive alternatives.  Model parameters are estimated using the simulated method of moments technique. Both requirements, microfoundations and estimability, are imposed in order to exploit the available micro level data on personal computer purchases.  The estimated demand structure is used to assess welfare gains from computerization and technological innovation in peripherals industries.  The estimated return on investment in computers is 90%.  Moreover, a 10% increase in the performance-to-price ratio of microprocessors leads to a 4% gain in the estimated end user surplus.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0168.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Comparing Predictive Accuracy</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0169</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mariano</surname>
          <given-names>Roberto S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Using research designs patterned after randomized experiments, many recent economic studies examine outcome measures for treatment groups and comparison groups that are not randomly assigned.  By using variation in explanatory variables generated by changes in state laws, government draft mechanisms, or other means, these studies obtain variation that is readily examined and is plausibly exogenous.  This paper describes the advantages of these studies and suggests how they can be improved.  It also provides aids in judging the validity of inferences they draw. Design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0169.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Natural and Quasi-Experiments in Economics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0170</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Meyer</surname>
          <given-names>Bruce D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Using research designs patterned after randomized experiments, many recent economic studies examine outcome measures for treatment groups and comparison groups that are not randomly assigned.  By using variation in explanatory variables generated by changes in state laws, government draft mechanisms, or other means, these studies obtain variation that is readily examined and is plausibly exogenous.  This paper describes the advantages of these studies and suggests how they can be improved.  It also provides aids in judging the validity of inferences they draw.  Design complications such as multiple treatment and comparison groups and multiple pre- or post-intervention observations are advocated.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0170.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Testing for Cointegration When Some of the Cointegrating Vectors are Known</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0171</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Horvath</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1994</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Many economic models imply that ratios, simple differences, or `spreads' of variables are I(0).  In these models, cointegrating vectors are composed of 1's, 0's and -1's, and contain no unknown parameters.  In this paper we develop tests for cointegration that can be applied when some of the cointegrating vectors are known under the null or under the alternative hypotheses.  These tests are constructed in a vector error correction model (VECM) and are motivated as Wald tests in the Gaussian version of this model.  When all of the cointegrating vectors are known under the alternative, the tests correspond to the standard Wald tests for the inclusion of error correction terms in the VAR.  Modifications of this basic test are developed when a subset of the cointegrating vectors contains unknown parameters.  The asymptotic null distributions of the statistics are derived, critical values are determined, and the local power properties of the test are studied.  Finally, the test is applied to data on foreign exchange futures and spot prices to test the stability of the forward-spot premium.</p>
</abstract>
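<!--
  A minimal sketch of the simplest case described above, in which the single
  candidate cointegrating vector is fully known (for example, a forward minus
  spot spread): estimate the VECM with the lagged spread as the error
  correction term and form the joint Wald statistic for excluding it.  Lagged
  differences are omitted for brevity, and the relevant critical values are
  the nonstandard ones tabulated in the paper, so only the statistic is
  returned.  Variable names are illustrative.

  import numpy as np

  def ecm_wald(y, beta_known):
      """y: T x n matrix of levels; beta_known: n-vector such that the spread
      s_t = y_t' beta_known is I(0) under the alternative.  Fits
      dy_t = c + a * s_(t-1) + e_t and tests H0: a = 0 in every equation."""
      dy = np.diff(y, axis=0)                          # (T-1) x n
      s = (y @ beta_known)[:-1]                        # lagged spread
      X = np.column_stack([np.ones(len(s)), s])
      coef = np.linalg.lstsq(X, dy, rcond=None)[0]     # same regressors each eq.
      resid = dy - X @ coef
      Sigma = resid.T @ resid / (len(s) - X.shape[1])  # error covariance
      var_a = Sigma * np.linalg.inv(X.T @ X)[1, 1]     # cov. of the EC loadings
      a = coef[1]
      return a @ np.linalg.solve(var_a, a)             # joint Wald statistic
-->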
    <self-uri xlink:href="http://www.nber.org/papers/t0171.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Jackknife Instrumental Variables Estimation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0172</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Krueger</surname>
          <given-names>Alan B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Two-stage-least-squares (2SLS) estimates are biased towards OLS estimates.  This bias grows with the degree of over-identification and can generate highly misleading results.  In this paper we propose two simple alternatives to 2SLS and limited-information-maximum-likelihood (LIML) estimators for models with more instruments than endogenous regressors.  These estimators can be interpreted as instrumental variables procedures using an instrument that is independent of disturbances even in finite samples.  Independence is achieved by using a `leave-one-out' jackknife-type fitted value in place of the usual first-stage equation.  The new estimators are first-order equivalent to 2SLS but with finite-sample properties superior to those of 2SLS and similar to LIML when there are many instruments. Moreover, the jackknife estimators appear to be less sensitive than LIML to deviations from the linear reduced form used in classical simultaneous equations models.</p>
</abstract>
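<!--
  A minimal sketch of the leave-one-out ("jackknife") fitted values described
  above, for a single endogenous regressor x, an instrument matrix Z, and an
  outcome y; exogenous covariates are omitted to keep the sketch short.  The
  leave-one-out first stage is obtained from the full-sample projection via
  the leverage values, so no re-estimation loop is needed.  The function name
  and setup are illustrative.

  import numpy as np

  def jive(y, x, Z):
      """IV estimate of the coefficient on x using leave-one-out first-stage
      fitted values of x as the instrument."""
      pi = np.linalg.lstsq(Z, x, rcond=None)[0]            # full-sample first stage
      h = np.einsum("ij,jk,ik->i", Z, np.linalg.inv(Z.T @ Z), Z)   # leverages
      xhat = Z @ pi
      xhat_loo = (xhat - h * x) / (1.0 - h)                # leave-one-out fit
      return (xhat_loo @ y) / (xhat_loo @ x)
-->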
    <self-uri xlink:href="http://www.nber.org/papers/t0172.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Measuring Volatility Dynamics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0173</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lopez</surname>
          <given-names>Jose A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Recently there has been a great deal of interest in modeling volatility fluctuations.  ARCH models, for example, provide parsimonious approximations to volatility dynamics.  Here we provide a selective account of certain aspects of conditional volatility modeling that are of particular relevance in macroeconomics and finance. First, we sketch the rudiments of a rather general univariate time-series model, allowing for dynamics in both the conditional mean and variance.  Second, we discuss both the economic and statistical motivation for the models, we characterize their properties, and we discuss issues related to estimation and testing.  Finally, we discuss a variety of applications and extensions of the basic models.</p>
</abstract>
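<!--
  A small sketch of the kind of univariate model outlined above, with dynamics
  in both the conditional mean and the conditional variance: an AR(1) mean with
  a GARCH(1,1) variance.  Parameter values are illustrative, not estimates.

  import numpy as np

  def simulate_ar1_garch11(T, phi=0.3, omega=0.05, alpha=0.1, beta=0.85, seed=0):
      """Simulate y_t = phi*y_(t-1) + e_t with e_t ~ N(0, h_t) and
      h_t = omega + alpha*e_(t-1)^2 + beta*h_(t-1)."""
      rng = np.random.default_rng(seed)
      y = np.zeros(T)
      eps = np.zeros(T)
      h = np.full(T, omega / (1.0 - alpha - beta))   # unconditional variance
      for t in range(1, T):
          h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
          eps[t] = np.sqrt(h[t]) * rng.standard_normal()
          y[t] = phi * y[t - 1] + eps[t]
      return y, h
-->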
    <self-uri xlink:href="http://www.nber.org/papers/t0173.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Dynamic Equilibrium Economies: A Framework for Comparing Models and Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0174</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ohanian</surname>
          <given-names>Lee E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Berkowitz</surname>
          <given-names>Jeremy</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Many recent theoretical papers have come under attack for modeling prices as Geometric Brownian Motion.  This process can diverge over time, implying that firms facing this price process can earn infinite profits.  We explore the significance of this attack and contrast investment under Geometric Brownian Motion with investment assuming mean reversion.  While analytically more complex, mean reversion in many cases is a more plausible assumption, allowing for supply responses to increasing prices.  We show a mean reversion process rather than Geometric Brownian Motion and provide an explanation for this result.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0174.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Investment Under Alternative Return Assumptions: Comparing Random Walks and Mean Reversion</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0175</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Metcalf</surname>
          <given-names>Gilbert E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hassett</surname>
          <given-names>Kevin A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Many recent theoretical papers have come under attack for modeling prices as Geometric Brownian Motion.  This process can diverge over time, implying that firms facing this price process can earn infinite profits.  We explore the significance of this attack and contrast investment under Geometric Brownian Motion with investment assuming mean reversion.  While analytically more complex, mean reversion in many cases is a more plausible assumption, allowing for supply responses to increasing prices.  We show that cumulative investment is generally unaffected by the use of a mean reversion process rather than Geometric Brownian Motion and provide an explanation for this result.</p>
</abstract>
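<!--
  A short simulation sketch of the two price processes contrasted above:
  geometric Brownian motion, which can wander without bound, and a mean
  reverting process (Ornstein-Uhlenbeck in logs) that is pulled back toward a
  long-run level at speed kappa.  Parameter values are illustrative.

  import numpy as np

  def simulate_prices(T=500, dt=1.0 / 250, mu=0.05, sigma=0.2,
                      kappa=2.0, pbar=1.0, p0=1.0, seed=0):
      """Return simulated GBM and mean-reverting price paths of length T."""
      rng = np.random.default_rng(seed)
      dW = np.sqrt(dt) * rng.standard_normal(T)
      gbm = np.empty(T)
      mrev = np.empty(T)
      gbm[0] = mrev[0] = p0
      for t in range(1, T):
          gbm[t] = gbm[t - 1] * np.exp((mu - 0.5 * sigma ** 2) * dt
                                       + sigma * dW[t])
          logp = np.log(mrev[t - 1])
          logp += kappa * (np.log(pbar) - logp) * dt + sigma * dW[t]
          mrev[t] = np.exp(logp)
      return gbm, mrev
-->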
    <self-uri xlink:href="http://www.nber.org/papers/t0175.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Comparison of Alternative Instrumental Variables Estimators of a Dynamic Linear Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0176</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wilcox</surname>
          <given-names>David W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Using a dynamic linear equation that has a conditionally homoskedastic moving average disturbance, we compare two parameterizations of a commonly used instrumental variables estimator (Hansen (1982)) to one that is asymptotically optimal in a class of estimators that includes the conventional one (Hansen (1985)).  We find that for some plausible data generating processes, the optimal one is distinctly more efficient asymptotically.  Simulations indicate that in samples of size typically available, asymptotic theory describes the distribution of the parameter estimates reasonably well, but that test statistics sometimes are poorly sized.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0176.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Small Sample Properties of GMM for Business Cycle Analysis</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0177</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Christiano</surname>
          <given-names>Lawrence</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>den Haan</surname>
          <given-names>Wouter J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We investigate, by Monte Carlo methods, the finite sample properties of GMM procedures for conducting inference about statistics that are of interest in the business cycle literature.  These statistics include the second moments of data filtered using the first difference and Hodrick-Prescott filters, and they include statistics for evaluating model fit.  Our results indicate that, for the procedures considered, the existing asymptotic theory is not a good guide in a sample the size of quarterly postwar U.S. data.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0177.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Non-Parametric Demand Analysis with an Application to the Demand for Fish</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0178</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Graddy</surname>
          <given-names>Kathryn</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Instrumental variables (IV) estimation of a demand equation using time series data is shown to produce a weighted average derivative of heterogeneous potential demand functions.  This result adapts recent work on the causal interpretation of two-stage least squares estimates to the simultaneous equations context and generalizes earlier research on average derivative estimation to models with endogenous regressors. The paper also shows how to compute the weights underlying IV estimates of average derivatives in a simultaneous equations model. These ideas are illustrated using data from the Fulton Fish market in New York City to estimate an average elasticity of wholesale demand for fresh fish.  The weighting function underlying IV estimates of the demand equation is graphed and interpreted.  The empirical example illustrates the essentially local and context-specific nature of instrumental variables estimates of structural parameters in simultaneous equations models.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0178.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>One Day in June, 1993: A Study of the Working of Reuters 2000-2 Electronic Foreign Exchange Trading System</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0179</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Goodhart</surname>
          <given-names>Charles</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ito</surname>
          <given-names>Takatoshi</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Payne</surname>
          <given-names>Richard</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper utilizes foreign exchange data (bid, ask and transaction prices and quantities) collected from the screen of the electronic broking system (Reuter D2000-2) on June 16, 1993.  The bid and ask quotes, which are `firm' in this data set, are compared with the Reuters FXFX page, which reports only indicative bid and ask prices.  A caution is necessary due to the small sample (7 hours).  The paper finds that although the bid-ask mean of indicative quotes is similar to that of `firm' quotes, the behavior of the bid-ask spread and the frequency of quote entry are quite different in the two kinds of quotes.  The bid-ask spreads in the broking system are much more time-variant and dependent on the frequency of trade, while the indicative bid-ask spreads tend to cluster at round numbers.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0179.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A La Recherche des Moments Perdus: Covariance Models for Unbalanced Panels with Endogenous Death</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0180</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abowd</surname>
          <given-names>John M</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Crépon</surname>
          <given-names>Bruno</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kramarz</surname>
          <given-names>Francis</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Trognon</surname>
          <given-names>Alain</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We develop a model for decomposing the covariance structure of panel data on firms into a part due to permanent heterogeneity, a part due to differential histories with unknown ages, and a part due to the evolution of economic shocks to the firm.  Our model allows for the endogenous death of firms and correctly handles the problems arising from the estimation of this death process.  We implement this model on an unbalanced longitudinal sample of French firms which have both known and unknown ages and histories.  For firms with unknown birthdates, we find that the structural autocorrelation in employment, compensation and capital is dominated by the part due to initial heterogeneity and random growth rates.  Serial correlation in the periodic shocks is less important.  For these firms, profitability, value-added and indebtedness have processes in which the heterogeneity components are less important.  Firms with known birthdates and histories (which are younger than the censored firms) have autocorrelation structures dominated by the heterogeneity.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0180.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Conditioning on the Probability of Selection to Control Selection Bias</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0181</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Problems of sample selection arise in the analysis of both experimental and non-experimental data.  In clinical trials to evaluate the impact of an intervention on health and mortality, treatment assignment is typically nonrandom in a sample of survivors even if the original assignment is random.  Similarly, randomized training interventions like National Supported Work (NSW) are not necessarily randomly assigned in the sample of working men.  A non-experimental version of this problem involves the use of instrumental variables (IV) to estimate behavioral relationships.  A sample selection rule that is related to the instruments can induce correlation between the instruments and unobserved outcomes, possibly invalidating the use of conventional IV techniques in the selected sample.  This paper shows that conditioning on the probability of selection given the instruments can provide a solution to the selection problem as long as the relationship between instruments and selection status satisfies a simple monotonicity condition.  A latent index structure is not required for this result, which is motivated as an extension of earlier work on the propensity score.  The conditioning approach to selection problems is illustrated using instrumental variables techniques to estimate the returns to schooling in a sample with positive earnings.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0181.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
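  <!-- Illustrative sketch for t0181 (not the paper's estimator; the simulated design and
       all variable names are hypothetical): conditioning on the probability of selection
       given the instruments.  Here the selection probability depends on the instruments
       only through z1 + z2, so conditioning on p(Z) amounts to including dummies for its
       level sets, and z2 then serves as an instrument within the selected sample.

       import numpy as np

       rng = np.random.default_rng(0)
       n = 20000
       z1 = rng.binomial(1, 0.5, n)           # instrument entering selection only
       z2 = rng.binomial(1, 0.5, n)           # instrument entering the first stage
       u = rng.normal(size=n)                 # unobservable in outcome and selection
       x = 10 + z2 + 0.5 * u + rng.normal(size=n)   # endogenous regressor
       y = 1.0 * x + u + rng.normal(size=n)          # true coefficient on x is 1.0
       sel = (z1 + z2 + u > 1.0)              # monotone selection rule

       def iv(y, x, z, controls=None):
           # just-identified 2SLS with optional exogenous controls
           ones = np.ones_like(y)
           W = ones[:, None] if controls is None else np.column_stack([ones, controls])
           X = np.column_stack([x, W])
           Z = np.column_stack([z, W])
           return np.linalg.solve(Z.T @ X, Z.T @ y)[0]

       # p(Z) is constant on cells of z1 + z2; in practice it would be estimated,
       # e.g. by a logit of sel on the instruments.
       cells = np.column_stack([(z1 + z2 == 1), (z1 + z2 == 2)]).astype(float)

       naive = iv(y[sel], x[sel], z2[sel])                            # ignores selection
       conditioned = iv(y[sel], x[sel], z2[sel], controls=cells[sel])
       print(naive, conditioned)

       The naive estimate is pulled away from 1.0 because, among the selected, z2 is
       correlated with u; conditioning on the level sets of p(Z) restores that independence.
  -->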

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Refining Estimates of Marital Status Differences in Mortality at Older Ages</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0182</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Korenman</surname>
          <given-names>Sanders</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Goldman</surname>
          <given-names>Noreen</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fu</surname>
          <given-names>Haishan</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economics of Aging</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The main objective of this analysis is to demonstrate that some of the limitations that have characterized recent studies of the relationship between marital status and health outcomes may result in biased estimates of marital status differences in mortality among the elderly.  A secondary goal is to evaluate the strength of evidence in support of the excess risks of mortality associated with widowhood, once we are able to eliminate or mitigate many of the limitations experienced by other studies. Our results, based on the 1984-1990 Longitudinal Study of Aging, demonstrate that the estimated marital status effects in logit and hazard models of survival are very sensitive to whether and how marital status information is updated after the baseline interview. Refined measures of marital status that prospectively capture transitions from marriage to widowhood result in substantially increased estimates of the relative risk of dying in the early durations of widowhood (bereavement).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0182.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Another Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0183</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A √T consistent estimator of a heteroskedasticity and autocorrelation consistent covariance matrix estimator is proposed and evaluated.  The relevant applications are ones in which the regression disturbance follows a moving average process of known order.  In a system of ℓ equations, this `MA-ℓ' estimator entails estimation of the moving average coefficients of an ℓ-dimensional vector.  Simulations indicate that the MA-ℓ estimator's finite sample performance is better than that of the estimators of Andrews and Monahan (1992) and Newey and West (1994) when cross-products of instruments and disturbances are sharply negatively autocorrelated, comparable or slightly worse otherwise.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0183.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Randomization as an Instrumental Variable</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0184</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper discusses how randomized social experiments operate as an instrumental variable.  For two types of randomization schemes, the fundamental experimental estimation equations are derived from the principle that experiments equate bias in control and experimental samples.  Using conventional econometric representations, we derive the orthogonality conditions for the fundamental estimation equations. Randomization is a multiple instrumental variable in the sense that one randomization defines the parameter of interest expressed as a function of multiple endogenous variables in the conventional usage of that term.  It orthogonalizes the treatment variable simultaneously with respect to the other regressors in the model and the disturbance term for the conditional population.  However, conventional `structural' parameters are not in general  identified by the two types of randomization schemes widely used in practice.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0184.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrumental Variables: A Cautionary Tale</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0185</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the use of instrumental variables to estimate the mean effect of treatment on the treated.  It reviews previous work on this topic by Heckman and Robb (1985, 1986) and demonstrates that (a) unless the effect of treatment is the same for everyone (conditional on observables), or (b) treatment effects are variable across persons but the person-specific component of the variability not forecastable by observables does not determine participation in the program, widely-used instrumental variable methods produce inconsistent estimators of the parameter of interest. Neither assumption is very palatable.  The first assumes a homogeneity that is implausible.  The second assumes either that very rich data are available to the econometrician, or that the persons being studied do not have better information than the econometrician or do not use it.  Instrumental variable methods do not provide a general solution to the evaluation problem.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0185.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Information Theoretic Approaches to Inference in Moment Condition Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0186</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Johnson</surname>
          <given-names>Phillip M</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Spady</surname>
          <given-names>Richard H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>One-step efficient GMM estimation has been developed in the recent papers of Back and Brown (1990), Imbens (1993) and Qin and Lawless (1994).  These papers emphasized methods that correspond to using Owen's (1988) method of empirical likelihood to reweight the data so that the reweighted sample obeys all the moment restrictions at the parameter estimates.  In this paper we consider an alternative KLIC-motivated weighting and show how it and similar discrete reweightings define a class of unconstrained optimization problems which includes GMM as a special case.  Such KLIC-motivated reweightings introduce M auxiliary `tilting' parameters, where M is the number of moments; parameter and overidentification hypotheses can be recast in terms of these tilting parameters.  Such tests, when appropriately conditioned on the estimates of the original parameters, are often startlingly more effective than their conventional counterparts.  This is apparently due to the local ancillarity of the original parameters for the tilting parameters.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0186.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Selection Bias Adjustment in Treatment-Effect Models as a Method of Aggregation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0187</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Moffitt</surname>
          <given-names>Robert A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The aim of this note is to interpret estimation of the conventional treatment-effect selection-bias model in econometrics as a method of aggregation and to draw the implications of this interpretation.  In addition, the paper notes the connection of this interpretation with an older style of analysis using grouped data and illustrates the aggregation analogy with examples from the literature. The estimation technique used to illustrate the points is the method of instrumental variables.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0187.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A CES Indirect Production Function</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0188</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Jovanovic</surname>
          <given-names>Boyan</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper derives an indirect production function that is, in a special case, of a constant elasticity of substitution form.  This is not a contribution to the theory of aggregation generally.  Instead it is a microfoundation for a specific but popular production function -- the CES -- that helps us express the important concept of the elasticity of substitution in terms of more primitive, and more intuitive, concepts of returns to scale.  The paper presents a simple lemma, and then shows that several diverse applications have a common logical structure: the production function often used in growth theory, the utility function when there is household production, human capital theory, and the concept of the aggregate technology shock.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0188.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On the Validity of Using Census Geocode Characteristics to Proxy Individual Socioeconomic Characteristics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0189</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Geronimus</surname>
          <given-names>Arline T</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bound</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Neidert</surname>
          <given-names>Lisa J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>1995</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Investigators of social differentials in health outcomes commonly augment incomplete micro data by appending socioeconomic characteristics of residential areas (such as median income in a zip code) to proxy for individual characteristics.  However, little empirical attention has been paid to how well this aggregate information serves as a proxy for the individual characteristics of interest.  We build on recent work addressing the biases inherent in proxies and consider two health-related examples within a statistical framework that illuminate the nature and sources of biases.  Data from the Panel Study of Income Dynamics and the National Maternal and Infant Health Survey are linked to census data.  We assess the validity of using the aggregate census information as a proxy for individual information when estimating main effects, and when controlling for potential confounding between socioeconomic and sociodemographic factors in measures of general health status and infant mortality.  We find a general, but not universal, tendency for aggregate proxies to exaggerate the effects of micro-level variables and to do more poorly than micro-level variables at controlling for confounding.  The magnitude and direction of these biases, however, vary across samples.  Our statistical framework and empirical findings suggest the difficulties in and limits to interpreting proxies derived from aggregate census data as if they were micro-level variables.  The statistical framework we outline for our study of health outcomes should be generally applicable to other situations where researchers have merged aggregate data with micro data samples.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0189.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Existence of Equilibrium and Stratification in Local and Hierarchical Tiebout Economies with Property Taxes and Voting</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0190</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nechyba</surname>
          <given-names>Thomas J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We present the first fully closed general equilibrium model of hierarchical and local public goods economies with the following features: (i) multiple agent types who are endowed with both some amount of private good (income) and a house, who are mobile between houses and jurisdictions, and who vote in local and national elections; (ii) multiple communities that finance a local public good through property taxes which are set in accordance with absolute majority rule; and (iii) a national government that produces a national public good financed through an income tax whose level is determined through majority rule voting.  In contrast to previous models, no overly restrictive assumptions on preferences and technologies are required to prove the existence of an equilibrium in the presence of property taxation and voting.  Thus, the existence of an equilibrium is proved without any of the major restrictions used in the past, and sufficient conditions for stratification of agents into communities based on their public good preferences and their wealth levels are found.  This model lays the groundwork for a positive applied analysis of local public finance and intergovernmental relations.  It furthermore builds the foundation for a parameterized computable general equilibrium model of local public goods and fiscal federalism that is used elsewhere to analyze a variety of policy issues.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0190.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On Biases in Tests of the Expectations Hypothesis of the Term Structure of Interest Rates</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0191</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bekaert</surname>
          <given-names>Geert</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hodrick</surname>
          <given-names>Robert J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Marshall</surname>
          <given-names>David</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We document extreme bias and dispersion in the small sample distributions of five standard regression tests of the expectations hypothesis of the term structure of interest rates.  These biases derive from the extreme persistence in short interest rates.  We derive approximate analytic expressions for these biases, and we characterize the small-sample distributions of these test statistics under a simple first-order autoregressive data generating process for the short rate.  The biases are also present when the short rate is modeled with a more realistic regime-switching process.  The differences between the small-sample distributions of test statistics and the asymptotic distributions partially reconcile the different inferences drawn when alternative tests are used to evaluate the expectations hypothesis.  In general, the test statistics reject the expectations hypothesis more strongly and uniformly when they are evaluated using the small-sample distributions, as compared to the asymptotic distributions.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0191.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
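  <!-- Illustrative sketch for t0191: a Monte Carlo of the small-sample distribution of a
       stylized expectations-hypothesis regression slope when the short rate follows a
       persistent AR(1).  This is in the spirit of, but much simpler than, the regression
       tests studied in the paper; under the null the population slope is 1, and the
       simulated quantiles show the bias and dispersion arising from persistence.

       import numpy as np

       rng = np.random.default_rng(3)
       rho, T, n_rep = 0.98, 100, 2000
       slopes = []
       for _ in range(n_rep):
           eps = rng.normal(size=T + 1)
           r = np.zeros(T + 1)
           for t in range(T):
               r[t + 1] = rho * r[t] + eps[t + 1]        # AR(1) short rate
           y2 = 0.5 * (1 + rho) * r                       # 2-period yield under the EH
           spread = 2.0 * (y2 - r)                        # regressor; EH implies slope 1
           dr = r[1:] - r[:-1]                            # change in the short rate
           X = np.column_stack([np.ones(T), spread[:-1]])
           slopes.append(np.linalg.lstsq(X, dr, rcond=None)[0][1])

       print(np.quantile(np.array(slopes), [0.05, 0.5, 0.95]))   # vs. population value 1
  -->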

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Forecast Evaluation and Combination</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0192</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lopez</surname>
          <given-names>Jose A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>It is obvious that forecasts are of great importance and widely used in economics and finance.  Quite simply, good forecasts lead to good decisions.  The importance of forecast evaluation and combination techniques follows immediately -- forecast users naturally have a keen interest in monitoring and improving forecast performance.  More generally, forecast evaluation figures prominently in many questions in empirical economics and finance.  We provide a selective account of forecast evaluation and combination methods.  First we discuss evaluation of a single forecast, and in particular, evaluation of whether and how it may be improved.  Second, we discuss the evaluation and comparison of the accuracy of competing forecasts.  Third, we discuss whether and how a set of forecasts may be combined to produce a superior composite forecast.  Fourth, we describe a number of forecast evaluation topics of particular relevance in economics and finance, including methods for evaluating direction-of-change forecasts, probability forecasts and volatility forecasts.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0192.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
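  <!-- Illustrative sketch for t0192: one standard tool for comparing the accuracy of two
       competing forecasts is a test on the mean loss differential (Diebold and Mariano,
       1995).  A minimal version with squared-error loss and a rectangular long-run variance
       for the differential is sketched below; the series in the example are synthetic.

       import numpy as np

       def dm_statistic(y, f1, f2, h=1):
           # loss differential between forecasts f1 and f2 of y; for optimal h-step
           # forecasts the differential is serially correlated up to lag h-1
           d = (y - f1) ** 2 - (y - f2) ** 2
           T = d.size
           dbar = d.mean()
           dc = d - dbar
           lrv = dc @ dc / T
           for j in range(1, h):
               lrv += 2 * (dc[j:] @ dc[:-j]) / T
           return dbar / np.sqrt(lrv / T)    # approx. N(0,1) under equal accuracy

       rng = np.random.default_rng(4)
       y = rng.normal(size=200)
       f1 = 0.5 * y + rng.normal(scale=0.8, size=200)   # noisier forecast
       f2 = 0.8 * y + rng.normal(scale=0.4, size=200)   # more accurate forecast
       print(dm_statistic(y, f1, f2, h=1))
  -->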

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrument Relevance in Multivariate Linear Models: A Simple Measure</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0193</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shea</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The correlation between instruments and explanatory variables is a key determinant of the performance of the instrumental variables estimator.  The R-squared from regressing the explanatory variable on the instrument vector is a useful measure of relevance in univariate models, but can be misleading when there are multiple endogenous variables.  This paper proposes a computationally simple partial R-squared measure of instrument relevance for multivariate models.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0193.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Exact Maximum Likelihood Estimation of Observation-Driven Econometric Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0194</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Schuermann</surname>
          <given-names>Til</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The possibility of exact maximum likelihood estimation of many observation-driven models remains an open question.  Often only approximate maximum likelihood estimation is attempted, because the unconditional density needed for exact estimation is not known in closed form.  Using simulation and nonparametric density estimation techniques that facilitate empirical likelihood evaluation, we develop an exact maximum likelihood procedure.  We provide an illustrative application to the estimation of ARCH models, in which we compare the sampling properties of the exact estimator to those of several competitors.  We find that, especially in situations of small samples and high persistence, efficiency gains are obtained.  We conclude with a discussion of directions for future research, including application of our methods to panel data models.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0194.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Inferences from Parametric and Non-Parametric Covariance Matrix Estimation Procedures</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0195</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>den Haan</surname>
          <given-names>Wouter J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Levin</surname>
          <given-names>Andrew T</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper, we propose a parametric spectral estimation procedure for constructing heteroskedasticity and autocorrelation consistent (HAC) covariance matrices.  We establish the consistency of this procedure under very general conditions similar to those considered in previous research, and we demonstrate that the parametric estimator converges at a faster rate than the kernel-based estimators proposed by Andrews and Monahan (1992) and Newey and West (1994).  In finite samples, our Monte Carlo experiments indicate that the parametric estimator matches, and in some cases greatly exceeds, the performance of the prewhitened kernel estimator proposed by Andrews and Monahan (1992).  These simulation experiments illustrate several important limitations of non-parametric HAC estimation procedures, and highlight the advantages of explicitly modeling the temporal properties of the error terms.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0195.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
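  <!-- Illustrative sketch for t0195: the parametric idea in its simplest univariate form,
       where an AR(p) model is fit to a moment series and the implied long-run variance
       (the spectral density at frequency zero, up to normalization) is computed from the
       fitted coefficients.  The paper's procedure is multivariate (VAR-based) with
       data-dependent lag selection; this scalar version is only a sketch.

       import numpy as np

       def ar_longrun_variance(u, p):
           # fit AR(p) by OLS and return sigma2 / (1 - a1 - ... - ap)**2
           u = u - u.mean()
           T = len(u)
           X = np.column_stack([u[p - j - 1:T - j - 1] for j in range(p)])
           y = u[p:]
           a = np.linalg.lstsq(X, y, rcond=None)[0]
           e = y - X @ a
           sigma2 = e @ e / len(e)
           return sigma2 / (1.0 - a.sum()) ** 2

       # AR(1) example with coefficient 0.7 and unit innovation variance:
       # the population long-run variance is 1 / (1 - 0.7)**2.
       rng = np.random.default_rng(6)
       e = rng.normal(size=50000)
       u = np.zeros(50000)
       for t in range(1, 50000):
           u[t] = 0.7 * u[t - 1] + e[t]
       print(ar_longrun_variance(u, p=1))
  -->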

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Generating Non-Standard Multivariate Distributions with an Application to Mismeasurement in the CPI</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0196</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shapiro</surname>
          <given-names>Matthew D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wilcox</surname>
          <given-names>David W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper shows how to generate the joint distribution of correlated random variables with specified marginal distributions. For cases where the marginal distributions are either normal or lognormal, it shows how to calculate analytically the correlation of the underlying normal distributions to induce the desired correlation between the variables.  It also provides a method for calculating the joint distribution in the case of arbitrary marginal distributions. The paper applies the technique to calculating the distribution of the overall bias in the consumer price index.  The technique should also be applicable to estimation by simulated moments or simulated likelihoods and to Monte Carlo analysis.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0196.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
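  <!-- Illustrative sketch for t0196: for arbitrary marginals, correlated draws can be
       generated by drawing correlated normals and pushing each coordinate through the
       normal CDF and then the inverse CDF of its target marginal (a Gaussian-copula
       construction).  The analytic normal/lognormal correlation mapping derived in the
       paper is not reproduced here; the correlation specified below applies to the
       underlying normals, and the induced correlation of the outputs generally differs.
       The example marginals are hypothetical.

       import numpy as np
       from scipy import stats

       def correlated_nonnormal(n, rho, ppf1, ppf2, rng):
           cov = np.array([[1.0, rho], [rho, 1.0]])
           z = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # correlated normals
           u = stats.norm.cdf(z)                                   # uniform marginals
           return ppf1(u[:, 0]), ppf2(u[:, 1])

       rng = np.random.default_rng(7)
       x, y = correlated_nonnormal(
           100000, 0.6,
           stats.gamma(a=2.0).ppf,          # e.g. a skewed bias component
           stats.lognorm(s=0.5).ppf,
           rng,
       )
       print(np.corrcoef(x, y)[0, 1])       # induced correlation, not 0.6 exactly
  -->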

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Practitioner's Guide to Robust Covariance Matrix Estimation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0197</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>den Haan</surname>
          <given-names>Wouter J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Levin</surname>
          <given-names>Andrew T</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops asymptotic distribution theory for generalized method of moments (GMM) estimators and test statistics when some of the parameters are well identified, but others are poorly identified because of weak instruments.  The asymptotic theory entails applying empirical process theory to obtain a limiting representation of the (concentrated) objective function as a stochastic process.  The general results are specialized to two leading cases, linear instrumental variables regression and GMM estimation of Euler equations obtained from the consumption-based capital asset pricing model with power utility. Numerical results of the latter model confirm that finite sample distributions can deviate substantially from normality, and indicate that these deviations are captured by the weak instruments asymptotic approximations.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0197.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asymptotics for GMM Estimators with Weak Instruments</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0198</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wright</surname>
          <given-names>Jonathan H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops asymptotic distribution theory for generalized method of moments (GMM) estimators and test statistics when some of the parameters are well identified, but others are poorly identified because of weak instruments.  The asymptotic theory entails applying empirical process theory to obtain a limiting representation of the (concentrated) objective function as a stochastic process.  The general results are specialized to two leading cases, linear instrumental variables regression and GMM estimation of Euler equations obtained from the consumption-based capital asset pricing model with power utility.  Numerical results of the latter model confirm that finite sample distributions can deviate substantially from normality, and indicate that these deviations are captured by the weak instrument asymptotic approximations.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0198.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>An Introduction to Applicable Game Theory</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0199</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gibbons</surname>
          <given-names>Robert S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper offers an introduction to game theory for applied economists.  I try to give simple definitions and intuitive examples of the basic kinds of games and their solution concepts. There are four kinds of games:  static or dynamic, and complete or incomplete information.  (`Complete information' means there is no private information.)  The corresponding solution concepts are:  Nash equilibrium in static games of complete information; backwards induction (or subgame-perfect Nash equilibrium) in dynamic games of complete information; Bayesian Nash equilibrium in static games with incomplete information; and perfect Bayesian (or sequential) equilibrium in dynamic games with incomplete information.  The main theme of the paper is that these solution concepts are closely linked. As we consider progressively richer games, we progressively strengthen the solution concept, to rule out implausible equilibria in the richer games that would survive if we applied solution concepts available for simpler games.  In each case, the stronger solution concept differs from the weaker concept only for the richer games, not for the simpler games.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0199.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Nonparametric Applications of Bayesian Inference</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0200</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Chamberlain</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The paper evaluates the usefulness of a nonparametric approach to Bayesian inference by presenting two applications.  The approach is due to Ferguson (1973, 1974) and Rubin (1981).  Our first application considers an educational choice problem.  We focus on obtaining a predictive distribution for earnings corresponding to various levels of schooling.  This predictive distribution incorporates the parameter uncertainty, so that it is relevant for decision making under uncertainty in the expected utility framework of microeconomics.  The second application is to quantile regression.  Our point here is to examine the potential of the nonparametric framework to provide inferences without making asymptotic approximations.  Unlike in the first application, the standard asymptotic normal approximation turns out not to be a good guide.  We also consider a comparison with a bootstrap approach.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0200.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Asymptotically Median Unbiased Estimation of Coefficient Variance in a Time Varying Parameter Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0201</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the estimation of the variance of coefficients in time varying parameter models with stationary regressors.  The maximum likelihood estimator has large point mass at zero.  We therefore develop asymptotically median unbiased estimators and confidence intervals by inverting median functions of regression-based parameter stability test statistics, computed under the constant-parameter null.  These estimators have good asymptotic relative efficiencies for small to moderate amounts of parameter variability.  We apply these results to an unobserved components model of trend growth in postwar U.S. GDP:  the MLE implies that there has been no change in the trend rate, while the upper range of the median-unbiased point estimates implies that the annual trend growth rate has fallen by 0.7 percentage points over the postwar period.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0201.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Imposing Moment Restrictions from Auxiliary Data by Weighting</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0202</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hellerstein</surname>
          <given-names>Judith K</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we analyze estimation of coefficients in regression models under moment restrictions where the moment restrictions are derived from auxiliary data.  Our approach is similar to those that have been used in statistics for analyzing contingency tables with known marginals.  These methods are useful in cases where data from a small, potentially non-representative data set can be supplemented with auxiliary information from another data set which may be larger and/or more representative of the target population. The moment restrictions yield weights for each observation that can subsequently be used in weighted regression analysis. We discuss the interpretation of these weights both under the assumption that the target population and the sampled population are the same, as well as under the assumption that these populations differ.  We present an application based on omitted ability bias in estimation of wage regressions.  The National Longitudinal Survey Young Men's Cohort (NLS), in addition to containing information for each observation on earnings, education and experience, records data on two test scores that may be considered proxies for ability.  The NLS is a small data set, however, with a high attrition rate.  We investigate how to mitigate these problems in the NLS by forming moments from the joint distribution of education, experience and earnings in the 1% sample of the 1980 U.S. Census and using these moments to construct weights for weighted regression analysis of the NLS.  We analyze the impacts of our weighted regression techniques on the estimated coefficients and standard errors on returns to education and experience in the NLS, controlling for ability, with and without assuming that the NLS and the Census samples are random samples from the same population.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0202.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
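  <!-- Illustrative sketch for t0202: weights that make the primary sample reproduce known
       auxiliary moments can be computed by exponential tilting (the same device used for
       raking contingency tables to known marginals) and then used in weighted least
       squares.  The variable names in the commented usage are hypothetical.

       import numpy as np
       from scipy.optimize import minimize

       def calibration_weights(h, target):
           # weights on the primary sample that reproduce the auxiliary moments
           # `target` of the variables in h (n x k), via exponential tilting
           g = h - target
           obj = lambda lam: np.log(np.mean(np.exp(g @ lam)))
           lam = minimize(obj, np.zeros(h.shape[1]), method="BFGS").x
           w = np.exp(g @ lam)
           return w / w.sum()

       def weighted_ols(y, X, w):
           X = np.column_stack([np.ones(len(y)), X])
           Xw = X * w[:, None]
           return np.linalg.solve(X.T @ Xw, Xw.T @ y)

       # Hypothetical usage: educ/exper come from the small survey, while educ_mean and
       # exper_mean are the corresponding (known) auxiliary-data moments.
       # w = calibration_weights(np.column_stack([educ, exper]),
       #                         np.array([educ_mean, exper_mean]))
       # beta = weighted_ols(log_wage, np.column_stack([educ, exper, ability]), w)
  -->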

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Statistical Mechanics Approaches to Socioeconomic Behavior</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0203</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Durlauf</surname>
          <given-names>Steven N</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper provides a unified framework for interpreting a wide range of interactions models which have appeared in the economics literature.  A formalization taken from the statistical mechanics literature is shown to encompass a number of socioeconomic phenomena ranging from out of wedlock births to aggregate output to crime.  The framework bears a close relationship to econometric models of discrete choice and therefore holds the potential for rendering interactions models estimable.  A number of new applications of statistical mechanics to socioeconomic problems are suggested.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0203.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Hierarchical Bayes Models with Many Instrumental Variables</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0204</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Chamberlain</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper, we explore Bayesian inference in models with many instrumental variables that are potentially weakly correlated with the endogenous regressor.  The prior distribution has a hierarchical (nested) structure.  We apply the methods to the Angrist-Krueger (AK, 1991) analysis of returns to schooling using instrumental variables formed by interacting quarter of birth with state/year dummy variables.  Bound, Jaeger, and Baker (1995) show that randomly generated instrumental variables, designed to match the AK data set, give two-stage least squares results that look similar to the results based on the actual instrumental variables.  Using a hierarchical model with the AK data, we find a posterior distribution for the parameter of interest that is tight and plausible.  Using data with randomly generated instruments, the posterior distribution is diffuse.  Most of the information in the AK data can in fact be extracted with quarter of birth as the single instrumental variable.  Using artificial data patterned on the AK data, we find that if all the information had been in the interactions between quarter of birth and state/year dummies, then the hierarchical model would still have led to precise inferences, whereas the single instrument model would have suggested that there was no information in the data.  We conclude that hierarchical modeling is a conceptually straightforward way of efficiently combining many weak instrumental variables.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0204.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The NBER Manufacturing Productivity Database</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0205</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bartelsman</surname>
          <given-names>Eric</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gray</surname>
          <given-names>Wayne B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Productivity, Innovation, and Entrepreneurship</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper provides technical documentation to accompany the NBER manufacturing productivity (MP) database.  The database contains information on 450 4-digit manufacturing industries for the period 1958 through 1991. The data are compiled from various official sources, most notably the Annual Survey of Manufactures and Census of Manufactures.  Also provided are estimates of total factor productivity (TFP) growth for each industry.  The paper further discusses alternate methods of deflation and aggregation and their impact on TFP calculations.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0205.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Further Investigation of the Uncertain Unit Root in GNP</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0206</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cheung</surname>
          <given-names>Yin-Wong</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Chinn</surname>
          <given-names>Menzie D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1996</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A more powerful version of the ADF test and a test that has trend stationarity as the null are applied to U.S. GNP.  Simulated critical values generated from plausible trend and difference stationary models are used in order to minimize possible finite sample biases.  The discriminatory power of the two tests is evaluated using alternative-specific rejection frequencies.  For post-War quarterly data, these two tests do not provide a definite conclusion.  However, when analyzing annual data over the 1869-1986 period, the unit root null is rejected, while the trend stationary null is not.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0206.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Solving Large Scale Rational Expectations Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0207</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gaspar</surname>
          <given-names>Jess</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Judd</surname>
          <given-names>Kenneth L</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We explore alternative approaches to numerical solutions of large rational expectations models.  We discuss and compare several current alternatives, focussing on the tradeoffs in accuracy, space, and speed.  The models range from representative agent models with many goods and capital stocks, to models of heterogeneous agents with complete or incomplete asset markets. The methods discussed include perturbation and projection methods.  We show that these methods are capable of analyzing moderately large models even when we use only elementary, general purpose numerical methods.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0207.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Computational Economics and Economic Theory: Substitutes or Complements</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0208</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Judd</surname>
          <given-names>Kenneth L</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This essay examines the idea and potential of a 'computational approach to theory,' discusses methodological issues raised by such computational methods, and outlines the problems associated with the dissemination of computational methods and the exposition of computational results.  We argue that the study of a theory need not be confined to proving theorems, that current and future computer technologies create new possibilities for theoretical analysis, and that by resolving these issues we can create an intellectual atmosphere in which computational methods will make substantial contributions to economic analysis.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0208.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Significance of the Market Portfolio</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0209</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Athanasoulis</surname>
          <given-names>Stefano</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shiller</surname>
          <given-names>Robert J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The market portfolio is in one sense the least important portfolio to provide to investors.  In a J-agent one-period stochastic endowment economy, where preferences are quadratic, a social-welfare-minded contract designer would never create a contract that would allow trading the market portfolio.  Even the complete set of contracts, all J - 1 of them, which achieve a first best solution, never span the market portfolio.  These conclusions rely on the assumption that the contract designer has perfect information about agents' utilities.  We also show that as the contract designer's information about agents' utilities becomes more imperfect, the optimal contracts approach contracts that weight individual endowments in proportion to elements of eigenvectors of the variance matrix of endowments.  Then, if there is a strong enough market component to endowments, a portfolio approximating the market portfolio may be the most important portfolio.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0209.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Observational Agency and Supply-Side Econometrics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0210</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Philipson</surname>
          <given-names>Tomas</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economics of Health</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A central problem in applied empirical work is to separate out the patterns in the data that are due to poor production of the data, such as non-response and measurement errors, from the patterns attributable to the economic phenomena studied. This paper interprets this inference problem as an agency problem in the market for observations and suggests ways in which incentives may be used to overcome it.  The paper discusses how wage discrimination may be used for identification of economic parameters of interest, taking into account the responses in survey supply by sample members to that discrimination.  Random wage discrimination alters the supply behavior of sample members across the same types of populations in terms of outcomes and thereby allows for separating out poor supply from the population parameters of economic interest. Empirical evidence from a survey of US physicians suggests that survey supply, even for this wealthy group, is affected by the types of wage discrimination schemes discussed, in a manner that makes the schemes useful for identification purposes.  Using such schemes to correct mean estimates of physician earnings increases those earnings by about one third.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0210.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Time-to-Build and Cycles</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0211</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Asea</surname>
          <given-names>Patrick</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Zak</surname>
          <given-names>Paul J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We analyze the dynamics of a simple growth model in which production occurs with a delay while new capital is installed (time-to-build).  The time-to-build technology is shown to yield a system of functional (delay) differential equations with a unique steady state.  We demonstrate that the steady state, though typically a saddle, may exhibit Hopf cycles on a measurable set of the parameter space.  Furthermore, the optimal path to the steady state is oscillatory.  The results provide a counter-example to the claim that such models cannot be intrinsically oscillatory, and illustrate the central technical apparatus: the mathematics of functional differential equations.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0211.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>An Efficient Generalized Discrete-Time Approach to Poisson-Gaussian Bond Option Pricing in the Heath-Jarrow-Morton Model</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0212</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Das</surname>
          <given-names>Sanjiv</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Term structure models employing Poisson-Gaussian processes may be used to accommodate the observed skewness and kurtosis of interest rates. This paper extends the discrete-time, pure-Gaussian version of the Heath-Jarrow-Morton model to the pricing of American-type bond options when the underlying term structure of interest rates follows a Poisson-Gaussian process.  The Poisson-Gaussian process is specified using a hexanomial tree (six nodes emanating from each node), and the tree is shown to be recombining. The scheme is parsimonious and convergent.  This model extends the class of HJM models by (i) introducing a more generalized volatility specification than has been used so far, and (ii) incorporating jumps, yet retaining lattice recombination, thus making the model useful for practical applications.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0212.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Measuring Predictability: Theory and Macroeconomic Applications</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0213</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kilian</surname>
          <given-names>Lutz</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast.  This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and stationary or nonstationary data.  We propose a simple estimator, and we suggest resampling methods for inference.  We then provide several macroeconomic applications.  First, based on fitted parametric models, we assess the predictability of a variety of macroeconomic series.  Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we sketch several promising directions for future research.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0213.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Moment Estimation with Attrition</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0214</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abowd</surname>
          <given-names>John M</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Crépon</surname>
          <given-names>Bruno</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kramarz</surname>
          <given-names>Francis</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We present a method that accommodates missing data in longitudinal datasets of the type usually encountered in economic and social applications.  The technique uses various extensions of 'missing at random' assumptions that we customize for dynamic models.  Our method, applicable to longitudinal data on persons or firms, is implemented using the Generalized Method of Moments with reweighting that appropriately corrects for the attrition bias caused by the missing data.  We apply the method to the estimation of dynamic labor demand models.  The results demonstrate that the correction is extremely important.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0214.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Evaluating Density Forecasts</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0215</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gunther</surname>
          <given-names>Todd A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Tay</surname>
          <given-names>Anthony</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We propose methods for evaluating density forecasts.  We focus primarily on methods that are applicable regardless of the particular user's loss function.  We illustrate the methods with a detailed simulation example, and then we present an application to density forecasting of daily stock market returns.  We discuss extensions for improving suboptimal density forecasts, multi-step-ahead density forecast evaluation, multivariate density forecast evaluation, monitoring for structural change and its relationship to density forecasting, and density forecast evaluation with known loss function.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0215.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Horizon Length and Portfolio Risk</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0216</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gollier</surname>
          <given-names>Christian</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Zeckhauser</surname>
          <given-names>Richard J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper, we compare the attitude towards current risk of two expected-utility-maximizing investors who are identical except that the first investor will live longer than the second one.  In one of the models under consideration, there are two assets at every period.  The first asset has a zero sure return, whereas the second asset is risky without serial correlation of yields.  It is often suggested that the young investor should purchase more of the risky asset than the old investor in such circumstances.  We show that a necessary and sufficient condition to get this property is that the Arrow-Pratt index of absolute risk tolerance (Tu) be convex.  If we allow for a positive risk-free rate, the corresponding necessary and sufficient condition on Tu extends the well-known result that investors are myopic in this model if and only if the utility function exhibits constant relative risk aversion.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0216.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Cointegration and Long-Horizon Forecasting</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0217</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Christoffersen</surname>
          <given-names>Peter</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We consider the forecasting of cointegrated variables, and we show that at long horizons nothing is lost by ignoring cointegration when forecasts are evaluated using standard multivariate forecast accuracy measures. In fact, simple univariate Box-Jenkins forecasts are just as accurate.  Our results highlight a potentially important deficiency of standard forecast accuracy measures: they fail to value the maintenance of cointegrating relationships among variables.  We suggest alternatives that explicitly do so.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0217.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Algorithms for Solving Dynamic Models with Occasionally Binding Constraints</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0218</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Christiano</surname>
          <given-names>Lawrence</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fisher</surname>
          <given-names>Jonas</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>1997</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We describe and compare several algorithms for approximating the solution to a model in which inequality constraints occasionally bind.  Their performance is evaluated and compared using various parameterizations of the one-sector growth model with irreversible investment.  We develop parameterized expectations algorithms which, on the basis of speed and convenience of implementation, appear to dominate the other algorithms.</p>
</abstract>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Anchoring Effects in the HRS: Experimental and Nonexperimental Evidence</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0219</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hurd</surname>
          <given-names>Michael D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>The Health and Retirement Study (HRS) and a number of other major household surveys use unfolding brackets to reduce item nonresponse.  However, the initial entry point into a bracketing sequence is likely to act as an anchor or point of reference to the respondent: the distribution of responses among those bracketed would be influenced by the entry point.  For example, when the initial entry point is high, the distribution will be shifted to the right, leading one to believe that holdings of the particular asset are greater than they truly are.  This paper has two goals.  The first is to analyze some experimental data on housing value from HRS wave 3 for anchoring effects.  The second is to compare the distributions of assets in HRS waves 1 and 2 for evidence about any anchoring effects that may have been caused by changes in the entry points between the waves.  Both the experimental data on housing values and the nonexperimental data from HRS waves 1 and 2 on assets show anchoring effects.  The conclusion is that to estimate wealth change accurately in panel data sets, we need a method of correcting for anchoring effects, such as random entry into the bracketing sequence.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0219.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>An Analysis of Sample Attrition in Panel Data: The Michigan Panel Study of Income Dynamics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0220</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fitzgerald</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gottschalk</surname>
          <given-names>Peter</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Moffitt</surname>
          <given-names>Robert A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>By 1989 the Michigan Panel Study of Income Dynamics (PSID) had experienced approximately 50 percent sample loss from cumulative attrition from its initial 1968 membership.  We study the effect of this attrition on the unconditional distributions of several socioeconomic variables and on the estimates of several sets of regression coefficients.  We provide a statistical framework for conducting tests for attrition bias that draws a sharp distinction between selection on unobservables and on observables and that shows that weighted least squares can generate consistent parameter estimates when selection is based on observables, even when they are endogenous.  Our empirical analysis shows that attrition is highly selective and is concentrated among lower socioeconomic status individuals.  We also show that attrition is concentrated among those with more unstable earnings, marriage, and migration histories. Nevertheless, we find that these variables explain very little of the attrition in the sample, and that the selection that occurs is moderated by regression-to-the-mean effects from selection on transitory components that fade over time.  Consequently, despite the large amount of attrition, we find no strong evidence that attrition has seriously distorted the representativeness of the PSID through 1989, and considerable evidence that its cross-sectional representativeness has remained roughly intact.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0220.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Research Assistant's Guide to Random Coefficients Discrete Choice Models of Demand</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0221</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nevo</surname>
          <given-names>Aviv</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>The study of differentiated-products markets is a central part of empirical industrial organization.  Questions regarding market power, mergers, innovation, and valuation of new brands are addressed using cutting-edge econometric methods and relying on economic theory.  Unfortunately, difficulty of use and computational costs have limited the scope of application of recent developments in one of the main methods for estimating demand for differentiated products: random coefficients discrete choice models.  As our understanding of these models of demand has increased, both the difficulty and costs have been greatly reduced.  This paper carefully discusses the latest innovations in these methods with the hope of (1) increasing the understanding, and therefore the trust, among researchers who never used these methods, and (2) reducing the difficulty of use, and therefore aiding in realizing the full potential of these methods.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0221.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Maximum Likelihood Estimation of Discretely Sampled Diffusions: A Closed-Form Approach</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0222</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Aït-Sahalia</surname>
          <given-names>Yacine</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>When a continuous-time diffusion is observed only at discrete dates, not necessarily close together, the likelihood function of the observations is in most cases not explicitly computable.  Researchers have relied on simulations of sample paths in between the observation points, or numerical solutions of partial differential equations, to obtain estimates of the function to be maximized.  By contrast, we construct a sequence of fully explicit functions which we show converge under very general conditions, including non-ergodicity, to the true (but unknown) likelihood function of the discretely-sampled diffusion. We document that the rate of convergence of the sequence is extremely fast for a number of examples relevant in finance.  We then show that maximizing the sequence instead of the true function results in an estimator which converges to the true maximum-likelihood estimator and shares its asymptotic properties of consistency, asymptotic normality and efficiency. Applications to the valuation of derivative securities are also discussed.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0222.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Overidentification Tests with Grouped Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0223</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hoxby</surname>
          <given-names>Caroline M</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Paserman</surname>
          <given-names>M. Daniele</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>This paper examines the validity of overidentification tests and exogeneity tests in the presence of grouped data.  We find that even a small intra-group correlation, when instruments do not vary within groups, may generate a substantial bias in the standard overidentification tests described in textbooks.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0223.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Monotone Instrumental Variables with an Application to the Returns to Schooling</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0224</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Manski</surname>
          <given-names>Charles F</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Pepper</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>Econometric analyses of treatment response commonly use instrumental variable (IV) assumptions to identify treatment effects. Yet the credibility of IV assumptions is often a matter of considerable disagreement, with much debate about whether some covariate is or is not a "valid instrument" in an application of interest. There is therefore good reason to consider weaker but more credible assumptions. To this end, we introduce monotone instrumental variable (MIV) assumptions. A particularly interesting special case of an MIV assumption is monotone treatment selection (MTS). IV and MIV assumptions may be imposed alone or in combination with other assumptions. We study the identifying power of MIV assumptions in three informational settings: MIV alone; MIV combined with the classical linear response assumption; MIV combined with the monotone treatment response (MTR) assumption. We apply the results to the problem of inference on the returns to schooling. We analyze wage data reported by white male respondents to the National Longitudinal Survey of Youth (NLSY) and use the respondent's AFQT score as an MIV. We find that this MIV assumption has little identifying power when imposed alone. However, combining the MIV assumption with the MTR and MTS assumptions yields fairly tight bounds on two distinct measures of the returns to schooling.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0224.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Solving Dynamic Equilibrium Models by a Method of Undetermined Coefficients</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0225</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Christiano</surname>
          <given-names>Lawrence</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>I present an undetermined coefficients method for obtaining a linear approximation to the solution of a dynamic, rational expectations model.  I also show how that solution can be used to compute the model's implications for impulse response functions and for second moments.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0225.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Regression-Based Tests of Predictive Ability</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0226</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>McCracken</surname>
          <given-names>Michael W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>We develop regression-based tests of hypotheses about out of sample prediction errors.  Representative tests include ones for zero mean and zero correlation between a prediction error and a vector of predictors.  The relevant environments are ones in which predictions depend on estimated parameters.  We show that standard regression statistics generally fail to account for error introduced by estimation of these parameters.  We propose computationally convenient test statistics that properly account for such error.  Simulations indicate that the procedures can work well in samples of size typically available, although there sometimes are substantial size distortions.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0226.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Net Health Benefits: A New Framework for the Analysis of Uncertainty in Cost-Effectiveness Analysis</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0227</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stinnett</surname>
          <given-names>Aaron A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mullahy</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>In recent years, considerable attention has been devoted to the development of statistical methods for the analysis of uncertainty in cost-effectiveness analysis, with a focus on situations in which the analyst has patient-level data on the costs and health effects of alternative interventions.  To date, discussions have focused almost exclusively on addressing the practical challenges involved in estimating confidence intervals for CE ratios.  However, the general approach of using confidence intervals to convey information about uncertainty around CE ratio estimates suffers from theoretical limitations that render it inappropriate in many situations.  We present an alternative framework for analyzing uncertainty in the economic evaluation of health interventions (termed the 'net health benefits' approach) that is more broadly applicable and that avoids some problems of prior methods.  This approach offers several practical and theoretical advantages over the analysis of CE ratios, is straightforward to apply, and highlights some important principles in the theoretical underpinnings of CEA.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0227.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Much Ado About Two: Reconsidering Retransformation and the Two-Part Model in Health Economics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0228</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mullahy</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>In health economics applications involving outcomes (y) and covariates (x), it is often the case that the central inferential problems of interest involve E[y|x] and its associated partial effects or elasticities.  Many such outcomes have two fundamental statistical properties: y≥0; and the outcome y=0 is observed with sufficient frequency that the zeros cannot be ignored econometrically.  Common approaches to estimation in such instances include Tobit, selection, and two-part models.  This paper (1) describes circumstances where the standard two-part model with homoskedastic retransformation will fail to provide consistent inferences about important policy parameters; and (2) demonstrates some alternative approaches that are likely to prove helpful in applications.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0228.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrumental Variables Estimation of Quantile Treatment Effects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0229</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abadie</surname>
          <given-names>Alberto</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>This paper introduces an instrumental variables estimator for the effect of a binary treatment on the quantiles of potential outcomes.  The quantile treatment effects (QTE) estimator accommodates exogenous covariates and reduces to quantile regression as a special case when treatment status is exogenous. Asymptotic distribution theory and computational methods are derived.  QTE minimizes a piecewise linear objective function for which a local minimum can be obtained using a modified Barrodale-Roberts algorithm.  The QTE estimator is illustrated by estimating the effect of childbearing on the distribution of family income.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0229.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Combining Panel Data Sets with Attrition and Refreshment Samples</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0230</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hirano</surname>
          <given-names>Keisuke</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ridder</surname>
          <given-names>Geert</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rubin</surname>
          <given-names>Donald B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>In many fields researchers wish to consider statistical models that allow for more complex relationships than can be inferred using only cross-sectional data. Panel or longitudinal data where the same units are observed repeatedly at different points in time can often provide the richer data needed for such models.  Although such data allows researchers to identify more complex models than cross-sectional data, missing data problems can be more severe in panels. In particular, even units who respond in initial waves of the panel may drop out in subsequent waves, so that the subsample with complete data for all waves of the panel can be less representative of the population than the original sample. Sometimes, in the hope of mitigating the effects of attrition without losing the advantages of panel data over cross-sections, panel data sets are augmented by replacing units who have dropped out with new units randomly sampled from the original population. Following Ridder (1992), who used these replacement units to test some models for attrition, we call such additional samples refreshment samples.  We explore the benefits of these samples for estimating models of attrition. We describe the manner in which the presence of refreshment samples allows the researcher to test various models for attrition in panel data, including models based on the assumption that missing data are missing at random (MAR, Rubin, 1976; Little and Rubin, 1987).  The main result in the paper makes precise the extent to which refreshment samples are informative about the attrition process; a class of non-ignorable missing data models can be identified without making strong distributional or functional form assumptions if refreshment samples are available.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0230.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Efficient Intertemporal Allocations with Recursive Utility</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0231</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Dumas</surname>
          <given-names>Bernard</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Uppal</surname>
          <given-names>Raman</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wang</surname>
          <given-names>Tan</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>In this article, our objective is to determine efficient allocations in economies with multiple agents having recursive utility functions.  Our main result is to show that in a multiagent economy, the problem of determining efficient allocations can be characterized in terms of a single value function (that of a social planner), rather than multiple functions (one for each investor), as has been proposed thus far (Duffie, Geoffard and Skiadas (1994)).  We then show how the single value function can be identified using the familiar technique of stochastic dynamic programming.  We achieve these goals by first extending to a stochastic environment Geoffard's (1996) concept of variational utility and his result that variational utility is equivalent to recursive utility, and then using these results to characterize allocations in a multiagent setting.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0231.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Solutions to Linear Rational Expectations Models: A Compact Exposition</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0232</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>McCallum</surname>
          <given-names>Bennett T</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>An elementary exposition is presented of a convenient and practical solution procedure for a broad class of linear rational expectations models.  The undetermined-coefficient approach utilized keeps the mathematics very simple and permits consideration of alternative solution criteria.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0232.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy: Expanded Version</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0233</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rotemberg</surname>
          <given-names>Julio J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Woodford</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers a simple quantitative model of output, interest rate and inflation determination in the United States, and uses it to evaluate alternative rules by which the Fed may set interest rates.  The model is derived from optimizing behavior under rational expectations, both on the part of the purchasers of goods and on that of the sellers.  The model matches the estimated responses to a monetary policy shock quite well and, once due account is taken of other disturbances, can account for our data nearly as well as an unrestricted VAR.  The monetary policy rule that most reduces inflation variability (and is best on this account) requires very variable interest rates, which in turn is possible only in the case of a high average inflation rate.  But even in the case of a constrained-optimal policy, which takes into account some of the costs of average inflation and constrains the variability of interest rates so as to keep average inflation low, inflation would be stabilized considerably more and output stabilized considerably less than under our estimates of current policy.   Moreover, this constrained-optimal policy also allows average inflation to be much smaller. This version contains additional details of our derivations and calculations, including three technical appendices, not included in the version published in NBER Macroeconomics Annual 1997.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0233.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Simple Framework for Nonparametric Specification Testing</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0234</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ellison</surname>
          <given-names>Glenn</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ellison</surname>
          <given-names>Sara Fisher</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>This paper presents a simple framework for testing the specification of parametric conditional means.  The test statistics are based on quadratic forms in the residuals of the null model.  Under general assumptions the test statistics are asymptotically normal under the null.  With an appropriate choice of the weight matrix, the tests are shown to be consistent and to have good local power.  Specific implementations involving matrices of bin and kernel weights are discussed.  Finite sample properties are explored in simulations and an application to some parametric models of gasoline demand is presented.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0234.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Sorting Out Sorts</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0235</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Berk</surname>
          <given-names>Jonathan B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>1998</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we analyze the theoretical implications of sorting data into groups and then running asset pricing tests within each group.  We show that the way this procedure is implemented introduces a severe bias in favor of rejecting the model under consideration.  By simply picking enough groups to sort into, even the true asset pricing model can be shown to have no explanatory power within each group.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0235.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Approximation Bias in Linearized Euler Equations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0236</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ludvigson</surname>
          <given-names>Sydney C</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Paxson</surname>
          <given-names>Christina</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>A wide range of empirical applications rely on linear approximations to dynamic Euler equations.  Among the most notable of these is the large and growing literature on precautionary saving that examines how consumption growth and saving behavior are affected by uncertainty and prudence.  Linear approximations to Euler equations imply a linear relationship between expected consumption growth and uncertainty in consumption growth, with a slope coefficient that is a function of the coefficient of relative prudence.  This literature has produced puzzling results: Estimates of the coefficient of relative prudence (and the coefficient of relative risk aversion) from regressions of consumption growth on uncertainty in consumption growth imply estimates of prudence and risk aversion that are unrealistically low.  Using numerical solutions to a fairly standard intertemporal optimization problem, our results show that the actual relationship between expected consumption growth and uncertainty in consumption growth differs substantially from the relationship implied by a linear approximation.  We also present Monte Carlo evidence that shows that the instrumental variables methods commonly used to estimate the parameters correct some, but not all, of the approximation bias.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0236.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Role of the Propensity Score in Estimating Dose-Response Functions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0237</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>Estimation of average treatment effects in observational, or non-experimental, studies often requires adjusting for differences in pre-treatment variables. If the number of pre-treatment variables is large, and their distribution varies substantially with treatment status, standard adjustment methods such as covariance adjustment are often inadequate. Rosenbaum and Rubin (1983) propose an alternative method for adjusting for pre-treatment variables based on the propensity score, the conditional probability of receiving the treatment given the pre-treatment variables. They demonstrate that adjusting solely for the propensity score removes all the bias associated with differences in pre-treatment variables between treatment and control groups. The Rosenbaum-Rubin proposals deal exclusively with the case where treatment takes on only two values. In this paper an extension of this methodology is proposed that allows for estimation of average causal effects with multi-valued treatments while maintaining the advantages of the propensity score approach.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0237.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Predicting the Efficacy of Future Training Programs Using Past Experiences</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0238</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hotz</surname>
          <given-names>V. Joseph</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mortimer</surname>
          <given-names>Julie Holland</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>We investigate the problem of predicting the average effect of a new training program using experiences with previous implementations. There are two principal complications in doing so. First, the population in which the new program will be implemented may differ from the population in which the old program was implemented. Second, the two programs may differ in the mix of their components. With sufficient detail on characteristics of the two populations and sufficient overlap in their distributions, one may be able to adjust for differences due to the first complication. Dealing with the second difficulty requires data on the exact treatments the individuals received. However, even in the presence of differences in the mix of components across training programs, comparisons of controls in both populations who were excluded from participating in any of the programs should not be affected. To investigate the empirical importance of these issues, we compare four job training programs implemented in the mid-eighties in different parts of the U.S. We find that adjusting for pre-training earnings and individual characteristics removes most of the differences between control units, but that even after such adjustments, post-training earnings for trainees are not comparable. We surmise that differences in treatment components across training programs are the likely cause, and that more details on the specific services provided by these programs are necessary to predict the effect of future programs. We also conclude that, given effect heterogeneity, it is essential, even in experimental evaluations of training programs, to record pre-training earnings and individual characteristics in order to render the extrapolation of the results to different locations more credible.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0238.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Toll Competition Among Congested Roads</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0239</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Engel</surname>
          <given-names>Eduardo</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fischer</surname>
          <given-names>Ronald</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Galetovic</surname>
          <given-names>Alexander</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>A growing number of roads are currently financed by the private sector via Build-Operate-and-Transfer (BOT) schemes.  When the franchised road has no close substitute, the government must regulate tolls.  Yet when there are many ways of getting from one point to another, regulation may be avoided by allowing competition between several franchise owners.  This paper studies toll competition among private roads with congestion. The paper derives two main results.  First, we find sufficient conditions for the existence of an equilibrium in pure strategies with strictly positive tolls. Equilibrium congestion is less than optimal, which runs counter to what is expected from price competition.  While a lower toll reduces the out-of-pocket cost paid by a user, it increases the congestion cost, thereby reducing the drivers' willingness to pay for using the road.  Franchise holders partially internalize congestion costs when setting tolls, which softens price competition. Second, when demand and the number of roads increase at the same rate, tolls converge to the socially optimal level -- that is, in the limit equilibrium tolls are just enough to make each driver internalize the congestion externality.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0239.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Predictive Regressions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0240</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stambaugh</surname>
          <given-names>Robert F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>When a rate of return is regressed on a lagged stochastic regressor, such as a dividend yield, the regression disturbance is correlated with the regressor's innovation.  The OLS estimator's finite-sample properties, derived here, can depart substantially from the standard regression setting.  Bayesian posterior distributions for the regression parameters are obtained under specifications that differ with respect to (i) prior beliefs about the autocorrelation of the regressor and (ii) whether the initial observation of the regressor is specified as fixed or stochastic.  The posteriors differ across such specifications, and asset allocations in the presence of estimation risk exhibit sensitivity to those differences.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0240.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>When to Control for Covariates?  Panel-Asymptotic Results for Estimates of Treatment Effects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0241</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hahn</surname>
          <given-names>Jinyong</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>The problem of how to control for covariates is endemic in evaluation research.  Covariate-matching provides an appealing control strategy, but with continuous or high-dimensional covariate vectors, exact matching may be impossible or involve small cells.  Matching observations that have the same propensity score produces unbiased estimates of causal effects whenever covariate-matching does, and also has an attractive dimension-reducing property.  On the other hand, conventional asymptotic arguments show that covariate-matching is (asymptotically) more efficient than propensity-score-matching.  This is because the usual asymptotic sequence has cell sizes growing to infinity, with no benefit from reducing the number of cells.  Here, we approximate the large sample behavior of different matching estimators using a panel-style asymptotic sequence with fixed cell sizes and the number of cells increasing to infinity.  Exact calculations in simple examples and Monte Carlo evidence suggest this generates a substantially improved approximation to actual finite-sample distributions.  Under this sequence, propensity-score-matching is most likely to dominate exact matching when cell sizes are small, the explanatory power of the covariates conditional on the propensity score is low, and/or the probability of treatment is close to zero or one.  Finally, we introduce a random-effects type combination estimator that provides finite-sample efficiency gains over both covariate-matching and propensity-score-matching.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0241.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Statistical Treatment Rules for Heterogeneous Populations:  With Application to Randomized Experiments</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0242</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Manski</surname>
          <given-names>Charles F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>This paper uses Wald's concept of the risk of a statistical decision function to address the question: How should sample data on treatment response be used to guide treatment choices in a heterogeneous population?  Statistical treatment rules (STRs) are statistical decision functions that map observed covariates of population members and sample data on treatment response into treatment choices.  I propose evaluation of STRs by their expected welfare (negative risk in Wald's terms), and I apply this criterion to compare two STRs when the sample data are generated by a classical randomized experiment. The rules compared both embody the reasonable idea that persons should be assigned the treatment with the best empirical success rate, but they differ in their use of covariate information.  The conditional success (CS) rule selects treatments with the best empirical success rates conditional on specified covariates and the unconditional success (US) rule selects a treatment with the best unconditional empirical success rate.  The main finding is a proposition giving finite-sample bounds on expected welfare under the two rules.  The bounds, which rest on a large-deviations theorem of Hoeffding,  yield explicit sample-size and distributional conditions under which the CS Rule is superior to the US rule.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0242.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Environmental Policy and Firm Behavior:  Abatement Investment and Location Decisions Under Uncertainty and Irreversibility</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0243</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Xepapadeas</surname>
          <given-names>Anastasios</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>This paper explores abatement investment and location responses to environmental policy, which takes the form of emission taxes or tradeable emission permits and subsidies against the costs of abatement investment, under uncertainty and irreversibility. Uncertainty is associated with output price, environmental policy parameters, or technological parameters.  Irreversibility is related to abatement expenses and movements to a new location. Uncertainty is modeled by Itô stochastic differential equations, and the problem is analyzed by using optimal stopping methodologies.  Continuation intervals during which firms do not engage in abatement investment or relocate and intervals during which firms take the irreversible decision of undertaking abatement expenses or relocating are defined. Free boundaries are characterized for a variety of cases that include output price uncertainty, policy uncertainty expressed both in terms of continuous fluctuations of permit prices and unpredictable policy changes, and combined policy and technological uncertainty.  An optimal environmental policy is defined as the combination of policy parameters that makes the free boundary corresponding to the profit maximization problem coincide with the free boundary corresponding to a social optimization problem.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0243.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>News About News:  Information Arrival and Irreversible Investment</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0244</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Drazen</surname>
          <given-names>Allan</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sakellaris</surname>
          <given-names>Plutarchos</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>We analyze how uncertainty about when information about future returns to a project may be revealed affects investment.  While 'good news' about future returns boosts investment, 'good news about news' (that is, news that information may arrive sooner) is shown to depress investment.  We show that early revelation increases the value of an irreversible investment project to a risk-neutral investor.  We relate our results on preference for early revelation to results in non-expected utility theory.  Our framework allows us to study irreversible investment projects whose value has a time-variable volatility.  We also consider how heterogeneity of revelation information across firms may induce a better-informed firm to share its information with competitors.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0244.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Interaction Effects and Difference-in-Difference Estimation in Loglinear Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0245</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mullahy</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In applied econometric work, analysts are often concerned with estimation of and inferences about interaction effects, e.g. 'Does the magnitude of the effect of z1 on y depend on z2?'  This paper develops tests for and proper interpretation of various forms of interaction effects in one prominent class of regression models, loglinear models, for which the nature of estimated interaction effects has not always been given due attention.  The results obtained here have a direct bearing on the interpretation of so-called difference-in-difference estimates when these are obtained using loglinear models.  An empirical example of the impacts of health insurance and chronic illness on prescription drug utilization underscores the importance of these issues in practical settings.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0245.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Log Models: To Transform or Not to Transform?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0246</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Manning</surname>
          <given-names>Willard G</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mullahy</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Data on health care expenditures, length of stay, utilization of health services, consumption of unhealthy commodities, etc. are typically characterized by: (a) nonnegative outcomes; (b) nontrivial fractions of zero outcomes in the population (and sample); and (c) positively-skewed distributions of the nonzero realizations.  Similar data structures are encountered in labor economics as well.  This paper provides simulation-based evidence on the finite-sample behavior of two sets of estimators designed to look at the effect of a set of covariates x on the expected outcome, E(y|x), under a range of data problems encountered in everyday practice: generalized linear models (GLM), a subset of which can simply be viewed as differentially weighted nonlinear least-squares estimators, and those derived from least-squares estimators for ln(y).  We consider the first- and second-order behavior of these candidate estimators under alternative assumptions on the data generating processes.  Our results indicate that the choice of estimator for models of ln(E(y|x)) can have major implications for empirical results if the estimator is not designed to deal with the specific data generating mechanism.  Garden-variety statistical problems - skewness, kurtosis, and heteroscedasticity - can lead to an appreciable bias for some estimators or appreciable losses in precision for others.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0246.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Note on Longitudinally Matching Current Population Survey (CPS) Respondents</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0247</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Madrian</surname>
          <given-names>Brigitte C</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lefgren</surname>
          <given-names>Lars</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>1999</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper, we propose an approach for evaluating the trade-offs inherent in different approaches used to match Current Population Survey (CPS) respondents across various CPS surveys. Because there is some measurement error in both the variables used to identify individuals over time and in the characteristics of individuals at any point in time, any procedure used to match CPS respondents has the possibility of both generating incorrect matches and failing to generate potentially valid matches.  We propose using the information contained in the variable on whether an individual lived in the same house on March 1 of the previous year as a way to gauge these trade-offs.  We find that as measured by reported residence one year ago, increasing the fraction of 'invalid' merges that are rejected usually comes at a cost of decreasing the fraction of 'valid' merges that are retained.  However, there are clearly some approaches that are superior to others in the sense that they result in both a higher fraction of 'invalid' merges being rejected and a higher fraction of 'valid' merges being retained.</p>
<p></p>
<p><p>The programs to implement CPS matching across years in this paper are</p>
<p>  <a href="http://www.nber.org/data/cps_match.html"></p>
<p>  available </a>.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0247.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimation of Limited-Dependent Variable Models with Dummy Endogenous Regressors: Simple Strategies for Empirical Practice</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0248</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Applied economists have long struggled with the question of how to accommodate binary endogenous regressors in models with binary and non-negative outcomes.  I argue here that much of the difficulty with limited-dependent variables comes from a focus on structural parameters, such as index coefficients, instead of causal effects.  Once the object of estimation is taken to be the causal effect of treatment, a number of simple strategies are available.  These include conventional two-stage least squares, multiplicative models for conditional means, linear approximation of nonlinear causal models, models for distribution effects, and quantile regression with an endogenous binary regressor.  The estimation strategies discussed in the paper are illustrated by using multiple births to estimate the effect of childbearing on employment status and hours of work.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0248.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On Optimal Instrumental Variables Estimation of Stationary Time Series Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0249</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In many time series models, an infinite number of moments can be used for estimation in a large sample.  I supply a technically undemanding proof of a condition for optimal instrumental variables use of such moments in a parametric model.  I also illustrate application of the condition in estimation of a linear model with a conditionally heteroskedastic disturbance.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0249.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>External Treatment Effects and Program Implementation Bias</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0250</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Philipson</surname>
          <given-names>Tomas</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper discusses the definition and identification of external treatment effects and experimental designs capable of detecting these effects.  External effects occur when the outcome of a given individual is affected by the treatment assignments of other individuals.  The paper argues that two-stage randomization schemes, which randomize the allocation of treatments across communities and randomize the treatments themselves within communities, are useful for identifying private and external treatment effects.  The importance of external treatment effects is illustrated in the context of several health economics applications: the impact of R&amp;D subsidies, smoking prevention programs for youth, and the evaluation of HIV-prevention programs currently taking place in Africa.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0250.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0251</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hirano</surname>
          <given-names>Keisuke</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ridder</surname>
          <given-names>Geert</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We are interested in estimating the average effect of a binary treatment on a scalar outcome.  If assignment to the treatment is independent of the potential outcomes given pretreatment variables, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the pre-treatment variables.  Rosenbaum and Rubin (1983, 1984) show that adjusting solely for differences between treated and control units in a scalar function of the pre-treatment variables, the propensity score, also removes the entire bias associated with differences in pre-treatment variables.  Thus it is possible to obtain unbiased estimates of the treatment effect without conditioning on a possibly high-dimensional vector of pre-treatment variables.  Although adjusting for the propensity score removes all the bias, this can come at the expense of efficiency. We show that weighting with the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score, leads to efficient estimates of the various average treatment effects.  This result holds whether the pre-treatment variables have discrete or continuous distributions.  We provide intuition for this result in a number of ways.  First we show that with discrete covariates, exact adjustment for the estimated propensity score is identical to adjustment for the pre-treatment variables.  Second, we show that weighting by the inverse of the estimated propensity score can be interpreted as an empirical likelihood estimator that efficiently incorporates the information about the propensity score.  Finally, we make a connection to other results on efficient estimation through weighting in the context of variable probability sampling.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0251.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Local Instrumental Variables</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0252</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Vytlacil</surname>
          <given-names>Edward J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper unites the treatment effect literature and the latent variable literature.  The economic questions answered by the commonly used treatment effect parameters are considered. We demonstrate how the marginal treatment effect parameter can be used in a latent variable framework to generate the average treatment effect, the effect of treatment on the treated and the local average treatment effect, thereby establishing a new relationship among these parameters.  The method of local instrumental variables directly estimates the marginal treatment effect parameters, and thus can be used to estimate all of the conventional treatment effect parameters  when the index condition holds and the parameters are identified.  When they are not, the method of local instrumental variables can be used to produce bounds on the parameters with the width of the bounds depending on the width of the support for the index generating the choice of the observed potential outcome.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0252.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Euler Equations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0253</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Attanasio</surname>
          <given-names>Orazio</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Low</surname>
          <given-names>Hamish</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we consider conditions under which the estimation of a log-linearized Euler equation for consumption yields consistent estimates of the preference parameters. When utility is isoelastic and a sample covering a long time period is available, consistent estimates are obtained from the log-linearized Euler equation when the innovations to the conditional variance of consumption growth are uncorrelated with the instruments typically used in estimation. We perform a Monte Carlo experiment, consisting of solving and simulating a simple life cycle model under uncertainty, and show that in most situations, the estimates obtained from the log-linearized equation are not systematically biased. This is true even when we introduce heteroscedasticity in the process generating income. The only exception is when discount rates are very high (47% per year). This problem arises because consumers are nearly always close to the maximum borrowing limit: the estimation bias is unrelated to the linearization. Finally, we plot life cycle profiles for the variance of consumption growth, which, except when the discount factor is very high, is remarkably flat. This implies that claims that demographic variables in log-linearized Euler equations capture changes in the variance of consumption growth are unwarranted.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0253.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Direct Estimation of Policy Impacts</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0254</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ichimura</surname>
          <given-names>Hidehiko</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Taber</surname>
          <given-names>Christopher R</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper specifies a general set of conditions under which the impacts of a policy can be identified using data generated under a different policy regime.  We show that some of the policy impacts can be identified under relatively weak conditions on the data and structure of a model.  Based on the identification results we develop estimators of policy impacts.  We discuss a nonparametric method to implement the estimation but also discuss semiparametric methods in order to reduce the conditioning dimension.  We then provide an empirical example of the impact of tuition subsidies using the ideas.  While the framework used in this paper is fairly narrow, we believe this approach can be applied to a broad set of problems.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0254.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Robust Covariance Matrix Estimation with Data-Dependent VAR Prewhitening Order</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0255</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>den Haan</surname>
          <given-names>Wouter J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Levin</surname>
          <given-names>Andrew T</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper analyzes the performance of heteroskedasticity-and-autocorrelation-consistent (HAC) covariance matrix estimators in which the residuals are prewhitened using a vector autoregressive (VAR) filter. We highlight the pitfalls of using an arbitrarily fixed lag order for the VAR filter, and we demonstrate the benefits of using a model selection criterion (either AIC or BIC) to determine its lag structure. Furthermore, once data-dependent VAR prewhitening has been utilized, we find negligible or even counter-productive effects of applying standard kernel-based methods to the prewhitened residuals; that is, the performance of the prewhitened kernel estimator is virtually indistinguishable from that of the VARHAC estimator.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0255.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Encompassing Tests When No Model Is Encompassing</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0256</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers regression-based tests for encompassing, when none of the models under consideration encompasses all the other models.  For both in- and out-of-sample applications, I derive asymptotic distributions and propose feasible procedures to construct confidence intervals and test statistics. Procedures that are asymptotically valid under the null of encompassing (e.g., Davidson and MacKinnon (1981)) can have large asymptotic and finite sample distortions.  Simulations indicate that the proposed procedures can work well in samples of size typically available, though the divergence between actual and nominal confidence interval coverage sometimes is large.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0256.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bias from Classical and Other Forms of Measurement Error</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0257</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hyslop</surname>
          <given-names>Dean R</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We consider the implications of a specific alternative to the classical measurement error model, in which the data are optimal predictions based on some information set.  One motivation for this model is that if respondents are aware of their ignorance, they may interpret the question 'what is the value of this variable?' as 'what is your best estimate of this variable?', and provide optimal predictions of the variable of interest given their information set.  In contrast to the classical measurement error model, this model implies that the measurement error is uncorrelated with the reported value and, by necessity, correlated with the true value of the variable. In the context of the linear regression framework, we show that measurement error can lead to over- as well as under-estimation of the coefficients of interest.  Critical for determining the bias is the model for the individual reporting the mismeasured variables, the individual's information set, and the correlation structure of the errors.  We also investigate the implications of instrumental variables methods in the presence of measurement error of the optimal prediction error form and show that such methods may in fact introduce bias.  Finally, we present some calculations indicating the range of estimates of the returns to education consistent with amounts of measurement error found in previous studies.  This range can be quite wide, especially if one allows for correlation between the measurement errors.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0257.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Interactions-Based Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0258</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Brock</surname>
          <given-names>William</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Durlauf</surname>
          <given-names>Steven N</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper describes a range of methods which have been proposed to study interactions in economic and social contexts.  By interactions, we refer to interdependences between individual decisions which are not mediated by markets.  These types of models have been employed to understand phenomena ranging from the effect of neighborhoods on the life prospects of children to the evolution of political party platforms.  We provide a general choice-based framework for modeling such interactions which subsumes a number of specific models which have been studied. This framework illustrates the relationship between interactions-based models and models in statistical mechanics.  Our analysis is then extended to the econometrics of these models, with an emphasis on the identification of group-level influences on individual behavior.  Finally, we review some of the empirical work on interactions which has appeared in the social science literature.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0258.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrumental Variables, Selection Models, and Tight Bounds on the Average Treatment Effect</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0259</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Vytlacil</surname>
          <given-names>Edward J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper exposits and relates two distinct approaches to bounding the average treatment effect.  One approach, based on instrumental variables, is due to Manski (1990, 1994), who derives tight bounds on the average treatment effect under a mean independence form of the instrumental variables (IV) condition. The second approach, based on latent index models, is due to Heckman and Vytlacil (1999, 2000a), who derive bounds on the average treatment effect that exploit the assumption of a nonparametric selection model with an exclusion restriction. Their conditions imply the instrumental variable condition studied by Manski, so that their conditions are stronger than the Manski conditions.  In this paper, we study the relationship between the two sets of bounds implied by these alternative conditions.  We show that: (1) the Heckman and Vytlacil bounds are tight given their assumption of a nonparametric selection model; (2) the Manski bounds simplify to the Heckman and Vytlacil bounds under the nonparametric selection  model assumption.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0259.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Semiparametric Estimation of Instrumental Variable Models for Causal Effects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0260</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abadie</surname>
          <given-names>Alberto</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This article introduces a new class of instrumental variable (IV) estimators of causal treatment effects for linear and nonlinear models with covariates. The rationale for focusing on nonlinear models is to improve the approximation to the causal response function of interest. For example, if the dependent variable is binary or limited, or if the effect of the treatment varies with covariates, a nonlinear model is likely to be appropriate. However, identification is not attained through functional form restrictions. This paper shows how to estimate a well-defined approximation to a nonlinear causal response function of unknown functional form using simple parametric models. As an important special case, I introduce a linear model that provides the best linear approximation to an underlying causal relation. It is shown that Two Stage Least Squares (2SLS) does not always have this property and some possible interpretations of 2SLS coefficients are briefly studied. The ideas and estimators in this paper are illustrated using instrumental variables to estimate the effects of 401(k) retirement programs on savings.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0260.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bootstrap Tests for the Effect of a Treatment on the Distribution of an Outcome Variable</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0261</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abadie</surname>
          <given-names>Alberto</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the problem of assessing the distributional consequences of a treatment on some outcome variable of interest when treatment intake is (possibly) non-randomized but there is a binary instrument available for the researcher. Such a scenario is common in observational studies and in randomized experiments with imperfect compliance. One possible approach to this problem is to compare the counterfactual cumulative distribution functions of the outcome with and without the treatment. Here, it is shown how to estimate these distributions using instrumental variable methods, and a simple bootstrap procedure is proposed to test distributional hypotheses, such as equality of distributions, first-order stochastic dominance, and second-order stochastic dominance. These tests and estimators are applied to the study of the effects of veteran status on the distribution of civilian earnings. The results show a negative effect of military service in Vietnam that appears to be concentrated in the lower tail of the distribution of earnings. First-order stochastic dominance cannot be rejected by the data.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0261.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Treatment Effects for Discrete Outcomes when Responses to Treatment Vary Among Observationally Identical Persons: An Application to Norwegian ...</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0262</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Aakvik</surname>
          <given-names>Arild</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Vytlacil</surname>
          <given-names>Edward J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper formulates an econometric framework for studying the impact of interventions on discrete outcomes when responses to treatment vary among observationally identical persons.  Using a latent variable model that can be linked to well-posed economic models, we show how to define and interpret the average treatment effect, the average effect of treatment on the treated, the marginal treatment effect, and the distribution of treatment effects for discrete outcomes.  To estimate these parameters and the distribution of treatment effects, we formulate and estimate a discrete choice model with unobservables generated by a factor structure model. We apply our methods to evaluate the effect of Norwegian Vocational Rehabilitation training programs on employment outcomes for women.  We find that applicants to these programs who participate in active training have a 4.6% higher employment rate than nonparticipants.  When we control for the observable characteristics of applicants, we find that the average treatment effect falls to 4.1%.  When we control for the unobservable characteristics of applicants, the average treatment effect falls to -1.4% and the effect of treatment on the treated is -11%.  We also find evidence of substantial heterogeneity in response to training.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0262.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Using Studies of Treatment Response to Inform Treatment Choice in Heterogeneous Populations</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0263</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Manski</surname>
          <given-names>Charles F</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>An important practical objective of empirical studies of treatment response is to provide decision makers with information useful in choosing treatments.  Often the decision maker is a planner who must choose treatments for the members of a heterogeneous population; for example, a physician may choose medical treatments for a population of patients.  Studies of treatment response cannot provide all the information that planners would like to have as they choose treatments, but researchers can be of service by addressing several questions: How should studies be designed in order to be most informative? How should studies report their findings so as to be most useful in decision making?  How should planners utilize the information that studies provide?  This paper addresses aspects of these broad questions, focusing on pervasive problems of identification and statistical inference that arise when studying treatment response.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0263.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Long Memory and Regime Switching</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0264</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Inoue</surname>
          <given-names>Atsushi</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The theoretical and empirical econometric literatures on long memory and regime switching have evolved largely independently, as the phenomena appear distinct.  We argue, in contrast, that they are intimately related, and we substantiate our claim in several environments, including a simple mixture model, Engle and Lee's (1999) stochastic permanent break model, and Hamilton's (1989) Markov switching model.  In particular, we show analytically that stochastic regime switching is easily confused with long memory, even asymptotically, so long as only a 'small' amount of regime switching occurs, in a sense that we make precise.  A Monte Carlo analysis supports the relevance of the theory and produces additional insights.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0264.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Time Use and Population Representation in the Sloan Study of Adolescents</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0265</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mulligan</surname>
          <given-names>Casey B</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Schneider</surname>
          <given-names>Barbara L</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wolfe</surname>
          <given-names>Rustin</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Do studies of time use interfere too much in the lives of the subjects?  As a result, are those who agree to participate a biased sample of the population?  We examine the characteristics of the Experience Sampling Method (ESM) adolescent sample from the Alfred P. Sloan Study of Youth and Social Development in order to detect and quantify instances of sampling and nonresponse bias.  According to available proxies for time use and standard demographic variables, the Sloan ESM sample is nearly representative in terms of teen employment rates, parental employment rates, a student's grade point average, and TV watching.  Work hours are slightly undercounted in the study because of slightly higher nonresponse rates by teenagers working long hours.  The sample is less representative in terms of the time of week and gender; nonresponse is relatively common on school nights and (to a lesser extent) on weekends, and among boys.  We offer some suggestions regarding general implications of our findings for the measurement of time use.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0265.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Conventional Treatment of Seasonality in Business Cycle Analysis: Does it Create Distortions?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0266</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Christiano</surname>
          <given-names>Lawrence</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Todd</surname>
          <given-names>Richard</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>2000</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>'No.'  So says one model that is broadly consistent with postwar U.S. seasonal and business cycle data.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0266.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Panel Data Estimators for Nonseparable Models with Endogenous Regressors</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0267</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Altonji</surname>
          <given-names>Joseph</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Matzkin</surname>
          <given-names>Rosa</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We propose two new estimators for a wide class of panel data models with nonseparable error terms and endogenous explanatory variables. The first estimator covers qualitative choice models and both estimators cover models with continuous dependent variables. The first estimator requires the existence of a vector z such that the density of the error term does not depend on the explanatory variables once one conditions on z. In some panel data cases we may find z by making the assumption that the distribution of the error term conditional on the vector of the explanatory variables for each 'cross-section' unit in the panel is exchangeable in the values of those explanatory variables. This situation may be realistic, in particular, when each unit is a group of individuals, so that the observations are across groups and for different individuals in each group. The basic idea is to first estimate the slope of the mean of the dependent variable conditional on both the explanatory variable and z and then undo the effect of conditioning on z by taking the average of the slope over the distribution of z conditional on a particular value of the explanatory variable. We also extend the procedure to the case in which the explanatory variable is endogenous conditional on z but an instrumental variable is available. The second estimator is based on the assumption that the error distribution is exchangeable in the explanatory variables of each unit. It applies to models that are monotone in the error term. A shift in the value of an explanatory variable for member 1 of a group has both a direct effect on the distribution of the dependent variable for member 1 and an indirect effect through the distribution of the error. A shift in the explanatory variable has an indirect effect on the dependent variable for other members of the panel but no direct effect.  We isolate the direct effect by comparing the effect of the explanatory variable on the distribution of the dependent variable for member 1 to its effect on the distribution for the other panel members.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0267.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Graphical Analysis of Some Basic Results in Social Choice</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0268</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cantillon</surname>
          <given-names>Estelle</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Public Economics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We use a simple graphical approach to represent Social Welfare Functions that satisfy Independence of Irrelevant Alternatives and Anonymity.  This approach allows us to provide simple and illustrative proofs of May's Theorem, of variants of classic impossibility results, and of a recent result on the robustness of Majority Rule due to Maskin (1995).  In each case, geometry provides new insights on the working and interplay of the axioms, and suggests new results including a new characterization of the entire class of Majority Rule SWFs, a strengthening of May's Theorem, and a new version of Maskin's Theorem.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0268.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Empirical Bayes Forecasts of One Time Series Using Many Predictors</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0269</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Knox</surname>
          <given-names>Thomas</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We consider both frequentist and empirical Bayes forecasts of a single time series using a linear model with T observations and K orthonormal predictors. The frequentist formulation considers estimators that are equivariant under permutations (reorderings) of the regressors. The empirical Bayes formulation (both parametric and nonparametric) treats the coefficients as i.i.d. and estimates their prior. Asymptotically, when K is proportional to T, the empirical Bayes estimator is shown to be: (i) optimal in Robbins' (1955, 1964) sense; (ii) the minimum risk equivariant estimator; and (iii) minimax in both the frequentist and Bayesian problems over a class of non-Gaussian error distributions. Also, the asymptotic frequentist risk of the minimum risk equivariant estimator is shown to equal the Bayes risk of the (infeasible subjectivist) Bayes estimator in the Gaussian case, where the 'prior' is the weak limit of the empirical cdf of the true parameter values. Monte Carlo results are encouraging. The new estimators are used to forecast monthly postwar U.S. macroeconomic time series using the first 151 principal components from a large panel of predictors.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0269.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Bias of the RSR Estimator and the Accuracy of Some Alternatives</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0270</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Goetzmann</surname>
          <given-names>William N</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Peng</surname>
          <given-names>Liang</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper analyzes the implications of cross-sectional heteroskedasticity in repeat sales regression (RSR). RSR estimators are essentially geometric averages of individual asset returns because of the logarithmic transformation of price relatives. We show that the cross-sectional variance of asset returns affects the magnitude of bias in the average return estimate for that period, while reducing the bias for the surrounding periods. It is not easy to use an approximation method to correct the bias problem. We suggest a maximum-likelihood alternative to the RSR that directly estimates index returns that are analogous to the RSR estimators but are arithmetic averages of individual returns. Simulations show that these estimators are robust to time-varying cross-sectional variance and may be more accurate than RSR and some alternative methods of RSR.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0270.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Hedonic Models: Implications of the Theory</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0271</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Tauchen</surname>
          <given-names>Helen V</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Witte</surname>
          <given-names>Ann Dryden</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we consider the conditions under which instrumental variables methods are required in estimating a hedonic price function and its accompanying demand and supply relations. We assume simple functional forms that permit an explicit solution for the equilibrium hedonic price function. The principles are the same for models in which no analytic solution exists, but having the solutions makes the issues far more transparent. The need for instrumental variables estimation is directly analogous for the classical demand and supply model with undifferentiated products and for the hedonic model with differentiated products. In estimating individual demand and supply functions, instrumental variables estimation is required if the consumer and firm unobservables, which give rise to the error terms in the demand and supply functions, are correlated across consumers/firms within a community. In estimating inverse demand/supply functions, which are referred to as bid/offer functions in the hedonic model, instrumental variables estimation is required even if the unobservables are not correlated across agents within a community. If the unobservables are not correlated across agents within a community, then community binaries or the means of observable consumer and firm characteristics can be used as instruments. If the unobservables are correlated then only the latter can be used. The error term in the hedonic price function is often assumed to be uncorrelated with the chosen attributes. This assumption may be reasonable if consumers have quasilinear preferences. If not, then the error term in the price function may affect the utility-maximizing amounts of the attributes. The feasible instruments again depend upon whether the error term is correlated for agents within a community. If not, then community binaries or observed individual characteristics may be used as instruments. If so, then the community binaries are correlated with the error terms and cannot serve as instruments.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0271.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Demand Estimation With Heterogeneous Consumers and Unobserved Product Characteristics: A Hedonic Approach</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0272</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bajari</surname>
          <given-names>Patrick L</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Benkard</surname>
          <given-names>C. Lanier</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We study the identification and estimation of preferences in hedonic discrete choice models of demand for differentiated products. In the hedonic discrete choice model, products are represented as a finite dimensional bundle of characteristics, and consumers maximize utility subject to a budget constraint. Our hedonic model also incorporates product characteristics that are observed by consumers but not by the economist. We demonstrate that, unlike the case where all product characteristics are observed, it is not in general possible to uniquely recover consumer preferences from data on a consumer's choices. However, we provide several sets of assumptions, which we think may be satisfied in many applications, under which preferences can be recovered uniquely. Our identification and estimation strategy is a two-stage approach in the spirit of Rosen (1974). In the first stage, we show under some weak conditions that price data can be used to nonparametrically recover the unobserved product characteristics and the hedonic pricing function. In the second stage, we show under some weak conditions that if the product space is continuous and the functional form of utility is known, then there exists an inversion between a consumer's choices and her preference parameters. If the product space is discrete, we propose a Gibbs sampling algorithm to simulate the population distribution of consumers' taste coefficients.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0272.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A New Use of Importance Sampling to Reduce Computational Burden in Simulation Estimation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0273</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ackerberg</surname>
          <given-names>Daniel</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Method of Simulated Moments (MSM) estimators introduced by McFadden (1989) and Pakes and Pollard (1989) are of great use to applied economists. They are relatively easy to use even for estimating very complicated economic models. One simply needs to generate simulated data according to the model and choose parameters that make moments of this simulated data as close as possible to moments of the true data. This paper uses importance sampling techniques to address a significant computational caveat regarding these MSM estimators: often one's economic model is hard to solve. Examples include complicated equilibrium models and dynamic programming problems. We show that importance sampling can reduce the number of times a particular model needs to be solved in an estimation procedure, significantly decreasing computational burden.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0273.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Simulated Likelihood Estimation of Diffusions with an Application to Exchange Rate Dynamics in Incomplete Markets</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0274</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Brandt</surname>
          <given-names>Michael W</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Santa-Clara</surname>
          <given-names>Pedro</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We present an econometric method for estimating the parameters of a diffusion model from discretely sampled data.  The estimator is transparent, adaptive, and inherits the asymptotic properties of the generally unattainable maximum likelihood estimator.  We use this method to estimate a new continuous-time model of the joint dynamics of interest rates in two countries and the exchange rate between the two currencies.  The model allows financial markets to be incomplete and specifies the degree of incompleteness as a stochastic process.  Our empirical results offer several new insights into the dynamics of exchange rates.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0274.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Using Weights to Adjust for Sample Selection When Auxiliary Information is Available</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0275</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Nevo</surname>
          <given-names>Aviv</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2001</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper I analyze GMM estimation when the sample is not a random draw from the population of interest.  I exploit auxiliary information, in the form of moments from the population of interest, in order to compute weights that are proportional to the inverse probability of selection. The essential idea is to construct weights, for each observation in the primary data, such that the moments of the weighted data are set equal to the additional moments. The estimator is applied to the Dutch Transportation Panel, in which refreshment draws were taken from the population of interest in order to deal with heavy attrition of the original panel.  I show how these additional samples can be used to adjust for sample selection.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0275.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Effects of Random and Discrete Sampling When Estimating Continuous-Time Diffusions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0276</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Aït-Sahalia</surname>
          <given-names>Yacine</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mykland</surname>
          <given-names>Per</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>High-frequency financial data are not only discretely sampled in time but the time separating successive observations is often random. We analyze the consequences of this dual feature of the data when estimating a continuous-time model. In particular, we measure the additional effects of the randomness of the sampling intervals over and beyond those due to the discreteness of the data. We also examine the effect of simply ignoring the sampling randomness. We find that in many situations the randomness of the sampling has a larger impact than the discreteness of the data.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0276.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Trimming for Bounds on Treatment Effects with Missing Outcomes</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0277</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lee</surname>
          <given-names>David S</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Empirical researchers routinely encounter sample selection bias whereby 1) the regressor of interest is assumed to be exogenous, 2) the dependent variable is missing in a potentially non-random manner, 3) the dependent variable is characterized by an unbounded (or very large) support, and 4) it is unknown which variables directly affect sample selection but not the outcome. This paper proposes a simple and intuitive bounding procedure that can be used in this context. The proposed trimming procedure yields the tightest bounds on average treatment effects consistent with the observed data. The key assumption is a monotonicity restriction on how the assignment to treatment affects selection -- a restriction that is implicitly assumed in standard formulations of the sample selection problem.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0277.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Econometric Methods for Endogenously Sampled Time Series: The Case of Commodity Price Speculation in the Steel Market</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0278</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hall</surname>
          <given-names>George J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rust</surname>
          <given-names>John P</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper studies the econometric problems associated with estimation of a stochastic process that is endogenously sampled. Our interest is to infer the law of motion of a discrete-time stochastic process {pt} that is observed only at a subset of times {t1,..., tn} that depend on the outcome of a probabilistic sampling rule that depends on the history of the process as well as other observed covariates xt. We focus on a particular example where pt denotes the daily wholesale price of a standardized steel product. However, there are no formal exchanges or centralized markets where steel is traded and pt can be observed. Instead, nearly all steel transaction prices are a result of private bilateral negotiations between buyers and sellers, typically intermediated by middlemen known as steel service centers. Even though there is no central record of daily transaction prices in the steel market, we do observe transaction prices for a particular firm -- a steel service center that purchases large quantities of steel in the wholesale market for subsequent resale in the retail market. The endogenous sampling problem arises from the fact that the firm only records pt on the days that it purchases steel. We present a parametric analysis of this problem under the assumption that the timing of steel purchases is part of an optimal trading strategy that maximizes the firm's expected discounted trading profits. We derive a parametric partial information maximum likelihood (PIML) estimator that solves the endogenous sampling problem and efficiently estimates the unknown parameters of a Markov transition probability that determines the law of motion for the underlying {pt} process. The PIML estimator also yields estimates of the structural parameters that determine the optimal trading rule. We also introduce an alternative consistent, less efficient, but computationally simpler simulated minimum distance (SMD) estimator that avoids high dimensional numerical integrations required by the PIML estimator. Using the SMD estimator, we provide estimates of a truncated lognormal AR(1) model of the wholesale price processes for particular types of steel plate. We use this to infer the share of the middleman's discounted profits that are due to markups paid by its retail customers, and the share due to price speculation. The latter measures the firm's success in forecasting steel prices and in timing its purchases in order to 'buy low and sell high'. The more successful the firm is in speculation (i.e. in strategically timing its purchases), the more serious are the potential biases that would result from failing to account for the endogeneity of the sampling process.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0278.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Parametric and Nonparametric Volatility Measurement</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0279</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Andersen</surname>
          <given-names>Torben G</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bollerslev</surname>
          <given-names>Tim</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diebold</surname>
          <given-names>Francis X</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Volatility has been one of the most active areas of research in empirical finance and time series econometrics during the past decade.  This chapter provides a unified continuous-time, frictionless, no-arbitrage framework for systematically categorizing the various volatility concepts, measurement procedures, and modeling procedures.  We define three different volatility concepts: (i) the notional volatility corresponding to the ex-post sample-path return variability over a fixed time interval, (ii) the ex-ante expected volatility over a fixed time interval, and (iii) the instantaneous volatility corresponding to the strength of the volatility process at a point in time.  The parametric procedures rely on explicit functional form assumptions regarding the expected and/or instantaneous volatility.  In the discrete-time ARCH class of models, the expectations are formulated in terms of directly observable variables, while the discrete- and continuous-time stochastic volatility models involve latent state variable(s).  The nonparametric procedures are generally free from such functional form assumptions and hence afford  estimates of notional volatility that are flexible yet consistent (as the sampling frequency of the underlying returns increases).  The nonparametric procedures include ARCH filters and smoothers designed to measure the volatility over infinitesimally short horizons, as well as the recently-popularized realized volatility measures for (non-trivial) fixed-length time intervals.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0279.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Identification and Inference in Nonlinear Difference-In-Differences Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0280</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Athey</surname>
          <given-names>Susan</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops an alternative approach to the widely used Difference-In-Difference (DID) method for evaluating the effects of policy changes. In contrast to the standard approach, we introduce a nonlinear model that permits changes over time in the effect of unobservables (e.g., there may be a time trend in the level of wages as well as the returns to skill in the labor market). Further, our assumptions are independent of the scaling of the outcome. Our approach provides an estimate of the entire counterfactual distribution of outcomes that would have been experienced by the treatment group in the absence of the treatment, and likewise for the untreated group in the presence of the treatment. Thus, it enables the evaluation of policy interventions according to criteria such as a mean-variance tradeoff. We provide conditions under which the model is nonparametrically identified and propose an estimator. We consider extensions to allow for covariates and discrete dependent variables. We also analyze inference, showing that our estimator is root-N consistent and asymptotically normal. Finally, we consider an application.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0280.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Affine Processes and Applications in Finance</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0281</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Duffie</surname>
          <given-names>Darrell</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Filipovic</surname>
          <given-names>Damir</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Schachermayer</surname>
          <given-names>Walter</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We provide the definition and a complete characterization of regular affine processes. This type of process unifies the concepts of continuous-state branching processes with immigration and Ornstein-Uhlenbeck type processes. We show, and provide foundations for, a wide range of financial applications for regular affine processes.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0281.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0282</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Schmitt-Grohé</surname>
          <given-names>Stephanie</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Uribe</surname>
          <given-names>Martín</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper derives a second-order approximation to the solution of a general class of discrete-time rational expectations models. The main theoretical contribution of the paper is to show that for any model belonging to the general class considered, the coefficients on the terms linear and quadratic in the state vector in a second-order expansion of the decision rule are independent of the volatility of the exogenous shocks. In other words, these coefficients must be the same in the stochastic and the deterministic versions of the model. Thus, up to second order, the presence of uncertainty affects only the constant term of the decision rules. In addition, the paper presents a set of MATLAB programs designed to compute the coefficients of the second-order approximation. The validity and applicability of the proposed method is illustrated by solving the dynamics of a number of model economies.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0282.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Simple and Bias-Corrected Matching Estimators for Average Treatment Effects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0283</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abadie</surname>
          <given-names>Alberto</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. In this article, we develop a new framework to analyze the properties of matching estimators and establish a number of new results. First, we show that matching estimators include a conditional bias term which may not vanish at a rate faster than root-N when more than one continuous variable is used for matching. As a result, matching estimators may not be root-N-consistent. Second, we show that even after removing the conditional bias, matching estimators with a fixed number of matches do not reach the semiparametric efficiency bound for average treatment effects, although the efficiency loss may be small. Third, we propose a bias-correction that removes the conditional bias asymptotically, making matching estimators root-N-consistent. Fourth, we provide a new estimator for the conditional variance that does not require consistent nonparametric estimation of unknown functions. We apply the bias-corrected matching estimators to the study of the effects of a labor market program previously analyzed by Lalonde (1986). We also carry out a small simulation study based on Lalonde's example where a simple implementation of the bias-corrected matching estimator performs well compared to both simple matching estimators and to regression estimators in terms of bias and root-mean-squared-error. Software for implementing the proposed estimators in STATA and Matlab is available from the authors on the web.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0283.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Testing for Weak Instruments in Linear IV Regression</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0284</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Yogo</surname>
          <given-names>Motohiro</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Weak instruments can produce biased IV estimators and hypothesis tests with large size distortions. But what, precisely, are weak instruments, and how does one detect them in practice? This paper proposes quantitative definitions of weak instruments based on the maximum IV estimator bias, or the maximum Wald test size distortion, when there are multiple endogenous regressors. We tabulate critical values that enable using the first-stage F-statistic (or, when there are multiple endogenous regressors, the Cragg-Donald (1993) statistic) to test whether given instruments are weak. A technical contribution is to justify sequential asymptotic approximations for IV statistics with many weak instruments.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0284.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Identification and Estimation of Triangular Simultaneous Equations Models Without Additivity</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0285</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Newey</surname>
          <given-names>Whitney</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper investigates identification and inference in a nonparametric structural model with instrumental variables and non-additive errors. We allow for non-additive errors because the unobserved heterogeneity in marginal returns that often motivates concerns about endogeneity of choices requires objective functions that are non-additive in observed and unobserved components. We formulate several independence and monotonicity conditions that are sufficient for identification of a number of objects of interest, including the average conditional response, the average structural function, as well as the full structural response function. For inference we propose a two-step series estimator. The first step consists of estimating the conditional distribution of the endogenous regressor given the instrument. In the second step the estimated conditional distribution function is used as a regressor in a nonlinear control function approach. We establish rates of convergence, asymptotic normality, and give a consistent asymptotic variance estimator.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0285.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Affine Multifactor Term Structure Models Using Closed-Form Likelihood Expansions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0286</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Aït-Sahalia</surname>
          <given-names>Yacine</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kimmel</surname>
          <given-names>Robert</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We develop and implement a technique for closed-form maximum likelihood estimation (MLE) of multifactor affine yield models. We derive closed-form approximations to likelihoods for nine Dai and Singleton (2000) affine models. Simulations show our technique very accurately approximates true (but infeasible) MLE. Using US Treasury data, we estimate nine affine yield models with different market price of risk specifications. MLE allows non-nested model comparison using likelihood ratio tests; the preferred model depends on the market price of risk. Estimation with simulated and real data suggests our technique is much closer to true MLE than Euler and quasi-maximum likelihood (QML) methods. </p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0286.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Cointegration Vector Estimation by Panel DOLS and Long-Run Money Demand</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0287</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mark</surname>
          <given-names>Nelson</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sul</surname>
          <given-names>Donggyu</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>2002</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We study the panel DOLS estimator of a homogeneous cointegration vector for a balanced panel of N individuals observed over T time periods. Allowable heterogeneity across individuals includes individual-specific time trends, individual-specific fixed effects, and time-specific effects. The estimator is fully parametric, computationally convenient, and more precise than the single equation estimator. For fixed N as T approaches infinity, the estimator converges to a function of Brownian motions and the Wald statistic for testing a set of linear constraints has a limiting chi-square distribution. The estimator also has a Gaussian sequential limit distribution that is obtained first by letting T go to infinity and then letting N go to infinity. In a series of Monte Carlo experiments, we find that the asymptotic distribution theory provides a reasonably close approximation to the exact finite sample distribution. We use panel dynamic OLS to estimate coefficients of the long-run money demand function from a panel of 19 countries with annual observations that span from 1957 to 1996. The estimated income elasticity is 1.08 (asymptotic s.e.=0.26) and the estimated interest rate semi-elasticity is -0.02 (asymptotic s.e.=0.01).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0287.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Multinomial Choice with Social Interactions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0288</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Brock</surname>
          <given-names>William</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Durlauf</surname>
          <given-names>Steven N</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper develops a model of individual decisionmaking in the presence of social interactions when the number of available choices is finite. We show how a multinomial logit model framework may be used to model such decisions in a way that permits a tight integration of theory and econometrics. Conditions are given under which aggregate choice behavior in a population exhibits multiple self-consistent equilibria. An econometric version of the model is shown to be identified under relatively weak conditions. That analysis is extended to allow for general error distributions, and some preliminary ways to account for the endogeneity of group memberships are developed.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0288.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Iatrogenic Specification Error: A Cautionary Tale of Cleaning Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0289</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bollinger</surname>
          <given-names>Christopher R</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Chandra</surname>
          <given-names>Amitabh</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>It is common in empirical research to use what appear to be sensible rules of thumb for cleaning data. Measurement error is often the justification for removing (trimming) or recoding (winsorizing) observations whose values lie outside a specified range. This paper considers identification in a linear model when the dependent variable is mismeasured. The results examine the common practice of trimming and winsorizing to address the identification failure. In contrast to the physical and laboratory sciences, measurement error in social science data is likely to be more complex than simply additive white noise. We consider a general measurement error process which nests many processes, including the additive white noise process and a contaminated sampling process. Analytic results are only tractable under strong distributional assumptions, but demonstrate that winsorizing and trimming are only solutions for a particular class of measurement error processes. Indeed, trimming and winsorizing may induce or exacerbate bias. We term this source of bias 'iatrogenic' (or econometrician-induced) error. The identification results for the general error process highlight other approaches which are more robust to distributional assumptions. Monte Carlo simulations demonstrate the fragility of trimming and winsorizing as solutions to measurement error in the dependent variable.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0289.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The (Interesting) Dynamic Properties of the Neoclassical Growth Model with CES Production</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0290</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Smetters</surname>
          <given-names>Kent</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Despite being the standard growth model for several decades, little is actually known analytically about the dynamic properties of the neoclassical Ramsey-Cass-Koopmans growth model. This paper derives analytically the properties of the endogenous saving rate when technology takes the Constant Elasticity of Substitution (CES) form. For a factor substitution elasticity between capital and labor less than unity, the saving rate decreases along the transition path after the capital stock reaches a critical value identified analytically herein. But before reaching this critical value, the saving rate might increase and so, taken as a whole, the saving rate path might manifest 'overshooting.' Similarly, for a factor substitution elasticity greater than unity, the saving rate increases along the transition path after the capital stock reaches a critical value. Before reaching this critical value, the saving rate might decrease and so the saving rate path might manifest 'undershooting.' A simulation illustrating these interesting dynamics is presented.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0290.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Generalized Moments Estimation for Panel Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0291</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Druska</surname>
          <given-names>Viliam</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Horrace</surname>
          <given-names>William</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers estimation of a panel data model with disturbances that are autocorrelated across cross-sectional units. It is assumed that the disturbances are spatially correlated, based on some geographic or economic proximity measure. If the time dimension of the data is large, feasible and efficient estimation proceeds by using the time dimension to estimate spatial dependence parameters. For the case where the time dimension is small (the usual panel data case), we develop a generalized moments estimation approach that is a straightforward generalization of a cross-sectional model due to Kelejian and Prucha. We apply this approach in a stochastic frontier framework to a panel of Indonesian rice farms where spatial correlations are based on geographic proximity, altitude and weather. The correlations represent productivity shock spillovers across the rice farms in different villages on the island of Java. Test statistics indicate that productivity shock spillovers may exist in this (and perhaps other) data sets, and that these spillovers have effects on technical efficiency estimation and ranking.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0291.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Dynamic Seemingly Unrelated Cointegrating Regression</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0292</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mark</surname>
          <given-names>Nelson</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ogaki</surname>
          <given-names>Masao</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sul</surname>
          <given-names>Donggyu</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Multiple cointegrating regressions are frequently encountered in empirical work as, for example, in the analysis of panel data. When the equilibrium errors are correlated across equations, the seemingly unrelated regression estimation strategy can be applied to cointegrating regressions to obtain asymptotically efficient estimators. While non-parametric methods for seemingly unrelated cointegrating regressions have been proposed in the literature, in practice, specification of the estimation problem is not always straightforward. We propose Dynamic Seemingly Unrelated Regression (DSUR) estimators which can be made fully parametric and are computationally straightforward to use. We study the asymptotic and small sample properties of the DSUR estimators both for heterogeneous and homogeneous cointegrating vectors. The estimation techniques are then applied to analyze two long-standing problems in international economics. Our first application revisits the issue of whether the forward exchange rate is an unbiased predictor of the future spot rate. Our second application revisits the problem of estimating long-run correlations between national investment and national saving.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0292.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Generalized Modeling Approaches to Risk Adjustment of Skewed Outcomes Data</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0293</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Manning</surname>
          <given-names>Willard G</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Basu</surname>
          <given-names>Anirban</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mullahy</surname>
          <given-names>John</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>There are two broad classes of models used to address the econometric problems caused by skewness in data commonly encountered in health care applications: (1) transformation to deal with skewness (e.g., OLS on ln(y)); and (2) alternative weighting approaches based on exponential conditional models (ECM) and generalized linear model (GLM) approaches. In this paper, we encompass these two classes of models using the three-parameter generalized gamma (GGM) distribution, which includes several of the standard alternatives as special cases: OLS with a normal error, OLS for the log normal, the standard gamma and exponential with a log link, and the Weibull. Using simulation methods, we find the tests of identifying distributions to be robust. The GGM also provides a potentially more robust alternative estimator to the standard alternatives. An example using inpatient expenditures is also analyzed.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0293.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Nonparametric Estimation of Average Treatment Effects under Exogeneity: A Review</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0294</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Recently there has been a surge in econometric work focusing on estimating average treatment effects under various sets of assumptions. One strand of this literature has developed methods for estimating average treatment effects for a binary treatment under assumptions variously described as exogeneity, unconfoundedness, or selection on observables. The implication of these assumptions is that systematic (e.g., average or distributional) differences in outcomes between treated and control units with the same values for the covariates are attributable to the treatment. Recent analysis has considered estimation and inference for average treatment effects under weaker assumptions than typical of the earlier literature by avoiding distributional and functional form assumptions. Various methods of semiparametric estimation have been proposed, including estimating the unknown regression functions, matching, methods using the propensity score such as weighting and blocking, and combinations of these approaches. In this paper I review the state of this literature and discuss some of its unanswered questions, focusing in particular on the practical implementation of these methods, the plausibility of this exogeneity assumption in economic applications, the relative performance of the various semiparametric estimators when the key assumptions (unconfoundedness and overlap) are satisfied, alternative estimands such as quantile treatment effects, and alternate methods such as Bayesian inference.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0294.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Role of Randomized Field Trials in Social Science Research: A Perspective from Evaluations of Reforms of Social Welfare Programs</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0295</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Moffitt</surname>
          <given-names>Robert A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2003</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>One of the areas of policy research where randomized field trials have been utilized most intensively is welfare reform. Starting in the late 1960s with experimental tests of a negative income tax and continuing through current experimental tests of recent welfare reforms, randomized evaluations have played a strong and increasing role in informing policy. This paper reviews the record of these experiments and assesses the implications of that record for the use of randomization. The review demonstrates that, while randomized field trials in the area of welfare reform have been professionally conducted and well-run, and have yielded much valuable and credible information, their usefulness has been limited by a number of weaknesses, some of which are inherent in the method and some of which result from constraints imposed by the political process. The conclusion is that randomized field trials have an important but limited role to play in future welfare reform evaluations, and that it is essential that they be supplemented by nonexperimental research.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0295.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Monte Carlo Study of Growth Regressions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0296</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hauk</surname>
          <given-names>William R</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wacziarg</surname>
          <given-names>Romain</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Using Monte Carlo simulations, this paper evaluates the bias properties of common estimators used in growth regressions derived from the Solow model. We explicitly allow for measurement error in the right-hand side variables, as well as country-specific effects that are correlated with the regressors. Our results suggest that using an OLS estimator applied to a single cross-section of variables averaged over time (the between estimator) performs best in terms of the extent of bias on each of the estimated coefficients. The fixed-effects estimator and the Arellano-Bond estimator greatly overstate the speed of convergence under a wide variety of assumptions concerning the type and extent of measurement error, while the between estimator understates it somewhat. Finally, the fixed-effects and Arellano-Bond estimators bias the slope estimates on the human and physical capital accumulation variables toward zero.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0296.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On the Relationship Between Determinate and MSV Solutions in Linear RE Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0297</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>McCallum</surname>
          <given-names>Bennett T</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the possibility that, in linear rational expectations (RE) models, all determinate (uniquely non-explosive) solutions coincide with the minimum state variable (MSV) solution, which is unique by construction. In univariate specifications of the form y(t) = AE(t)y(t+1) + Cy(t-1) + u(t) that result holds: if a RE solution is unique and non-explosive, then it is the same as the MSV solution. Also, this result holds for multivariate versions if the A and C matrices commute and a certain regularity condition holds. More generally, however, there are models of this form that possess unique non-explosive solutions that differ from their MSV solutions. Examples are provided and a strategy for easily constructing others is outlined.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0297.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Use of Predictive Regressions at Alternative Horizons in Finance and Economics</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0298</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mark</surname>
          <given-names>Nelson</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sul</surname>
          <given-names>Donggyu</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>When a k period future return is regressed on a current variable such as the log dividend yield, the marginal significance level of the t-test that the return is unpredictable typically increases over some range of future return horizons, k. Local asymptotic power analysis shows that the power of the long-horizon predictive regression test dominates that of the short-horizon test over a nontrivial region of the admissible parameter space. In practice, small sample OLS bias, which differs under the null and the alternative, can distort the size and reduce the power gains of long-horizon tests. To overcome these problems, we suggest a moving block recursive Jackknife estimator of the predictive regression slope coefficient and test statistics that are appropriate under both the null and the alternative. The methods are applied to testing whether future stock returns are predictable. Consistent evidence in favor of return predictability shows up at the 5 year horizon.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0298.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Optimal Invariant Similar Tests for Instrumental Variables Regression</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0299</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Andrews</surname>
          <given-names>Donald</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Moreira</surname>
          <given-names>Marcelo</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers tests of the parameter on endogenous variables in an instrumental variables regression model. The focus is on determining tests that have certain optimal power properties. We start by considering a model with normally distributed errors and known error covariance matrix. We consider tests that are similar and satisfy a natural rotational invariance condition. We determine tests that maximize weighted average power (WAP) for arbitrary weight functions among invariant similar tests. Such tests include point optimal (PO) invariant similar tests. The results yield the power envelope for invariant similar tests. This allows one to assess and compare the power properties of existing tests, such as the Anderson-Rubin, Lagrange multiplier (LM), and conditional likelihood ratio (CLR) tests, and new optimal WAP and PO invariant similar tests. We find that the CLR test is quite close to being uniformly most powerful invariant among a class of two-sided tests. A new unconditional test, P*, also is found to have this property. For one-sided alternatives, no test achieves the invariant power envelope, but a new test, the one-sided CLR test, is found to be fairly close. The finite sample results of the paper are extended to the case of unknown error covariance matrix and possibly non-normal errors via weak instrument asymptotics. Strong instrument asymptotic results also are provided because we seek tests that perform well under both weak and strong instruments.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0299.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Volatility Comovement: A Multifrequency Approach</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0300</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Calvet</surname>
          <given-names>Laurent E</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fisher</surname>
          <given-names>Adlai</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Thompson</surname>
          <given-names>Samuel</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We implement a multifrequency volatility decomposition of three exchange rates and show that components with similar durations are strongly correlated across series. This motivates a bivariate extension of the Markov-Switching Multifractal (MSM) introduced in Calvet and Fisher (2001, 2004). Bivariate MSM is a stochastic volatility model with a closed-form likelihood. Estimation can proceed by ML for state spaces of moderate size, and by simulated likelihood via a particle filter in high-dimensional cases. We estimate the model and confirm its main assumptions in likelihood ratio tests. Bivariate MSM compares favorably to a standard multivariate GARCH both in- and out-of-sample. We extend the model to multivariate settings with a potentially large number of assets by proposing a parsimonious multifrequency factor structure.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0300.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Identification and Estimation of Discrete Games of Complete Information</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0301</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bajari</surname>
          <given-names>Patrick L</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hong</surname>
          <given-names>Han</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ryan</surname>
          <given-names>Stephen P</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We discuss the identification and estimation of discrete games of complete information. Following Bresnahan and Reiss (1990, 1991), a discrete game is a generalization of a standard discrete choice model where utility depends on the actions of other players. Using recent algorithms to compute all of the Nash equilibria of a game, we propose simulation-based estimators for static, discrete games. With appropriate exclusion restrictions about how covariates enter into payoffs and influence equilibrium selection, the model is identified with only weak parametric assumptions. Monte Carlo evidence demonstrates that the estimator can perform well in moderately-sized samples. As an application, we study the strategic decision of firms in spatially-separated markets to establish a presence on the Internet.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0301.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bootstrap and Higher-Order Expansion Validity When Instruments May Be Weak</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0302</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Moreira</surname>
          <given-names>Marcelo</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Porter</surname>
          <given-names>Jack R</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Suarez</surname>
          <given-names>Gustavo</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>It is well-known that size-adjustments based on Edgeworth expansions for the t-statistic perform poorly when instruments are weakly correlated with the endogenous explanatory variable. This paper shows, however, that the lack of Edgeworth expansions and bootstrap validity are not tied to the weak instrument framework, but instead depend on which test statistic is examined. In particular, Edgeworth expansions are valid for the score and conditional likelihood ratio approaches, even when the instruments are uncorrelated with the endogenous explanatory variable. Furthermore, there is a belief that the bootstrap method fails when instruments are weak, since it replaces parameters with inconsistent estimators. Contrary to this notion, we provide a theoretical proof that guarantees the validity of the bootstrap for the score test, as well as the validity of the conditional bootstrap for many conditional tests. Monte Carlo simulations show that the bootstrap actually decreases size distortions in both cases.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0302.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Optimal Inference in Regression Models with Nearly Integrated Regressors</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0303</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Jansson</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Moreira</surname>
          <given-names>Marcelo</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2004</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the problem of conducting inference on the regression coefficient in a bivariate regression model with a highly persistent regressor. Gaussian power envelopes are obtained for a class of testing procedures satisfying a conditionality restriction. In addition, the paper proposes feasible testing procedures that attain these Gaussian power envelopes whether or not the innovations of the regression model are normally distributed.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0303.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Avoiding the Curse of Dimensionality in Dynamic Stochastic Games</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0304</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Doraszelski</surname>
          <given-names>Ulrich</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Judd</surname>
          <given-names>Kenneth L</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Continuous-time stochastic games with a finite number of states have substantial computational and conceptual advantages over the more common discrete-time model. In particular, continuous time avoids a curse of dimensionality and speeds up computations by orders of magnitude in games with more than a few state variables. The continuous-time approach opens the way to analyze more complex and realistic stochastic games than is feasible in discrete-time models.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0304.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Using Out-of-Sample Mean Squared Prediction Errors to Test the Martingale Difference</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0305</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Clark</surname>
          <given-names>Todd</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>International Finance and Macroeconomics</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We consider using out-of-sample mean squared prediction errors (MSPEs) to evaluate the null that a given series follows a zero mean martingale difference against the alternative that it is linearly predictable. Under the null of no predictability, the population MSPE of the null "no change" model equals that of the linear alternative. We show analytically and via simulations that despite this equality, the alternative model's sample MSPE is expected to be greater than the null's. For rolling regression estimators of the alternative model's parameters, we propose and evaluate an asymptotically normal test that properly accounts for the upward shift of the sample MSPE of the alternative model. Our simulations indicate that our proposed procedure works well.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0305.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Structural Equations, Treatment Effects and Econometric Policy Evaluation</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0306</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Vytlacil</surname>
          <given-names>Edward J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper uses the marginal treatment effect (MTE) to unify the nonparametric literature on treatment effects with the econometric literature on structural estimation using a nonparametric analog of a policy invariant parameter; to generate a variety of treatment effects from a common semiparametric functional form; to organize the literature on alternative estimators; and to explore what policy questions commonly used estimators in the treatment effect literature answer. A fundamental asymmetry intrinsic to the method of instrumental variables is noted. Recent advances in IV estimation allow for heterogeneity in responses but not in choices, and the method breaks down when both choice and response equations are heterogeneous in a general way.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0306.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Threshold Crossing Models and Bounds on Treatment Effects: A Nonparametric Analysis</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0307</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Shaikh</surname>
          <given-names>Azeem</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Vytlacil</surname>
          <given-names>Edward J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers the evaluation of the average treatment effect of a binary endogenous regressor on a binary outcome when one imposes a threshold crossing model on both the endogenous regressor and the outcome variable but without imposing parametric functional form or distributional assumptions. Without parametric restrictions, the average effect of the binary endogenous variable is not generally point identified. This paper constructs sharp bounds on the average effect of the endogenous variable that exploit the structure of the threshold crossing models and any exclusion restrictions. We also develop methods for inference on the resulting bounds.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0307.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A, B, C's (and D)'s for Understanding VARs</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0308</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fernández-Villaverde</surname>
          <given-names>Jesús</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rubio-Ramírez</surname>
          <given-names>Juan</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Sargent</surname>
          <given-names>Thomas J</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The dynamics of a linear (or linearized) dynamic stochastic economic model can be expressed in terms of matrices (A,B,C,D) that define a state space system. An associated state space system (A,K,C,Σ) determines a vector autoregression for observables available to an econometrician. We review circumstances under which the impulse response of the VAR resembles the impulse response associated with the economic model. We give four examples that illustrate a simple condition for checking whether the mapping from VAR shocks to economic shocks is invertible. The condition applies when there are equal numbers of VAR and economic shocks.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0308.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Method of Endogenous Gridpoints for Solving Dynamic Stochastic Optimization Problems</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0309</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Carroll</surname>
          <given-names>Christopher D</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0309.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>A Portmanteau Test for Serially Correlated Errors in Fixed Effects Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0310</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Inoue</surname>
          <given-names>Atsushi</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Solon</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We propose a portmanteau test for serial correlation of the error term in a fixed effects model. The test is derived as a conditional Lagrange multiplier test, but it also has a straightforward Wald test interpretation. In Monte Carlo experiments, the test displays good size and power properties.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0310.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Two-Sample Instrumental Variables Estimators</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0311</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Inoue</surname>
          <given-names>Atsushi</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Solon</surname>
          <given-names>Gary</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Following an influential article by Angrist and Krueger (1992) on two-sample instrumental variables (TSIV) estimation, numerous empirical researchers have applied a computationally convenient two-sample two-stage least squares (TS2SLS) variant of Angrist and Krueger's estimator. In the two-sample context, unlike the single-sample situation, the IV and 2SLS estimators are numerically distinct. Our comparison of the properties of the two estimators demonstrates that the commonly used TS2SLS estimator is asymptotically more efficient than the TSIV estimator and also is more robust to a practically relevant type of sample stratification.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0311.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Inference with "Difference in Differences" with a Small Number of Policy Changes</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0312</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Conley</surname>
          <given-names>Timothy G</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Taber</surname>
          <given-names>Christopher R</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Difference in differences methods have become very popular in applied work. This paper provides a new method for inference in these models when there are a small number of policy changes. This situation occurs in many implementations of these estimators. Identification of the key parameter typically arises when a group "changes" some particular policy. The asymptotic approximations that are typically employed assume that the number of cross sectional groups, N, times the number of time periods, T, is large. However, even when N or T is large, the number of actual policy changes observed in the data is often very small. In this case, we argue that point estimators of treatment effects should not be thought of as being consistent and that the standard methods that researchers use to perform inference in these models are not appropriate. We develop an alternative approach to inference under the assumption that there are a finite number of policy changes in the data, using asymptotic approximations as the number of non-changing groups gets large. In this situation we cannot obtain a consistent point estimator for the key treatment effect parameter. However, we can consistently estimate the finite-sample distribution of the treatment effect estimator, up to the unknown parameter itself. This allows us to perform hypothesis tests and construct confidence intervals. For expositional and motivational purposes, we focus on the difference in differences case, but our approach should be appropriate more generally in treatment effect models which employ a large number of controls, but a small number of treatments. We demonstrate the use of the approach by analyzing the effect of college merit aid programs on college attendance. We show that in some cases the standard approach can give misleading results.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0312.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Inference with Weak Instruments</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0313</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Andrews</surname>
          <given-names>Donald</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>This paper reviews recent developments in methods for dealing with weak instruments (IVs) in IV regression models. The focus is more on tests and confidence intervals derived from tests than on estimators. The paper also presents new testing results under "many weak IV asymptotics," which are relevant when the number of IVs is large and the coefficients on the IVs are relatively small. Asymptotic power envelopes for invariant tests are established. Power comparisons of the conditional likelihood ratio (CLR), Anderson-Rubin, and Lagrange multiplier tests are made. Numerical results show that the CLR test is on the asymptotic power envelope. This holds regardless of the relative magnitude of the IV strength to the number of IVs.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0313.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrumental Variables Methods in Experimental Criminological Research: What, Why, and How?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0314</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Angrist</surname>
          <given-names>Joshua</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
    </custom-meta-wrap>
    <abstract>
<p>Quantitative criminology focuses on straightforward causal questions that are ideally addressed with randomized experiments. In practice, however, traditional randomized trials are difficult to implement in the untidy world of criminal justice. Even when randomized trials are implemented, not everyone is treated as intended and some control subjects may obtain experimental services. Treatments may also be more complicated than a simple yes/no coding can capture. This paper argues that the instrumental variables (IV) methods used by economists to solve omitted variables bias problems in observational studies also solve the major statistical problems that arise in imperfect criminological experiments. In general, IV methods estimate the causal effect of treatment on subjects that are induced to comply with a treatment by virtue of the random assignment of intended treatment. The use of IV in criminology is illustrated through a re-analysis of the Minneapolis Domestic Violence Experiment.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0314.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Convergence Properties of the Likelihood of Computed Dynamic Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0315</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fernández-Villaverde</surname>
          <given-names>Jesús</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rubio</surname>
          <given-names>Juan</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Santos</surname>
          <given-names>Manuel</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper studies the econometrics of computed dynamic models. Since these models generally lack a closed-form solution, their policy functions are approximated by numerical methods. Hence, the researcher can only evaluate an approximated likelihood associated with the approximated policy function rather than the exact likelihood implied by the exact policy function. What are the consequences for inference of the use of approximated likelihoods? First, we find conditions under which, as the approximated policy function converges to the exact policy, the approximated likelihood also converges to the exact likelihood. Second, we show that second order approximation errors in the policy function, which almost always are ignored by researchers, have first order effects on the likelihood function. Third, we discuss convergence of Bayesian and classical estimates. Finally, we propose to use a likelihood ratio test as a diagnostic device for problems derived from the use of approximated likelihoods.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0315.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Dynamic Discrete Choice and Dynamic Treatment Effects</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0316</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Navarro</surname>
          <given-names>Salvador</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper considers semiparametric identification of structural dynamic discrete choice models and models for dynamic treatment effects. Time to treatment and counterfactual outcomes associated with treatment times are jointly analyzed. We examine the implicit assumptions of the dynamic treatment model using the structural model as a benchmark. For the structural model we show the gains from using cross equation restrictions connecting choices to associated measurements and outcomes. In the dynamic discrete choice model, we identify both subjective and objective outcomes, distinguishing ex post and ex ante outcomes. We show how to identify agent information sets.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0316.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Generalized Stochastic Gradient Learning</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0317</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Evans</surname>
          <given-names>George W</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Honkapohja</surname>
          <given-names>Seppo</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Williams</surname>
          <given-names>Noah M</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement, and we show that there is a transformation of variables for which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms of parameter drift, robustness and risk sensitivity.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0317.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Solving General Equilibrium Models with Incomplete Markets and Many Assets</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0318</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Evans</surname>
          <given-names>Martin</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hnatkovska</surname>
          <given-names>Viktoria</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper presents a new numerical method for solving general equilibrium models with many assets. The method can be applied to models where there are heterogeneous agents, time-varying investment opportunity sets, and incomplete markets. It also can be used to study models where the equilibrium dynamics are non-stationary. We illustrate how the method is used by solving one- and two-sector versions of a two-country general equilibrium model with production. We check the accuracy of our method by comparing the numerical solution to the one-sector model against its known analytic properties. We then apply the method to the two-sector model where no analytic solution is available.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0318.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Edgeworth Expansions for Realized Volatility and Related Estimators</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0319</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Zhang</surname>
          <given-names>Lan</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mykland</surname>
          <given-names>Per</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Aït-Sahalia</surname>
          <given-names>Yacine</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2005</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper shows that the asymptotic normal approximation is often insufficiently accurate for volatility estimators based on high frequency data. To remedy this, we compute Edgeworth expansions for such estimators. Unlike the usual expansions, we have found that in order to obtain meaningful terms, one needs to let the size of the noise go to zero asymptotically. The results have application to Cornish-Fisher inversion and bootstrapping.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0319.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Semiparametric Estimation of a Dynamic Game of Incomplete Information</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0320</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bajari</surname>
          <given-names>Patrick L</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hong</surname>
          <given-names>Han</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Recently, empirical industrial organization economists have proposed estimators for dynamic games of incomplete information. In these models, agents choose from a finite number of actions and maximize expected discounted utility in a Markov perfect equilibrium. Previous econometric methods estimate the probability distribution of agents' actions in a first stage. In a second step, a finite vector of parameters of the period return function is estimated. In this paper, we develop semiparametric estimators for dynamic games allowing for continuous state variables and a nonparametric first stage. The estimates of the structural parameters are T^{1/2}-consistent (where T is the sample size) and asymptotically normal even though the first stage is estimated nonparametrically.  We also propose sufficient conditions for identification of the model.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0320.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Estimating Macroeconomic Models: A Likelihood Approach</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0321</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fernández-Villaverde</surname>
          <given-names>Jesús</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Rubio-Ramírez</surname>
          <given-names>Juan</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>02</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper shows how particle filtering allows us to undertake likelihood-based inference in dynamic macroeconomic models. The models can be nonlinear and/or non-normal. We describe how to use the output from the particle filter to estimate the structural parameters of the model, those characterizing preferences and technology, and to compare different economies. Both tasks can be implemented from either a classical or a Bayesian perspective. We illustrate the technique by estimating a business cycle model with investment-specific technological change, preference shocks, and stochastic volatility.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0321.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Regression Discontinuity Inference with Specification Error</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0322</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lee</surname>
          <given-names>David S</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Card</surname>
          <given-names>David</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A regression discontinuity (RD) research design is appropriate for program evaluation problems in which treatment status (or the probability of treatment) depends on whether an observed covariate exceeds a fixed threshold. In many applications the treatment-determining covariate is discrete. This makes it impossible to compare outcomes for observations "just above" and "just below" the treatment threshold, and requires the researcher to choose a functional form for the relationship between the treatment variable and the outcomes of interest. We propose a simple econometric procedure to account for uncertainty in the choice of functional form for RD designs with discrete support. In particular, we model deviations of the true regression function from a given approximating function -- the specification errors -- as random. Conventional standard errors ignore the group structure induced by specification errors and tend to overstate the precision of the estimated program impacts. The proposed inference procedure that allows for specification error also has a natural interpretation within a Bayesian framework.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0322.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Heteroskedasticity-Robust Standard Errors for Fixed Effects Panel Data Regression</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0323</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Stock</surname>
          <given-names>James H</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Watson</surname>
          <given-names>Mark W</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The conventional heteroskedasticity-robust (HR) variance matrix estimator for cross-sectional regression (with or without a degrees of freedom adjustment), applied to the fixed effects estimator for panel data with serially uncorrelated errors, is inconsistent if the number of time periods T is fixed (and greater than two) as the number of entities n increases. We provide a bias-adjusted HR estimator that is (nT)^{1/2}-consistent under any sequences (n, T) in which n and/or T increase to ∞.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0323.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Nonparametric Tests for Treatment Effect Heterogeneity</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0324</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Crump</surname>
          <given-names>Richard K</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hotz</surname>
          <given-names>V. Joseph</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mitnik</surname>
          <given-names>Oscar A</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>A large part of the recent literature on program evaluation has focused on estimation of the average effect of the treatment under assumptions of unconfoundedness or ignorability following the seminal work by Rubin (1974) and Rosenbaum and Rubin (1983). In many cases, however, researchers are interested in the effects of programs beyond estimates of the overall average or the average for the subpopulation of treated individuals. It may be of substantive interest to investigate whether there is any subpopulation for which a program or treatment has a nonzero average effect, or whether there is heterogeneity in the effect of the treatment. The hypothesis that the average effect of the treatment is zero for all subpopulations is also important for researchers interested in assessing assumptions concerning the selection mechanism. In this paper we develop two nonparametric tests. The first test is for the null hypothesis that the treatment has a zero average effect for any subpopulation defined by covariates. The second test is for the null hypothesis that the average effect conditional on the covariates is identical for all subpopulations, in other words, that there is no heterogeneity in average treatment effects by covariates. Sacrificing some generality by focusing on these two specific null hypotheses, we derive tests that are straightforward to implement.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0324.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>On the Failure of the Bootstrap for Matching Estimators</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0325</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abadie</surname>
          <given-names>Alberto</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>06</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Matching estimators are widely used for the evaluation of programs or treatments. Often researchers use bootstrapping methods for inference. However, no formal justification for the use of the bootstrap has been provided. Here we show that the bootstrap is in general not valid, even in the simple case with a single continuous covariate when the estimator is root-N consistent and asymptotically normally distributed with zero asymptotic bias. Due to the extreme non-smoothness of nearest neighbor matching, the standard conditions for the bootstrap are not satisfied, leading the bootstrap variance to diverge from the actual variance. Simulations  confirm the difference between actual and nominal coverage rates for bootstrap confidence intervals predicted by the theoretical calculations. To our knowledge, this is the first example of a root-N consistent and asymptotically normal estimator for which the bootstrap fails to work.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0325.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Approximately Normal Tests for Equal Predictive Accuracy in Nested Models</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0326</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Clark</surname>
          <given-names>Todd</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Forecast evaluation often compares a parsimonious null model to a larger model that nests the null model.  Under the null that the parsimonious model generates the data, the larger model introduces noise into its forecasts by estimating parameters whose population values are zero.  We observe that the mean squared prediction error (MSPE) from the parsimonious model is therefore expected to be smaller than that of the larger model.  We describe how to adjust MSPEs to account for this noise.  We propose applying standard methods (West (1996)) to test whether the adjusted mean squared error difference is zero.  We refer to nonstandard limiting distributions derived in Clark and McCracken (2001, 2005a) to argue that use of standard normal critical values will yield actual sizes close to, but a little less than, nominal size.  Simulation evidence supports our recommended procedure.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0326.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Robust Inference with Multi-way Clustering</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0327</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cameron</surname>
          <given-names>A. Colin</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gelbach</surname>
          <given-names>Jonah</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Miller</surname>
          <given-names>Douglas L</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In this paper we propose a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM, that provides cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions.  Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering.  The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to two studies in the empirical public/labor literature where two-way clustering is present.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0327.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Non-response in the American Time Use Survey:  Who Is Missing from the Data and How Much Does It Matter?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0328</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abraham</surname>
          <given-names>Katharine G</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Maitland</surname>
          <given-names>Aaron</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bianchi</surname>
          <given-names>Suzanne M</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Children and Families</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper examines non-response in a large government survey.  The response rate for the American Time Use Survey (ATUS) has been below 60 percent for the first two years of its existence, raising questions about whether the results can be generalized to the target population.  The paper begins with an analysis of the types of non-response encountered in the ATUS.  We find that non-contact accounts for roughly 60 percent of ATUS non-response, with refusals accounting for roughly 40 percent. Next, we examine two hypotheses about the causes of this non-response.  We find little support for the hypothesis that busy people are less likely to respond to the ATUS, but considerable support for the hypothesis that people who are weakly integrated into their communities are less likely to respond, mostly because they are less likely to be contacted.  Finally, we compare aggregate estimates of time use calculated using the ATUS base weights without any adjustment for non-response to estimates calculated using the ATUS final weights with a non-response adjustment and to estimates calculated using weights that incorporate our own non-response adjustments based on a propensity model.  While there are some modest differences, the three sets of estimates are broadly similar.  The paper ends with a discussion of survey design features, their effect on the types and level of non-response, and the tradeoffs associated with different design choices.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0328.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Researcher Incentives and Empirical Methods</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0329</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Glaeser</surname>
          <given-names>Edward L</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Economists are quick to assume opportunistic behavior in almost every walk of life other than our own.  Our empirical methods are based on assumptions of human behavior that would not pass muster in any of our models.  The solution to this problem is not to expect a mass renunciation of data mining, selective data cleaning or opportunistic methodology selection, but rather to follow Leamer's lead in designing and using techniques that anticipate the behavior of optimizing researchers.  In this essay, I make ten points about a more economic approach to empirical methods and suggest paths for methodological progress.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0329.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Moving the Goalposts: Addressing Limited Overlap in the Estimation of Average Treatment Effects by Changing the Estimand</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0330</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Crump</surname>
          <given-names>Richard K</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hotz</surname>
          <given-names>V. Joseph</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Mitnik</surname>
          <given-names>Oscar</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Estimation of average treatment effects under unconfoundedness or exogenous treatment assignment is often hampered by lack of overlap in the covariate distributions. This lack of overlap can lead to imprecise estimates and can make commonly used estimators sensitive to the choice of specification. In such cases researchers have often used informal methods for trimming the sample. In this paper we develop a systematic approach to addressing such lack of overlap. We characterize optimal subsamples for which the average treatment effect can be estimated most precisely, as well as optimally weighted average treatment effects. Under some conditions the optimal selection rules depend solely on the propensity score. For a wide range of distributions a good approximation to the optimal rule is provided by the simple selection rule to drop all units with estimated propensity scores outside the range [0.1,0.9].</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0330.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Vector Multiplicative Error Models:  Representation and Inference</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0331</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cipollini</surname>
          <given-names>Fabrizio</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Engle</surname>
          <given-names>Robert F</given-names>
          <suffix>III</suffix>
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gallo</surname>
          <given-names>Giampiero M</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>11</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Asset Pricing</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>The Multiplicative Error Model introduced by Engle (2002) for positive valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support.  In this paper we propose a multivariate extension of such a model, by taking into consideration the possibility that the vector innovation process may be contemporaneously correlated.  The estimation procedure is hindered by the lack of probability density functions for multivariate positive valued random variables.  We suggest the use of copula functions and of estimating equations to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes.  Empirical applications on volatility indicators are used to illustrate the gains over the equation by equation procedure.</p>
</abstract>
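    <!-- Editor's illustration (not part of the NBER record): a minimal univariate sketch of the
         multiplicative error structure the abstract builds on (Engle 2002), simulated in Python.
         The conditionally autoregressive scale factor follows a GARCH-style recursion and the
         innovation is a positive, unit-mean draw. The multivariate copula-based estimator proposed
         in the paper is not reproduced here; parameter values are illustrative.

         import numpy as np

         rng = np.random.default_rng(0)
         omega, alpha, beta, T = 0.1, 0.2, 0.7, 1000

         mu = np.empty(T)                       # conditional scale factor
         x = np.empty(T)                        # observed positive-valued process
         mu[0] = omega / (1.0 - alpha - beta)   # start at the unconditional mean
         x[0] = mu[0]
         for t in range(1, T):
             mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
             eps = rng.gamma(shape=4.0, scale=0.25)   # positive innovation with mean one
             x[t] = mu[t] * eps                       # x_t = mu_t * eps_t
    -->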
    <self-uri xlink:href="http://www.nber.org/papers/t0331.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>DSGE Models in a Data-Rich Environment</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0332</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Boivin</surname>
          <given-names>Jean</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Giannoni</surname>
          <given-names>Marc P</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Monetary Economics</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Standard practice for the estimation of dynamic stochastic general equilibrium (DSGE) models maintains the assumption that economic variables are properly measured by a single indicator, and that all relevant information for the estimation is summarized by a small number of data series. However, recent empirical research on factor models has shown that information contained in large data sets is relevant for the evolution of important macroeconomic series. This suggests that conventional model estimates and inference based on estimated DSGE models might be distorted. In this paper, we propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a data-rich environment. This framework provides an interpretation of all information contained in a large data set, and in particular of the latent factors, through the lenses of a DSGE model. The estimation involves Markov-Chain Monte-Carlo (MCMC) methods. We apply this estimation approach to a state-of-the-art DSGE monetary model. We find evidence of imperfect measurement of the model's theoretical concepts, in particular for inflation. We show that exploiting more information is important for accurate estimation of the model's concepts and shocks, and that it implies different conclusions about key structural parameters and the sources of economic fluctuations.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0332.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Using Randomization in Development Economics Research: A Toolkit</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0333</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Duflo</surname>
          <given-names>Esther</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Glennerster</surname>
          <given-names>Rachel</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kremer</surname>
          <given-names>Michael</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>12</month>
       <year>2006</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Children and Families</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper is a practical guide (a toolkit) for researchers, students and practitioners wishing to introduce randomization as part of a research design in the field. It first covers the rationale for the use of randomization, as a solution to selection bias and a partial solution to publication biases. Second, it discusses various ways in which randomization can be practically introduced in field settings. Third, it discusses design issues such as sample size requirements, stratification, level of randomization and data collection methods. Fourth, it discusses how to analyze data from randomized evaluations when there are departures from the basic framework. It reviews in particular how to handle imperfect compliance and externalities. Finally, it discusses some of the issues involved in drawing general conclusions from randomized evaluations, including the necessary use of theory as a guide when designing evaluations and interpreting results.</p>
</abstract>
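    <!-- Editor's illustration (not part of the NBER record): a minimal Python sketch of
         stratified random assignment, one of the design choices discussed in the abstract above.
         The data frame and the stratum column name ("school") are hypothetical.

         import numpy as np
         import pandas as pd

         def assign_within_strata(df: pd.DataFrame, stratum_col: str,
                                  p_treat: float = 0.5, seed: int = 0) -> pd.Series:
             """Randomize treatment within each stratum so the treated share is balanced."""
             rng = np.random.default_rng(seed)
             assignment = pd.Series(0, index=df.index)
             for _, idx in df.groupby(stratum_col).groups.items():
                 idx = list(idx)
                 rng.shuffle(idx)
                 n_treat = int(round(p_treat * len(idx)))
                 assignment.loc[idx[:n_treat]] = 1
             return assignment

         # Hypothetical usage: df["treat"] = assign_within_strata(df, "school")
    -->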
    <self-uri xlink:href="http://www.nber.org/papers/t0333.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0334</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>McCrary</surname>
          <given-names>Justin</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Standard sufficient conditions for identification in the regression discontinuity design are continuity of the conditional expectation of counterfactual outcomes in the running variable.  These continuity assumptions may not be plausible if agents are able to manipulate the running variable.  This paper develops a test of manipulation related to continuity of the running variable density function.  The methodology is applied to popular elections to the House of Representatives, where sorting is neither expected nor found, and to roll-call voting in the House, where sorting is both expected and found.</p>
</abstract>
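    <!-- Editor's illustration (not part of the NBER record): a deliberately crude Python check in
         the spirit of the density test described above, comparing histogram-based density
         estimates just left and right of the cutoff. The paper's actual test smooths a finely
         binned histogram with local linear regressions on each side; that refinement is omitted here.

         import numpy as np

         def log_density_jump(running, cutoff, bin_width):
             """Log difference of density estimates in the bins just right and left of the cutoff."""
             r = np.asarray(running, dtype=float)
             left = np.mean((r >= cutoff - bin_width) & (r < cutoff)) / bin_width
             right = np.mean((r >= cutoff) & (r < cutoff + bin_width)) / bin_width
             return np.log(right) - np.log(left)   # near zero if the density is continuous at the cutoff
    -->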
    <self-uri xlink:href="http://www.nber.org/papers/t0334.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California's Tobacco Control Program</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0335</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Abadie</surname>
          <given-names>Alberto</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Diamond</surname>
          <given-names>Alexis</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Hainmueller</surname>
          <given-names>Jens</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>01</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Building on an idea in Abadie and Gardeazabal (2003), this article investigates the application of synthetic control methods to comparative case studies. We discuss the advantages of these methods and apply them to study the effects of Proposition 99, a large-scale tobacco control program that California implemented in 1988. We demonstrate that following Proposition 99 tobacco consumption fell markedly in California relative to a comparable synthetic control region. We estimate that by the year 2000 annual per-capita cigarette sales in California were about 26 packs lower than what they would have been in the absence of Proposition 99. Given that many policy interventions and events of interest in social sciences take place at an aggregate level (countries, regions, cities, etc.) and affect a small number of aggregate units, the potential applicability of synthetic control methods to comparative case studies is very large, especially in situations where traditional regression methods are not appropriate. The methods proposed in this article produce informative inference regardless of the number of available comparison units, the number of available time periods, and whether the data are individual (micro) or aggregate (macro). Software to compute the estimators proposed in this article is available at the authors' web-pages.</p>
</abstract>
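    <!-- Editor's illustration (not part of the NBER record): a simplified Python sketch of the
         weight choice behind synthetic control methods, i.e. nonnegative weights summing to one
         that make the weighted controls match the treated unit's pre-treatment characteristics.
         The full method also weights characteristics by their predictive importance (a V matrix),
         which is omitted here; function and argument names are hypothetical.

         import numpy as np
         from scipy.optimize import minimize

         def synth_weights(x_treated: np.ndarray, X_controls: np.ndarray) -> np.ndarray:
             """x_treated: (k,) characteristics of the treated unit.
             X_controls: (k, J) characteristics, one column per control unit."""
             J = X_controls.shape[1]

             def objective(w):
                 diff = x_treated - X_controls @ w
                 return float(diff @ diff)

             res = minimize(objective, np.full(J, 1.0 / J),
                            bounds=[(0.0, 1.0)] * J,
                            constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
             return res.x
    -->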
    <self-uri xlink:href="http://www.nber.org/papers/t0335.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Internal Increasing Returns to Scale and Economic Growth</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0336</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>List</surname>
          <given-names>John A</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Zhou</surname>
          <given-names>Haiwen</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>03</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Economic Fluctuations and Growth</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This study develops a model of endogenous growth based on increasing returns due to firms' technology choices.  Particular attention is paid to the implications of these choices, combined with the substitution of capital for labor, for economic growth in a general equilibrium model in which the R&amp;D sector produces machines to be used by the sector producing final goods.  We show that incorporating oligopolistic competition in the sector producing final goods into a general equilibrium model with endogenous technology choice is tractable, and we explore the equilibrium path analytically.  The model illustrates a novel manner in which sustained per capita growth of consumption can be achieved: through continuous adoption of new technologies featuring the substitution between capital and labor.  Further insights of the model are that during the growth process, the size of firms producing final goods increases over time, the real interest rate is constant, and the real wage rate increases over time.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0336.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Regression Discontinuity Designs: A Guide to Practice</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0337</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Imbens</surname>
          <given-names>Guido</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lemieux</surname>
          <given-names>Thomas</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>04</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>In Regression Discontinuity (RD) designs for evaluating causal effects of interventions, assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold. These designs were first introduced in the evaluation literature by Thistlethwaite and Campbell (1960). With the exception of a few unpublished theoretical papers, these methods did not attract much attention in the economics literature until recently. Starting in the late 1990s, there have been a large number of studies in economics applying and extending RD methods. In this paper we review some of the practical and theoretical issues involved in the implementation of RD methods.</p>
</abstract>
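    <!-- Editor's illustration (not part of the NBER record): a minimal Python sketch of a sharp
         RD point estimate, fitting separate linear regressions on each side of the cutoff within
         a bandwidth and differencing the fitted values at the cutoff. A uniform kernel is used for
         simplicity; applied work typically prefers a triangular kernel and data-driven bandwidths,
         as the paper discusses.

         import numpy as np

         def rd_estimate(y, x, cutoff, bandwidth):
             y = np.asarray(y, dtype=float)
             x = np.asarray(x, dtype=float)

             def fit_at_cutoff(mask):
                 xs, ys = x[mask] - cutoff, y[mask]
                 X = np.column_stack([np.ones_like(xs), xs])   # OLS of y on (1, x - cutoff)
                 coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
                 return coef[0]                                # intercept = fitted value at the cutoff

             above = fit_at_cutoff((x >= cutoff) & (x <= cutoff + bandwidth))
             below = fit_at_cutoff((x < cutoff) & (x >= cutoff - bandwidth))
             return above - below
    -->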
    <self-uri xlink:href="http://www.nber.org/papers/t0337.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Instrumental Variables Estimation of Heteroskedastic Linear Models Using All Lags of Instruments</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0338</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>West</surname>
          <given-names>Kenneth D</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Wong</surname>
          <given-names>Ka-fu</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Anatolyev</surname>
          <given-names>Stanislav</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>05</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We propose and evaluate a technique for instrumental variables estimation of linear models with conditional heteroskedasticity.  The technique uses approximating parametric models for the projection of right hand side variables onto the instrument space, and for conditional heteroskedasticity and serial correlation of the disturbance.  Use of parametric models allows one to exploit information in all lags of instruments, unconstrained by degrees of freedom limitations.  Analytical calculations and simulations indicate that there sometimes are large asymptotic and finite sample efficiency gains relative to conventional estimators (Hansen (1982)), and modest gains or losses depending on data generating process and sample size relative to quasi-maximum likelihood.  These results are robust to minor misspecification of the parametric models used by our estimator.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0338.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Unconditional Quantile Regressions</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0339</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Firpo</surname>
          <given-names>Sergio</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Fortin</surname>
          <given-names>Nicole</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Lemieux</surname>
          <given-names>Thomas</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Labor Studies</meta-value>
		       </custom-meta>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We propose a new regression method to estimate the impact of explanatory variables on quantiles of the unconditional (marginal) distribution of an outcome variable. The proposed method consists of running a regression of the (recentered) influence function (RIF) of the unconditional quantile on the explanatory variables.  The influence function is a widely used tool in robust estimation that can easily be computed for each quantile of interest.  We show how standard partial effects, as well as policy effects, can be estimated using our regression approach.  We propose three different regression estimators based on a standard OLS regression (RIF-OLS), a logit regression (RIF-Logit), and a nonparametric logit regression (RIF-NP).  We also discuss how our approach can be generalized to other distributional statistics besides quantiles.</p>
</abstract>
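    <!-- Editor's illustration (not part of the NBER record): a minimal Python sketch of the
         recentered influence function of a quantile, the quantity regressed on covariates in the
         RIF-OLS approach described above: RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f(q_tau).
         The density at the quantile is estimated with a Gaussian kernel and Silverman's rule of
         thumb; these implementation choices are the editor's, not the authors'.

         import numpy as np

         def rif_quantile(y, tau, bandwidth=None):
             y = np.asarray(y, dtype=float)
             q = np.quantile(y, tau)
             n = y.size
             h = bandwidth if bandwidth is not None else 1.06 * y.std() * n ** (-0.2)
             f_q = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2.0 * np.pi))
             return q + (tau - (y <= q).astype(float)) / f_q

         # RIF-OLS: regress rif_quantile(y, tau) on the explanatory variables by ordinary least squares.
    -->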
    <self-uri xlink:href="http://www.nber.org/papers/t0339.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>The Identification and Economic Content of Ordered Choice Models with Stochastic Thresholds</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0340</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cunha</surname>
          <given-names>Flavio</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Heckman</surname>
          <given-names>James J</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Navarro</surname>
          <given-names>Salvador</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>07</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper extends the widely used ordered choice model by introducing stochastic thresholds and interval-specific outcomes. The model can be interpreted as a generalization of the GAFT (MPH) framework for discrete duration data that jointly models durations and outcomes associated with different stopping times. We establish conditions for nonparametric identification. We interpret the ordered choice model as a special case of a general discrete choice model and as a special case of a dynamic discrete choice model.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0340.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Deterministic and Stochastic Prisoner's Dilemma Games: Experiments in Interdependent Security</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0341</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kunreuther</surname>
          <given-names>Howard</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Silvasi</surname>
          <given-names>Gabriel</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bradlow</surname>
          <given-names>Eric T</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Small</surname>
          <given-names>Dylan</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>08</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>This paper examines experiments on interdependent security prisoner's dilemma games with repeated play.  By utilizing a Bayesian hierarchical model, we examine how subjects make investment decisions as a function of their previous experience and their treatment condition.  Our main findings are that individuals have differing underlying propensities to invest that vary across time and are affected both by the stochastic nature of the game and, even more so, by an individual's ability to learn about his or her counterpart's choices.  Implications for individual decisions and the likely play of a person's counterpart are discussed in detail.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0341.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Rank-1/2: A Simple Way to Improve the OLS Estimation of Tail Exponents</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0342</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gabaix</surname>
          <given-names>Xavier</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Ibragimov</surname>
          <given-names>Rustam</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Despite the availability of more sophisticated methods, a popular way to estimate a Pareto exponent is still to run an OLS regression: log(Rank)=a-b log(Size), and take b as an estimate of the Pareto exponent. The reason for this popularity is arguably the simplicity and robustness of this method. Unfortunately, this procedure is strongly biased in small samples. We provide a simple practical remedy for this bias, and propose that, if one wants to use an OLS regression, one should use the Rank-1/2, and run log(Rank-1/2)=a-b log(Size). The shift of 1/2 is optimal, and reduces the bias to leading order. The standard error on the Pareto exponent zeta is not the OLS standard error, but is asymptotically (2/n)^(1/2) zeta. Numerical results demonstrate the advantage of the proposed approach over the standard OLS estimation procedures and indicate that it performs well under dependent heavy-tailed processes exhibiting deviations from power laws. The estimation procedures considered are illustrated using an empirical application to Zipf's law for the U.S. city size distribution.</p>
</abstract>
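    <!-- Editor's illustration (not part of the NBER record): a direct Python transcription of the
         proposal in the abstract above, regressing log(Rank - 1/2) on log(Size) and using the
         asymptotic standard error (2/n)^(1/2) * zeta rather than the OLS standard error.

         import numpy as np

         def rank_half_pareto(size):
             s = np.sort(np.asarray(size, dtype=float))[::-1]   # largest observation gets rank 1
             n = s.size
             ranks = np.arange(1, n + 1)
             X = np.column_stack([np.ones(n), np.log(s)])       # regress log(rank - 1/2) on log(size)
             coef, *_ = np.linalg.lstsq(X, np.log(ranks - 0.5), rcond=None)
             zeta = -coef[1]                                    # slope is minus the Pareto exponent
             se = np.sqrt(2.0 / n) * zeta
             return zeta, se
    -->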
    <self-uri xlink:href="http://www.nber.org/papers/t0342.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Do Instrumental Variables Belong in Propensity Scores?</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0343</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Bhattacharya</surname>
          <given-names>Jay</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Vogt</surname>
          <given-names>William B</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Propensity score matching is a popular way to make causal inferences about a binary treatment in observational data.  The validity of these methods depends on which variables are used to predict the propensity score.  We ask: "Absent strong ignorability, what would be the effect of including an instrumental variable in the predictor set of a propensity score matching estimator?"  In the case of linear adjustment, using an instrumental variable as a predictor variable for the propensity score yields greater inconsistency than the naive estimator.  This additional inconsistency is increasing in the predictive power of the instrument.  In the case of stratification, with a strong instrument, propensity score matching yields greater inconsistency than the naive estimator.  Since the propensity score matching estimator with the instrument in the predictor set is both more biased and more variable than the naive estimator, it is conceivable that the confidence intervals for the matching estimator would have greater coverage rates.  In a Monte Carlo simulation, we show that this need not be the case.  Our results are further illustrated with two empirical examples: one, the Tennessee STAR experiment, with a strong instrument and the other, the Connors' (1996) Swan-Ganz catheterization dataset, with a weak instrument.</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0343.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Bootstrap-Based Improvements for Inference with Clustered Errors</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0344</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Cameron</surname>
          <given-names>A. Colin</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Gelbach</surname>
          <given-names>Jonah</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Miller</surname>
          <given-names>Douglas L</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>09</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>Researchers have increasingly realized the need to account for within-group dependence in estimating standard errors of regression parameter estimates.  The usual solution is to calculate cluster-robust standard errors that permit heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. Standard asymptotic tests can over-reject, however, with few (5-30) clusters. We investigate inference using cluster bootstrap-t procedures that provide asymptotic refinement.  These procedures are evaluated using Monte Carlo simulations, including the example of Bertrand, Duflo and Mullainathan (2004). Rejection rates of ten percent using standard methods can be reduced to the nominal size of five percent using our methods.</p>
</abstract>
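    <!-- Editor's illustration (not part of the NBER record): a Python sketch of a pairs cluster
         bootstrap-t for one OLS coefficient, one bootstrap-t procedure of the kind the abstract
         describes (the paper also studies a wild cluster bootstrap-t, not shown). Clusters are
         resampled with replacement, relabeled so repeated clusters stay distinct, and the centered
         t-statistics replace standard critical values. No small-sample correction is applied.

         import numpy as np

         def cluster_robust_ols(y, X, cl, k):
             """OLS coefficient k and its cluster-robust standard error."""
             beta, *_ = np.linalg.lstsq(X, y, rcond=None)
             u = y - X @ beta
             bread = np.linalg.inv(X.T @ X)
             meat = np.zeros((X.shape[1], X.shape[1]))
             for g in np.unique(cl):
                 score = X[cl == g].T @ u[cl == g]
                 meat += np.outer(score, score)
             V = bread @ meat @ bread
             return beta[k], np.sqrt(V[k, k])

         def pairs_cluster_bootstrap_t(y, X, cl, k, n_boot=999, seed=0):
             rng = np.random.default_rng(seed)
             b_hat, se_hat = cluster_robust_ols(y, X, cl, k)
             clusters = np.unique(cl)
             t_stats = []
             for _ in range(n_boot):
                 draw = rng.choice(clusters, size=clusters.size, replace=True)
                 rows = [np.flatnonzero(cl == g) for g in draw]
                 idx = np.concatenate(rows)
                 new_cl = np.concatenate([np.full(r.size, j) for j, r in enumerate(rows)])
                 b_b, se_b = cluster_robust_ols(y[idx], X[idx], new_cl, k)
                 t_stats.append((b_b - b_hat) / se_b)
             return b_hat, se_hat, np.asarray(t_stats)   # compare the sample t-statistic to quantiles of t_stats
    -->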
    <self-uri xlink:href="http://www.nber.org/papers/t0344.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>

  <article>
    <front>
      <publisher>
        <publisher-name>National Bureau of Economic Research</publisher-name>
        <publisher-loc>Cambridge, Mass., USA</publisher-loc>
      </publisher>
      <article-meta>
        <title-group>
          <article-title>Computing Stochastic Dynamic Economic Models with a Large Number of State Variables: A Description and Application of a Smolyak-Collocation Method</article-title>
        </title-group>
        <article-id pub-id-type="publisher-id">t0345</article-id>                
        <contrib-group>
    
      <contrib contrib-type="author">
        <name>
          <surname>Malin</surname>
          <given-names>Benjamin</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Krueger</surname>
          <given-names>Dirk</given-names>
          
        </name>
      </contrib>
    
      <contrib contrib-type="author">
        <name>
          <surname>Kubler</surname>
          <given-names>Felix</given-names>
          
        </name>
      </contrib>
    </contrib-group>
    <pub-date pub-type="pub">
       <month>10</month>
       <year>2007</year>
    </pub-date>
    <custom-meta-wrap>
        <custom-meta>
		       <meta-name>NBER Program</meta-name>
		       <meta-value>Technical Working Papers</meta-value>
		       </custom-meta>
    </custom-meta-wrap>
    <abstract>
<p>We describe a sparse grid collocation algorithm to compute recursive solutions of dynamic economies with a sizable number of state variables. We show how powerful this method may be in applications by computing the nonlinear recursive solution of an international real business cycle model with a substantial number of countries, complete insurance markets and frictions that impede international capital flows. In this economy the aggregate state vector includes the distribution of world capital across different countries as well as the exogenous country-specific technology shocks. We use the algorithm to efficiently solve models with 2, 4, and 6 countries (i.e., up to 12 continuous state variables).</p>
</abstract>
    <self-uri xlink:href="http://www.nber.org/papers/t0345.pdf"></self-uri>
       </article-meta>
    </front>
    <article-type>unpublished</article-type>
  </article>
</articles>
