Anton C. Yang

Why Do We Need Calibration? Thoughts on Calibrating a CDE Demand System in a CGE Model

9/29/2020

Why do we need to calibrate? Before answering this question, it is worth revisiting the fundamentals of estimation and calibration. Last year I gave a guest lecture in Dr. Hertel's AgEc 618, and many of the thoughts I wrote in those slides, while they have since evolved, still seem valid. In structural (econometric) estimation, what we essentially do is extract the average information in the data, or observables; the key point of the subsequent calibration is then to replicate the data with an exact fit in the model.

The next question is how to produce the exact fit. Take the simpler CES case in a general equilibrium model as an example: the model has one substitution elasticity (sigma), one border cost parameter, and one distance elasticity of trade costs, all of which stay constant across the estimation and the subsequent calibration. In this case, the implementation difference between the econometric estimation and the calibrated exact fit is that the distribution parameters are often permitted to vary bilaterally in the calibration (see, e.g., Balistreri and Hillberry, Economic Inquiry, 2008), whereas in the econometric estimation each origin has a single distribution parameter that appears in the utility function of every buyer's (destination's) representative agent. Still within the CES family, but in the more complex case of the Melitz model, Balistreri, Hillberry and Rutherford (JIE, 2011) instead allow bilateral variation in fixed costs to generate the exact fit.
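To make the CES case concrete, here is a minimal sketch of that calibration step, under assumptions of my own: CES demand in each destination j over origins i, a common sigma held fixed from the estimation, and observed bilateral expenditure shares and delivered prices. The function names and the toy data are mine, not taken from any of the papers cited above. Since the CES expenditure share is s_ij = alpha_ij p_ij^(1-sigma) / sum_k alpha_kj p_kj^(1-sigma), freeing up the alpha_ij bilaterally lets us invert the share equations and reproduce the data exactly.

```python
import numpy as np

def calibrate_ces_alphas(shares, prices, sigma):
    """Back out bilateral CES distribution parameters alpha[i, j] that
    exactly replicate observed expenditure shares at observed prices.

    shares[i, j]: observed expenditure share of origin i in destination j
    prices[i, j]: delivered (trade-cost-inclusive) prices
    sigma:        common elasticity of substitution (held fixed)
    """
    # Invert the CES share equation: s_ij is proportional to
    # alpha_ij * p_ij**(1 - sigma), so alpha_ij = s_ij * p_ij**(sigma - 1).
    alphas = shares * prices ** (sigma - 1.0)
    # Distribution parameters are only identified up to a scale in each
    # destination, so normalize them to sum to one column by column.
    return alphas / alphas.sum(axis=0, keepdims=True)

def ces_shares(alphas, prices, sigma):
    """Model-implied expenditure shares, used to verify the exact fit."""
    weights = alphas * prices ** (1.0 - sigma)
    return weights / weights.sum(axis=0, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = rng.dirichlet(np.ones(4), size=3).T   # 4 origins x 3 destinations
    p = rng.uniform(0.5, 2.0, size=(4, 3))
    a = calibrate_ces_alphas(s, p, sigma=5.0)
    assert np.allclose(ces_shares(a, p, sigma=5.0), s)  # exact fit
```

The point of the sketch is simply that, with sigma pinned down by the estimation, the bilateral distribution parameters have exactly enough degrees of freedom to hit every observed share.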

One issue, before the CDE parameters had ever been directly estimated, was that the income elasticities of certain goods in developing countries did not change considerably as incomes grew, whereas empirically it must be true that income elasticities generally decline as incomes grow, to different degrees across goods. That issue supplied part of the motivation for calibration, but it does not appear to be a problem in my CDE estimates. Nevertheless, the point of calibrating the CDE is still to produce an exact fit that closely reflects the dataset, so broadly speaking, subsequent calibration does seem necessary.

The CDE-in-GTAP case is more complicated. Here there is one distribution parameter, one expansion parameter, and one partial substitution parameter per good, along with many other parameters in the GTAP model with CDE private household demand. The important question to ask is: which parameters should we believe to be common to all countries, and which parameters do we want to free up to produce an exact fit? Without thinking twice, I would say that, just as in the CES case, the distribution parameters should be freed up, while the expansion and substitution parameters are common to all countries. As in my responses in a group discussion with Dr. Hertel, this is not because we believe that in the real world each country lacks her own consumer preferences; it is simply that, from the given piece of data information and the ad hoc specification, the CDE model does not reveal regionally differentiated information at all, but rather predicts different demand paths and shares of consumption at each level of unit-cost prices. Put simply, the econometric exercise does not exploit variation in variables other than income and/or prices that would allow consumer preferences to be illustrated differently across regions.
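For reference, and to fix notation for the discussion below, the CDE system (Hanoch, 1975), in the form I understand GTAP to use, defines utility implicitly through an implicitly additive expenditure condition. The notation here (B_i for distribution, e_i for expansion, b_i for partial substitution parameters) is mine, chosen to match the discussion, and is not a reproduction of our paper's exact notation:

```latex
% CDE implicit expenditure condition: utility u is defined implicitly,
% given prices p_i and expenditure y, by
\sum_{i} B_i \, u^{e_i b_i} \left( \frac{p_i}{y} \right)^{b_i} = 1 ,
% and the expenditure shares then follow from the implicit-function theorem:
s_i = \frac{B_i \, b_i \, u^{e_i b_i} \, (p_i / y)^{b_i}}
           {\sum_{k} B_k \, b_k \, u^{e_k b_k} \, (p_k / y)^{b_k}} .
```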

However, this choice becomes more complicated when I relate the normalization strategy for the distribution parameters to the other parameters in the econometric estimation. In our CDE estimation paper, changing the value of either the distribution or the expansion parameters affects the other, as well as the seemingly "floating" utility levels. Through the design of the normalization procedure, we find that our estimation is robust if we 1) set the sum of the distribution parameters equal to the value on the right-hand side of the additivity equation (in our case we set this value to 1 for numerical stability), and 2) set the sum of the expansion parameters equal to the number of goods, which means that the expansion parameters, as exponents on utility, equal one on average across goods. We also find that these operations tend to subtly improve the scaling of the utility level. So, technically, the parameters that should be freed up might be both the distribution and the expansion parameters, in a way that satisfies the same optimization constraints as in the estimation.
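One direct way to impose those two adding-up restrictions during estimation is to reparameterize, so that any unconstrained parameter vector the optimizer proposes automatically maps into a constrained one. The sketch below is my own illustration of that idea, not the procedure from our paper; the names theta_B and theta_e are hypothetical unconstrained optimizer variables.

```python
import numpy as np

def constrained_params(theta_B, theta_e):
    """Map unconstrained optimizer variables to CDE parameters that
    satisfy the two adding-up (normalization) constraints:
      sum_i B_i = 1   and   sum_i e_i = n (the number of goods).
    """
    # Softmax keeps each B_i > 0 and forces sum_i B_i = 1.
    expB = np.exp(theta_B - theta_B.max())   # subtract max for stability
    B = expB / expB.sum()
    # Keep each e_i > 0, then rescale so they average to one across goods.
    e_pos = np.exp(theta_e)
    e = e_pos * (len(e_pos) / e_pos.sum())
    return B, e

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    B, e = constrained_params(rng.normal(size=5), rng.normal(size=5))
    assert np.isclose(B.sum(), 1.0) and np.isclose(e.sum(), 5.0)
```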
The issue is that, in general, we do not want the parameters we use to generate an exact fit to also enter any counterfactual analysis. However, the expansion parameters do enter the counterfactuals. One way to think about this problem is that, for each distribution parameter present in the utility of a destination market, there is a one-to-one mapping to an expansion parameter such that the solution to the partial substitution function is unique at each income-price pair. For this reason, we should seek appropriate values of the distribution parameters that produce the exact fit to the data while satisfying both adding-up constraints, for the distribution and the expansion parameters.
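To illustrate what "freeing up the distribution parameters to hit the data" can look like, here is a minimal sketch under two simplifying assumptions of mine: utility is normalized to one at the benchmark, and the expansion and substitution parameters (e_i, b_i) are held at their estimated values. Inverting the share equations given earlier then yields the B_i in closed form; this mirrors the usual CDE calibration logic but is not lifted from any particular paper or from the GTAP code.

```python
import numpy as np

def calibrate_cde_B(shares, prices, expenditure, b):
    """Distribution parameters B_i that exactly reproduce benchmark
    expenditure shares, with utility normalized to u = 1 at the benchmark
    and substitution parameters b_i held fixed.

    Derivation: with u = 1, let z_i = B_i * (p_i / y)**b_i. The implicit
    expenditure condition requires sum_i z_i = 1, and the share equation
    makes s_i proportional to b_i * z_i, so
    z_i = (s_i / b_i) / sum_k (s_k / b_k).
    """
    z = (shares / b) / (shares / b).sum()
    return z * (expenditure / prices) ** b

def cde_shares_at_benchmark(B, prices, expenditure, b):
    """Model shares at the benchmark point (u = 1), to verify the fit."""
    w = B * b * (prices / expenditure) ** b
    return w / w.sum()

if __name__ == "__main__":
    s = np.array([0.2, 0.5, 0.3])   # benchmark budget shares
    p = np.array([1.0, 1.2, 0.8])   # benchmark prices
    y = 100.0                        # benchmark expenditure
    b = np.array([0.4, 0.6, 0.7])   # substitution parameters (held fixed)
    B = calibrate_cde_B(s, p, y, b)
    assert np.allclose(cde_shares_at_benchmark(B, p, y, b), s)
    assert np.isclose((B * (p / y) ** b).sum(), 1.0)  # implicit condition
```

Note that this simple inversion pins down utility (u = 1) rather than imposing sum_i B_i = 1; making the calibrated B_i also satisfy the adding-up constraint, as the paragraph above calls for, would require solving jointly for the benchmark utility level.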

Some Thoughts about the Schools of Linearization and Levels in Contemporary CGE Models

7/3/2020

These days I have been in frequent conversation with people who have good insight into the schools of levels and linearization in contemporary CGE models, and some of the following thoughts are derived from our discussions. These are things that, at least in my view, have created communication barriers between the schools of CGE models (CGEMs) and new quantitative trade models (NQTMs). One thing that initially drew my attention was an argument that linearization is indifferent to implicit versus explicit preferences 'slash' technologies.

But perhaps before we even think about theories and computational strategies, we may want to ask how much weaker computers were if we all step into a time machine and go back 20-30 years. At that time, an empirical solution in levels might have been much more challenging than it is today, and a mathematical algorithm that crawls toward the solution with a series of linear approximations was probably much more robust. With advances and innovations in computing, however, all of that has become ancient history. This probably means that, for a less extensive collection of CGE models, there is no need for approximations, and exact hat calculus would be a suitable candidate for calculating counterfactuals. It then makes sense that the majority of NQTM modelers do not use the GEMPACK software, which deals with the errors associated with linearization by iterating the linearization over and over until a new solution in levels is achieved. Pioneers of CGE modeling developed this strategy to solve large general equilibrium trade models (e.g., the huge contributions to CGE due to Johansen, Dixon, and Hertel).

If the schools of CGEMs and NQTMs ever found a way to communicate, the CGE modelers would likely explain that they actually obtain the levels solution with this so-called iterative linearization procedure, and the NQTM modelers would undoubtedly ask why CGE modelers do it that way. The fact is probably that the linearized representation in GEMPACK does have its own advantages in solving large CGE models (e.g., see the IMPACT working paper), while some younger generations of CGE economists have been locked into this tradition. But similar to what the IMPACT working paper (a revised version of which is published in Economic Modelling) argues, with contemporary advances in computers and software we may have less need for approximations (e.g., Johansen matrix inversion) as levels solutions become inexpensive and more reliable.
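To see the difference concretely, consider a toy one-market model where the exact levels solution is available in closed form: demand q = A p^(-eps), supply q = B p^(eta), and a shock to the demand shifter A. The sketch below is my own construction, not drawn from GEMPACK or from the papers mentioned above; it compares a single Johansen-style linearized step against a multi-step Euler procedure that re-linearizes along the way, which is the sense in which iterated linearization recovers the levels solution.

```python
import numpy as np

# Toy equilibrium: demand q = A * p**(-eps), supply q = B * p**eta.
# Market clearing gives the exact levels solution p = (A / B)**(1 / (eps + eta)).
eps, eta = 4.0, 1.0
A0, B0 = 1.0, 1.0
shock = 2.0                      # double the demand shifter A

def exact_price(A, B):
    return (A / B) ** (1.0 / (eps + eta))

# Log-linearized ("hat") form of market clearing: p_hat = A_hat / (eps + eta),
# where x_hat denotes a proportional change dx / x.
def johansen_step(p, A, dA):
    return p * (1.0 + (dA / A) / (eps + eta))

# One-step Johansen solution: apply the full shock in a single linear step.
p_one_step = johansen_step(exact_price(A0, B0), A0, (shock - 1.0) * A0)

# Multi-step Euler: split the shock into n pieces and re-linearize at each
# intermediate point; as n grows this converges to the exact levels solution,
# which is what iterated linearization achieves.
def euler_solution(n_steps):
    p, A = exact_price(A0, B0), A0
    dA = (shock - 1.0) * A0 / n_steps
    for _ in range(n_steps):
        p = johansen_step(p, A, dA)
        A += dA
    return p

p_exact = exact_price(shock * A0, B0)
print(f"exact levels solution : {p_exact:.6f}")
print(f"1-step Johansen       : {p_one_step:.6f}")
for n in (2, 10, 100):
    print(f"{n:>3}-step Euler        : {euler_solution(n):.6f}")
```

For this large shock the one-step linear answer overshoots noticeably, while a modest number of Euler steps already lands very close to the closed-form levels solution, which is the practical content of the "iterate the linearization" strategy.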

    Anton C. Yang

    Bridging the gap between CGEMs and NQTMs
