Hello again. I'm Juan from DSOC. It's time for Part III of the Structural Estimation Series. In the last post I described the dynamic optimization problem of Harold Zurcher according to Rust (1987), and presented some important concepts such as the transition function, the value function and the structural parameters. Here I plan to achieve three things:
- Discuss how information that is unobservable to the researcher is typically included in the decision process model.
- Define two important functions that form the basis for the estimation of structural parameters of most DDC problems. Once you get these down, the rest becomes considerably easier.
- Show how to express both functions in code. This blog post is batteries-included. At the end, you can find a gist with some Python code implementation of the functions I will discuss here, showing how they work. You can open the gist in Google Colab and get your hands dirty with the code.
Since this post is a continuation of Part II, the numbering of the equations also picks up from where we left off last time (just in case you wonder why it starts at 5).
This is a busy post, so let's begin.
A Model With Unobservable Information
So far we have been assuming that we, as researchers, can observe all the information available to the agent. However, there is always some information affecting the choices of the agent that cannot be observed. Therefore, even if we could calculate the utility of all the alternatives, the agent might not choose what we believe is the maximizing option. Economists often assume that these deviations are random, and follow some probability distribution.
The two following assumptions are commonly found in most DDCs:
The utility function can be expressed in the following way:

$$U(x, d, \varepsilon) = u(x, d; \theta_u) + \varepsilon_d \tag{5}$$

Note that I'm dropping the time index for the sake of simplicity. Instead, whenever I refer to a value one time step into the future I will use an apostrophe, as in $x'$.
When stochastic components enter the decision process, the possibility arises that these random deviations affect the evolution of the state. For the purpose of simplification, it is common to assume that the stochastic deviations materialize as a result of the current state, vary by choice, and affect the evolution of the state only through their effect on the decision made by the agent. These errors are therefore serially uncorrelated, which simplifies things. The transition of the state and the deviations can therefore be stated as follows:

$$p(x', \varepsilon' \mid x, \varepsilon, d) = q(\varepsilon' \mid x'; \theta_\varepsilon)\, f(x' \mid x, d; \theta_f) \tag{6}$$

Here, $q$ is the distribution of errors, parameterized by $\theta_\varepsilon$. Note also that this distribution is the same for all agents at all points in time.
Discrete State With Finite Support
In the last post I only talked in very broad terms about the state. In this post, we will assume that $x$ is a discrete variable. Its evolution behaves as a Markov chain, meaning that the future value $x'$ is only determined by the value one period before, $x$. It has a very short memory. Markov chains can be described by a transition matrix (also called a stochastic matrix). The dimensions of the matrix represent all the possible states that the variable can take, and each row holds the probabilities of the variable evolving from a given state to every other state in one period. QuantEcon explains Markov chains very well, in case you need to review them.
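As a quick illustration of these two properties (rows that sum to one, and one-period memory), here is a minimal sketch in NumPy. The matrix below is made up for illustration; it is not Rust's actual mileage process:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4  # number of discrete states (a toy size, not Rust's 90)
# Each row gives the probabilities of moving from state x to every state x'.
# In this toy chain the state either stays put or moves up a step or two.
F = np.array([
    [0.3, 0.5, 0.2, 0.0],
    [0.0, 0.3, 0.5, 0.2],
    [0.0, 0.0, 0.3, 0.7],
    [0.0, 0.0, 0.0, 1.0],  # the last state is absorbing here
])
assert np.allclose(F.sum(axis=1), 1.0)  # rows are probability distributions

# Simulate the chain for a few periods: x' depends only on the current x.
x = 0
path = [x]
for _ in range(6):
    x = rng.choice(M, p=F[x])
    path.append(x)
print(path)
```

Because this particular matrix only allows the state to stay or move up, any simulated path is weakly increasing.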
$f(x' \mid x, d; \theta_f)$ in Equation (6) represents the transition probability given by the corresponding transition function.
The set of parameters that define the DDC is:
- $\theta_u$: the parameters of the utility function.
- $\theta_f$: the parameters of the transition function. In the discrete-state case, these are just the entries of the transition matrix itself.
- $\theta_\varepsilon$: the parameters of the distribution of random disturbances.

Let's define $\theta = (\theta_u, \theta_f, \theta_\varepsilon)$ as the set of all the parameters above.
The Building Blocks of DDC Algorithms
Using these assumptions, we can express the value function conditioned on choice $d$ at state $x$ in the following way:

$$v(d, x) = u(x, d; \theta_u) + \beta \sum_{x'} f(x' \mid x, d)\, \bar{V}(x') \tag{7}$$

Note that the value at a given state is $\bar{V}(x) = \mathbb{E}_{\varepsilon}\left[\max_{d \in D} \{v(d, x) + \varepsilon_d\}\right]$, which is the value of choosing the best option for the agent, whatever it is, at all times in the future.

Related to $\bar{V}$ is the policy function $\delta(x)$, which represents the agent's decision rule among all the choices in $D$.

Now consider the probability of an agent choosing a given option $d$ when it finds itself in state $x$:

$$P(d \mid x) = \Pr\left(v(d, x) + \varepsilon_d \geq v(j, x) + \varepsilon_j \quad \forall j \in D\right) \tag{8}$$

These probabilities are called Conditional Choice Probabilities (CCPs). They are an important building block, and we will come back to them in a second.
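To build some intuition before formalizing anything, here is a simulation sketch of what a CCP is: draw the random utility components many times and count how often each choice wins. All the numbers below are made up, and I use Type I Extreme Value draws only because they give a closed form to compare against (more on that distribution later):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy conditional values v(d, x) for |D| = 2 choices at a single state x.
v = np.array([1.0, 0.5])

# Draw the stochastic components and record how often each choice wins.
draws = rng.gumbel(size=(100_000, 2))  # Type I EV draws, as an example
choices = np.argmax(v + draws, axis=1)
ccp_sim = np.bincount(choices, minlength=2) / len(choices)

# With Type I EV errors, the CCPs have a closed form (the logit formula).
ccp_logit = np.exp(v) / np.exp(v).sum()
print(ccp_sim, ccp_logit)  # the two should be close
```

The simulated frequencies converge to the closed-form probabilities as the number of draws grows, which is exactly what Equation (8) describes.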
So far we have talked about the "value function", which takes a choice $d$ and a state $x$, is parameterized by some $\theta$, and returns a real value. Since the number of choices and the number of states are both discrete and finite, we can (in theory, with enough computer juice of course) calculate the value for each combination of state and choice. We can represent all these values using matrix notation.
Now a confession: I think that the matrix notation used in most papers is confusing. For example, Aguirregabiria and Mira (2002) represent the conditional choice probabilities as an $M|D| \times 1$ vector, with $M$ being the number of states and $|D|$ the size of the choice set... which means that they are a column vector which mixes the conditional choice probabilities of different choices... and if you need to obtain the conditional probabilities for a given choice you have to somehow remember the corresponding indices and extract them from that vector? Well that's a tall glass of NOPE right there.
I bet this small detail confuses many, even if it is the right notation for an academic paper.
This blog is about implementing algorithms in software, so I'll use a notation that fits that purpose better: the NumPy array notation! It goes like this:

An array is a three-dimensional thing with the shape $(|D|, M, K)$.

Here, $|D|$ is the cardinality of $D$, or the number of choices available to the agent. In the example of Harold Zurcher, he only has the options of replacing or not replacing the engine, so $|D| = 2$.

$M$ is the number of discrete states. For example, in the 1987 paper, John Rust divided the continuous number of miles into 90 categories. In this case, $M = 90$.

Finally, $K$ is the size of the remaining dimension, which depends on the array you're dealing with.
Having said that, let's define the following arrays:
- $F$: $(|D|, M, M)$. The transition matrices (an $M \times M$ matrix for each possible choice).
- $P$: $(|D|, M, 1)$. The Conditional Choice Probabilities, or the probability of choosing each alternative. The sum of the elements across the first dimension must add up to 1.
- $\tilde{V}$: $(|D|, M, 1)$. The discounted expected value of choosing action $i$ for each state $x$.
- $U$: $(|D|, M, 1)$. The utility level corresponding to action $i$ for each possible state $x$.
- $E$: $(|D|, M, 1)$. The expected value of the random deviation given that $i$ is the optimal choice when the agent is in state $x$.
- $v$: $(|D|, M, 1)$. Represents the expected discounted value of each possible combination of choice and state.
- $\beta$: A scalar in $(0, 1)$: the discount factor.
Note that both $P$ and $E$ depend on the distribution of the random deviations. We will see later an example of these arrays when we make a parametric assumption about that distribution.
So basically, you're just stacking arrays side by side, and the meaning of each dimension is clear and easy to get.
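As a sketch, this is how the arrays above could be allocated in NumPy. The sizes and values are made up; only the shapes and the normalizations matter here:

```python
import numpy as np

rng = np.random.default_rng(2)

D, M = 2, 5  # toy sizes: 2 choices and 5 states (not Rust's 90)

# F: (D, M, M) — one M x M transition matrix per choice, rows sum to 1.
F = rng.random((D, M, M))
F /= F.sum(axis=2, keepdims=True)

# U: (D, M, 1) — the utility of each choice at each state.
U = rng.normal(size=(D, M, 1))

# P: (D, M, 1) — CCPs; they must sum to 1 across the choice dimension.
P = rng.random((D, M, 1))
P /= P.sum(axis=0, keepdims=True)

# beta: a scalar in (0, 1).
beta = 0.95

print(F.shape, U.shape, P.shape)  # (2, 5, 5) (2, 5, 1) (2, 5, 1)
```

Indexing the first dimension, as in `F[i]` or `P[i]`, pulls out everything related to choice `i`, which is exactly the convenience the stacked-vector notation lacks.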
The conditional value array can now be expressed as:

$$v = U + \tilde{V} = U + \beta\, F \bar{V} \tag{9}$$

where $\bar{V}$ is the $(M, 1)$ array holding $\bar{V}(x)$ for every state.

Remember that the first dimension of $v$ is the choices dimension. Summing across this dimension, weighting each choice by its CCP, gives us the expected discounted value:

$$\bar{V} = \sum_{i \in D} P_i \odot (v_i + E_i) = \sum_{i \in D} P_i \odot (U_i + E_i + \beta F_i \bar{V})$$

Here, $\odot$ is the Hadamard (pairwise) product.

$\bar{V}$ appears on both sides of the equation. Some refactoring reduces it to:

$$\bar{V} = \left(I - \beta \sum_{i \in D} \operatorname{diag}(P_i)\, F_i\right)^{-1} \sum_{i \in D} P_i \odot (U_i + E_i) \tag{10}$$
This equation is one of the most important concepts in the literature on DDCs. Let's call it $\Phi$, or the Phi map. It takes values from the probability space and maps them into the value space (the reals which are consistent with the solution to the dynamic optimization problem given the current parameter values, which are not necessarily THE structural parameter values we're searching for).
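In the NumPy notation above, the Phi map boils down to a single linear solve. The function name `phi_map` and all the toy inputs below are my own hypothetical choices, a sketch rather than a reference implementation:

```python
import numpy as np

def phi_map(P, U, E, F, beta):
    """Sketch of the Phi map: CCPs -> values.

    P, U, E have shape (D, M, 1); F has shape (D, M, M); beta is a scalar.
    Solves the linear system
        V_bar = sum_i P_i * (U_i + E_i) + beta * sum_i P_i * (F_i @ V_bar)
    and returns V_bar with shape (M, 1).
    """
    M = F.shape[1]
    # (P * F) scales row x of each F_i by P_i(x), i.e. diag(P_i) @ F_i.
    A = np.eye(M) - beta * (P * F).sum(axis=0)
    b = (P * (U + E)).sum(axis=0)
    return np.linalg.solve(A, b)

# Toy inputs just to exercise the function (all numbers are made up).
rng = np.random.default_rng(3)
D, M, beta = 2, 4, 0.9
F = rng.random((D, M, M))
F /= F.sum(axis=2, keepdims=True)
U = rng.normal(size=(D, M, 1))
E = rng.random((D, M, 1))
P = rng.random((D, M, 1))
P /= P.sum(axis=0, keepdims=True)

V_bar = phi_map(P, U, E, F, beta)
print(V_bar.shape)  # (4, 1)
```

Note the `np.linalg.solve` call in place of an explicit matrix inverse, which is both faster and more numerically stable.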
We can also obtain the inverse map: a function that takes elements from the value space and maps them to the probability space. For the kind of simple models we're going to focus on, we need to make assumptions about the distribution of the stochastic utility shocks. A distribution that is commonly used in the literature is the Type I Extreme Value distribution (surprise!). This distribution yields the following values:

$$P_i = \frac{\exp(v_i)}{\sum_{j \in D} \exp(v_j)} \tag{11}$$

Remember that $v$ is an array with dimensions $(|D|, M, 1)$, so when we use $\sum_{j \in D}$, we're summing across the first dimension. In other words, this is nothing else but the typical logit formula everybody knows.

Since we made a parametric assumption about $q$, we must also use the corresponding value for $E$, which for the Type I Extreme Value distribution is $E_i = \gamma - \ln P_i$, with $\gamma$ being Euler's constant.

Let's call Equation (11) $\Lambda$, or the Lambda map.
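Here is a sketch of the Lambda map and of the expected-error term under the Type I Extreme Value assumption. The function names are mine, and the inputs are toy values:

```python
import numpy as np

EULER_GAMMA = np.euler_gamma  # Euler's constant, ~0.5772

def lambda_map(V_bar, U, F, beta):
    """Sketch of the Lambda map: values -> CCPs, under Type I EV errors.

    Builds the conditional values v = U + beta * F @ V_bar (shape (D, M, 1))
    and applies the logit formula across the choice dimension.
    """
    v = U + beta * (F @ V_bar)
    v = v - v.max(axis=0, keepdims=True)  # guard against overflow in exp
    expv = np.exp(v)
    return expv / expv.sum(axis=0, keepdims=True)

def expected_epsilon(P):
    """E_i = gamma - ln(P_i): expected shock given that i is optimal."""
    return EULER_GAMMA - np.log(P)

# Toy inputs (made up) just to exercise the functions.
rng = np.random.default_rng(5)
D, M, beta = 2, 4, 0.9
F = rng.random((D, M, M))
F /= F.sum(axis=2, keepdims=True)
U = rng.normal(size=(D, M, 1))
V_bar = rng.normal(size=(M, 1))

P = lambda_map(V_bar, U, F, beta)
print(P.sum(axis=0))  # each state's CCPs add up to 1
```

Keep in mind that `expected_epsilon` is only valid under the Type I EV assumption; a different distribution for $q$ would require a different formula.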
Note that $\Phi$ and $\Lambda$ hold for any value of $\theta_u$, $\theta_f$ and $\theta_\varepsilon$, not just for the values that we expect to estimate. Also, note that you can form a self-map in probability space by composing $\Lambda$ and $\Phi$. For example, $P' = \Lambda(\Phi(P))$. Conversely, you can form a self-map of $\bar{V}$ onto itself by composing $\bar{V}' = \Phi(\Lambda(\bar{V}))$.

What is important is that both compositions form a contraction mapping, and therefore have a fixed point. If you keep iterating long enough, you will eventually see that $P'$ gets very close to $P$ and $\bar{V}'$ converges to $\bar{V}$. Also, reaching the fixed point in $P$ means that you have reached a fixed point in $\bar{V}$. Many algorithms employ this property to estimate the structural parameters of the utility function.
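To see the contraction at work, here is a self-contained sketch that iterates the composition $\Lambda \circ \Phi$ on a made-up toy problem until the CCPs stop moving (the gist does the same thing for the bus engine problem; everything below, including the function names, is illustrative):

```python
import numpy as np

EULER_GAMMA = np.euler_gamma

def phi_map(P, U, F, beta):
    # Phi map with the Type I EV expected-error term E = gamma - ln(P) baked in.
    M = F.shape[1]
    E = EULER_GAMMA - np.log(P)
    A = np.eye(M) - beta * (P * F).sum(axis=0)  # (P * F) is diag(P_i) @ F_i
    return np.linalg.solve(A, (P * (U + E)).sum(axis=0))

def lambda_map(V_bar, U, F, beta):
    # The logit formula applied to the conditional values v = U + beta * F @ V_bar.
    v = U + beta * (F @ V_bar)
    expv = np.exp(v - v.max(axis=0, keepdims=True))  # overflow guard
    return expv / expv.sum(axis=0, keepdims=True)

# A made-up toy problem: 2 choices, 5 states.
rng = np.random.default_rng(4)
D, M, beta = 2, 5, 0.95
F = rng.random((D, M, M))
F /= F.sum(axis=2, keepdims=True)
U = rng.normal(size=(D, M, 1))

# Iterate the composition Lambda(Phi(P)) until the CCPs stop moving.
P = np.full((D, M, 1), 1.0 / D)  # start from uniform choice probabilities
for _ in range(1000):
    P_new = lambda_map(phi_map(P, U, F, beta), U, F, beta)
    if np.max(np.abs(P_new - P)) < 1e-10:
        break
    P = P_new

print(P.round(3))  # the fixed-point CCPs
```

Starting from a different initial `P` should lead to the same fixed point, which is what makes this property so useful for estimation.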
Code Implementation in Python
The rest of the blog post continues in the gist below. It shows the Python code implementation of the components of the DDC problem using the bus engine replacement problem. I left enough comments on each cell for you to follow each of the steps. The gist shows how the fixed point in both the value space and the probability space is reached by iterating through $\Phi$ and $\Lambda$. Feel free to open the gist in Google Colab and play with it. Save a copy in your own Drive if you want to modify it.
And that's it for now. In the next post we'll implement some of the most famous algorithms by making use of the mappings we discussed this time.
Until the next one!
- Aguirregabiria, Victor and Mira, Pedro (2002) "Swapping the Nested Fixed Point Algorithm: A Class of Estimators for Discrete Markov Decision Models," Econometrica, 70(4): 1519-1543.
- Aguirregabiria, Victor and Mira, Pedro (2010) "Dynamic discrete choice structural models: A survey," Journal of Econometrics, 156(1): 38-67.
- Rust, John (1987) "Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher," Econometrica, 55(5): 999-1033.
- Kusuda, Yasuyuki (2019) 経済分析のための構造推定アルゴリズム (Structural Estimation Algorithms for Economic Analysis).