Payoff Computation


The way the payoff is computed depends on the type of payoff.  For calibration payoffs, it also depends on whether Kalman filtering is active.  Note that, except with Kalman filtering and with the G, R, and K distribution options in ordinary calibration, if your payoff definition contains only one variable, the value of the weight simply scales the payoff and will not change optimization results.  With Kalman filtering this is not true, because the weight is an indication of the quality of the data relative to the model.  For each type of computation, the payoff is initialized at the beginning of the simulation and updated every TIME STEP.


Policy Payoffs

Ordinary *P payoffs represent the area under a curve. For example, if the target variable is Discounted Cash Flow, the payoff accumulates it, so the result is essentially Net Present Value.

At each TIME STEP the value of each variable is multiplied by its weight, then by TIME STEP, and added to the payoff. (This holds regardless of the integration technique you have chosen; the payoff is effectively always integrated using Euler integration.) The optimizer is designed to maximize the payoff, so variables for which more is better should be given positive weights and variables for which less is better should be given negative weights.  Set the weights so that all elements are scaled to the same order of magnitude.  Having done this, you can adjust weights up and down to emphasize different aspects of the payoff.

If you are interested in the initial or final value of a variable, use the *PI or *PF timing options. Note that these use raw variable values, not multiplied by TIME STEP.
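The accumulation described above can be sketched in a few lines. This is an illustrative sketch only, not Vensim's actual implementation; the variable name and weight are hypothetical examples of a payoff definition.

```python
# Hypothetical sketch of *P payoff accumulation: at each TIME STEP every
# variable's value is multiplied by its weight and by TIME STEP, then added
# to the running payoff (Euler integration, regardless of the model's
# integration technique).

TIME_STEP = 0.25
weights = {"discounted cash flow": 1.0}  # hypothetical payoff definition

def step_payoff(payoff, values):
    """Add one TIME STEP's weighted contribution to the running payoff."""
    for name, w in weights.items():
        payoff += w * values[name] * TIME_STEP  # area under the curve
    return payoff

payoff = 0.0  # payoff is initialized at the start of the simulation
for values in ({"discounted cash flow": 100.0},
               {"discounted cash flow": 90.0}):
    payoff = step_payoff(payoff, values)
# payoff now approximates the integral of the weighted variable
# (here, essentially net present value)
```

A *PF or *PI element would instead record the raw final or initial value of the variable, with no TIME STEP factor.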


Calibration Payoffs without Kalman Filtering

At each TIME STEP, the data for each variable to be compared is checked to see whether a value is available.

If both model and data values are available, the difference between the data and the model value is computed, along with any likelihood terms that depend on parameters. (See Payoff Element Types for details, which vary with the distribution assumption.) This number, which is always positive, is then subtracted from the payoff, so the payoff is always negative.  Maximizing the payoff therefore means getting it as close to zero as possible.

The data with which the model variable is compared can come from any .vdf file (whether from Import Datasets or from a previous simulation) or from a GET...DATA function.  The data can also be computed within the model using a := equation.
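The comparison step can be sketched as follows, assuming the common normal-error case in which the weighted squared difference is subtracted at each point. This is a simplified illustration; the actual term depends on the distribution option (see Payoff Element Types), and the function name here is hypothetical.

```python
# Sketch of an ordinary calibration payoff under a normal-error assumption:
# wherever both model and data values exist, subtract the weighted squared
# difference, so the payoff is always <= 0 and 0 is a perfect fit.
# NA (missing) values are skipped, mirroring the behavior described in the text.

NA = None

def calibration_payoff(model_series, data_series, weight):
    payoff = 0.0
    for m, d in zip(model_series, data_series):
        if m is NA or d is NA:
            continue  # skipped; only missing model values trigger a warning
        payoff -= weight * (d - m) ** 2  # always subtracts a positive term
    return payoff
```

For example, with model values [1, 2, 3], data [1, NA, 4], and weight 1, only the first and last points contribute, giving a payoff of -1.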


Calibration Payoffs with Kalman Filtering

The payoff that is reported for Kalman Filtering is a log-likelihood.

At each TIME STEP the Kalman equations are used to update the covariance matrix of the prediction errors.  The inner product of the errors on the available data, weighted by this matrix, is computed, and this value, which is positive, is subtracted from the payoff so that the payoff is always negative. Information Resources has a list of references on this process.

With Kalman filtering active, the weight for a variable is the variance of the measurement error associated with that data stream.  This is completely different from the meaning of the weight when Kalman filtering is not active, so you will need two different payoff control files if you want to compute a payoff both with and without Kalman filtering.  You must provide a positive weight for each measured data series.  If you believe the measurement is perfect, supply a small number (for example, 0.01).  The mathematics for using the Kalman filter with zero measurement error are not implemented in Vensim.

NOTE  When Kalman filtering is active, the weight has the opposite qualitative interpretation to that in normal payoff files: it is a measurement variance rather than the inverse of the standard error of measurement.  A large weight therefore gives the data less importance.
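The role of the measurement variance in the log-likelihood can be seen in a minimal one-dimensional sketch. Vensim's implementation is multivariate and more involved; this hypothetical scalar version only illustrates how each prediction error, weighted by the innovation variance (which includes the measurement variance supplied as the weight), reduces the payoff.

```python
# Minimal scalar Kalman-filter sketch (illustrative only, not Vensim's code).
# At each measurement, the squared prediction error divided by the innovation
# variance is a positive term subtracted from the log-likelihood payoff.
# A larger meas_var (the "weight" in the payoff file) shrinks this term,
# giving the data less importance.
import math

def kalman_loglik(measurements, x0, p0, process_var, meas_var):
    x, p = x0, p0
    loglik = 0.0
    for z in measurements:
        p += process_var                         # predict: variance grows
        s = p + meas_var                         # innovation variance
        err = z - x                              # prediction error
        loglik -= 0.5 * (err * err / s + math.log(2 * math.pi * s))
        k = p / s                                # Kalman gain
        x += k * err                             # update state estimate
        p *= (1 - k)                             # update error variance
    return loglik
```

Note that the quadratic error term is always positive; the constant and log-variance terms shown here complete the Gaussian log-likelihood in this sketch.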



Values of NA for model or data are skipped; a warning is issued for missing model values but not for missing data values (since missing data is expected).

Invalid transformations, such as LN(0), are skipped with a warning.