

 

It is now possible to mix calibration and policy elements in optimization payoff files. This makes it possible, for example, to add penalty terms to calibration payoffs that account for priors on model parameters or enforce complex constraints.

 

There are also several additional subtypes that make payoffs logarithmic, restrict the timing of payoff computation, and use non-Gaussian calibration error assumptions.

 

The new payoff keywords modify the existing *C (calibration) and *P (policy) options, and can be mixed.

 

For example:

*CK - calibration, with Gaussian errors and a weight interpretation matching that in Kalman filtering

*CL - calibration, with errors in log space

*PF - policy, with payoff contribution computed only at FINAL TIME

 

Note that the extended payoff options are generally not meaningful for Kalman filtering, which assumes Gaussian errors; when the filter is active they will generally be treated like type *C.
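For example, a payoff definition (.vpd) file might combine an ordinary calibration element with a policy element that penalizes implausible parameter values. The sketch below uses hypothetical variable names, and assumes the usual element syntax of variable|data/weight for calibration and variable/weight for policy:

*C
model output|output data/weight

*P
parameter prior penalty/penalty weight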

 

Examples

 

See the examples in OptSensi.

 

Calibration Payoffs

For calibration payoffs, the *C keyword may be followed by a Transform modifier (L) and a Distribution modifier (G, K, O, R, Y, H, B, or P). The Transform and Distribution modifiers may be combined; for example, *CRL computes a robust error metric on log-transformed model and data values.
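For instance, a combined *CRL element might be written as in the following sketch, using the same model|data/scale element syntax as the examples below (variable names are hypothetical):

*CRL
model var|data var/scale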

 

Transform

 

Logarithmic

 

Like policy payoffs, calibration payoffs may now use the 'L' modifier to take logarithms of the model and data values before computing payoff contributions. Thus a *CL payoff with the default (Normal) distribution computes:

 

- ((LN(model)-LN(data))*weight)^2 

 

This provides lognormal errors, rather than the standard normal assumption. The logarithm requires positive values for model and data variables; nonpositive values will be skipped with a warning message.
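A log-transformed calibration element might then look like the following sketch (variable names are hypothetical):

*CL
model var|data var/weight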

 

The log transform should not be used with Kalman filtering.

 

Distribution

 

The treatment of the distribution option depends on whether Kalman filtering is active. The filter implementation assumes Normally distributed (Gaussian) errors, and therefore other options like Robust are not appropriate and generally will be ignored. When the Kalman filter is active, the scale or weight parameter is always interpreted as a variance.

 

For each Distribution option below, the modifier letter is shown, followed by the payoff contribution under regular calibration and under Kalman filtering.

 

Normal (no modifier; this is the default)

 

Regular calibration: the payoff contribution is calculated as:

 

- ((model-data)*weight)^2

 

where the weight is typically 1/(standard error of measurement).
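For example, a default Normal element might be written as in the following sketch, where the weight would typically be 1/(standard error of measurement) and the variable names are hypothetical:

*C
model var|data var/weight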

Kalman filtering: the interpretation of the weight is inverted and squared relative to regular calibration, i.e. it is a variance, so the payoff contribution is

 

- (model-data)^2/weight/2 - LN(weight)/2

 

where the weight is interpreted as the variance of the measurement error. The second term arises from the Normal likelihood, and makes the weight usable as an optimization parameter.

 

Caution: The difference in interpretation makes Kalman and ordinary payoff files (.vpd) incompatible. For compatibility, use the 'K' option.

Gaussian (modifier: G)

 

Regular calibration: this computes the complete normal log likelihood, omitting inessential constant terms, as in Kalman filtering, except that the scale parameter is the standard deviation of the measurement error rather than the variance. This is often the most convenient format to use, because the standard deviation is the most readily available and intuitive measure of the quality of the data.

 

The payoff contribution is

 

- (model-data)^2/std_dev^2/2 - LN(std_dev)

 

Inclusion of the LN(std_dev) term makes it possible to estimate the error scale as an optimization parameter.
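For example, a Gaussian element might be written as in the following sketch, with the scale given as the standard deviation of the measurement error (variable names are hypothetical):

*CG
model var|data var/std dev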

Kalman filtering: NA.

Kalman (modifier: K)

 

Regular calibration: this makes the interpretation of the weight parameter consistent whether Kalman filtering is active or not.

 

Example:

 

*CK

model|data/variance

 

Contributes:

 

- (model-data)^2/variance/2 - LN(variance)/2

 

Other differences will exist due to the state estimation in a Kalman simulation.

Kalman filtering: same as Normal.

Kalman Only (modifier: O)

 

Regular calibration: NA.

 

Kalman filtering: the 'O' modifier is for use only with Kalman filtering; it permits inclusion of an element in the filter's state update without contributing to the optimization payoff.

 

As for other Kalman filtering applications (*C or *CK), the scale parameter is a variance.
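For example, a measurement used only to drive the state update might be written as in the following sketch, following the same element syntax as *CK (variable names are hypothetical):

*CO
model var|data var/variance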

Robust (modifier: R)

 

Regular calibration: uses the absolute value of the difference between the model and the data rather than the squared residual. To use this feature, specify *CR as the payoff type, e.g.,

 

*CR

model|data/scale

 

This will contribute to the payoff as:

 

- ABS((model - data)/scale) - LN(scale)

 

The underlying assumption is that errors have a Laplace or double exponential distribution. The second term, LN(scale), arises from the Laplace likelihood and makes the scale usable as an optimization parameter.

Kalman filtering: NA.

Cauchy (modifier: Y)

 

Regular calibration: the Cauchy distribution has undefined mean and variance (but a defined median), so it is a more extreme error assumption than the Laplace distribution. To use this feature, specify *CY as the payoff type, e.g.,

 

*CY

model_var|data_var/scale

 

This will contribute to the payoff as:

 

- LN( 1 + ((model_var - data_var)/scale)^2 )

Kalman filtering: NA.

Huber (modifier: H)

 

Regular calibration: the Huber loss function combines aspects of the Normal (Gaussian) distribution and the Robust (Laplace) loss function. To use this feature, specify *CH as the payoff type, e.g.,

 

*CH

model|data/scale

 

This will contribute to the payoff as:

 

if ( ABS(model - data)/scale > 2 )

- 2*(ABS(model - data)/scale-1) - LN(scale)

else

- (model-data)^2/scale^2/2 - LN(scale)

 

In other words, it behaves like the Normal distribution with squared errors when the error is within 2 scale units (i.e. 2 standard deviations), and like a robust ABS(error) estimate for outliers. This combines the robustness of the absolute value with greater efficiency near the mean.

Kalman filtering: NA.

Binomial (modifier: B)

 

Regular calibration: the Binomial distribution is provided for estimation of discrete events, as one might encounter in a model of individual health care outcomes. It represents the log likelihood of the outcome of Binomial trials with the given characteristics.

 

*CB

model p success|data n successes/n trials

 

Unlike the other distributions above:

 

•the first entry represents the model-calculated probability that an event occurs, not a (random) number of model events! Note that if the model time step and data interval do not match, you may need to adjust the probability to match the data interval;

 

•the comparison data codes the number of events that actually occurred;

 

•the third parameter is not a weight or scale factor, but represents the number of trials conducted.

 

In the special case where n_trials = 1, this simplifies to a Bernoulli trial, with probability of success p, so the payoff contribution is:

 

if ( data n successes <> 0 )

log(model p success)

else

log(1-model p success)

 

In this case, any nonzero value of the data is regarded as "success." The modeled probability of success must be greater than 0 and less than 1.

 

In other cases, the payoff contribution is:

 

log( model p success^data_n_successes 

* (1-model p success)^(n_trials-data_n_successes) )

 

The n_trials should be an integer >= 0 (zero trials will be ignored). The data_n_successes should be an integer >= 0 and <= n_trials. The modeled probability of success must be greater than 0 and less than 1.

Kalman filtering: NA.

Poisson (modifier: P)

 

Regular calibration: the Poisson distribution is provided for estimation of discrete events, as one might encounter in a queueing process. It represents the log likelihood of the outcome of Poisson arrivals with the given characteristics.

 

*CP

model f events|data n events/weight

 

Similar to the Binomial distribution:

 

•the first entry represents the model-calculated frequency of events per time step, not a (random) number of model events! Note that if the model time step is not 1 unit, or events do not occur at all times, or the data interval differs from the time step, you may need to adjust the frequency to match the data interval;

 

•the comparison data codes the number of events that actually occurred;

 

•unlike the Binomial case, the third entry is a weight, and should generally be 1 (or 0 to ignore this entry in the payoff).

 

The payoff contribution is:

 

(data n events*log(model f events)-model f events)*weight

 

The data_n_events should be an integer >= 0. The modeled frequency of events must be greater than 0.

Kalman filtering: NA.

 

 

Policy Payoffs

 

For policy payoffs, the *P keyword may be followed by a Transform modifier (L) and a Timing modifier (I or F). The Transform and Timing modifiers may be combined; for example, *PFL contributes weight*LN(variable), evaluated only at the final time.

 

Transform

*PL indicates a logarithmic payoff, such that the payoff sums weight*LN(variable). The logarithm requires positive values of the variable; nonpositive values will be skipped with a warning message.
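For example, a logarithmic policy element might look like the following sketch, assuming the usual variable/weight syntax for policy elements (the variable name is hypothetical):

*PL
model objective/weight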

 

Timing

*PI and *PF are policy elements that are computed only at INITIAL TIME and FINAL TIME, respectively.

I and F cannot be combined.

 

These options are equivalent to adding a model variable with a timing switch, like:

 

payoff elm = IF THEN ELSE(Time > FINAL TIME-TIME STEP/2,model var,0) 
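The corresponding switch for an initial time element would be along the lines of the following sketch (the variable name is hypothetical):

payoff elm init = IF THEN ELSE(Time < INITIAL TIME+TIME STEP/2,model var,0)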

 

Note that if you are combining initial or final values with other values, you may need to adjust the weights to compensate for the fact that ordinary values are integrated over the course of the simulation.