Chapter 7. Nonlinear Optimization Models. Introduction.
Predicted point spread = Home team rating - Visitor team rating + Home team advantage
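For example, with a hypothetical home team rating of 92, a visitor team rating of 88, and a home team advantage of 3 points, the predicted point spread would be 92 - 88 + 3 = 7 points in favor of the home team.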
Here the objective is to minimize the sum of the squared errors over all observations, the same criterion used elsewhere in this chapter. The sum of squared errors is a convex function of the estimates a and b, so Solver is guaranteed to find the unique estimates of α and β that minimize it. The main weakness of the least squares criterion is that outliers, points for which the error in Equation (7.12) is especially large, exert a disproportionate influence on the estimates of α and β.
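The following is a minimal sketch of the least squares criterion in Python rather than Solver, assuming a simple linear model y ≈ α + βx and a small set of hypothetical observations; the data values, the starting point x0, and the use of scipy.optimize.minimize are all illustrative choices, not part of the chapter's spreadsheet model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations (e.g., explanatory variable x and response y)
x = np.array([0.01, -0.02, 0.03, 0.00, 0.02])
y = np.array([0.02, -0.03, 0.04, 0.01, 0.02])

def sse(params):
    """Sum of squared errors for estimates a, b of alpha, beta."""
    a, b = params
    errors = y - (a + b * x)
    return np.sum(errors ** 2)

# SSE is convex in (a, b), so the minimizer found here is unique.
result = minimize(sse, x0=[0.0, 1.0])
a_hat, b_hat = result.x
print(a_hat, b_hat)
```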
Criterion 1 (least squares) gives equal weight to older and more recent observations. It seems reasonable that more recent observations say more about the beta of a stock, at least for predicting the future, than older observations. To incorporate this idea, smaller weights are attached to the squared errors of older observations. Although this weighted method usually leads to more accurate predictions of the future than least squares, the least squares method has many desirable statistical properties that weighted least squares estimates do not possess.
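One way to implement this idea, sketched below under the assumption that observations are ordered from oldest to newest, is to discount each squared error by a factor raised to the observation's age; the discount factor lam = 0.9 and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations, ordered oldest to newest
x = np.array([0.01, -0.02, 0.03, 0.00, 0.02])
y = np.array([0.02, -0.03, 0.04, 0.01, 0.02])
n = len(x)

lam = 0.9  # hypothetical discount factor, 0 < lam < 1
weights = lam ** np.arange(n - 1, -1, -1)  # weight 1 on the newest observation

def weighted_sse(params):
    """Weighted sum of squared errors: older errors count for less."""
    a, b = params
    errors = y - (a + b * x)
    return np.sum(weights * errors ** 2)

a_hat, b_hat = minimize(weighted_sse, x0=[0.0, 1.0]).x
```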
Instead of minimizing the sum of the squared errors, it can make sense to minimize the sum of the absolute errors over all observations, often called the sum of absolute errors (SAE) approach. This method has the advantage of not being greatly affected by outliers.

Unfortunately, less is known about the statistical properties of SAE estimates. Another drawback of SAE is that more than one combination of a and b can minimize it. However, SAE estimates have the advantage that they can be obtained with linear programming (LP).
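A minimal sketch of that LP formulation follows, again with hypothetical data: each observation gets an auxiliary variable e_i constrained to be at least the error in both directions, so at the optimum e_i equals the absolute error and the objective sums them. The use of scipy.optimize.linprog is an illustrative choice.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([0.01, -0.02, 0.03, 0.00, 0.02])  # hypothetical data
y = np.array([0.02, -0.03, 0.04, 0.01, 0.02])
n = len(x)

# Decision variables: [a, b, e_1, ..., e_n]; minimize sum of the e_i
c = np.concatenate([[0.0, 0.0], np.ones(n)])

# e_i >= y_i - (a + b*x_i)     ->  -a - b*x_i - e_i <= -y_i
# e_i >= -(y_i - (a + b*x_i))  ->   a + b*x_i - e_i <=  y_i
A_ub = np.zeros((2 * n, n + 2))
A_ub[:n, 0] = -1.0
A_ub[:n, 1] = -x
A_ub[:n, 2:] = -np.eye(n)
A_ub[n:, 0] = 1.0
A_ub[n:, 1] = x
A_ub[n:, 2:] = -np.eye(n)
b_ub = np.concatenate([-y, y])

# a and b are unrestricted in sign; each e_i is nonnegative
bounds = [(None, None), (None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a_hat, b_hat = res.x[:2]
```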
A final possibility is to minimize the maximum absolute error over all observations. This criterion might be appropriate for a highly risk-averse decision maker. This minimax criterion can also be implemented with LP.
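The minimax LP is a small variation on the SAE sketch above: a single variable m bounds every absolute error, and the objective minimizes m. The data remain hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([0.01, -0.02, 0.03, 0.00, 0.02])  # hypothetical data
y = np.array([0.02, -0.03, 0.04, 0.01, 0.02])
n = len(x)

# Decision variables: [a, b, m]; minimize m, the largest absolute error
c = np.array([0.0, 0.0, 1.0])

# m >= y_i - (a + b*x_i)     ->  -a - b*x_i - m <= -y_i
# m >= -(y_i - (a + b*x_i))  ->   a + b*x_i - m <=  y_i
A_ub = np.zeros((2 * n, 3))
A_ub[:n, 0] = -1.0
A_ub[:n, 1] = -x
A_ub[:n, 2] = -1.0
A_ub[n:, 0] = 1.0
A_ub[n:, 1] = x
A_ub[n:, 2] = -1.0
b_ub = np.concatenate([-y, y])

bounds = [(None, None), (None, None), (0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a_hat, b_hat, max_error = res.x
```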
End of Chapter 7