Sunday, November 1, 2020

On Forecasting and Uncertainty

As we all wonder how things will look Tuesday evening and Wednesday morning, and then for the next four years, it is interesting to consider how we handle uncertainty in forecasting and modeling.

Highlighting uncertainty:
A range of outcomes on Sunday morning - via 538

"Mapping uncertainty" - via twitter

One person, formerly involved with FiveThirtyEight, writes that the modeling should be understood as "mapping uncertainty rather than making predictions."

In a response about population forecasting for Our Salem, EconNW said:

Whenever I develop a forecast for a city, I tell them that, while we are required to have a point forecast for 20 years, I know the forecast is almost certainly wrong.

Strong Towns devoted a post to a closely related matter, "A Reminder for Planners: 'Every Projection is Wrong.'"

False precision in the Costco case
(Final written argument on Remand)

So a perennial question here is: Why does our traffic forecasting erase uncertainty and give the false precision of a single number for traffic in 2035, 2040, or some other future date?

In so many other exercises in forecasting and modeling, those who issue the forecasts and analyze them highlight levels of uncertainty.

Sometimes highlighting a level of uncertainty would make no difference.

Consultant response to wildly exaggerated
worries about traffic (Council last week)

An error of several hundred percent here (a 500% error would be 42 trips instead of 7) at the proposed housing in the German Baptist Church is not at all meaningful for a collector street.
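As a quick check on that arithmetic (the trip counts are the post's own example; the helper function is just illustrative):

```python
def percent_error(forecast, actual):
    """Percentage error of a forecast relative to what actually occurred."""
    return (forecast - actual) / actual * 100

# The example above: a forecast of 42 trips where only 7 actually occur
print(percent_error(42, 7))  # 500.0, i.e. a 500% overestimate
```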

Other times uncertainty has real implications for policy choices.

On the State Street corridor, a forecasting error well within the margin of error might keep us from appropriately adjusting the street with a 4/3 safety conversion.

On State Street:
An 80 percent confidence interval should be
about -20% to +10%.
A 95 percent interval is much wider.
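Applied to a point forecast, bounds like those translate into a range rather than one number. (The 20,000-trip figure here is purely hypothetical; only the -20%/+10% bounds come from the note above.)

```python
def forecast_range(point, lo_pct, hi_pct):
    """Turn a point forecast into a (low, high) range from percentage bounds."""
    return point * (1 + lo_pct / 100), point * (1 + hi_pct / 100)

# Illustrative only: a hypothetical 20,000-trip daily point forecast
# with the -20% / +10% bounds suggested for State Street.
low, high = forecast_range(20000, -20, 10)
# low is roughly 16,000 and high roughly 22,000 trips per day
```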

We should insist on an acknowledgement of margin of error and uncertainty in our traffic forecasting, especially when the City and other bodies are making large policy and budgeting decisions about our roads.

For previous notes, as we periodically revisit this theme, see here.

(And this note is not about the candidates in the Election; comments about that or them will almost certainly be deleted.)


Walker said...

Furthermore, every agency and government department that routinely forecasts anything (Costs, traffic, growth, etc.) should be legally required to develop and update (with each new forecast made) its own “forecast error tendency” and, when presenting new forecasts, be required to present its own forecast error tendency data and explain how the newest forecast accounts for the agency’s historical error tendency.

Imagine if traffic modelers had to explain why they relentlessly overestimate traffic, and why they relentlessly underestimate costs... And use data that shows that their forecasts are far too often simple "wishcasting."
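The "forecast error tendency" idea could be sketched like this: keep a record of past forecasts against actuals, and scale each new forecast by the historical tendency. (All numbers here are invented for illustration, not from any real agency.)

```python
def error_tendency(history):
    """Mean ratio of actual to forecast over past (forecast, actual) pairs."""
    return sum(actual / forecast for forecast, actual in history) / len(history)

def adjusted_forecast(new_forecast, history):
    """Scale a new point forecast by the agency's historical error tendency."""
    return new_forecast * error_tendency(history)

# Hypothetical track record: traffic forecasts that ran ~25% high,
# so actuals averaged about 0.8 of what was forecast.
history = [(10000, 8000), (12000, 9600), (5000, 4000)]
print(adjusted_forecast(15000, history))  # ~12000: scaled down to match the tendency
```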

Salem Breakfast on Bikes said...

It's remarkable how many pieces there have been generally in the form of, "here's what we got wrong in 2016, here's what you got wrong or misunderstood in interpreting what we said, here's how we are doing things differently, and here's how you should interpret it."

As you say, there is just no feedback/assessment loop whatsoever in traffic forecasting.

Walker said...

Or any other kind of government forecasting. Every program and project has embedded goals and forecasts, and government tends to be absolutely silent about comparing the projections — how they sold the thing — with the actual results obtained (see "urban renewal" for just one notorious example). Every part of government is constantly engaged in making claims about the future (what it expects will happen if this or that policy is pursued), but there is zero effort to do any self-reflection to determine whether there is anything to be learned from the gap between results and projections.