Sunday, January 27, 2019

Traffic Modeling and False Precision in the SRC Q and A

The big Q & A on the SRC has several sections that use traffic modeling, but no section on the modeling itself. The document assumes the truthfulness and usefulness of a set of traffic forecasts for 2040.

I want to step back a little and ask some questions about the way we handle the traffic forecasts. Above all, the SRC team and traffic planners here generally - in public employment and in private contracting, pretty much everyone involved in traffic engineering - elide the uncertainty around forecasts. If there is any statistical uncertainty around the projections, you'd never know it.

There are also a couple of other ways that planners play fast-and-loose with the forecasting.

Traffic Forecasts Generally Deserve Margins of Error

As we receive the forecasts now - and they are delivered rather ex cathedra - they are full of false precision.

2040 counts from Section 13.c

2040 counts from Section 15.d
All of these numbers should carry a 95% or 80% confidence interval, or some other indication of the margin of error and uncertainty.

In particular, look at this comparison:

Section 13.f: Look at the deltas: 3.1 vs. 2.7 and 3.9 vs. 3.7
If we added a margin of error to the forecasts, it is a near certainty that differences of 0.4 (13%) and 0.2 (5%) are well within any margin of error. (If you look at 22,630 vs. 26,020 from the table just above, the delta is 15%. You might think of other ways to slice this ratio, but the general truth is the 20-year delta is small.) By a reasonable reckoning, not tainted by false precision, if you accept the traffic modeling there is no real difference between the No-Build and Build alternatives in these peak hour delays. It is absurd to say "both 2040 scenarios will result in a significant increase in VHD..." Only false precision suggests this is a "significant" difference. It is, on the contrary, statistically insignificant.

To make it clear: the suggestion here is that differences 20 years out on the order of 10% are statistically insignificant. I don't know exactly what range is meaningful, but it is surely much wider than the precision the SRC claims. (We will see that the variance on historical forecasting wildly exceeds 10%.)
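
Just to make the arithmetic concrete, here is a minimal sketch in Python. The figures are the ones quoted above; the +/-20% margin of error is purely an illustrative guess, since the SRC publishes no uncertainty range at all.

    # Do the differences between the two 2040 scenarios survive a margin
    # of error? Figures are quoted above; the +/-20% margin is an
    # illustrative assumption, not an SRC number.
    pairs = [
        ("peak hour VHD", 3.1, 2.7),
        ("peak hour VHD", 3.9, 3.7),
        ("daily volume", 22_630, 26_020),
    ]
    MARGIN = 0.20  # assumed +/-20% uncertainty on any 20-year forecast

    for label, a, b in pairs:
        delta = abs(a - b) / a                  # relative difference, as in the post
        low, high = a * (1 - MARGIN), a * (1 + MARGIN)
        inside = low <= b <= high               # does the other scenario sit in the band?
        print(f"{label}: {a:,} vs. {b:,} -> delta {delta:.0%}, "
              f"band [{low:,.1f}, {high:,.1f}], other value inside band: {inside}")

Under that assumption each pair overlaps comfortably, which is the whole point: the claimed Build/No-Build difference is smaller than any honest error bar.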

Reputable forecasts include statements about uncertainty intervals right up front. Here are two examples.

A forecast that includes uncertainty and a probabilistic range of outcomes - via fivethirtyeight
Uncertainty at the center of hurricane forecasting - look at how the cone widens with time! - via Twitter
Eliding uncertainty ranges is what makes it possible to make silly claims for the SRC.
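
To see why the cone has to widen, here is a rough Monte Carlo sketch. The starting volume and the spread on annual growth are made-up, illustrative numbers, not SRC inputs; only the shape of the result matters.

    # Rough sketch: small annual uncertainty compounds into a wide cone.
    # START and the growth assumptions are illustrative only.
    import random
    import statistics

    random.seed(1)
    START, RUNS = 20_000, 10_000        # illustrative base daily volume
    CHECKPOINTS = (5, 10, 20)           # forecast horizons in years

    def path(years):
        v = START
        for _ in range(years):
            v *= 1 + random.gauss(0.01, 0.015)   # uncertain annual growth
        return v

    for horizon in CHECKPOINTS:
        outcomes = sorted(path(horizon) for _ in range(RUNS))
        low, high = outcomes[int(0.05 * RUNS)], outcomes[int(0.95 * RUNS)]
        mid = statistics.median(outcomes)
        print(f"year {horizon:2d}: median {mid:,.0f}, "
              f"90% band {low:,.0f}-{high:,.0f} "
              f"(width {(high - low) / mid:.0%})")

Even a modest 1.5 percentage point wobble in annual growth produces a band at 20 years wider than any of the 5-15% deltas in the SRC tables.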

(And even if you accept the numbers straight up, come on, a 10% improvement for $500 million?! And more likely a billion with cost overruns and escalations? 10% or even 15% is an amazingly tiny return on a huge, huge investment.)

We Deserve an Historical Assessment of Traffic Forecasting

Maybe 10% seems significant to you. Looking at 20-year forecasts from 1980 clearly shows that 10% is totally inside any margin of error or uncertainty.

This is an awkward graphic, but it points to something we should be asking of our traffic planners.

There is never any retrospective assessment that asks, "Well, how'd we do?"

The top half of the chart is from a scan of a folded insert with year 2000 traffic projections from the Front Street Bypass FEIS circa 1980. Current counts were added in white and red. (It is also rotated 90 degrees more or less.)

The bottom half of the chart is from the Q & A, a summary of year 2040 traffic projections for the same area.

Another one for year 2000 from Salem Parkway FEIS
At least publicly, we never see old forecasts compared to actuals once the 20-year prediction interval has passed.

This would give the public and policy-makers an empirically grounded sense for the uncertainty in traffic forecasts.

Front, Commercial, and Liberty all are wildly off
Just look at the projected/actual counts on Front, Commercial, and Liberty Streets: 54.2k/35.6k (-34%), 4.8k/12.3k (+156%), and 5.8k/16.6k (+186%). The actuals are all over the place, some higher and some lower than the projections. And they are off by much larger factors than 15%, 13%, or 5%.
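
For the record, here is the same arithmetic as a small Python sketch, using the projected and actual figures quoted above.

    # Projected (circa-1980 FEIS, for year 2000) vs. recent actual counts.
    # Error is (actual - projected) / projected.
    streets = {
        "Front":      (54_200, 35_600),
        "Commercial": (4_800, 12_300),
        "Liberty":    (5_800, 16_600),
    }

    for name, (projected, actual) in streets.items():
        error = (actual - projected) / projected
        print(f"{name:<10} projected {projected:>6,}  actual {actual:>6,}  error {error:+.0%}")

Misses of -34%, +156%, and +186% over a 20-year horizon are the empirical context in which a 5% or 13% modeled difference is supposed to be meaningful.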

I don't know that planners deliberately hide old forecasts, but it is convenient that they not be talked about too much. We should talk about them more and close the assessment loop. We should know how strongly to weigh them as evidence in argument. From here, it looks like we place far too much faith in them. Even if the modeling has greatly improved - and with computing power alone it should have improved - we should still be shown the historical variance.

Sometimes the SRC Includes Tolling Forecasts, But Mostly It Ignores Their Effects

There are other ways the SRC team plays fast-and-loose with forecasting.

Which column here informs "2040 Preferred Alternative Model"?
Remember the numbers from the revenue memo? They have no apparent relation to the numbers in the chart from Section 15.d on the 2040 Preferred Alternative. No Toll corresponds to No Build, and the Toll Amounts must correspond to the Preferred Alternative...but how? It looks like the 2040 Preferred Alternative numbers don't include the effects of any toll. Why?

The efficacy of tolling is erased.

Since tolling is an important part of the funding plan, and since tolling is very effective at changing traffic patterns, all of the 2040 forecasts should have additional columns that include the effects of tolling or decongestion pricing.
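
As a sketch only, the kind of column that is missing might look like this. The two volumes are the 2040 figures quoted earlier (setting aside which scenario is which), and the 20% reduction is a purely illustrative diversion factor; the real number would have to come from the SRC's own tolling and revenue modeling.

    # Illustrative "with tolling" column for the 2040 volume forecasts.
    # TOLL_REDUCTION is a made-up diversion factor, not an SRC estimate.
    TOLL_REDUCTION = 0.20   # assumed share of trips priced off the crossing

    forecasts_2040 = {
        "scenario A": 26_020,   # the two 2040 volumes quoted above
        "scenario B": 22_630,
    }

    for scenario, untolled in forecasts_2040.items():
        tolled = untolled * (1 - TOLL_REDUCTION)
        print(f"{scenario:<11} untolled {untolled:>7,}  with toll {tolled:>8,.0f}")

Even a crude column like this would show how much of the projected congestion the toll itself removes, which is exactly the effect the current tables erase.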

There Is Little Reason to Think the SRC Would Be a Good Investment

So there are multiple sources of uncertainty in our 2040 forecasts:
  • A general statistical range of error/uncertainty inherent to any 20 year forecast
  • The unknown effects of induced demand (what is the uncertainty on the 15% projected increase?)
  • Total erasure of tolling effects
Maybe there are others, too. (Gas and energy prices alone should be a big factor; changes to zoning and land use patterns might be another.)
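
One rough, conventional way to stack these up is to combine independent relative errors in quadrature. Every percentage in this sketch is an illustrative guess, and treating the sources as independent is itself generous, but it shows the scale of the problem.

    # Root-sum-square combination of independent relative error sources.
    # All of these percentages are illustrative guesses, not measured values.
    from math import sqrt

    sources = {
        "20-year forecast error":     0.30,   # cf. the 1980 FEIS misses above
        "induced demand uncertainty": 0.10,
        "omitted tolling effects":    0.15,
    }

    combined = sqrt(sum(v ** 2 for v in sources.values()))
    print(f"combined relative uncertainty: about {combined:.0%}")

Any plausible stack of this kind comes out far wider than the 5-15% Build/No-Build deltas the SRC treats as decisive.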

Altogether, I think this means the numbers given to Council for 2040 are not very good. But since they are what we have, it's almost certainly true that there is in aggregate essentially no difference between the Build and No Build alternatives. Based on the information we have, at $500 million, there is little reason to prefer the Build Alternative. If tolling projections were more consistently carried through the analysis, they would even more strongly show the foolishness of the SRC as an ineffective and expensive solution.

2 comments:

Salem Breakfast on Bikes said...

Over at City Observatory, "There’s a $3 billion bridge hidden in the Rose Quarter Project EA" looks at the traffic modeling for the I-5 Rose Quarter project.

Unsurprisingly, ODOT plays fast and loose with the traffic projections.

The 2015 modeled traffic is way higher than the 2016 actual traffic counts, and it turns out ODOT retconned traffic numbers from a future CRC into the current Rose Quarter analysis.

But since the CRC also needed tolling, and tolling would depress traffic counts, the effects of tolling on I-5 traffic numbers are not included in the I-5 Rose Quarter analysis - just like the SRC here excluding the effects of tolling!

This is another example of a kind of fundamental sophistry at ODOT.

Salem Breakfast on Bikes said...

Also at City Observatory: "The Lemming Model of Traffic."

"The traffic forecasting model used by ODOT is inherently structured to over-predict traffic congestion, and presents a distorted picture of what’s likely to happen with freeway widening.The model, a classic four-step traffic demand model (albeit with a few tacked on bells and whistles, like a microsimulation package) is decades old technique with a couple of critical and well documented weaknesses. The most important one is that it allows traffic to grow without limit in the “base” case. It predicts that as population increases, more and more people will drive more and more miles, and that they will all be oblivious in their trip making behavior to any changes in congestion. That is, they won’t be deterred by congestion from changing the timing, mode, destination, or whether they take a trip at all. That’s because four-step models, like the one used to create the Rose Quarter I-5 estimates, uses something called static trip assignment, STA that takes almost no notice of the effects of congestion and delay.

In an important sense, static trip assignment is a kind of “lemming” model of travel behavior. It assigns successively more and more trips to congested links even as they become more and more congested. Implicitly, it assumes that traveler behavior doesn’t respond at all to experienced travel conditions, especially delay.
"