I want to step back a little and ask some questions about the way we handle traffic forecasts. Above all, the SRC team, and traffic planners generally (in public agencies and in private contracting alike), elide the uncertainty around forecasts. If there is any statistical uncertainty around the projections, you'd never know it.
There are also a couple of other ways that planners play fast and loose with the forecasting.
Traffic Forecasts Generally Deserve Margins of Error
As we receive the forecasts now - and they are delivered rather ex cathedra - they are full of false precision.
2040 counts from Section 13.c
2040 counts from Section 15.d
In particular, look at this comparison:
Section 13.f: Look at the deltas: 3.1 vs. 2.7 and 3.9 vs. 3.7
To make it clear: the suggestion here is that differences on the order of 10%, twenty years out, are statistically insignificant. I don't know exactly what range is meaningful, but it is a lot wider than the precision implied in the SRC claims. (We will see that the variance on historical forecasting wildly exceeds 10%.)
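To put rough numbers on that, here is a minimal back-of-the-envelope sketch. The +/-20% uncertainty band is my own illustrative assumption, not anything the SRC has published; the point is only to compare the quoted deltas against a band of that order.

```python
# Back-of-the-envelope check: are the Section 13.f deltas meaningful if the
# underlying 20-year forecast carries, say, +/-20% uncertainty?
# The 20% band is an illustrative assumption, not an SRC figure.

pairs = [(3.1, 2.7), (3.9, 3.7)]    # the deltas quoted above
assumed_uncertainty = 0.20          # assumed +/-20% band on a 20-year forecast

for a, b in pairs:
    relative_difference = abs(a - b) / max(a, b)
    verdict = "inside" if relative_difference < assumed_uncertainty else "outside"
    print(f"{a} vs {b}: difference {relative_difference:.0%}, "
          f"{verdict} an assumed +/-{assumed_uncertainty:.0%} band")
```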
Reputable forecasts include statements about uncertainty intervals right up front. Here are two examples.
A forecast that includes uncertainty and a probabilistic range of outcomes - via fivethirtyeight
Uncertainty at the center of hurricane forecasting. Look at how the cone widens with time! - via Twitter
(And even if you accept the numbers straight-up, come on, a 10% improvement for $500 million?! And more likely a billion with cost overruns and escalations? 10% or even 15% is an amazingly tiny return on a huge, huge investment.)
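For scale, the division is simple; a quick sketch (the billion-dollar figure is the overrun case just mentioned):

```python
# Rough cost per percentage point of claimed improvement.
# $500M is the headline figure; $1B is the overrun scenario mentioned above.

for cost in (500_000_000, 1_000_000_000):
    for improvement in (0.10, 0.15):
        per_point = cost / (improvement * 100)
        print(f"${cost/1e6:,.0f}M for a {improvement:.0%} improvement "
              f"= ${per_point/1e6:,.0f}M per percentage point")
```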
We Deserve an Historical Assessment of Traffic Forecasting
Maybe 10% seems significant to you. But looking at the 20-year forecasts made around 1980 shows that 10% is well inside any margin of error or uncertainty.
This is an awkward graphic, but it points to something we should be asking from our traffic planners.
There is never any retrospective assessment that asks, "Well, how'd we do?"
The top half of the chart is from a scan of a folded insert with year 2000 traffic projections from the Front Street Bypass FEIS circa 1980. Current counts were added in white and red. (It is also rotated 90 degrees more or less.)
The bottom half of the chart is from the Q & A, a summary of year 2040 traffic projections for the same area.
Another one for year 2000 from Salem Parkway FEIS
This would give the public and policy-makers an empirically grounded sense for the uncertainty in traffic forecasts.
Front, Commercial, and Liberty are all wildly off
I don't know that planners deliberately hide old forecasts, but it is convenient that they not be talked about too much. We should talk about them more and close the assessment loop. We should know how strongly to weigh forecasts as evidence in argument; from here, it looks like we place far too much faith in them. Even if the modeling has greatly improved, and with computing power alone it should have, we should still be shown the historical variance.
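Closing that loop does not require anything fancy. Here is a sketch of the kind of retrospective table we should be getting; the volumes below are invented placeholders for illustration only, not values read off the scanned FEIS insert or the current counts.

```python
# Sketch of a retrospective "how'd we do?" check for 20-year forecasts.
# All volumes are invented placeholders, not the actual FEIS or count data.

year_2000_check = {
    # corridor: (projected_in_1980, actual_count) -- placeholder values
    "Front St":      (40_000, 27_000),
    "Commercial St": (35_000, 30_000),
    "Liberty St":    (30_000, 24_000),
}

for corridor, (projected, actual) in year_2000_check.items():
    error = (projected - actual) / actual
    print(f"{corridor:14s} projected {projected:,} vs actual {actual:,} "
          f"-> off by {error:+.0%}")
```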
Sometimes the SRC Includes Tolling Forecasts, but Mostly It Ignores Their Effects
There are other ways the SRC team plays fast and loose with forecasting.
Which column here informs "2040 Preferred Alternative Model"?
The efficacy of tolling is erased.
Since tolling is an important part of the funding plan, and since tolling is very effective at changing traffic patterns, all of the 2040 forecasts should have additional columns that include the effects of tolling or decongestion pricing.
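As a sketch of what an additional "with tolling" column could look like: the toll level, the elasticity, and the reference trip cost below are all illustrative assumptions of mine (published elasticities for tolled facilities vary widely), not SRC model parameters.

```python
# Sketch: deriving a "with tolling" column from an untolled 2040 forecast using
# a simple constant-elasticity demand adjustment. Toll, elasticity, and the
# reference trip cost are illustrative assumptions, not SRC parameters.

def tolled_volume(untolled, toll, elasticity=-0.3, reference_cost=5.0):
    """Reduce volume as the generalized cost of a trip rises with the toll."""
    cost_ratio = (reference_cost + toll) / reference_cost
    return untolled * cost_ratio ** elasticity

forecast_2040 = {"River crossing": 100_000, "Front St": 45_000}  # placeholder volumes

for link, untolled in forecast_2040.items():
    with_toll = tolled_volume(untolled, toll=2.50)
    print(f"{link:15s} untolled {untolled:,}  with a $2.50 toll {with_toll:,.0f}")
```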
There is Little Reason to Think the SRC Would Be a Good Investment
So there are multiple sources of uncertainty in our 2040 forecasts (a rough sketch of how they might compound follows this list):
- A general statistical range of error/uncertainty inherent to any 20-year forecast
- The unknown effects of induced demand (what is the uncertainty on the 15% projected increase?)
- Total erasure of tolling effects
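Here is that sketch: treat each source as an independent multiplicative factor on the point forecast and sample. All three spreads are illustrative assumptions, not estimates derived from SRC data.

```python
# Sketch: how the separate sources of uncertainty might compound in a single
# 2040 number. All three spreads are illustrative assumptions, not SRC data.

import random

random.seed(1)

def sample_forecast(point_forecast):
    general = random.gauss(1.0, 0.20)       # assumed +/-20% general forecast error
    induced = random.uniform(1.00, 1.15)    # induced demand: up to the projected 15%
    tolling = random.uniform(0.70, 1.00)    # tolling suppresses an unknown share of trips
    return point_forecast * general * induced * tolling

samples = sorted(sample_forecast(100_000) for _ in range(10_000))
low, high = samples[len(samples) // 20], samples[-(len(samples) // 20)]
print(f"point forecast 100,000 -> rough 90% range {low:,.0f} to {high:,.0f}")
```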
Altogether, I think this means the numbers given to Council for 2040 are not very good. But since they are what we have, take them at face value: in aggregate they show essentially no difference between the Build and No Build alternatives. Based on the information we have, at $500 million there is little reason to prefer the Build Alternative. If tolling projections were carried consistently through the analysis, they would show even more strongly that the SRC is an ineffective and expensive solution.
2 comments:
Over at City Observatory, "There’s a $3 billion bridge hidden in the Rose Quarter Project EA" looks at the traffic modeling for the I-5 Rose Quarter project.
Unsurprisingly, ODOT plays fast and loose with the traffic projections.
The 2015 modeled traffic is way higher than the 2016 actual traffic counts, and it turns out ODOT retconned traffic numbers from a future CRC into the current Rose Quarter analysis.
But since the CRC also needed tolling, and tolling would depress traffic counts, the effects of tolling on I-5 traffic numbers are not included in the I-5 Rose Quarter analysis - just like here, where the SRC analysis excludes the effects of tolling!
This is another example of a kind of fundamental sophistry at ODOT.
Also at City Observatory: "The Lemming Model of Traffic."
"The traffic forecasting model used by ODOT is inherently structured to over-predict traffic congestion, and presents a distorted picture of what’s likely to happen with freeway widening.The model, a classic four-step traffic demand model (albeit with a few tacked on bells and whistles, like a microsimulation package) is decades old technique with a couple of critical and well documented weaknesses. The most important one is that it allows traffic to grow without limit in the “base” case. It predicts that as population increases, more and more people will drive more and more miles, and that they will all be oblivious in their trip making behavior to any changes in congestion. That is, they won’t be deterred by congestion from changing the timing, mode, destination, or whether they take a trip at all. That’s because four-step models, like the one used to create the Rose Quarter I-5 estimates, uses something called static trip assignment, STA that takes almost no notice of the effects of congestion and delay.
In an important sense, static trip assignment is a kind of “lemming” model of travel behavior. It assigns successively more and more trips to congested links even as they become more and more congested. Implicitly, it assumes that traveler behavior doesn’t respond at all to experienced travel conditions, especially delay."
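To make the "lemming" point concrete, here is a toy contrast between a fixed-demand assignment, where trips keep getting loaded no matter the delay, and a single elastic-demand feedback pass. The BPR volume-delay curve is standard, but the capacities, volumes, and elasticity are my own illustrative choices, not ODOT's model parameters.

```python
# Toy contrast: static, fixed-demand assignment (the "lemming" behavior) vs. a
# single elastic-demand feedback pass. All numbers are illustrative only.

def bpr_travel_time(volume, capacity=4_000, free_flow_minutes=10.0,
                    alpha=0.15, beta=4.0):
    """Standard BPR volume-delay function."""
    return free_flow_minutes * (1 + alpha * (volume / capacity) ** beta)

def elastic_demand(base_volume, travel_time, free_flow_minutes=10.0,
                   elasticity=-0.5):
    """Deter some trips as travel time rises above free flow (assumed elasticity)."""
    return base_volume * (travel_time / free_flow_minutes) ** elasticity

for year, base_volume in [(2020, 4_000), (2030, 5_000), (2040, 6_000)]:
    static_time = bpr_travel_time(base_volume)                  # fixed demand, no feedback
    adjusted_volume = elastic_demand(base_volume, static_time)  # one feedback pass
    adjusted_time = bpr_travel_time(adjusted_volume)
    print(f"{year}: static model loads {base_volume:,} trips ({static_time:.1f} min); "
          f"with demand response, roughly {adjusted_volume:,.0f} trips "
          f"({adjusted_time:.1f} min)")
```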