Sunday, December 18, 2016

Our Lousy Track Record for 1980s Traffic Modeling

Now that we've been able to go through some of the traffic modeling from around 1980 that supported the Front Street Bypass, the Salem Parkway, and the Mission Street Overpass, it's clear that there were some pretty big misses. That points towards a systemic problem with the modeling.

On a comfortable majority of road segments, the projected year 2000 traffic volumes were meaningfully larger than actual traffic volumes measured mostly in the 2010s. Even when you look 30 years out instead of 20, the actual traffic still hasn't caught up to the projected numbers. (On a few segments actual traffic is higher, but these are the exception.)

Let's look at the biggest misses.

Wild overestimates:
  • Front Street bypass
  • High/Church in downtown
  • Cherry Street 
  • Portland Road/Fairgrounds Road
  • Pine Street
  • 17th Street
  • Center Street near 17th
  • 12th Street
  • The Mission Street overpass itself
Big underestimates:
  • Liberty/Commercial in downtown
  • Front Street at River Road 
  • 25th Street north of Mission
Many of the other estimates are "in the ballpark," coming in slightly under the actual counts. But remember that the actual counts here mostly come from the 2010s, not from the year 2000 itself; traffic in 2000 was almost certainly lower than it was a decade or more later, so even these "close" projections likely overstated year 2000 volumes. The general drift in nearly every case is a systematic bias toward overestimating traffic volumes for year 2000.
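
To see why "slightly under a 2010s count" still implies an overestimate for year 2000, here is a rough back-of-the-envelope calculation. The numbers and the growth rate are hypothetical, chosen only for illustration; they are not actual Salem counts:

    # Hypothetical illustration: a year 2000 projection checked against a 2012 count.
    # These numbers are invented for the example; they are not actual Salem counts.
    projection_2000 = 18000   # vehicles/day projected circa 1980 for year 2000
    count_2012 = 19000        # vehicles/day actually counted in 2012
    annual_growth = 0.01      # assumed traffic growth per year, 2000-2012

    # Back-cast the 2012 count to an implied year 2000 volume.
    implied_2000 = count_2012 / (1 + annual_growth) ** 12

    overestimate = (projection_2000 - implied_2000) / implied_2000
    print(f"Implied year 2000 volume: {implied_2000:,.0f} vehicles/day")
    print(f"Projection overstated year 2000 traffic by roughly {overestimate:.0%}")

Under those assumptions, a projection sitting about 5% below the 2012 count would still have overstated year 2000 traffic by roughly 7%.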

[Chart: Year 2000 projected traffic counts for Front Street Bypass (actuals from City)]

[Chart: Year 2000 projections for Salem Parkway (actuals from City)]

[Chart: Year 2000 projected traffic counts for 17th Street (actuals from City)]

[Chart: Year 2000 projected counts for Mission Street (actuals from City)]

So the next question is, how did modeling from the early 1990s fare with projections for year 2010 or 2015 traffic volumes? Did we do any better?

While we can't answer that at the moment, we do know there have been very serious problems with the mid-2000s modeling behind the Salem River Crossing.

[Chart: Traffic projections didn't model reality very well (via N3B)]

More generally, we know that national projections of total vehicle miles traveled have failed. A report a few years ago noted that the Feds missed on 61 out of 61 projections!

[Chart: Trend-line mania: 61 out of 61 projections were too high, and always with the same slope on the trend line]

This was not urbanist crazy talk, as the chart made its way into an academic discussion of a revision to Federal modeling.

[Chart: Here's our old friend, the crazy mismatches between projection and actual]

And here was the revised estimate applied to our own Salem River Crossing.

[Chart: The difference between ODOT's 2005 projections and the new FHWA 2014 projections]

But is that correction even enough?

Are there any large projects on which the modeling was, on the whole, "in the ballpark"?

On the contrary, there is a real body of evidence here that our modeling systematically overestimates future traffic volumes.

Jane Jacobs has thoughts on this, it turns out. 

In Dark Age Ahead she has strong criticism for what we have called here the pseudo-science and practice of hydraulic autoism.

She calls traffic engineering an "antiscience masquerading as the science it has betrayed," and notes that empirical evidence often fails to support claims that "traffic is like water." (pp.74-79)


So if we have so much evidence that traffic engineering and traffic projections rest on errant assumptions and faulty modeling, why do we have any faith at all in the year 2035 and 2040 projections behind our current project planning? The modeling has failed at the macro level, in predicting the total aggregate amount of driving nationally, and it has failed at the micro level, in predicting the driving choices of Salemites. It broadly fails at every scale!

So maybe we'll discover some modeling that is on target, and we'll need to discuss that. But in the meantime, there are strong reasons to be very suspicious of any traffic modeling we are currently doing and using for policy, taxing, and construction decisions.

Update, July 20th, 2017

Here's another one. The 1980 study for widening the bridges also overstated traffic for year 2000.

[Chart: They overestimated by a little over 10% in year 2000 (inset color chart with actuals from "New FHWA VMT Forecasts Implications for Local Planning")]

4 comments:

Anonymous said...

I agree that it is important to do retrospective analysis of model results; that is, to compare the model forecast for a future year, made 10-20 years previously, with conditions now that the future year is the present. However, we need to be cognizant that it isn't as simple as comparing counts to the forecast and reaching a conclusion.

When comparing the forecasts of a model run with the current day counts, there are three possible outcomes:
- The model forecast is higher than the counts
- The model forecast is lower than the counts
- The model forecast is 'close' to the counts (however 'close' is defined)

Some of the questions that we should ask to try and understand the differences:
- Why are the model forecasts higher than counts for this segment of road? (or transit ridership)
- Why are the model forecasts lower than counts for this segment of road?
- Why are the model forecasts 'close' to the counts?

First, we need to understand the data that was used in the model.
- What were the forecasts for population and employment?
- Where was population and employment forecast to grow? (which geographic areas, which employment sectors and even by age groups)
- What were the projects assumed to be completed by the model year?
- Where did the underlying data come from? That is, what type of survey was used to estimate the equations for how/where/when/why people travel. (Did the survey include all modes and all types of trips?)
(if modeling transit, which wasn't really done in the Salem area pre-2000, what is the future transit network assumed to be like, including frequency etc)

Second, we need to understand what actually happened between running the model and when the counts were taken.
- Were the forecasts for population and employment correct at the regional level?
- Was the allocation of population and employment at the local level correct? (did more growth take place in neighborhood A than neighborhood B)
- Were all the assumed road projects built?
- Were other road projects (not in the model) built that might influence how/where/when/why people travel?
(if modeling transit, were there changes in transit service different from what was assumed? different routes, shorter/longer headway etc.)

Third, we need to understand what other changes happened that either aren't included in the model or that the model's assumptions got wrong.
- Did the national economy change (recession, unexpected growth etc)?
- Did people change how they travel, or were new technologies invented that impact travel?
- Have the operating costs for vehicles changed substantially? Is it cheaper or more expensive to drive than when the model was run?

(to be continued due to Blogger limits)
Ray
MWVCOG/SKATS

Anonymous said...


(part 2)

It is also important to remember that counts should not be accepted at face value. The machines that take the counts have some level of error depending on how the counts were taken. The length of the counts is important, as is the day(s) of the week when counted. Typically counts are taken Tuesday through Thursday to provide a 72-hour count. With these you can see the variations that occur between days. Sometimes counts are taken for fewer hours (for example, a 16-hour count). Unless these are adjusted with factors derived from permanent counters there will be a larger error. This impacts both the calibration of the model and the comparison of a forecast from 20+ years ago to the most recent count.
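
As a rough sketch of what that kind of expansion factoring can look like (with made-up numbers, not actual SKATS or permanent-counter factors):

    # Sketch of expanding a short (16-hour) count to a daily estimate using a
    # factor from a permanent counter. All numbers are made up for illustration.
    permanent_full_day = 24000   # vehicles over 24 hours at a permanent counter
    permanent_16_hour = 21600    # vehicles at the same counter during the matching 16 hours

    expansion_factor = permanent_full_day / permanent_16_hour   # about 1.11

    short_count_16_hour = 9100   # a 16-hour tube count taken elsewhere on a comparable weekday
    estimated_daily = short_count_16_hour * expansion_factor

    print(f"Estimated 24-hour volume: {estimated_daily:,.0f} vehicles")

Without an adjustment like this, the raw 16-hour total simply reads low against a true daily volume, on top of whatever error the counting machine itself introduces.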

Regarding the model, it is important to understand how the model was estimated, that is, what type of survey was used to find out where/why/when/how people travel. Was transit an option for travel or was everyone essentially a captive driver?

Also, the models used in SKATS pre-2000 were of a different form than those used post-2000. Pre-2000 there was no consideration of transit, the network was cruder - only the major streets, and many other details are different (especially in regards to the software used to assign the trips to a network). Post-2000 the models have six modes for traveling (drive alone, drive with passenger, being a passenger, transit use, bicycling and walking); they assign travel to an 'all-street' network, which represents pretty much all the streets (this can be good or bad); and most importantly, the models post-2000 have used a robust travel survey to provide the data for estimation and the model structure represents current best practices.

To summarize, models use information on current travel patterns etc together with assumptions of a future year (implicitly including all the intervening years) to forecast possible travel patterns. To expect a model to be 100 percent accurate when forecasting events 20-years in the future is a bit naive. Unexpected events happen and priorities change (at all levels of government and at the household level) resulting in the reality of the future year being different from the assumptions that were made for the future year.

Hopefully this adds to the conversation.

Ray
MWVCOG/SKATS

Salem Breakfast on Bikes said...

Thanks for the context and information!

There might be more to say on the details, but one thing seems like the central issue.

While all that you say is true in an inward-facing context, as part of the internal guild conversation about how to improve the analysis, much of it is not very relevant to a public-facing conversation and debate about policy.

You say "To expect a model to be 100 percent accurate when forecasting events 20-years in the future is a bit naive. Unexpected events happen and priorities change (at all levels of government and at the household level) resulting in the reality of the future year being different from the assumptions that were made for the future year."

But the problem is that planning documents today routinely cite expected 2031, 2035, and now 2040 traffic volumes in a naive fashion: They confidently state what we "know" is going to be true and undergird millions, perhaps even billions, of dollars in public expense. Planners may or may not know the uncertainty, but the public and policy-makers most certainly do not.

The primary instance of naivete is not in looking back on 1980s analysis and finding it lacking.

The foundational act of naivete is the faux-confidence with which planners give their 2031, 2035, and 2040 projections, and the eagerness of policy makers to embrace these numbers uncritically.

It's time for error bars and confidence intervals to be front-and-center in traffic modeling and projections.
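
As a sketch of what that could look like, here is a minimal example of reporting a projection together with a range derived from past forecast errors, rather than as a single confident number. The point forecast and the error history are invented for illustration, and the range is a crude empirical band, not a formal confidence interval:

    # Illustrative only: report a 2040 projection with an uncertainty band based on
    # how far past forecasts overshot actual traffic. All inputs are invented.
    projection_2040 = 85000   # vehicles/day, the model's point forecast

    # Past overshoots, expressed as (forecast - actual) / actual.
    historical_overshoots = [0.10, 0.25, 0.40, 0.15, 0.30]

    mean_overshoot = sum(historical_overshoots) / len(historical_overshoots)
    low = projection_2040 * (1 - max(historical_overshoots))
    high = projection_2040 * (1 - min(historical_overshoots))

    print(f"Point forecast: {projection_2040:,} vehicles/day")
    print(f"Adjusted for the average past overshoot: {projection_2040 * (1 - mean_overshoot):,.0f}")
    print(f"Plausible range given past errors: {low:,.0f} to {high:,.0f}")

Even a crude range like that, printed next to every 2035 or 2040 number in a planning document, would communicate the uncertainty far better than a bare point forecast does.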

Salem Breakfast on Bikes said...

Updated with another example.