Tuesday, May 5, 2020

New Our Salem FAQ Clarifies Some Things, Muddles Other Things

The disavowal of precision in the FAQ rings a little false with the tables and numbers they actually published (quote in red from the FAQ with the April version of Indicators)
The City's just published a FAQ ostensibly responding to questions about the scenarios they shared last month.
We've heard from a lot of you as part of the Our Salem project, and we thank you for your input on the different options - or scenarios - for how Salem could grow in the future.

We know there is a lot of information packed into the scenarios and indicators, so we've created a webpage to help answer some of the most frequently asked questions.
If you are unfamiliar with the basic structure of the project, some of the answers are helpfully explanatory.
Scenarios help us "test drive" different future land use patterns. For example, one scenario shows more new housing on the edges of Salem, while another shows more areas with mixed-use development. By having multiple scenarios, we can test different ideas in the community to better understand what people like and dislike.

Scenarios also help us know how different land use patterns could impact things like the environment, housing affordability, and how people travel around the community. This is done through a planning model that our consultants use called Envision Tomorrow.

Ultimately, the scenarios will help us develop the vision for future growth in the Salem area. This vision will include a map of how we want to grow as well as priorities that guide our growth.
But on some of the more technical details they talk around the matter.
How accurate are the indicator results?

On a high-level, pretty accurate. On a detailed level, less so.

The indicator results come from a model. The results are not meant to be exact numbers or percentages that reflect what will happen in the future. That is not their purpose. Instead, the indicator results show us how the scenarios compare to each other on a high level.
That appears to be a basic and straightforward allusion to margin of error in forecasting.

But as you can see from the tables they actually published, they carry decimals out to tenths and hundredths, well past any significant digit in something "not meant to be exact." They say "not meant to be exact," but that is not what they actually do and show.

The FAQ in fact clarifies nothing here and is instead a muddle.
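To make the point concrete, here is a minimal sketch, in Python and with made-up numbers (not the City's actual indicator values), of what rounding model output to the significant figures its uncertainty supports might look like:

    # A minimal sketch of rounding model output to the significant
    # figures its uncertainty supports. The indicator value and the
    # two-significant-figure assumption are hypothetical, not taken
    # from the published Our Salem tables.
    from math import floor, log10

    def round_to_sig_figs(x: float, sig: int) -> float:
        """Round x to the given number of significant figures."""
        if x == 0:
            return 0.0
        return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

    # A modeled mode-split share of 12.37% carries only about two
    # significant figures of meaning if the forecast itself is rough.
    modeled_share = 12.37
    print(round_to_sig_figs(modeled_share, 2))  # -> 12.0, not 12.37

Reporting something like 12 rather than 12.37 would at least match the FAQ's own caveat.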

And it turns out they are using the travel model from SKATS (the Salem-Keizer Area Transportation Study, staffed by the MWVCOG) for the forecasting.
Why are some of the indicators different from Phase 1?

In Phase 1, the community chose 20 indicators. Since then, people have had a lot of questions about how future growth is going to impact transportation, particularly congestion. In response, we asked the Mid-Willamette Valley Council of Governments (MWVCOG) to use their transportation model to help answer the questions. Their model produced several results, including the new indicators on mode split (e.g., breakdown of how people travel), vehicle miles traveled, and vehicle hours of delay. Because we used the MWVCOG's transportation model, we did not run our consultant's separate transportation model that we used in Phase 1.

We did not carry forward a few of the indicators from the first phase for various data reasons. For example, in Phase 1, we measured bicycle and pedestrian use. Specifically, we looked at the percentage of people who were projected to bike and walk to work. In this visioning phase (phase 2), we used the MWVCOG's transportation model, which provides data on travel mode split. It provided trips by bicycle and walking – as well as by bus and vehicle – and it included all trips, not just trips to work like the model we used in Phase 1. We did not want to cause any confusion by having different results for biking and walking from different models, so we just stuck with mode split as an indicator.

Another example is annual traffic crashes. In phase 1, we used our consultant's transportation model, which produced crash data. That was largely based on historic data for per capita crashes. During this visioning phase (phase 2), we used the MWVCOG's transportation model, which cannot produce crash data.
But as we have seen, traffic forecasting is very inexact, even more so in this particular context since Our Salem specifies no time interval. The traffic forecasters do the public a disservice with the false precision, and the Our Salem team then reproduces that disservice.

But traffic forecasting has a wide variance

Even our screwed-up C19 "task force" uses uncertainty shadows in modeling and projections (comments and bar in red added)
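To illustrate why the missing time interval matters, here is a minimal sketch, with assumed numbers only (a hypothetical VMT base and an illustrative 2% to 4% band of annual growth rates, not figures from Our Salem or SKATS), of how a forecast's error band widens as the horizon lengthens:

    # A minimal sketch of how forecast uncertainty widens with the
    # (unspecified) time horizon. The base VMT and the 2%-4% growth
    # band are illustrative assumptions only.
    base_vmt = 3.0e6  # hypothetical daily vehicle miles traveled today

    for years in (5, 10, 20, 30):
        low = base_vmt * (1.02 ** years)   # low end of the growth band
        mid = base_vmt * (1.03 ** years)   # assumed central growth rate
        high = base_vmt * (1.04 ** years)  # high end of the growth band
        spread = (high - low) / mid
        print(f"{years:2d} yrs: {mid / 1e6:.2f}M VMT, band ~{spread:.0%} wide")

Even with that modest spread of growth rates, the 30-year band is several times wider than the 5-year one, which is exactly why a forecast without a stated horizon cannot support tenths and hundredths.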
As a technical matter I can see now why they changed some of the indicators, but as a matter of values and of public process they should have been more transparent about the reasons leading up to the change before making it. The public asked for certain things, like crash data, to be indicators, and did not ask for other things, like vehicle delay, and the project team should not change course so blithely.

More particularly, using the SKATS travel model, and especially smuggling in vehicle delay and its implied levels of service, introduces a bias toward autoism, and the study should be more methodologically self-aware about this.

Finally, if they say the numbers are "not meant to be exact," then they should indicate clearly when scenarios are within rounding and forecasting error of each other, and when the differences fall outside those errors and can therefore be regarded as statistically meaningful. The language in the FAQ is inconsistent with the reporting standards of the published indicators.
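A minimal sketch, with hypothetical numbers and an assumed 10% relative error (not a figure from Our Salem or the SKATS model), of what such a flag could look like:

    # A minimal sketch of flagging whether two scenario indicators
    # differ by more than an assumed forecast error. The 10% relative
    # error and the scenario values are illustrative assumptions.
    def meaningfully_different(a: float, b: float, rel_error: float = 0.10) -> bool:
        """True if |a - b| exceeds the combined assumed error bands."""
        return abs(a - b) > rel_error * (abs(a) + abs(b))

    # e.g., vehicle miles traveled (millions) under two scenarios
    scenario_a, scenario_b = 4.21, 4.35
    if meaningfully_different(scenario_a, scenario_b):
        print("Difference exceeds the assumed error band.")
    else:
        print("Within forecasting error; treat the scenarios as equivalent.")

Something this simple, published alongside the indicator tables, would tell readers which scenario differences actually mean anything.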

All in all, this FAQ is of limited usefulness and still leaves important questions unanswered.
