|That's a bell-ish looking curve with a bias to the left|
What I find extremely funny about this post is adding a graphic from the city that forecasts biking and walking in 2035 and accepting it as a fact or accurate prediction,* but in the same post adding a graphic to show that model forecasts are not accurate but instead have a distribution (everyone who does forecasting knows this, its not a revelation!!!)

It appears that the author likes to rely on forecasts when it fits their viewpoints and biases, but rebuke forecast when it doesn't!

BTW - that bell-shape forecast distribution show that the majority of forecasts are within 20% plus or minus of the actual future volumes. I have no problem with that level of accuracy. How many people do you think can predict the Dow Jones Index 20 years from now within 20% +/-??

One reason the comment does not seem to be offered in good faith is that its assumed audience is a fellow forecaster or someone with similar knowledge. Of course "everyone who does forecasting" knows that forecasts have a distribution. But professional, guild knowledge addressed to other professionals is not the issue here. The relevant issue is the outward-facing information, what the public and elected officials "know," and they see only a single number in a forecast.
Here is a forecast from the SRC, for example. (You can read a longer discussion of it, with references, here.)
|2040 counts from a January 2019 SRC report|
The claim here is that this kind of forecast needs visible statements of uncertainty.
And, indeed, it turns out this paper recommends that. This is the big takeaway: Use a range of forecasts to communicate uncertainty. (See from 2015, "Like Weather Forecasting, our Traffic Forecasting Needs Error Bars," which in light of this study holds up pretty well.)
|Recommendation: Include a range to communicate uncertainty|
|Recommendation: Compare forecast to actual|
|Recommendation: Report on accuracy|
|The spread on a 95% confidence interval is wide|
|The error is bigger on smaller roads, but as a rule of thumb we could say +/- 50%|
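Taken together, the range recommendation and the rule of thumb amount to simple arithmetic: publish the point forecast alongside its low and high bounds. A minimal sketch of what that looks like (the 10,000 vehicles/day figure is a made-up example, not a number from the SRC report):

```python
def forecast_range(point_forecast, error_pct):
    """Return (low, high) bounds for a point forecast given a
    symmetric relative error, e.g. 0.5 for +/- 50%."""
    low = point_forecast * (1 - error_pct)
    high = point_forecast * (1 + error_pct)
    return low, high

# A hypothetical 2040 forecast of 10,000 vehicles/day on a smaller
# road, using the +/- 50% rule of thumb for smaller roads:
low, high = forecast_range(10_000, 0.50)
print(f"Forecast: 10,000 vehicles/day (range: {low:,.0f} to {high:,.0f})")
```

Reporting "5,000 to 15,000" instead of a bare "10,000" is the whole recommendation in miniature: the range, not the point, is what the public and elected officials should see.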
|This table on volume/capacity is from 2015|
* Bike count and bike forecast accuracy are not the main point here, but they have been discussed before, most recently last year. There is no question that bike traffic forecasting has a greater margin of error, and daily variation in bike counts is also large. The point of the chart referenced in the comment, however, was to show a difference of such magnitude (a 5% goal vs. a 0.4% estimated actual, a whole order of magnitude) that no margin of error would close the gap. In that particular analysis, the margin of error on a forecast was irrelevant, and relying on the chart proved nothing about cherry-picking.