I think that choice of model (even nonparametric or empirical distributions) and choice of priors are linked. Both are assumptions based on prior knowledge and analytical approach. Both are overwhelmed by the data in a fertile experiment.
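To make that concrete, here's a minimal conjugate sketch (the coin-flip setup and the counts are invented): two very different Beta priors on the same Bernoulli model land in nearly the same place once a thousand observations come in.

```python
from scipy.stats import beta

heads, tails = 720, 280  # hypothetical outcome of 1000 flips

# A skeptical prior concentrated near 0.5 vs. an enthusiastic one near 0.9.
priors = {"skeptical Beta(50, 50)": (50, 50),
          "enthusiastic Beta(9, 1)": (9, 1)}

for name, (a, b) in priors.items():
    post = beta(a + heads, b + tails)  # conjugate Beta-Binomial update
    print(f"{name}: posterior mean = {post.mean():.3f}")
# Both posterior means end up near the empirical 0.72, despite priors
# centered at 0.50 and 0.90. With only 10 flips they would disagree badly.
```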
Utility functions are a different beast though. They don't have an update procedure and can wildly affect your decision. I'm also convinced they're the best tool we've got so far, so I take it as an illustration that making informed decisions is just hard.
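A toy sketch of what I mean (all numbers hypothetical): push the same posterior over an effect size through two utility functions and you get opposite decisions, and no amount of data can arbitrate between the utilities.

```python
import numpy as np

rng = np.random.default_rng(0)
effect = rng.normal(0.3, 1.0, 100_000)  # posterior draws for a treatment effect

def risk_neutral(x):   # utility scales linearly with the effect
    return x

def risk_averse(x):    # losses hurt three times as much as gains help
    return np.where(x >= 0, x, 3 * x)

for name, u in [("risk-neutral", risk_neutral), ("risk-averse", risk_averse)]:
    eu = u(effect).mean()  # expected utility of treating (not treating = 0)
    print(f"{name}: E[utility] = {eu:+.3f} -> "
          f"{'treat' if eu > 0 else 'do not treat'}")
# Same posterior, same data: one utility says treat, the other says don't.
```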
Presentation of summary statistics is fine. I prefer presentation of full, untransformed, unpruned data as well when feasible. It's, of course, often not feasible. I also demand justification for why you think those summary statistics are meaningful and under what kinds of situations they would fail to capture the conclusion presented. I'm not saying this isn't done in a frequentist setting, but I think it's harder.
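Here's a fabricated example of the failure mode I mean: two samples with identical means and standard deviations, where a conclusion like "typical values sit near zero" holds for only one of them.

```python
import numpy as np

rng = np.random.default_rng(1)
unimodal = rng.normal(0.0, 1.0, 50_000)
bimodal = np.concatenate([rng.normal(-1.0, 0.1, 25_000),
                          rng.normal(+1.0, 0.1, 25_000)])

for name, x in [("unimodal", unimodal), ("bimodal", bimodal)]:
    print(f"{name}: mean = {x.mean():+.2f}, sd = {x.std():.2f}, "
          f"share within 0.5 of zero = {np.mean(np.abs(x) < 0.5):.0%}")
# Mean and sd match to two decimals, but roughly 38% of the unimodal
# sample sits near zero versus essentially none of the bimodal one.
```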
Honestly, really, truly, honestly, I never bothered with p-values as a statistician except in two cases. The first is when performing a test for somebody else to go into a standard article format. The second is when automating reports on complex data.
P-values are for people you do not trust to make decisions. Graphs and arrays of summary statistics from several different models are for statisticians.
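A toy sketch of that contrast (the two-group data are invented): the single number that goes into the report, next to the small array of views a statistician would rather look at.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.4, 1.5, 200)

# For the report: one number.
print(f"Welch t-test p = {stats.ttest_ind(a, b, equal_var=False).pvalue:.4f}")

# For the statistician: the same comparison under several models/summaries.
print(f"mean diff      = {b.mean() - a.mean():+.2f}")
print(f"median diff    = {np.median(b) - np.median(a):+.2f}")
print(f"sd ratio       = {b.std(ddof=1) / a.std(ddof=1):.2f}")
print(f"Mann-Whitney p = {stats.mannwhitneyu(a, b).pvalue:.4f}")
```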
Also, I disagree that model choice will be overwhelmed by the data.
Hah, I should really write these with more care. Rather than feeling entitled to demand, I more meekly expect a bit more trust and convention in scientific publication. Though that can be taken too far.
You're right that model choice can still break your analysis even given large amounts of data. I was thinking more in terms of a whole inquiry, where large amounts of data help you locate a model that extracts the maximal information from your observations. If we're able to keep experimenting forever, we pretty much assume we'll eventually get highly accurate maps of the world.
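As a sketch of that "locating a model" sense (synthetic data from a known quadratic; the model families and sample sizes are arbitrary choices): with more observations, BIC increasingly favours the family that actually generated the data.

```python
import numpy as np

def fit_bic(x, y, degree):
    # Least-squares polynomial fit scored by the Gaussian-likelihood BIC.
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    n, k = len(y), degree + 1
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

rng = np.random.default_rng(3)
for n in (20, 200, 2000):
    x = rng.uniform(-2, 2, n)
    y = 1.0 + 0.5 * x + 0.2 * x**2 + rng.normal(0, 1, n)  # weak quadratic truth
    best = min((1, 2, 5), key=lambda d: fit_bic(x, y, d))
    print(f"n = {n:4d}: degree preferred by BIC = {best}")
# With few points the simple line tends to win; with enough data the
# quadratic family is reliably identified.
```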
The primary difference is with utility functions: no matter how long you experiment, they remain exogenous and static.