  • Can't we assess the uncertainty by bootstrapping the calibration process? Suppose we have a sample with a predicted score of 0.8: we collect N samples similar to it (so their predicted scores should also be close to 0.8) and run M bootstrap calibrations; that way maybe we could derive an uncertainty interval for that sample? (A sketch of this idea follows these comments.) Commented Oct 6, 2021 at 17:42
  • @Tanguy one could indeed do that. It sounds to me like the method described below in Jan van der Vegt's answer (the one where you make multiple predictions with different dropout masks and check the variance); a second sketch of that idea also follows below. Commented Oct 7, 2021 at 18:37
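
A minimal sketch of the bootstrap idea from the first comment, under some assumptions not stated there: scikit-learn's `IsotonicRegression` stands in for whatever calibrator is actually used, the calibration data here is simulated, and the interval is a simple percentile interval over the bootstrapped calibrators.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

def bootstrap_calibration_interval(scores, labels, new_score, n_boot=200, alpha=0.05):
    """Percentile interval for the calibrated probability of `new_score`."""
    calibrated = []
    n = len(scores)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # bootstrap resample of the calibration set
        iso = IsotonicRegression(out_of_bounds="clip")
        iso.fit(scores[idx], labels[idx])            # one calibration fit per resample
        calibrated.append(iso.predict([new_score])[0])
    lo, hi = np.quantile(calibrated, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy usage with simulated, slightly over-confident scores (true probability = 0.8 * score).
scores = rng.uniform(0, 1, size=2000)
labels = (rng.uniform(0, 1, size=2000) < 0.8 * scores).astype(int)
print(bootstrap_calibration_interval(scores, labels, new_score=0.8))
```

The width of the returned interval reflects how much the calibration mapping itself varies across resamples near a score of 0.8, which is the per-sample uncertainty the comment asks about.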
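
And a second minimal sketch of the dropout-based idea referenced in the reply (Monte Carlo dropout: repeat the forward pass with dropout left on and look at the spread of the predictions). The PyTorch model here is purely illustrative; any network containing dropout layers would do.

```python
import torch
import torch.nn as nn

# Illustrative binary classifier with a dropout layer.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Dropout(0.5),
                      nn.Linear(32, 1), nn.Sigmoid())

def mc_dropout_predict(model, x, n_samples=100):
    """Mean and standard deviation of predictions with dropout kept active."""
    model.train()                      # keep dropout stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(1, 10)
mean, std = mc_dropout_predict(model, x)
print(mean.item(), std.item())         # the std is the variance-based uncertainty signal
```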