Timeline for Confidence intervals for binary classification probabilities
Current License: CC BY-SA 4.0
4 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Oct 7, 2021 at 18:37 | comment added | Samos | | @Tanguy one could do that indeed. This sounds to me like the method described below in Jan van der Vegt's answer (the one where you make multiple predictions with different dropout masks and check the variance). |
| Oct 6, 2021 at 17:42 | comment added | Tanguy | | Can't we assess the uncertainty by bootstrapping the calibration process? Suppose we have a sample with a predicted score of 0.8; we collect N samples similar to it (so their predicted scores should also be close to 0.8) and run M bootstrap calibrations. That way maybe we could derive an uncertainty interval for that sample? |
| Aug 29, 2019 at 9:30 | review | | | Late answers (completed Aug 29, 2019 at 10:12) |
| Aug 29, 2019 at 9:10 | answered | Samos | CC BY-SA 4.0 | |
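The dropout-based method mentioned in the Oct 7 comment can be sketched in a few lines: keep dropout active at prediction time, run many stochastic forward passes, and read the spread of the outputs as an uncertainty estimate. The tiny network, its random weights, and the dropout rate below are invented purely for illustration, not taken from the linked answer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny fixed-weight MLP; the random weights stand in for a trained model.
W1 = rng.normal(0, 1, (5, 16))
W2 = rng.normal(0, 1, (16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_with_dropout(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept active at test time."""
    h = np.maximum(x @ W1, 0.0)                # ReLU hidden layer
    mask = rng.uniform(size=h.shape) > p_drop  # fresh dropout mask per call
    h = h * mask / (1.0 - p_drop)              # inverted-dropout scaling
    return sigmoid(h @ W2)

x = rng.normal(0, 1, (1, 5))  # a single query point
T = 500                       # number of stochastic forward passes
preds = np.array([predict_with_dropout(x)[0, 0] for _ in range(T)])
print(f"mean p = {preds.mean():.3f}, std = {preds.std():.3f}")
```

The standard deviation across the T passes is the "variance check" the comment refers to: a wide spread signals that the predicted probability for that input is unstable under the model's own stochasticity.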
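The bootstrap idea from the Oct 6 comment can also be sketched concretely: refit a calibrator on M bootstrap resamples of the (score, label) validation pairs and read off the spread of the calibrated probability near a given score. The sketch below uses simple histogram binning as the calibrator and synthetic data; the data, bin count, and query score of 0.8 are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic validation set: raw classifier scores and binary outcomes,
# deliberately miscalibrated so the calibrator has something to correct.
n = 2000
scores = rng.uniform(0, 1, n)
labels = (rng.uniform(0, 1, n) < scores**1.3).astype(int)

def bin_calibrate(scores, labels, query, n_bins=10):
    """Histogram-binning calibration: empirical positive rate in query's bin."""
    edges = np.linspace(0, 1, n_bins + 1)
    q_bin = np.clip(np.digitize(query, edges) - 1, 0, n_bins - 1)
    s_bin = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    mask = s_bin == q_bin
    return labels[mask].mean() if mask.any() else query

M = 200           # number of bootstrap calibrations
query_score = 0.8
boot = []
for _ in range(M):
    take = rng.integers(0, n, n)  # resample validation pairs with replacement
    boot.append(bin_calibrate(scores[take], labels[take], query_score))
boot = np.array(boot)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"calibrated p at score 0.8: {boot.mean():.3f}, "
      f"95% interval [{lo:.3f}, {hi:.3f}]")
```

The percentile interval over the M refits plays the role of the "uncertainty interval for that sample" the comment asks about; it captures sampling noise in the calibration step, not model uncertainty itself.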