I was recently looking at the differences between an ANN (Artificial Neural Network) and a PNN (Probabilistic Neural Network).
The PNN is described here: PNN
The ANN is described here: ANN
My question is this: there are several differences between the ANN and the PNN, and most of them amount to additional layers in the PNN (both being feed-forward nets, of course). If you already had an existing ANN that made hard yes/no decisions for its output, would turning that output into a probability value from 0 to 1 be as simple as reading the raw output of the last neuron (a sigmoid function), or would that not be an accurate probability measurement?
Okay, here is the edit to my question:
Each of the circles below is an artificial neuron with two parts: the first part computes a weighted sum of the inputs, and that result is then fed into the second part of the neuron, a sigmoid (logistic) function with a range from 0 to 1.
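For concreteness, here is a minimal sketch of one such neuron (the inputs, weights, and bias are made-up values, purely for illustration):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, bias):
    """Part 1: weighted sum of the inputs; Part 2: sigmoid of that sum."""
    z = np.dot(weights, inputs) + bias
    return sigmoid(z)

# Made-up example values:
x = np.array([0.5, -1.2, 0.3])     # inputs
w = np.array([0.8, 0.4, -0.6])     # learned weights
y = neuron_output(x, w, bias=0.1)  # a value strictly between 0 and 1
```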
Now, in a typical training/testing demo, the network is first trained using back propagation, and then it is tested. When it's tested, the network output $Y$ is typically used as a classifier only: if the sigmoid output is $0.5 \le Y \le 1$, the example is classified positively (1); otherwise, if $0 \le Y \lt 0.5$, it is classified negatively (0).
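In code, that test-time thresholding step might look like this (continuing the sketch above; the 0.5 cutoff is the usual convention):

```python
def classify(y):
    """Hard decision: threshold the sigmoid output Y at 0.5."""
    return 1 if y >= 0.5 else 0

label = classify(y)  # 1 (positive) or 0 (negative), all other information discarded
```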
My question is: if we can do this, then during the testing phase could we instead just read the raw output of the top neuron (a sigmoid output between 0 and 1) and treat it as a probability rather than a hard classification? If not, why not?
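Put differently, the change I'm asking about is trivially small in code (again continuing the sketch above); the open question is whether the number it produces is a legitimate probability:

```python
# Instead of classify(y), report the raw sigmoid output unchanged.
# Can this value be read as P(class = 1 | inputs)?
p = neuron_output(x, w, bias=0.1)  # same forward pass, no thresholding
```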

