While reading the Wikipedia page on floating-point arithmetic, I stumbled upon a calculation that I found interesting, so I decided, just for fun, to try to work it out. I used a spreadsheet to do the summing.

But following both the text description and the sigma notation verbatim [to the best of my understanding, anyway], I wasn't able to arrive at the same result that the Wikipedia page arrives at.
Before I ask my questions, here is the snippet from that Wikipedia page describing the calculation:
...
The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation, $p = 24$, and so the significand is a string of 24 bits. For instance, the number π's first 33 bits are:
$$11001001\;00001111\;1101101\underline{0}\;10100010\;0.$$
If the leftmost bit is considered the 1st bit [the MSB], then the 24th bit is zero and the 25th bit is 1; thus, in rounding to 24 bits, let's attribute to the 24th bit the value of the 25th, yielding:
$$11001001\;00001111\;1101101\underline{1}.$$ When this is stored using the IEEE 754 encoding, this becomes the significand $s$ with $e = 1$ (where $s$ is assumed to have a binary point to the right of the first bit [the LSB?]) after a left-adjustment (or normalization) during which leading or trailing zeros are truncated should there be any, which is unnecessary in this case; as a result of this normalization, the first bit [the MSB? the LSB?] of a non-zero binary significand is always 1, so it need not be stored, saving one bit of storage. In other words, from this representation, π is calculated as follows:
\begin{align*}
&\left( 1 + \sum_{n=1}^{p-1} \text{bit}_n \times 2^{-n} \right) \times 2^{e}\\
&= \left( 1 + 1 \times 2^{-1} + 0 \times 2^{-2} + 1 \times 2^{-4} + 1 \times 2^{-7} + \cdots + 1 \times 2^{-23} \right) \times 2^{1}\\
&= 1.5707964 \times 2\\
&= 3.1415928
\end{align*}
where $n$ is the normalized significand's nth bit from the left [the MSB?], where counting starts with 1...
In the square brackets "[ ]" inlined into the quote above, I'm using MSB to mean "Most Significant Bit" and LSB to mean "Least Significant Bit".

I inlined those remarks to highlight the parts of the description that are throwing me off. What confuses me is that the description starts out denoting the MSB as "the first bit", but then seems to refer to the LSB as "the first bit" as well.
Looking at the 24-bit binary number given [$11001001\;00001111\;1101101\underline{1}$], the order of the corresponding bits that appear in the expansion of the summation [$1 + 1 \times 2^{-1} + 0 \times 2^{-2} + \cdots$] suggests to me that the calculation starts from the LSB, although, as I spell out below, the displayed terms alone don't settle the direction.
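To spell out the ambiguity as I see it, here are the first few terms of the sum under each reading of the rounded 24-bit string (my own tabulation, not from the page; in each direction, the first bit encountered is the standalone leading 1, and the following bits are multiplied by $2^{-1}, 2^{-2}, \ldots$ in turn):

\begin{align*}
\text{counting from the MSB:}\quad & 1 + 1 \times 2^{-1} + 0 \times 2^{-2} + 0 \times 2^{-3} + 1 \times 2^{-4} + 0 \times 2^{-5} + \cdots\\
\text{counting from the LSB:}\quad & 1 + 1 \times 2^{-1} + 0 \times 2^{-2} + 1 \times 2^{-3} + 1 \times 2^{-4} + 0 \times 2^{-5} + \cdots
\end{align*}

Both readings happen to reproduce every term the page actually displays ($1 \times 2^{-1}$, $0 \times 2^{-2}$, $1 \times 2^{-4}$, $1 \times 2^{-7}$, $1 \times 2^{-23}$); the first term on which they disagree is $2^{-3}$, which the page omits.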
So, starting at the LSB, the first cell of my spreadsheet (A1) has the value 1: the "first bit".
In the second cell (A2), I have a formula that multiplies the 1 bit in the binary 2s place by $2^{-1}$.

In the third cell (A3), I multiply the 0 bit in the binary 4s place by $2^{-2}$... and so on, consecutively and respectively, up to the MSB.
So my spreadsheet has a column of 24 cells that correspond to the 24 bits. The first cell simply has the value 1. Each of the remaining 23 cells has a formula of the form bit * POWER(2;-n), where 'bit' is 0 or 1, and 'n' corresponds to the 'nth bit' of the description above, ranging over 1-23. The 25th cell in the column has the formula SUM(A1:A24) * POWER(2;1).
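Since spreadsheet formulas are awkward to share, here is a small Python sketch of that same column of formulas (my reconstruction of the spreadsheet, not the spreadsheet itself; the excluded parameter is just a convenience for the variant sums I report below):

```python
def spreadsheet_sum(bit_string, excluded=()):
    # The constant 1 plays the role of cell A1, which simply holds the value 1
    # (the "first bit" -- in my counting it is 1 from either end of the string).
    # bit_string[n] for n = 1..23 plays the role of cells A2..A24: bit * POWER(2;-n).
    # `excluded` skips chosen 2^-n terms, for the variant sums I report below.
    total = 1 + sum(int(bit_string[n]) * 2 ** -n
                    for n in range(1, 24) if n not in excluded)
    return total * 2 ** 1  # cell A25: SUM(A1:A24) * POWER(2;1)

bits = "110010010000111111011011"  # the rounded 24-bit significand, MSB first

print(spreadsheet_sum(bits[::-1]))             # counting from the LSB: 3.436558485031128
print(spreadsheet_sum(bits[::-1], {3, 5, 6}))  # excluding 2^-3, 2^-5, 2^-6: 3.155308485031128
```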
The sum I get with that formula when I start counting at the LSB is 3.436558485. The sum I get when I start counting at the MSB is 3.5707962513.

The sum I get when I start counting at the LSB and exclude the $2^{-3}$, $2^{-5}$, and $2^{-6}$ terms from the calculation is 3.155308485.

The sum I get when I start counting at the MSB and exclude those same terms is 3.5082962513.
Now, here are my questions:
- What is the correct sigma notation for converting the binary representation of π to its decimal representation?
- Why doesn't the example on the Wikipedia page include the $2^{-3}$, $2^{-5}$, and $2^{-6}$ terms in its calculation?
- What do I need to correct in my spreadsheet formula to make it arrive at the same answer that the Wikipedia page arrives at?
- Does that description need correcting, given its confusing reference to both the MSB and the LSB as "the first bit"?
- Does the sigma notation in that description need correcting to make it clear that ALL of the bits should be included in the calculation?
- Going by what I described of my spreadsheet calculation, what step(s) or what fundamental mathematical concept have I overlooked or misunderstood?
Thanks in advance.