Hi All,
I am using Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production.
I have a case where, given the number of bits available and the format of a number, I need to find the maximum number of digits required to represent that number.
For example, if an 8-bit field stores a hexadecimal-formatted number, we need at most 2 digits, because the largest value an 8-bit field can hold is FF in hexadecimal.
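To spell that example out: an 8-bit field holds values from 0 to 2^8 - 1 = 255, which is FF in hexadecimal (2 digits), 377 in octal (3 digits) and 11111111 in binary (8 digits).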
For this I used this formula:
CEIL (:v_bits_reqd
      / LOG (2,
             CASE :v_number_format
                WHEN 'BINARY' THEN 2
                WHEN 'OCTAL' THEN 8
                WHEN 'HEXADECIMAL' THEN 16
             END))
So the formula simply takes the number of bits available and divides it by the number of bits required to represent one digit in that format (e.g. 1 for binary, 3 for octal, 4 for hexadecimal).
The result is then rounded up with the CEIL function, because needing 2.5 digits really means needing 3 digits.
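As a quick sanity check of the pieces of the formula (the expected values are in the comments):

-- Bits per digit for each format, and CEIL rounding a fractional digit count up
SELECT LOG (2, 2)  AS bin_bits_per_digit,   -- expected 1
       LOG (2, 8)  AS oct_bits_per_digit,   -- expected 3
       LOG (2, 16) AS hex_bits_per_digit,   -- expected 4
       CEIL (2.5)  AS rounded_up            -- expected 3
  FROM DUAL;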
But when I run this for 8 bits and hexadecimal, CEIL should be evaluating CEIL(8/4), i.e. CEIL(2) = 2.
Instead, I get 3.
To check, I set up a test case, and sure enough, CEIL was returning a result 1 greater than the correct one whenever its input was a whole number rather than a fraction.
This was my test case:
WITH bits
     AS (    SELECT LEVEL AS bit
               FROM DUAL
         CONNECT BY LEVEL <= 16),
     num_fmt
     AS (SELECT 'BINARY' AS number_format FROM DUAL
         UNION
         SELECT 'OCTAL' AS number_format FROM DUAL
         UNION
         SELECT 'HEXADECIMAL' AS number_format FROM DUAL)
SELECT b.*,
       f.*,
       CEIL (b.bit
             / LOG (2,
                    CASE f.number_format
                       WHEN 'BINARY' THEN 2
                       WHEN 'OCTAL' THEN 8
                       WHEN 'HEXADECIMAL' THEN 16
                    END))
          AS length,
       b.bit
       / LOG (2,
              CASE f.number_format
                 WHEN 'BINARY' THEN 2
                 WHEN 'OCTAL' THEN 8
                 WHEN 'HEXADECIMAL' THEN 16
              END)
          AS num_bits_decimal,
       LOG (2,
            CASE f.number_format
               WHEN 'BINARY' THEN 2
               WHEN 'OCTAL' THEN 8
               WHEN 'HEXADECIMAL' THEN 16
            END)
          AS bits_per_digit
  FROM bits b CROSS JOIN num_fmt f
 ORDER BY number_format, bit;
| BIT | NUMBER_FORMAT | LENGTH | NUM_BITS_DECIMAL | BITS_PER_DIGIT |
| 1 | BINARY | 1 | 1 | 1 |
| 2 | BINARY | 2 | 2 | 1 |
| 3 | BINARY | 3 | 3 | 1 |
| 4 | BINARY | 4 | 4 | 1 |
| 5 | BINARY | 5 | 5 | 1 |
| 6 | BINARY | 6 | 6 | 1 |
| 7 | BINARY | 7 | 7 | 1 |
| 8 | BINARY | 8 | 8 | 1 |
| 9 | BINARY | 9 | 9 | 1 |
| 10 | BINARY | 10 | 10 | 1 |
| 11 | BINARY | 11 | 11 | 1 |
| 12 | BINARY | 12 | 12 | 1 |
| 13 | BINARY | 13 | 13 | 1 |
| 14 | BINARY | 14 | 14 | 1 |
| 15 | BINARY | 15 | 15 | 1 |
| 16 | BINARY | 16 | 16 | 1 |
| 1 | HEXADECIMAL | 1 | 0.25 | 4 |
| 2 | HEXADECIMAL | 1 | 0.5 | 4 |
| 3 | HEXADECIMAL | 1 | 0.75 | 4 |
| 4 | HEXADECIMAL | 2 | 1 | 4 |
| 5 | HEXADECIMAL | 2 | 1.25 | 4 |
| 6 | HEXADECIMAL | 2 | 1.5 | 4 |
| 7 | HEXADECIMAL | 2 | 1.75 | 4 |
| 8 | HEXADECIMAL | 3 | 2 | 4 |
| 9 | HEXADECIMAL | 3 | 2.25 | 4 |
| 10 | HEXADECIMAL | 3 | 2.5 | 4 |
| 11 | HEXADECIMAL | 3 | 2.75 | 4 |
| 12 | HEXADECIMAL | 4 | 3 | 4 |
| 13 | HEXADECIMAL | 4 | 3.25 | 4 |
| 14 | HEXADECIMAL | 4 | 3.5 | 4 |
| 15 | HEXADECIMAL | 4 | 3.75 | 4 |
| 16 | HEXADECIMAL | 5 | 4 | 4 |
| 1 | OCTAL | 1 | 0.333333333 | 3 |
| 2 | OCTAL | 1 | 0.666666667 | 3 |
| 3 | OCTAL | 1 | 1 | 3 |
| 4 | OCTAL | 2 | 1.333333333 | 3 |
| 5 | OCTAL | 2 | 1.666666667 | 3 |
| 6 | OCTAL | 3 | 2 | 3 |
| 7 | OCTAL | 3 | 2.333333333 | 3 |
| 8 | OCTAL | 3 | 2.666666667 | 3 |
| 9 | OCTAL | 4 | 3 | 3 |
| 10 | OCTAL | 4 | 3.333333333 | 3 |
| 11 | OCTAL | 4 | 3.666666667 | 3 |
| 12 | OCTAL | 5 | 4 | 3 |
| 13 | OCTAL | 5 | 4.333333333 | 3 |
| 14 | OCTAL | 5 | 4.666666667 | 3 |
| 15 | OCTAL | 6 | 5 | 3 |
| 16 | OCTAL | 6 | 5.333333333 | 3 |
The LENGTH column is the result. As you can see, LENGTH is off by one whenever BIT / BITS_PER_DIGIT is a whole number, yet surprisingly this does not happen for BINARY, where the divisor is 1.
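To narrow it down, here is a minimal single-row check of just the 8-bit hexadecimal case (a diagnostic sketch; if the quotient really were exactly the whole number 2, DIFF_FROM_2 should come back as 0):

-- Is 8 / LOG(2, 16) exactly 2, or only displayed as 2?
SELECT 8 / LOG (2, 16)     AS quotient,      -- the value CEIL receives
       8 / LOG (2, 16) - 2 AS diff_from_2,   -- 0 only if the quotient is exactly 2
       DUMP (LOG (2, 16))  AS log_dump       -- internal representation of LOG(2, 16)
  FROM DUAL;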
Any suggestions anyone?
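For completeness, one workaround I am considering is to hard-code the bits per digit instead of deriving them with LOG (just a sketch, only checked against the cases above):

-- Map each format straight to its bits-per-digit, avoiding LOG entirely
CEIL (:v_bits_reqd
      / CASE :v_number_format
           WHEN 'BINARY' THEN 1        -- 1 bit per binary digit
           WHEN 'OCTAL' THEN 3         -- 3 bits per octal digit
           WHEN 'HEXADECIMAL' THEN 4   -- 4 bits per hexadecimal digit
        END)

But I would still like to understand why the LOG-based version misbehaves.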
Thanks!
Arijit