A long time ago I learned a little programming for the first time, from a bootlegged FORTRAN IV manual. One of the things I remember distinctly is integer division: 7/4 = 1 (the result is always truncated, never rounded), and if you assign 7/4 to a floating-point variable the result is still 1, because the division is performed first, before the assignment. The operation is performed in the ALU (arithmetic logic unit), not the FPU (floating-point unit), and it is much faster than floating-point division. Much later I learned some C and did some programming in it, though not much involving integer arithmetic; still, I believe the same rules hold there. Somehow over time I came to believe that this is what "hardware arithmetic" means with regard to integer division. I never learned machine language (or even assembly, though there is still time), but I assumed that integer division as I learned it in FORTRAN and C was very close to how the machine itself works.
In PL/SQL there is an integer data type, PLS_INTEGER, which is supposed to "use machine arithmetic" (direct quote from the Oracle documentation, for example here: PL/SQL Data Types). Yet I find these results:
alter session set plsql_code_type = native; -- same results with or without this
declare
  i pls_integer := 7;
  j pls_integer := 4;
  n number;
  k pls_integer;
begin
  n := i/j;
  k := i/j;
  dbms_output.put_line ( 'n: ' || to_char(n) );
  dbms_output.put_line ( 'k: ' || to_char(k) );
end;
/
PL/SQL procedure successfully completed.
n: 1.75
k: 2
Is my understanding of "hardware arithmetic" wrong? Is there more than one specification for "integer division" in the world of computer hardware? Has the standard definition of "integer division" changed since the last century? Or is the claim that PLS_INTEGER uses "hardware arithmetic" not entirely true?
I suspect/speculate (with no real basis) that the PL/SQL interpreter parses the code and applies its own interpretation of "machine arithmetic". For example, if it sees a variable of NUMBER type on the LHS of an assignment, it does floating-point division (of some kind) on the RHS, even when both operands are explicitly declared PLS_INTEGER. And even when the LHS is also declared PLS_INTEGER, the result is rounded instead of truncated. Somehow I doubt that that's "machine arithmetic"; I wonder what the performance penalty of such overhead is, compared to true C, for example.
Thank you, - mathguy