I am trying to understand why, for some tables whose statistics were gathered with a 100% sample (sample_size = num_rows), the query below returns tables that cannot possibly hold the reported number of rows in the number of blocks listed in the statistics. The number of blocks times the approximate rows per block is quite a bit smaller than num_rows. It is easy to see how the opposite could be true for fragmented tables. I have checked the block size and confirmed that the tables are not compressed. I see this oddity on both 11.2.0.4 and 12.1.0.1. My assumption is that the rows per block is approximately block size / avg_row_len.
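As a rough sanity check on that assumption, counting the actual rows per block straight from one of the suspect segments gives a figure to compare against block size / avg_row_len (the owner and table name here are only placeholders):

-- count rows and distinct blocks actually used by one table's rows
select count(*) as rows_counted,
       count(distinct dbms_rowid.rowid_relative_fno(rowid) || '.' ||
                      dbms_rowid.rowid_block_number(rowid)) as blocks_used,
       round(count(*) /
             count(distinct dbms_rowid.rowid_relative_fno(rowid) || '.' ||
                            dbms_rowid.rowid_block_number(rowid)), 1) as actual_rows_per_block
from scott.emp;  -- placeholder table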
The reference manual says avg_row_len is in bytes. The following assumes a database with 8k block size.
select owner,
       table_name,
       round(num_rows - ((8192 / avg_row_len) * blocks), 2) as xs_rows
from dba_tables
where ((8192 / avg_row_len) * blocks) < num_rows
and avg_row_len > 0
order by 1, 2;
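To rule out stale or partially sampled statistics for the tables this flags, a variant of the same query that also pulls sample_size and last_analyzed from dba_tables is useful (again assuming an 8k block size):

select owner,
       table_name,
       num_rows,
       blocks,
       avg_row_len,
       sample_size,
       last_analyzed,
       round(num_rows - ((8192 / avg_row_len) * blocks), 2) as xs_rows
from dba_tables
where ((8192 / avg_row_len) * blocks) < num_rows
and avg_row_len > 0
order by owner, table_name;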