Oracle performance: slow for larger and more complex result sets
Hello Oracle forum,
At the moment I have an Oracle database running, and I'm specifically interested in the efficiency of the Spatial extension for web maps and GIS.
I've been testing the database with large shapefiles (400 MB to 1 GB), loading them into the database with shp2sdo -> SQL*Loader.
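For reference, the loading went roughly like this (the table, file, and user names here are just placeholders for my real ones):

REM Convert the shapefile to a SQL script plus SQL*Loader control file.
REM -g names the geometry column, -s the SRID, -t the tolerance.
shp2sdo roads map -g geom -s 8307 -t 0.5

REM Create the table and metadata, then load the data.
sqlplus scott/tiger @map.sql
sqlldr scott/tiger control=map.ctl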
Using Benchmark Factory I've tested the speed of transactions, and it drops off relatively quickly. I started with a simple query:
SELECT id FROM map WHERE id = 3

When I increase the range of ids from 3 to 10000, the performance decreases drastically. So:

SELECT id FROM map WHERE id >= 3 AND id <= 10000
The explain plan below is for the second query; both queries use the index.
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9828 | 49140 | 22 (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN| SYS_C009650 | 9828 | 49140 | 22 (0)| 00:00:01 |
--------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
675 consistent gets
0 physical reads
0 redo size
134248 bytes sent via SQL*Net to client
7599 bytes received via SQL*Net from client
655 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9796 rows processed
The statistics don't show anything very strange, but maybe I'm wrong. Nothing changed in the explain plan except for the range scan instead of a unique scan.
The query returns lots of results, and I think this is why the measured time is large: the time it takes to return rows grows quickly as the number of rows increases. Can this be solved? The table was analyzed before starting the query.
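One thing I notice in the statistics is the 655 SQL*Net roundtrips for 9796 rows, which works out to only about 15 rows per roundtrip. Could the client fetch size be part of the problem? In SQL*Plus I can test that with something like:

SET ARRAYSIZE 500
SET AUTOTRACE TRACEONLY STATISTICS
SELECT id FROM map WHERE id >= 3 AND id <= 10000;

With a larger array size, the number of roundtrips (and the consistent gets) should drop considerably.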
The database parameters are mostly unchanged from the defaults; I increased the amount of memory used by Oracle 11g to over 1 GB and let the database itself decide how to use this memory.
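For completeness, the memory targets were set with something along these lines (the values match the parameter list below):

ALTER SYSTEM SET memory_max_target = 1064M SCOPE=SPFILE;
ALTER SYSTEM SET memory_target = 1000M SCOPE=SPFILE;
REM memory_max_target is static, so a restart was needed.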
The system specs and db parameters are:
Oracle 11G
Memory: 1.99 GB
Processor: Intel(R) Core(TM)2 CPU 6600 @ 2.40GHz
# of CPUs: 2
OS: Microsoft Windows XP, version 5.2600
(A parameter value of 0 means Oracle decides the value itself.)
cursor_sharing EXACT
cursor_space_for_time FALSE
db_block_size 8192
db_recovery_file_dest_size 2147483648
diagnostic_dest C:\DBBENCHMARK\ORACLE
dispatchers (PROTOCOL=TCP) (SERVICE=gistestXDB)
hash_area_size 131072
log_buffer 5656576
memory_max_target 1115684864
memory_target 1048576000
open_cursors 300
parallel_max_servers 20
pga_aggregate_target 0
processes 150
resumable_timeout 2162688
sort_area_size 65536
SGA = 632 MB
PGA = 368 MB
Java pool = 16 MB
Large pool = 8 MB
Other = 8 MB
So I indexed and analyzed the data; what did I forget? I can speed it up with soft parsing, but the problem remains. Hopefully this is enough information for some analysis. Has anyone experienced the same problem? I also tested the speed with SQL Developer, and it shows the same speed as Benchmark Factory. What could be wrong with the parameters?
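By "soft parsing" I mean replacing the literals with bind variables so the statement is parsed once and reused, roughly like this in SQL*Plus (the bind names are just examples):

VARIABLE lo NUMBER
VARIABLE hi NUMBER
EXEC :lo := 3
EXEC :hi := 10000
SELECT id FROM map WHERE id >= :lo AND id <= :hi;

That removes the hard parse on every execution, but the time to return the rows themselves stays high.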
Thanks,
Jan Martijn