Hi,
I'm currently testing the fully managed ODP.NET and noticed a severe performance problem if I set the property OracleCommand.InitialLOBFetchSize to -1.
Here are the results from a small test program that simply executes "select * from table where rownum <= 5000" against a table with 20 columns, 7 of which are CLOBs, for different values of InitialLOBFetchSize (a sketch of the test loop follows the results below):
Native with 0: 11.277 s
Managed with 0: 10.357 s
Native with 5120: 5.182 s (1.235 s if the OracleCommand.FetchSize is increased to 1 MB)
Managed with 5120: 0.568 s
Native with -1: 0.156 s
Managed with -1: 122.416 s
(Note: With "native" I mean the only partly managed ODP.NET.)
These results are fully reproducible, with only minor variation between runs. The OracleCommand.FetchSize property only seems to have a noticeable effect on the native provider, and even there only for positive values of InitialLOBFetchSize.
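For the FetchSize experiment mentioned above, the change amounts to roughly the following (again a sketch; the 1 MB value is simply what I tried):

// Raise the row fetch buffer before executing the reader (default is 128 KB).
cmd.FetchSize = 1024 * 1024;

// Alternatively, size it from the actual row length after the statement is parsed:
// reader.FetchSize = cmd.RowSize * 1000;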
It's very nice that the managed provider is faster with an initial LOB fetch size of 5 KB, but what goes wrong with -1? Can anyone explain this? Is it perhaps a bug?
Using a performance profiler I can see that all of that time is spent inside OracleDataReader.Read(), waiting for a network operation to complete (no CPU usage): Screenshot
Server: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
Native client: 32-bit 11.2.0.3 Patch 22
Managed client: 4.121.1.0 from NuGet (I know that this isn't an official source, but I don't want to download the full client just for a 6 MB assembly)
.NET 4.0 x86 console application