
Oracle Database Discussions


ORA-30036

User_MIMMV (Jul 4 2016, edited Jul 8 2016)

We have a 9.2.0.8 database environment where an export dump is taken from the production database and imported into the reporting database. The import was taking a huge amount of time to complete, so we decided to use COMMIT=N to reduce the import time, increasing the size of the UNDO tablespace to compensate. We increased the UNDO tablespace to slightly more than the size of the largest table, but the import still failed with this error:

IMP-00058: ORACLE error 30036 encountered

ORA-30036: unable to extend segment by XX in undo tablespace 'UNDOTS'

IMP-00028: partial import of previous table rolled back: XXXXX rows rolled back

Please advise if I missed something. What is the correct logic for arriving at the required UNDO size in this scenario? Thanks.
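For reference, the import is invoked along these lines (the connection, file and schema names here are placeholders rather than our real ones):

imp system/password@repdb file=prod_exp.dmp log=imp_prod.log fromuser=APPOWNER touser=APPOWNER commit=n buffer=80000000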

Comments

Bharat G

Hi ,

Please check the following link.

http://logic.edchen.org/2012/10/resolve-ora-30036.html

Regards,

Bharat

MKJ10930279

share the UNDO size and the largest table size

User_MIMMV

Largest table: 15GB and UNDO tablespace: 20GB

MKJ10930279

Take the DB to NOARCHIVELOG mode and import the dump. Once the import is complete, bring the DB back to ARCHIVELOG mode.
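Something like this, as a sketch (assumes SYSDBA access and that the reporting DB can take the outage):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- run the import here, then reverse the change:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;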

MKJ10930279

Also check that you have sufficient disk space.
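For example, to see the free space already inside the tablespaces (check the file system itself with df -k as well):

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name;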

AndrewSayer

3125743 wrote:

We have a 9.2.0.8 database environment where an export dump is taken from the production database and imported into the reporting database. The import was taking a huge amount of time to complete, so we decided to use COMMIT=N to reduce the import time, increasing the size of the UNDO tablespace to compensate. We increased the UNDO tablespace to slightly more than the size of the largest table, but the import still failed with this error:

IMP-00058: ORACLE error 30036 encountered

ORA-30036: unable to extend segment by XX in undo tablespace 'UNDOTS'

IMP-00028: partial import of previous table rolled back: XXXXX rows rolled back

Please advise if I missed something. What is the correct logic for arriving at the required UNDO size in this scenario? Thanks.

What made you come to the conclusion that it was the number of commits that was making the import slow?

https://docs.oracle.com/cd/B10501_01/server.920/a96652/ch02.htm

COMMIT

Default: n

Specifies whether Import should commit after each array insert. By default, Import commits only after loading each table, and Import performs a rollback when an error occurs, before continuing with the next object.

So you'd need an UNDO tablespace large enough to contain all of the UNDO generated in loading your largest table. That is not the same as the size of the largest table; you'd have to actually measure how much UNDO gets generated. Is the target table indexed?
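One way to measure it while the import is running is to watch the import session's transaction; a sketch (USED_UBLK is the number of undo blocks the transaction currently holds):

SELECT s.sid, s.username, t.used_ublk,
       t.used_ublk * TO_NUMBER(p.value) AS approx_undo_bytes
FROM   v$transaction t, v$session s, v$parameter p
WHERE  t.addr = s.taddr
AND    p.name = 'db_block_size';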

I think you'd find that you don't actually save that much time by not committing until the entire table has loaded anyway.

User_MIMMV

Thanks Andrew. But the fact of the matter is that the import did complete much faster for the other tables; the exception is this one very large table, which has 150+ million rows and, yes, has indexes on two of its columns. I would like to know how large the UNDO needs to be for importing this table with the COMMIT=N option. I can't find it anywhere in the documentation or on Metalink.

JohnWatson2

IMP-00058: ORACLE error 30036 encountered

ORA-30036: unable to extend segment by XX in undo tablespace 'UNDOTS'

IMP-00028: partial import of previous table rolled back: XXXXX rows rolled back

You could use the RESUMABLE switch. That would give you the opportunity to correct the problem and let the import continue, rather than triggering a rollback.
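Something like this (a sketch; the resumable name is arbitrary and the timeout is in seconds), after which DBA_RESUMABLE shows any suspended statement waiting for you to add space:

imp system/password@repdb file=prod_exp.dmp fromuser=APPOWNER touser=APPOWNER commit=n buffer=80000000 resumable=y resumable_name=nightly_imp resumable_timeout=7200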

AndrewSayer

I think if excessive commits were really an issue then it would be down to the array size being used. What is your buffer parameter?

As a first guess at the UNDO generated from an index, I would say it would be roughly the size of the index key per row. As John Watson recommends, if you use the resumable parameter and your first guess turns out to be wrong, you can always adjust mid-process.

User_MIMMV

Unfortunately we can't change the UNDO size on the fly, as any change needs to go through a process. I need to set the size proactively, before the import, to avoid the error. I have a fairly large buffer size set for the import:

buffer=80000000

MKJ10930279

As you are importing from prod to the other DB, do you have sufficient disk space for the datafiles to grow in the other DB?

You can take the DB to NOARCHIVELOG mode and import, then bring it back to ARCHIVELOG mode.

User_MIMMV

No, we can't do that, as it is a daily job. I want to know the mechanism for sizing the UNDO tablespace when importing an object without intermittent commits, taking the associated indexes into account if they have an impact.

MKJ10930279

Create the UNDO tablespace with AUTOEXTEND ON and try.
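For example (the datafile path is just a placeholder; set a sensible MAXSIZE so it cannot fill the disk):

ALTER DATABASE DATAFILE '/u01/oradata/REPDB/undots01.dbf'
  AUTOEXTEND ON NEXT 512M MAXSIZE 40960M;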

AndrewSayer

3125743 wrote:

No, we can't do that, as it is a daily job. I want to know the mechanism for sizing the UNDO tablespace when importing an object without intermittent commits, taking the associated indexes into account if they have an impact.

There is no hard and fast rule. You'd have to measure how much undo is currently being generated per array insert and multiply that by the number of array inserts you'll be doing. Then factor in that your inserts will likely grow over time, so your UNDO needs will too.
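V$UNDOSTAT keeps a rolling history of undo consumption in 10-minute buckets, so running the import once and reading back the history gives you the measurement; a sketch:

SELECT begin_time, end_time, undoblks
FROM   v$undostat
ORDER  BY begin_time;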

I think you'd be better off loading smaller amounts of data, probably just the rows that have been modified since the last load, and also ensure that you process the deletes for data that has been removed.

Have you considered materialized views for your reporting database? There are alternative replication techniques that would also be appropriate. IMO it doesn't make sense to do a daily clone using exp/imp; it's just too much work.
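As a sketch of the materialized view route (hypothetical names; assumes a database link from the reporting database to prod, and a primary key on the table for fast refresh):

-- on production:
CREATE MATERIALIZED VIEW LOG ON appowner.big_table;

-- on the reporting database:
CREATE MATERIALIZED VIEW appowner.big_table_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM appowner.big_table@prod_link;

A fast refresh then ships only the changed rows instead of reloading 150+ million of them.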

Jonathan Lewis

Since you say later that it's a daily job, are you dropping the table before doing the import, or truncating it and using the ignore=yes option?

If you're not dropping it, are you dropping the two indexes before the import?

If you're not dropping the two indexes then your undo will (worst case) be in the region of 250 bytes * the number of rows in the table. That's in the region of 40GB for your table.

I am a little surprised that you got an ORA-30036 when you had commit=y.

I would have thought that with an 80MB array size no one transaction would have been that much of a threat; I'd have expected an ORA-01555 to be much more likely.

Regards

Jonathan Lewis

Jonathan Lewis

I should have said:

I'd drop the indexes before the import, then import that table as a separate import with "indexes=n", then create the indexes afterwards.

This will be messier if there's a primary key on the table supported by one of the indexes.

Regards

Jonathan Lewis

User_MIMMV

Thanks, Jonathan. Yes, one of the indexes is a PK. We are truncating all the tables before the import. I did not follow the UNDO estimate you provided earlier. Could you illustrate it with an example, please? Thanks again!!

"If you're not dropping the two indexes then your undo will (worst case) be in the region of 250 * number of rows in table. That's in the region of 40GB for your table."

Jonathan Lewis

An import is a simple array insert that will be maintaining the indexes one row at a time. The undo overhead per index entry is about 110-120 bytes; you've got two indexes, so double up, add a little for the table row, and you get about 250 bytes of undo per row in the table.
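Putting rough numbers on it for your table (using the 150 million row count you mentioned): 2 indexes * ~120 bytes = ~240 bytes, plus a little for the row itself, call it 250 bytes per row; 250 bytes * 150,000,000 rows = 37,500,000,000 bytes, which is where the "in the region of 40GB" figure comes from.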

If you don't have a foreign key constraint that references the table you could:

truncate table

disable primary key

drop both indexes -- the PK index may have been dropped automatically by the call to disable

import with indexes=no ignore=yes

enable primary key

create second index
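In SQL terms the sequence is something like this (a sketch; the table, constraint and index names are hypothetical, and the imp call is shown as a comment):

TRUNCATE TABLE big_table;
ALTER TABLE big_table DISABLE CONSTRAINT big_table_pk;
DROP INDEX big_table_idx2;
-- the PK index may already have gone when the constraint was disabled

-- from the OS: imp system/password file=prod_exp.dmp tables=big_table indexes=n ignore=y commit=n buffer=80000000

ALTER TABLE big_table ENABLE CONSTRAINT big_table_pk;
CREATE INDEX big_table_idx2 ON big_table(col2);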

Regards

Jonathan Lewis

User_MIMMV

Thank you again for the detailed explanation as always.

vijaydba_appsdba

Has your problem been solved?

Did the autoextend=yes option not work for you?

Thanks,

Vijay

John Thorton

3125743 wrote:

We have a 9.2.0.8 database environment

Please join the 21st Century at your earliest convenience.

Jonathan Lewis

Oracle 9R2 was launched in the 21st century.

Post SQL & results that prove that an upgrade is necessary.
