
SS enqueue and "sort segment request" waits after trying to drop temp tablespace

Bobby DurrettJan 21 2014 — edited Jan 28 2014

We have a case open with Oracle support but I thought I would throw this out there if that is ok.

We have a database running on HP-UX 11.11 PA-RISC, and it looks like any session that wants to use some temp space hangs on an SS enqueue or, in a few cases, waits on a "sort segment request" wait.
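For the record, the picture we see is roughly what a query like this returns (a sketch against the standard v$session_wait view; on this 9i-era instance enqueue waits show up under the generic 'enqueue' event):

```sql
-- Sessions currently stuck on an enqueue or a sort segment request.
-- Sketch only: filter/column names as documented for v$session_wait.
SELECT sid, event, p1, p2, seconds_in_wait, state
  FROM v$session_wait
 WHERE event IN ('enqueue', 'sort segment request')
 ORDER BY seconds_in_wait DESC;
```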

On Saturday we tried to move all the users over to a new temporary tablespace and drop the old one, but the drop hung and we control-C'd out of it.  We put the users back on the original temp tablespace.  Note that both the old and new temp tablespaces are locally managed.

Before trying to drop the old temp tablespace we killed the existing sessions including one that had been running for two weeks and was hung on SMON.

Queries of v$fast_start_transactions and x$ktuxe don't indicate that SMON is rolling back some large transaction.
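The x$ktuxe check we ran was along these lines (the widely used dead-transaction sketch; it has to be run as SYS since x$ tables aren't exposed to normal users):

```sql
-- Run as SYS: undo blocks remaining for dead (rolling-back) transactions.
-- No rows back = SMON has no large transaction to roll back.
SELECT ktuxeusn, ktuxeslt, ktuxesqn, ktuxesiz
  FROM x$ktuxe
 WHERE ktuxecfl = 'DEAD';
```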

Also, SMON appears to be running this query continually:

SELECT file#, block#, length
  FROM uet$
 WHERE ts# = :1 AND segfile# = :2 AND segblock# = :3 AND ext# = :4

Here are the locks held by SMON for what it's worth:

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
C000000028C43CD8 C000000028C43CF8          8 TT         74         16          4          0      59610          0
C000000028C43C68 C000000028C43C88          8 TS         74 -666633304          6          0      59650          0
C000000028C1CB38 C000000028C1CB58          8 ST          0          0          6          0         13          0

ST, Space Management Transaction
TS, Temporary Segment (also TableSpace)
TT, Temporary Table
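The listing above came from something along these lines (just v$lock filtered on SMON's SID, which happens to be 8 on this instance; adjust for yours):

```sql
-- Locks held by SMON (SID 8 here -- look it up in v$session via v$bgprocess
-- if yours differs).
SELECT addr, kaddr, sid, type, id1, id2, lmode, request, ctime, block
  FROM v$lock
 WHERE sid = 8;
```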

Some background on this database:

- It went through a recovery scenario about a month or so back after some data files were deleted.
- Over the holidays we had to rebuild a huge global index and grew the temp tablespace to get that done, which is why we are now trying to shrink it by creating a new, smaller one.
- The index creation was taking forever in the existing dictionary-managed tablespace, so we ended up moving the index to a locally managed one.
- UET$ has about 33,000,000 rows, and most of the data is in dictionary-managed tablespaces.  Queries against dba_free_space typically take 30 minutes to return - i.e. we know extent management is messed up and has been this way for a long time.
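To see which tablespaces are still dictionary managed (and therefore driving all that UET$ traffic), a query like this against dba_tablespaces should show the split:

```sql
-- Dictionary vs. locally managed tablespaces.
SELECT tablespace_name, extent_management, contents
  FROM dba_tablespaces
 ORDER BY extent_management, tablespace_name;
```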

Fun for us.  If anyone has some insight that would be great.

- Bobby

This post has been answered by Suntrupth on Jan 22 2014.

Locked due to inactivity on Feb 25 2014.