Hi
We have a 2-node RAC database, version 11.2.0.4, running on Linux.
vm.min_free_kbytes is set to 65536 and the server has 500 GB of physical memory.
No hugepages are configured on the server, and Oracle is using the Automatic Memory Management (AMM) feature.
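For reference, this is how we verified those settings on the node (standard /proc and sysctl interfaces on RHEL 6, nothing specific to our build):

# current reserve the kernel keeps free for atomic allocations
sysctl vm.min_free_kbytes
# confirm no hugepages are configured
grep Huge /proc/meminfo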
We observed the messages below in the syslog of one RAC node.
When this message occurred, top showed more than 10 GB of free memory and a similar amount of free swap space.
Please help us understand what the issue could be. I have gone through some MetaLink (My Oracle Support) docs, but they all relate to Exadata.
Does this indicate a memory bottleneck on the server?
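From what I understand, "order:5" means the kernel tried to allocate 2^5 = 32 physically contiguous pages (128 KB with 4 KB pages), and "mode:0x20" is GFP_ATOMIC on this 2.6.32 kernel, so the allocation could not sleep or reclaim. If that reading is correct, top showing 10 GB free would not rule out the failure, since free memory may be fragmented into smaller chunks. We checked per-order free blocks like this (standard /proc interface):

# one column per order (0..10); low counts in the order-5-and-up
# columns indicate fragmentation even when total free memory is large
cat /proc/buddyinfo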
Feb 9 15:43:59 clph387 oracle: page allocation failure. order:5, mode:0x20
Feb 9 15:43:59 clph387 Pid: 10608, comm: oracle Tainted: P --------------- 2.6.32-431.23.3.el6.x86_64 #1
Feb 9 15:43:59 clph387 Call Trace:
Feb 9 15:43:59 clph387 [<ffffffff8112f89a>] ? __alloc_pages_nodemask+0x74a/0x8d0
Feb 9 15:43:59 clph387 [<ffffffff8116e2d2>] ? kmem_getpages+0x62/0x170
Feb 9 15:43:59 clph387 [<ffffffff8116eeea>] ? fallback_alloc+0x1ba/0x270
Feb 9 15:43:59 clph387 [<ffffffff8116e93f>] ? cache_grow+0x2cf/0x320
Feb 9 15:43:59 clph387 [<ffffffff8116ec69>] ? ____cache_alloc_node+0x99/0x160
Feb 9 15:43:59 clph387 [<ffffffff8116fe30>] ? kmem_cache_alloc_node_trace+0x90/0x200
Feb 9 15:43:59 clph387 [<ffffffff8117004d>] ? __kmalloc_node+0x4d/0x60
Feb 9 15:43:59 clph387 [<ffffffff81450c7a>] ? __alloc_skb+0x7a/0x180
Feb 9 15:43:59 clph387 [<ffffffff81451d90>] ? skb_copy+0x40/0xb0
Feb 9 15:43:59 clph387 [<ffffffffa040c55c>] ? tg3_start_xmit+0xa8c/0xd80 [tg3]
Feb 9 15:43:59 clph387 [<ffffffff81461104>] ? dev_hard_start_xmit+0x224/0x480
Feb 9 15:43:59 clph387 [<ffffffff8147cb9a>] ? sch_direct_xmit+0x15a/0x1c0
Feb 9 15:43:59 clph387 [<ffffffff81461608>] ? dev_queue_xmit+0x228/0x320
Feb 9 15:43:59 clph387 [<ffffffffa07dc8fc>] ? bond_dev_queue_xmit+0x2c/0x50 [bonding]
Feb 9 15:43:59 clph387 [<ffffffffa07dcc0f>] ? bond_start_xmit+0x2ef/0x5d0 [bonding]
Feb 9 15:43:59 clph387 [<ffffffff8112bbcb>] ? __rmqueue+0x34b/0x490
Feb 9 15:43:59 clph387 [<ffffffff81461104>] ? dev_hard_start_xmit+0x224/0x480
Feb 9 15:43:59 clph387 [<ffffffff8146159d>] ? dev_queue_xmit+0x1bd/0x320
Feb 9 15:43:59 clph387 [<ffffffff8149b128>] ? ip_finish_output+0x148/0x310
Feb 9 15:43:59 clph387 [<ffffffff8149b3a8>] ? ip_output+0xb8/0xc0
Feb 9 15:43:59 clph387 [<ffffffff8149a685>] ? ip_local_out+0x25/0x30
Feb 9 15:43:59 clph387 [<ffffffff8149ab80>] ? ip_queue_xmit+0x190/0x420
Feb 9 15:43:59 clph387 [<ffffffff8112f263>] ? __alloc_pages_nodemask+0x113/0x8d0
Feb 9 15:43:59 clph387 [<ffffffff814aff0e>] ? tcp_transmit_skb+0x40e/0x7b0
Feb 9 15:43:59 clph387 [<ffffffff814b244f>] ? tcp_write_xmit+0x22f/0xa90
Feb 9 15:43:59 clph387 [<ffffffff8117004d>] ? __kmalloc_node+0x4d/0x60
Feb 9 15:43:59 clph387 [<ffffffff81450c7a>] ? __alloc_skb+0x7a/0x180
Feb 9 15:43:59 clph387 [<ffffffff814b2ce0>] ? tcp_push_one+0x30/0x40
Feb 9 15:43:59 clph387 [<ffffffff814a36bc>] ? tcp_sendmsg+0x9cc/0xa20
Feb 9 15:43:59 clph387 [<ffffffff8144af5b>] ? sock_aio_write+0x19b/0x1c0
Feb 9 15:43:59 clph387 [<ffffffff8105af80>] ? __dequeue_entity+0x30/0x50
Feb 9 15:43:59 clph387 [<ffffffff8144adc0>] ? sock_aio_write+0x0/0x1c0
Feb 9 15:43:59 clph387 [<ffffffff81188a5b>] ? do_sync_readv_writev+0xfb/0x140
Feb 9 15:43:59 clph387 [<ffffffff810a6d31>] ? ktime_get_ts+0xb1/0xf0
Feb 9 15:43:59 clph387 [<ffffffff81084a1b>] ? try_to_del_timer_sync+0x7b/0xe0
Feb 9 15:43:59 clph387 [<ffffffff8109afa0>] ? autoremove_wake_function+0x0/0x40
Feb 9 15:43:59 clph387 [<ffffffff81060aa3>] ? perf_event_task_sched_out+0x33/0x70
Feb 9 15:43:59 clph387 [<ffffffff81226d76>] ? security_file_permission+0x16/0x20
Feb 9 15:43:59 clph387 [<ffffffff81189ab6>] ? do_readv_writev+0xd6/0x1f0
Feb 9 15:43:59 clph387 [<ffffffff81528c0e>] ? thread_return+0x4e/0x760
Feb 9 15:43:59 clph387 [<ffffffff81189c16>] ? vfs_writev+0x46/0x60
Feb 9 15:43:59 clph387 [<ffffffff81189d41>] ? sys_writev+0x51/0xb0
Feb 9 15:43:59 clph387 [<ffffffff811d82c6>] ? sys_io_getevents+0x56/0xb0
Feb 9 15:43:59 clph387 [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
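One workaround we are considering, based on generic Linux tuning guidance rather than anything Oracle- or RAC-specific (please correct us if this is wrong for our setup), is to raise vm.min_free_kbytes so the kernel keeps a larger reserve and starts reclaiming earlier. 65536 (64 MB) looks low for a 500 GB box:

# 512 MB is an example value only, not a validated recommendation
sysctl -w vm.min_free_kbytes=524288
# persist across reboots
echo "vm.min_free_kbytes = 524288" >> /etc/sysctl.conf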