Hello,
I have a problem with COMSTAR as an FC target.
New install of Solaris 11.1
HBA is an Emulex LPe11002
Brocade 5100B switches
Pool: two 10-disk raidz2 vdevs of 3 TB NL-SAS disks
It all works, but write speed to the LUN is unusably slow.
iSCSI works, and I am able to saturate the network, so there is no problem with access to the disks themselves.
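For reference, a quick way to confirm the pool itself is fast locally, independent of iSCSI or FC (a sketch; the mountpoint path and sizes are illustrative, not from my exact setup):

```shell
# Write 4 GiB into the pool's filesystem to check raw local throughput.
# Caveat: if compression is enabled on the dataset, zeroes compress
# away almost entirely, so this is an upper bound; use pre-generated
# random data for a stricter test.
dd if=/dev/zero of=/dipool/ddtest bs=1M count=4096
rm /dipool/ddtest
```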
HBA info
HBA Port WWN: 10000000c98e9712
Port Mode: Target
Port ID: 12000
OS Device Name: Not Applicable
Manufacturer: Emulex
Model: LPe11002-E
Firmware Version: 2.80a4 (Z3F2.80A4)
FCode/BIOS Version: none
Serial Number: VM92923844
Driver Name: emlxs
Driver Version: 2.70i (2012.02.10.12.00)
Type: F-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 20000000c98e9712
HBA Port WWN: 10000000c98e9713
Port Mode: Target
Port ID: 22000
OS Device Name: Not Applicable
Manufacturer: Emulex
Model: LPe11002-E
Firmware Version: 2.80a4 (Z3F2.80A4)
FCode/BIOS Version: none
Serial Number: VM92923844
Driver Name: emlxs
Driver Version: 2.70i (2012.02.10.12.00)
Type: F-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 20000000c98e9713
zpool iostat for the pool, 2 seconds apart:
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------   -----  -----  -----  -----  -----  -----
dipool       44.1M  54.5T      0     19  1.01K   134K
dipool       44.1M  54.5T      0      2      0   196K
dipool       45.0M  54.5T      0     50      0   210K
dipool       45.0M  54.5T      0      0      0  64.0K
dipool       45.8M  54.5T      0     50      0   274K
dipool       45.8M  54.5T      0      0      0  64.0K
dipool       45.8M  54.5T      0      0      0      0
dipool       45.0M  54.5T      0     35      0   125K
dipool       45.0M  54.5T      0      0      0  64.0K
dipool       44.5M  54.5T      0     34      0  61.0K
dipool       44.5M  54.5T      0      0      0  64.0K
dipool       44.5M  54.5T      0      0      0  64.0K
dipool       44.6M  54.5T      0     34      0  61.0K
dipool       44.6M  54.5T      0      0      0  64.0K
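In case anyone wants to compare, the stats above are plain pool-level output; the per-vdev view can also show whether a single slow disk is dragging the whole raidz2 down (a sketch, assuming the pool name dipool as above):

```shell
# Pool-level stats every 2 seconds (what the table above shows):
zpool iostat dipool 2

# Per-vdev breakdown, to spot one slow or failing disk in the raidz2:
zpool iostat -v dipool 2
```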
I also tried OpenIndiana; there the speed was good, but the link would die. Capturing stmf debug output while using the Emulex shows the following:
FROM STMF:210406652: abort_task_offline called for LPORT: lport abort timed out (thousands of these)
Jun 7 14:02:18 emlxs: [ID 349649 kern.info] [ 5.0608]emlxs1: NOTICE: 730: Link reset. (Disabling link...)
Jun 7 14:02:18 emlxs: [ID 349649 kern.info] [ 5.0333]emlxs1: NOTICE: 710: Link down.
Jun 7 14:04:41 emlxs: [ID 349649 kern.info] [ 5.055D]emlxs1: NOTICE: 720: Link up. (4Gb, fabric, target)
Jun 7 14:04:41 fct: [ID 132490 kern.notice] NOTICE: emlxs1 LINK UP, portid 22000, topology Fabric Pt-to-Pt,speed 4G
Jun 7 14:10:19 emlxs: [ID 349649 kern.info] [ 5.0608]emlxs1: NOTICE: 730: Link reset. (Disabling link...)
Jun 7 14:10:19 emlxs: [ID 349649 kern.info] [ 5.0333]emlxs1: NOTICE: 710: Link down.
Jun 7 14:12:40 emlxs: [ID 349649 kern.info] [ 5.055D]emlxs1: NOTICE: 720: Link up. (4Gb, fabric, target)
Jun 7 14:12:40 fct: [ID 132490 kern.notice] NOTICE: emlxs1 LINK UP, portid 22000, topology Fabric Pt-to-Pt,speed 4G
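For anyone trying to reproduce the capture: I believe the STMF messages above come from the kernel trace buffer; one way to dump it is via mdb (an assumption on my part that this matches how others capture it; run as root):

```shell
# Dump the COMSTAR STMF trace buffer from the running kernel:
echo '*stmf_trace_buf/s' | mdb -k

# Watch emlxs link up/down events as they happen:
tail -f /var/adm/messages | grep emlxs
```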
I also tried a QLogic QLE2460-SUN, and it has the same problem in both OI and Solaris: extremely slow.
HBA Port WWN: 2100001b3280b
Port Mode: Target
Port ID: 12000
OS Device Name: Not Applicable
Manufacturer: QLogic Corp.
Model: QLE2460
Firmware Version: 5.2.1
FCode/BIOS Version: N/A
Serial Number: not available
Driver Name: COMSTAR QLT
Driver Version: 20100505-1.05
Type: F-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 2000001b3280b
It seems no one is using Solaris as an FC target anymore. Since we do not have 10GbE in our lab, and some systems cannot reach others via IP, FC is our only option for backup.
Can someone please let me know if they are using Solaris as an FC target, and perhaps offer some pointers? In the example above I am cloning with VMware from a LUN on an EMC array to the Solaris node. As I mentioned, the speed is good in OI, but there it seems to be a driver issue.
Cloning in OI from the EMC LUN to the backup server, zpool iostat 1 second apart:
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------   -----  -----  -----  -----  -----  -----
dipool        309G  54.2T     81     48   452K  1.34M
dipool        309G  54.2T      0  8.17K      0   258M
dipool        310G  54.2T      0  16.3K      0   510M
dipool        310G  54.2T      0      0      0      0
dipool        310G  54.2T      0      0      0      0
dipool        310G  54.2T      0      0      0      0
dipool        310G  54.2T      0  10.1K      0   320M
dipool        311G  54.2T      0  26.1K      0   820M
dipool        311G  54.2T      0      0      0      0
dipool        311G  54.2T      0      0      0      0
dipool        311G  54.2T      0      0      0      0
dipool        311G  54.2T      0  10.6K      0   333M
dipool        313G  54.2T      0  27.4K      0   860M
dipool        313G  54.2T      0      0      0      0
dipool        313G  54.2T      0      0      0      0
dipool        313G  54.2T      0      0      0      0
dipool        313G  54.2T      0  9.69K      0   305M
dipool        314G  54.2T      0  10.8K      0   337M
We have tons of other devices connected to the Brocade 5100B switches. I tried connecting the system to two different switches individually, with the same result. We are basically a 100% Emulex shop, and I only have the one QLogic card.
I have now tried a brand-new Emulex LPe11002 card in a different PCIe slot, with a new cable and a different FC switch.
I see similar problems with OpenIndiana, and no problems with any of the EMC VNX/CX/Data Domain systems connected to the same switches, nor with any of the hosts connected to them as targets using the same LPe10000/LPe11002/LPe12002 cards.
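One more thing worth checking on the target side, in case it is relevant: whether the LU's write-back cache is disabled, which forces every write to be synchronous and, on a wide raidz2 pool with no separate log device, can produce exactly this kind of tens-of-KB/s pattern. A sketch of the basic checks (the wcd property name is from the stmfadm man page; the GUID below is a placeholder, not from my system):

```shell
# Confirm the ports are really in target mode and logged into the fabric:
fcinfo hba-port -t

# List targets and the initiators logged into them:
stmfadm list-target -v

# Inspect the LU; look at the "Writeback Cache" line:
stmfadm list-lu -v

# If write cache is disabled and the workload can tolerate enabling it
# (wcd = write-cache-disable, so false ENABLES the cache):
stmfadm modify-lu -p wcd=false 600144F0XXXXXXXXXXXXXXXXXXXXXXXX
```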
Any help/pointers would be greatly appreciated.
Thanks,