HPE FlexFabric 650FLB Adapter May Cause PSOD on ESXi 6.x

There is a known issue on HPE blade servers when a specific adapter is installed in the server.

If any of the following server models are present in your virtual environment, you should pay attention to this post:

  • HPE ProLiant BL460c Gen10 Server Blade
  • HPE ProLiant BL460c Gen9 Server Blade
  • HPE ProLiant BL660c Gen9 Server

On HPE servers running VMware ESXi 6.0, VMware ESXi 6.5, or VMware ESXi 6.7 and configured with an HPE FlexFabric 20Gb 2-port 650FLB Adapter with driver version 12.0.1211.0 (or prior), a “wake NOT set” message is logged in the VMkernel logs.

Then, after 20 to 30 days of server run time, a Purple Screen of Death (PSOD) may occur, displaying a brcmfcoe: lpfc_sli_issue_iocb_wait:10828 message.
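To check whether a host is exposed to this issue, you can query the installed driver version from the ESXi shell. The commands below are a sketch and assume SSH access to the host; exact output fields can vary between ESXi builds:

```shell
# List the installed brcmfcoe driver VIB and its version
esxcli software vib list | grep -i brcmfcoe

# Show details (including the version string) of the loaded module
esxcli system module get -m brcmfcoe
```

If the reported driver version is 12.0.1211.0 or earlier, the host is affected.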

The following errors are displayed in the VMkernel logs after approximately 50 to 70 days of server run time:

2019-07-20T00:46:10.267Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0

2019-07-20T01:10:14.266Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0

2019-07-20T02:16:25.801Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0

2019-07-20T02:22:26.957Z cpu33:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0

2019-07-20T03:26:39.057Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0

2019-07-20T04:06:46.158Z cpu11:69346)WARNING: brcmfcoe: lpfc_sli_issue_iocb_wait:10828: 0:0330 IOCB wake NOT set, Data x24 x0
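You can look for this early-warning message on a running host before the PSOD occurs. A minimal check from the ESXi shell (assuming the default log location):

```shell
# Search the live VMkernel log for the warning message
grep "IOCB wake NOT set" /var/log/vmkernel.log

# Check host uptime, since the crash tends to occur after weeks of run time
uptime
```

If the warning appears and the host has been up for several weeks, plan remediation before the PSOD window is reached.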

A purple diagnostic screen similar to the following is displayed after 10 to 30 days:

2019-07-20T12:36:45.514Z cpu13:1603745)0x4394c509b920:[0x41800a1eee39]lpfc_sli_get_iocbq@(brcmfcoe)#+0x1d stack: 0x430698f6ccd0

2019-07-20T12:36:45.514Z cpu13:1603745)0x4394c509b940:[0x41800a1f8914]lpfc_sli4_handle_eqe@(brcmfcoe)#+0x35c stack: 0x202

2019-07-20T12:36:45.514Z cpu13:1603745)0x4394c509b9f0:[0x41800a1f9811]lpfc_sli4_intr_bh_handler@(brcmfcoe)#+0x89 stack: 0x43011d2e07f8

2019-07-20T12:36:45.514Z cpu13:1603745)0x4394c509ba20:[0x4180098d1b44]IntrCookieBH@vmkernel#nover+0x1e0 stack: 0x0

2019-07-20T12:36:45.515Z cpu13:1603745)0x4394c509bac0:[0x4180098b1db0]BH_DrainAndDisableInterrupts@vmkernel#nover+0x100 stack: 0x4394c509bba0

2019-07-20T12:36:45.515Z cpu13:1603745)0x4394c509bb50:[0x4180098d3952]IntrCookie_VmkernelInterrupt@vmkernel#nover+0xc6 stack: 0x82

2019-07-20T12:36:45.515Z cpu13:1603745)0x4394c509bb80:[0x41800992f27d]IDT_IntrHandler@vmkernel#nover+0x9d stack: 0x0

Resolution

To fix this issue, download and install the following driver and firmware components:

For VMware ESXi 6.7

Emulex(BRCM) Fibre Channel over Ethernet driver for VMware vSphere 6.7 Version 2019.03.03 (This includes driver version 12.0.1216.4)

HPE Firmware Flash for Emulex Converged Network Adapters for VMware vSphere 6.7 Version 2019.03.01 (This includes firmware 12.0.1216.0)

For VMware ESXi 6.5

Emulex(BRCM) Fibre Channel over Ethernet driver for VMware vSphere 6.5 Version 2019.03.03 (This includes driver version 12.0.1216.4)

HPE Firmware Flash for Emulex Converged Network Adapters for VMware vSphere 6.5 Version 2019.03.01 (This includes firmware 12.0.1216.0)

For VMware ESXi 6.0

VMware ESXi 6.0 brcmfcoe 12.0.1110.39 FCoE Driver for Emulex and OEM Branded Converged Network Adapters

HPE Firmware Flash for Emulex Converged Network Adapters for VMware vSphere 6.0 Version 2019.03.01 (This includes firmware 12.0.1216.0)
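A typical driver update from the ESXi shell looks like the sketch below. The bundle filename is illustrative only (use the actual file downloaded from HPE), and the host should be in maintenance mode with workloads evacuated first:

```shell
# Install the driver bundle previously uploaded to a datastore
# (filename below is a placeholder, not the real bundle name)
esxcli software vib install -d /vmfs/volumes/datastore1/brcmfcoe-driver-bundle.zip

# Reboot so the new driver takes effect
reboot

# After reboot, confirm the updated driver version is installed
esxcli software vib list | grep -i brcmfcoe
```

The firmware component is applied separately, for example through HPE SPP or the per-component firmware flash package listed above.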

Davoud Teimouri

Davoud Teimouri is a professional blogger, vExpert 2015/2016/2017/2018/2019, VCA, MCITP. This blog started with simple posts and now has a large readership.
