Nakivo Backup & Replication – Command Line Interface – Part 1

Nakivo Backup & Replication has become one of my favorite backup and replication suites for vSphere environments. Many small businesses use it to keep their data safe, prevent downtime during disaster recovery, and recover any file or object from the services hosted on their virtual machines.

I have written several posts about Nakivo Backup & Replication before, and if you are not familiar with the product, I suggest reading those previous posts first.

Nakivo BR has many features, and those features become even more useful when you have an automation tool that makes management tasks easier and faster. Fortunately, Nakivo BR ships with one out of the box: a command line interface.

How can you access the CLI?

There are three ways to access the CLI and run your commands:

Using Command Line Interface Locally

To use the product’s command line interface (CLI) on the machine where NAKIVO Backup & Replication Director is installed, follow the steps below:

  • Run the CLI executable:
    • If NAKIVO Backup & Replication is installed on a Windows OS, run the cli.bat file located in the bin folder inside the product installation folder (“C:\Program Files\NAKIVO Backup & Replication” by default).
    • If NAKIVO Backup & Replication is installed on a Linux OS, run the cli.sh file located in the director/bin folder inside the product installation folder (“/opt/nakivo/” by default).
      • If you have SSH access, you can also run the CLI in an SSH session.
      • Also, if you have deployed your Director as a virtual appliance, the CLI is accessible via the Director menu:

Nakivo CLI

  • Run your commands (see the example below).
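
For example, a minimal local run on a Linux installation might look like the sketch below. It assumes the default installation path mentioned above, the default Director Web HTTPS port 4443 and the admin/admin credentials used in the remote example later in this post; if your build accepts local commands without connection arguments, the --host/--port/--username/--password part can simply be dropped.

# run on the Director machine itself (Linux installation, default path)
cd /opt/nakivo/director/bin
./cli.sh --job-list --host 127.0.0.1 --port 4443 --username admin --password admin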

Using Command Line Interface Remotely

To use the product’s command line interface (CLI) from a remote machine, follow the steps below:

  • Copy the CLI executable and jar files to the machine from where you plan to use the CLI:
    • If NAKIVO Backup & Replication is installed on a Windows OS, copy the cli.bat and cli.jar files located in the bin folder inside the product installation folder (“C:\Program Files\NAKIVO Backup & Replication” by default).
    • If NAKIVO Backup & Replication is installed on a Linux OS, copy the cli.sh and cli.jar files located in the director/bin folder inside the product installation folder (“/opt/nakivo/” by default).
  • On the machine from where you plan to use the CLI, configure the PATH system variable as described at http://java.com/en/download/help/path.xml
  • Run commands using the following format: <command> <host> <port> <username> <password>

Example: To get a list of jobs from a product instance that is installed on the machine with the IP address 192.168.10.10, uses port 4443 as the Director Web HTTPS port, and has “admin” as both the login and the password for the product’s web UI, run the following command:

--job-list --host 192.168.10.10 --port 4443 --username admin --password admin
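
On a machine where you copied cli.sh and cli.jar, the same call might look like the sketch below, assuming both files sit in the current directory and Java is on the PATH as described above:

# run from the remote machine where cli.sh and cli.jar were copied
./cli.sh --job-list --host 192.168.10.10 --port 4443 --username admin --password admin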

Using Command Line Interface in Multi-Tenant Mode

Triggering an action inside a tenant in multi-tenant mode via the command line interface requires providing a tenant ID as an argument:

cli.bat --repository-detach [repo_id] --username [login] --password [password] --tenant [tenant-id]
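
For instance, a sketch using the Linux cli.sh form (on Windows substitute cli.bat); the repository ID “Repo1” and tenant ID “Tenant1” below are hypothetical placeholders, so substitute the real IDs from your own deployment:

# detach a repository inside one tenant of a multi-tenant deployment (hypothetical IDs)
./cli.sh --repository-detach Repo1 --username admin --password admin --tenant Tenant1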

Updated: November 30, 2016 — 1:02 pm

Memory Ballooning Problem – Windows Server 2008 R2

It seems there is an incompatibility issue between Windows Server 2008 R2 and the VMware ballooning driver, and it causes a stop error on Windows:

STOP: 0x0000000A (IRQL_NOT_LESS_OR_EQUAL)

As Microsoft describes in its KB article, the issue occurs when ballooning is activated on virtual machines that use NUMA.

We already know that NUMA can improve machine performance by granting local access to memory.
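
If you want to check whether the balloon driver is actively reclaiming memory inside a guest, the VMware Tools command line utility can report the currently ballooned amount; on a Windows Server 2008 R2 guest the equivalent utility is VMwareToolboxCmd.exe in the VMware Tools installation folder. A minimal sketch for a Linux guest:

# amount of guest memory currently reclaimed by ballooning, in MB
vmware-toolbox-cmd stat balloon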

You can read my post about NUMA for more information:

NUMA And vNUMA – Back To The Basic

Microsoft has released a hotfix for fixing this issue on Windows Server 2008 R2.

It’s strongly recommended to download and install the hotfix on all of your virtual machines that have Windows Server 2008 R2 as the guest OS.

You can download the hotfix from the link below:

Hotfix Download

Updated: November 26, 2016 — 9:01 am

Supported Servers – vSphere 6.5

Which server brand do you use? HPE, Dell, Fujitsu or any other? It doesn’t matter: you should check your servers’ compatibility with the new vSphere version before planning a migration or upgrade.

I don’t want to share a server list here, because the list will change over time as new servers are added to it.

You can find supported servers in the VMware Compatibility Guide, which is the best reference for server compatibility.

You can also check it on the OEM websites:

  1. HPE: VMware Support Matrix
    • You just need to choose your ESXi version on the web page and trust the result!
  2. Dell: Virtualization Solutions
    • Choose the VMware ESXi version, then click on “Manual” and download a PDF that contains the list of compatible servers.
  3. Cisco: UCS Hardware and Software Interoperability Matrix Tool (New)
  4. Fujitsu: I couldn’t find a tool on their website, so we have to download a PDF file and look for our product. Sample link for FUJITSU Server PRIMERGY: x86 Servers released OS
  5. Lenovo (IBM): OS Interoperability Guide

I know there are more OEM vendors, and you may be using another server brand. Please share your server compatibility matrix with me so I can share it with our readers.

I also want to keep this post updated, so other vendors will be added in the future.

Updated: November 25, 2016 — 8:33 pm

VMware Hardware Version 13

Each new version of vSphere includes improvements and new features, and many of them apply to virtual machines. These improvements and features are added to a new “Hardware Version”, and you can only use them if your virtual machines run the latest hardware version.

It’s strongly recommended that you do not upgrade your hardware version to the latest release unless you need a specific feature, or need to expand hardware resources beyond what the older hardware version supports.

This is because a newer “Hardware Version” is not compatible with older ESXi hosts, so if you have a mixed cluster, you can’t use the latest hardware version.

Here is an example:

Suppose you have a cluster that contains a mix of ESXi 6.5, 6.0 and 5.5 hosts. If you upgrade a virtual machine’s hardware version to 11, that machine can be hosted by the ESXi 6.0 and ESXi 6.5 hosts, but it can no longer be migrated to the ESXi 5.5 hosts.

So, keep your hardware version compatible with the oldest ESXi in your environment. You can downgrade a hardware version, but it’s not recommended.

To stay in control of hardware versions, you can change the default hardware version for new virtual machines on your cluster at any time.
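
If you want to see which hardware version your existing virtual machines are using, one quick option is the ESXi shell (PowerCLI and the web client show the same information); a minimal sketch, assuming SSH access to a host:

# list registered VMs; the "Version" column shows the hardware version, e.g. vmx-11
vim-cmd vmsvc/getallvms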

Let’s review the new hardware version, compare it with the older versions, and check compatibility with ESXi:

| Feature | ESXi 6.5 and later | ESXi 6.0 and later | ESXi 5.5 and later | ESXi 5.1 and later | ESXi 5.0 and later | ESX/ESXi 4.x and later | ESX/ESXi 3.5 and later |
|---|---|---|---|---|---|---|---|
| Hardware version | 13 | 11 | 10 | 9 | 8 | 7 | 4 |
| Maximum memory (GB) | 6128 | 4080 | 1011 | 1011 | 1011 | 255 | 64 |
| Maximum number of logical processors | 128 | 128 | 64 | 64 | 32 | 8 | 4 |
| Maximum number of cores (virtual CPUs) per socket | 128 | 128 | 64 | 64 | 32 | 8 | 1 |
| Maximum SCSI adapters | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Bus Logic adapters | Y | Y | Y | Y | Y | Y | Y |
| LSI Logic adapters | Y | Y | Y | Y | Y | Y | Y |
| LSI Logic SAS adapters | Y | Y | Y | Y | Y | Y | N |
| VMware Paravirtual controllers | Y | Y | Y | Y | Y | Y | N |
| SATA controllers | 4 | 4 | 4 | N | N | N | N |
| NVMe controllers | 4 | N | N | N | N | N | N |
| Virtual SCSI disk | Y | Y | Y | Y | Y | Y | Y |
| SCSI passthrough | Y | Y | Y | Y | Y | Y | Y |
| SCSI hot plug support | Y | Y | Y | Y | Y | Y | Y |
| IDE nodes | Y | Y | Y | Y | Y | Y | Y |
| Virtual IDE disk | Y | Y | Y | Y | Y | Y | N |
| Virtual IDE CD-ROMs | Y | Y | Y | Y | Y | Y | Y |
| IDE hot plug support | N | N | N | N | N | N | N |
| Maximum NICs | 10 | 10 | 10 | 10 | 10 | 10 | 4 |
| PCNet32 | Y | Y | Y | Y | Y | Y | Y |
| VMXNet | Y | Y | Y | Y | Y | Y | Y |
| VMXNet2 | Y | Y | Y | Y | Y | Y | Y |
| VMXNet3 | Y | Y | Y | Y | Y | Y | N |
| E1000 | Y | Y | Y | Y | Y | Y | Y |
| E1000e | Y | Y | Y | Y | Y | N | N |
| USB 1.x and 2.0 | Y | Y | Y | Y | Y | Y | N |
| USB 3.0 | Y | Y | Y | Y | Y | N | N |
| Maximum video memory (MB) | 2048 | 2048 | 512 | 512 | 128 | 128 | 128 |
| SVGA displays | 10 | 10 | 10 | 10 | 10 | 10 | 1 |
| SVGA 3D hardware acceleration | Y | Y | Y | Y | Y | N | N |
| VMCI | Y | Y | Y | Y | Y | Y | N |
| PCI passthrough | 16 | 16 | 6 | 6 | 6 | 6 | 0 |
| PCI hot plug support | Y | Y | Y | Y | Y | Y | N |
| Nested HV support | Y | Y | Y | Y | N | N | N |
| vPMC support | Y | Y | Y | Y | N | N | N |
| Serial ports | 32 | 32 | 4 | 4 | 4 | 4 | 4 |
| Parallel ports | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| Floppy devices | 2 | 2 | 2 | 2 | 2 | 2 | 2 |

Updated: November 24, 2016 — 8:05 pm

Deprecated and unsupported – Qlogic and Emulex devices

VMware has published a list that includes unsupported and deprecated devices from two vendors:

  1. Emulex
  2. Qlogic

Deprecated devices may still work and their drivers will still be installed, but those devices are not officially supported on vSphere 6.5.

You need to upgrade your hardware before upgrading vSphere, but it’s your choice! Your device may keep working without any issue.
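
To check whether any of your hosts actually carry one of the listed device IDs, you can dump the PCI inventory from the ESXi shell and compare the vendor, device and subsystem IDs against the table below. A minimal sketch, assuming SSH access to the host:

# list PCI devices with their names and vendor/device/subsystem IDs
esxcli hardware pci list | grep -E 'Device Name|Vendor ID|Device ID|SubVendor ID|SubDevice ID'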

You can find the deprecated and unsupported devices in the below table:

| Partner | Driver Name | Device IDs | Device Name |
|---|---|---|---|
| Emulex | lpfc | 10DF:F0E5:0000:0000 | Emulex LPe1105-M4 4 Dual-Channel 4Gb/s Fibre Channel HBA |
| Emulex | lpfc | 10DF:F0E5:0000:0000 | Emulex LPe1150 Single-Channel 4Gb/s Fibre Channel HBA |
| Emulex | lpfc | 10DF:F0E5:0000:0000 | Emulex LPe1150 4Gb/s Fibre Channel Adapter |
| Emulex | lpfc | 10DF:F0E5:10DF:F0E5 | Emulex LPe1150 Single-Channel 4Gb/s Fibre Channel HBA |
| Emulex | lpfc | 10DF:F0E5:10DF:F0E5 | LPe1150-E Emulex LPe1150 Single-Channel 4Gb/s Fibre Channel HBA for Dell and EMC |
| Emulex | lpfc | 10DF:FE00:0000:0000 | LPe11002 4Gb Fibre Channel Host Adapter |
| Emulex | lpfc | 10DF:FE00:0000:0000 | NE3008-102 |
| Emulex | lpfc | 10DF:FE00:0000:0000 | NE2000-001 |
| Emulex | lpfc | 10DF:FE00:0000:0000 | Emulex LPe11000 4Gb PCIe Fibre Channel Adapter |
| Emulex | lpfc | 10DF:FE00:10DF:FE00 | Emulex LPe11002 Dual-Channel 4Gb/s Fibre Channel HBA |
| Emulex | lpfc | 10DF:FE00:10DF:FE00 | N8403-018 |
| Emulex | lpfc | 10DF:FE00:10DF:FE00 | EMC LPe11000-E |
| Emulex | lpfc | 10DF:FE00:10DF:FE00 | EMC LPe11002-E |
| Emulex | lpfc | 10DF:FE00:10DF:FE00 | Emulex LPe11000 Single-Channel 4Gb/s Fibre Channel HBA |
| Emulex | lpfc | 10DF:FE00:10DF:FE22 | Emulex L1105-M Emulex LPe1105-M4 Dual-Channel 4Gb/s Fibre Channel mezzanine card for Dell PowerEdge |
| Emulex | lpfc | 10DF:FE00:103c:1708 | 403621-B21 Emulex LPe1105-HP Dual-Channel 4Gb/s Fibre Channel mezzanine card for HP BladeSystem c-Cl |
| Emulex | lpfc | 10DF:FE00:10DF:FE00 | A8002A – FC2142SR Emulex LPe1150-F4 Single Channel 4Gb/s Fibre Channel HBA for HP |
| Emulex | lpfc | 19A2:0704:0000:0000 | Emulex OneConnect OCe10100 FCoE Initiator |
| Emulex | lpfc | 19A2:0704:1137:006E | M72KR-E Emulex OneConnect OCe10102 10GbE FCoE Mezzanine CNA for Cisco UCS-B Servers |
| Emulex | lpfc | 19A2:0704:10DF:E630 | OCe10102-FM-E Emulex OneConnect 10GbE FCoE, iSCSI, NIC CNA for EMC VNX (including VNXe) Symmetrix an |
| Emulex | lpfc | 19A2:0704:10DF:E630 | OCe10102-FX-E Emulex OneConnect 10GbE FCoE, iSCSI, NIC CNA for EMC VNX (including VNXe) Symmetrix an |
| Emulex | elxnet | 19A2:0211:0000:0000 | Emulex OneConnect OCe10100 NIC |
| Emulex | elxnet | 19A2:0700:0000:0000 | Emulex OneConnect OCe10100 NIC |
| Emulex | elxnet | 19A2:0700:103C:1746 | HP NC550m Dual Port Flex-10 10Gbe BL-c Adapter |
| Emulex | elxnet | 19A2:0700:103C:1747 | HP NC550SFP Dual Port 10GbE Server Adapter |
| Emulex | elxnet | 19A2:0700:103C:1748 | Emulex OneConnect OCm10102-I-HP, NIC |
| Emulex | elxnet | 19A2:0700:103C:1749 | Emulex OneConnect OCe10102-I-HP, NIC |
| Emulex | elxnet | 19A2:0700:103C:174A | HP NC551m Dual Port FlexFabric 10Gb Network Adapter |
| Emulex | elxnet | 19A2:0700:103C:174B | HP CN1000E Converged Network Adapter |
| Emulex | elxnet | 19A2:0700:103C:3314 | HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter |
| Qlogic | qla4xxx | 1077:4022:0000:0000 | QLA4022 |
| Qlogic | qla4xxx | 1077:4022:1077:0122 | QLA4050 |
| Qlogic | qla4xxx | 1077:4022:1077:0124 | QLogic QLA4050C |
| Qlogic | qla4xxx | 1077:4022:1077:0124 | QLA4050C-E-SP |
| Qlogic | qla4xxx | 1077:4022:1077:0124 | QLA4050C-HDS-SP |
| Qlogic | qla4xxx | 1077:4022:1077:0124 | IBM iSCSI Server TX adapter (30R5201) (QLA4050C) |
| Qlogic | qla4xxx | 1077:4022:1077:0124 | QLA4050C |
| Qlogic | qla4xxx | 1077:4022:1077:0128 | QLogic QLA4052C |
| Qlogic | qla4xxx | 1077:4022:1077:0128 | QLA4052C-E-SP |
| Qlogic | qla4xxx | 1077:4022:1077:0128 | QLA4052C-HDS-SP |
| Qlogic | qla4xxx | 1077:4022:1077:0128 | QLA4052C |
| Qlogic | qla4xxx | 1077:4022:1077:012E | QLogic iSCSI Expansion Card for BladeCenter (32R1923) (QMC4052R) |
| Qlogic | qla4xxx | 1077:4032:1077:014F | QLogic QLE4060C |
| Qlogic | qla4xxx | 1077:4032:1077:014F | QLE4060C |
| Qlogic | qla4xxx | 1077:4032:1077:014F | QLE4060C-E-SP |
| Qlogic | qla4xxx | 1077:4032:1077:014F | QLogic iSCSI Single-Port PCIe HBA for IBM System x (39Y6146) (QLE4060C) |
| Qlogic | qla4xxx | 1077:4032:1077:0158 | QLE4062C |
| Qlogic | qla4xxx | 1077:4032:1077:0158 | QLE4062C-E-SP |
| Qlogic | qla4xxx | 1077:4032:1077:0158 | QLogic iSCSI Dual Port PCIe HBA for IBM System x (42C1770) (QLE4062C) |
| Qlogic | qla4xxx | 1077:4032:1077:0158 | QLogic QLE4062C |
Updated: November 24, 2016 — 6:43 pm