Azure: Proceed with MayaNAS High Availability configuration only after setting up Service Principal login on all the MayaScale instances.
AWS: Ensure the MayaNAS instance is running with the custom IAM role that provides the permissions required for the HA role.
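One way to satisfy these prerequisites is from a shell on each instance. This is a hedged sketch: the app ID, secret, and tenant are placeholders, not values from this guide.

```shell
# Azure: log in with a Service Principal (placeholder values;
# substitute your own application ID, secret, and tenant ID).
az login --service-principal \
    --username <app-id> \
    --password <client-secret> \
    --tenant <tenant-id>

# AWS: confirm the instance is running under the intended IAM role.
aws sts get-caller-identity
```

If the AWS call reports an unexpected role, attach the custom IAM role to the instance before continuing.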
Getting Started
Connect to the Administration web console available at http://<mayascale1-ip>:2020, log in as admin, and click Getting Started
Click Next to proceed with Choose Storage Node
Step 1 → Choose Storage Node
- Select 2-Node HA
- Select Storage Pool Type:
Volume Group provides robust block storage
Choose ZFS Pool for an advanced file system with compression, snapshots, and replication
Click Next to proceed with High Availability
Provide configuration details for High Availability
- Active/Passive is selected by default
- Primary: Hostname is filled in automatically
- Secondary: Enter the hostname; it must match the output of uname -n on that node
- Resource ID: Enter any number from 1 to 255
- Virtual IP address: Enter a valid IPv4 address outside this subnet's address range
- Heartbeat Link1: (Left) Select the interface to use and enter the IP address of the secondary node
- Heartbeat Link1: (Right) Select the interface to use and enter the IP address of the primary node
Provide Heartbeat Link2 if more interfaces are available
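Before entering these values, it can help to confirm them from a shell on each node. A minimal check (the heartbeat IP shown is a placeholder for your secondary node's address):

```shell
# Print this node's hostname; the Secondary field in the wizard
# must match this output exactly when run on the secondary node.
uname -n

# Illustrative only: confirm the peer's heartbeat interface is
# reachable (replace 10.0.1.12 with your secondary's heartbeat IP).
# ping -c 1 10.0.1.12
```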
Click Next to proceed to the next step.
Please wait while the High Availability configuration is applied to both MayaScale instances. This typically takes around 30 seconds to finish.
Click Next to proceed to the next step.
Step 2 → Configure Primary mirror NVMe Target
Provide details for Primary NVMe Target
- Node Name: Click and append the .nvme1 suffix
- Portal Tag: Click and accept default value 1
- Interfaces: Select the interface where the backend NVMe mirroring traffic will occur
- IP Address: This field will be filled automatically once interface is selected
- Port Number: Click and accept the default value 4422. This keeps it distinct from the default NVMe/TCP port 4420, which will be used by NVMe clients
- Authorized access only: Leave this checkbox unchecked
- Disks: Click to select all the available NVMe disks
Click Next to proceed to the next step.
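As an optional sanity check once the primary target is up, it can be probed from the secondary node with nvme-cli. The address below is a placeholder for the primary node's backend mirroring IP:

```shell
# Discover the primary mirror target on the non-default port 4422
# (10.0.0.11 is a placeholder for the primary's backend IP).
nvme discover -t tcp -a 10.0.0.11 -s 4422
```

The discovery log should list one subsystem per exported NVMe disk.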
Step 3 → Configure Secondary mirror NVMe Target
Provide details for Secondary NVMe Target
- Node Name: Click and append the .nvme1 suffix
- Portal Tag: Click and accept default value 1
- Interfaces: Select the interface where the backend NVMe mirroring traffic will occur
- IP Address: This field will be filled automatically once interface is selected
- Port Number: Click and accept the default value 4422. This keeps it distinct from the default NVMe/TCP port 4420, which will be used by NVMe clients
- Authorized access only: Leave this checkbox unchecked
- Disks: Click to select all the available NVMe disks
Click Next to proceed to the next step.
Select mirror disks for the storage pool
Select NVMe disks to pair and click Mirror to establish mirroring
Click Next to proceed to the next step.
Step 4 → Configure Storage Pool
Provide Storage Pool details
- Label: Name for the storage pool
- Description: Optionally enter some description for this pool
Rest of the fields will be automatically filled. Click Next to proceed to the next step.
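If you chose the ZFS Pool type, the resulting pool can also be inspected from the shell. The pool name pool1 is a hypothetical label, not one from this guide:

```shell
# Show pool health and the mirror vdev built from the paired
# NVMe disks (pool1 is a placeholder for your pool label).
zpool status pool1

# List capacity and usage for the new pool.
zpool list pool1
```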
Step 5 → Configure Volume
Select the Storage Pool where new volumes will be provisioned
- Click the checkbox under the Name column of the storage pool
- Volume Type:
Click Block for block volume sharing over the iSCSI or NVMe/TCP protocol
Click File System for file system sharing using the NFS or SMB protocol
Click Next to proceed to the next step.
Create Volume
Provide new volume details
- Label: Enter the volume name
- Description: Optionally enter some description for this volume
- ZVOL Options: Select appropriate options for the new volume
LZ4 Compression: Select if compression is preferred
Sparse: Select if a thin-provisioned volume is preferred
Sync: Select the option for data consistency
always → writes are acknowledged only after being committed to stable storage
standard → writes are committed to stable storage only when the client requests it
never → all writes are fully asynchronous
- Volume Size: Enter the desired volume size. For a Sparse volume this can exceed the available storage pool size
- Sequence Volume: Select if more than one volume is to be created in sequence.
For example, the default of 3 with label nvlun will create 3 new volumes: nvlun, nvlun1, and nvlun2
Click Next to proceed to the next step.
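The ZVOL options above map onto standard ZFS properties. A hypothetical command-line equivalent of creating one such volume (the pool name, volume name, and size are placeholders):

```shell
# Create a sparse (-s), LZ4-compressed, always-sync 100G zvol
# named nvlun under pool1 -- the equivalent of selecting Sparse,
# LZ4 Compression, and Sync=always in the wizard.
zfs create -s -V 100G \
    -o compression=lz4 \
    -o sync=always \
    pool1/nvlun
```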
Step 6 → Configure Volume Mapping
Provide details on how the new volumes will be shared
- Volume: Select the volume to be mapped
- Controller:
Choose iscsi or nvme-tcp for block volumes
Choose NFS or SMB for file systems
Click Next to proceed to the next step.
Provide details on how the new volumes will be shared
- New NVMe Target: Click this radio button to create a new NVMe target for the first mapping so that it uses the Virtual IP address;
otherwise the mapping will be a local resource only and will not be controlled by the HA failover scripts.
Add a custom name after the nqn.2022-20.com.zettalane: prefix
- Available: Select the VIP address
- IP Address: The VIP address will appear
- Portal Tag: A unique number for this cluster-wide target. The default choice is the cluster ID value
- CID: Controller ID for the NVMe target
- Authorized access only: Leave this unchecked to allow access from all clients
Click Next to proceed to the next step.
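From a client, an NVMe/TCP mapping created this way is reached through the Virtual IP on the default port 4420. A sketch with placeholder values (the NQN suffix mytarget and the VIP 10.0.0.100 are illustrative):

```shell
# Connect to the HA target through the Virtual IP
# (NQN suffix and VIP address are placeholders).
nvme connect -t tcp \
    -n nqn.2022-20.com.zettalane:mytarget \
    -a 10.0.0.100 -s 4420

# Confirm the mapped namespaces are visible on the client.
nvme list
```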
Confirm Volume Mappings
Review the mapping details you provided
If there are more mappings to specify for other volumes, click the Configure more mapping checkbox and the next step will present options to provide them.
Click Next to proceed to the next step.
Final review of the mapping details for the new volumes
Click Finish to complete the Getting Started wizard.
Click Add or Remove Mappings to make sure the new volumes are activated and ready for access from clients.
Occasionally the mappings may not be active due to a delay in activation of the Virtual IP address in the Azure environment. In that case you may activate them manually by selecting the volume and clicking Bind.
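When a mapping stays inactive, one quick check is whether the Virtual IP is actually bound on the active node. The VIP shown is a placeholder:

```shell
# List addresses on the active node; the VIP should appear on one
# of the interfaces once HA has activated it (placeholder VIP).
ip addr show | grep 10.0.0.100
```

If the VIP is absent, wait for the cloud-side address move to complete or use Bind as described above.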