What is the recommended storage pool RAID configuration?
If you need high performance, data mirroring is recommended. For larger SATA drives (for example, 4 TB), CloudByte recommends RAID-Z2.
How can I expand the tpool size?
- SSH to the Node.
- Run the following command to expand the tpool size:
./expandtpool OS_disk_name
The size of the tpool increases by 10 GB.
Can I create a mixed disk pool, for example with SATA and SSD disks?
Yes, but CloudByte does not recommend it: a mixed pool delivers only the performance of its slowest disk type, in this case SATA.
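The "slowest disk wins" effect can be illustrated with a short sketch. The throughput figures below are made-up examples, not ElastiStor measurements:

```python
# Illustrative only: hypothetical per-disk throughput figures (MB/s).
DISK_THROUGHPUT = {
    "SATA": 150,   # assumed large-capacity SATA drive
    "SSD": 500,    # assumed SATA SSD
}

def striped_pool_throughput(disks, per_type=DISK_THROUGHPUT):
    """In a striped pool, every disk must keep pace with the stripe,
    so the pool is effectively bounded by its slowest member."""
    slowest = min(per_type[d] for d in disks)
    return slowest * len(disks)

# A mixed SATA+SSD stripe performs as if every disk were SATA:
print(striped_pool_throughput(["SATA", "SSD", "SSD", "SSD"]))  # 600
print(striped_pool_throughput(["SSD", "SSD", "SSD", "SSD"]))   # 2000
```

With one SATA disk in the stripe, the three SSDs contribute only SATA-level throughput, which is why CloudByte recommends homogeneous pools.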
How many log devices can I attach to a pool?
It depends on the workload. In a WRITE-intensive scenario, for instance, a larger log device (ZIL) improves WRITE performance.
How much L2ARC should I add to the pool?
It depends on the workload. In a READ-intensive scenario, for instance, a larger L2ARC improves READ performance.
How do I improve the performance of L2ARC and the log device?
L2ARC and the log device are intended to improve READ and WRITE performance respectively. Therefore, high-performance SSD storage is recommended for the L2ARC and the ZIL.
What pool do you recommend for a data protection use case?
A large RAID-Z group with high-performance data disks and one or two parity disks is ideal.
What is the recommended RAID-Z configuration?
For RAID-Z1, RAID-Z2, and RAID-Z3, CloudByte recommends a 2^n + p disk configuration, where:
- n: a positive integer; the group contains 2^n data disks.
- p: the number of parity disks; p=1 for RAID-Z1, p=2 for RAID-Z2, and p=3 for RAID-Z3.
The resulting group sizes are: RAID-Z1 = 3, 5, 9, 17, …; RAID-Z2 = 4, 6, 10, 18, …; RAID-Z3 = 5, 7, 11, 19, …
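The 2^n + p rule can be expressed as a small helper. This is a sketch; the function name and the 20-disk cap are ours, not part of ElastiStor:

```python
def recommended_raidz_widths(parity, max_disks=20):
    """Return the recommended RAID-Z group widths (total disk counts)
    following the 2^n + p rule: 2**n data disks plus p parity disks."""
    if parity not in (1, 2, 3):
        raise ValueError("RAID-Z parity must be 1, 2, or 3")
    widths = []
    n = 1
    while 2 ** n + parity <= max_disks:
        widths.append(2 ** n + parity)
        n += 1
    return widths

print(recommended_raidz_widths(1))  # [3, 5, 9, 17]
print(recommended_raidz_widths(2))  # [4, 6, 10, 18]
print(recommended_raidz_widths(3))  # [5, 7, 11, 19]
```

The output matches the recommended series above: for example, a RAID-Z2 group of 10 disks is 2^3 = 8 data disks plus 2 parity disks.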
Is single parity RAID equivalent to well known RAID 5?
Almost. Single-parity RAID-Z additionally eliminates the WRITE HOLE problem of RAID 5.
Is double parity RAID equivalent to well known RAID 6?
Almost. Double-parity RAID-Z additionally eliminates the WRITE HOLE problem of RAID 6.
What is RAID triple parity and when should I use it?
RAID triple parity tolerates the failure of three disks in a RAID group. Use it when you handle critical data and want to minimize the risk of data loss from disk failures.
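The trade-off between the parity levels (more failures tolerated versus less usable capacity) can be sketched as follows; the helper and the 11-disk example are illustrative:

```python
def raidz_summary(total_disks, parity):
    """For a RAID-Z group, report how many disk failures the group
    survives and what fraction of raw capacity is usable for data."""
    data_disks = total_disks - parity
    return {
        "tolerated_failures": parity,
        "usable_fraction": data_disks / total_disks,
    }

# An 11-disk RAID-Z3 group (2**3 data disks + 3 parity disks):
summary = raidz_summary(11, parity=3)
print(summary["tolerated_failures"])          # 3
print(round(summary["usable_fraction"], 2))   # 0.73
```

So triple parity buys the ability to lose any three disks at the cost of roughly a quarter of the group's raw capacity at this width.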
Do you recommend NFS/CIFS/ISCSI/FC on the same pool or should I have the access protocols on different pools?
It doesn’t matter; configure the protocols based on your usage and requirements.
Can a VSM span multiple pools?
Not in the current release.
Is pool IOPS derived from the disk?
Disks play a role in determining IOPS, but pool IOPS also depends on RAM (the ARC) and the L2ARC.
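Why RAM and L2ARC matter can be shown with a toy read model. The hit rates and service rates below are assumptions for illustration, not ElastiStor figures:

```python
def effective_read_iops(disk_iops, arc_hit, l2arc_hit,
                        ram_iops=1_000_000, ssd_iops=50_000):
    """Toy model: reads served from the ARC (RAM) or L2ARC (SSD) are
    much faster than reads that go to the data disks.  The effective
    rate is the inverse of the blended average service time."""
    miss = 1.0 - arc_hit - l2arc_hit
    avg_time = (arc_hit / ram_iops
                + l2arc_hit / ssd_iops
                + miss / disk_iops)
    return 1.0 / avg_time

# Pure disk vs. a pool with generous ARC/L2ARC hit rates:
print(round(effective_read_iops(500, 0.0, 0.0)))  # 500
print(round(effective_read_iops(500, 0.6, 0.3)))  # 4840
```

Even with modest disk IOPS, high cache hit rates raise the effective read rate by an order of magnitude, which is why pool IOPS cannot be derived from the disks alone.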
Can I extend the pool IOPS by extending the pool?
Can I delete a Pool with active Storage Volume?
Can I identify the failed disks in a pool?
ElastiCenter Alerts provides you with the details.
What happens if you pull a disk from a pool that has a spare disk?
The spare disk is attached to the pool.
What happens if you pull a disk from a pool that does not have a spare disk?
The Pool is left in a degraded state.
How do I disable SCSI reservation on disks?
Warning: Do not disable SCSI reservation in production environments. The following procedure is recommended only for troubleshooting on development and testing setups.
- Make an SSH connection to the Node.
- Run the following command:
If the disablescsi file exists:
- Run the following command to remove it:
- Create a new Pool.
Note: When you move the setup from a test to a production environment, ensure that all disks are SCSI reservation-enabled.
How will I clear the pools created in an older setup?
If you create a new Pool on disks consumed by an existing Pool, the existing Pool is deleted automatically. However, to manually clear Pools from an older setup, do either of the following (depending on the scenario):
- You reinstalled ElastiStor without removing the Pools: In the Nodes page, go to the Existing Pools section (at the bottom of the page) and then click the Clear link under the Clear column.
- You connected disk arrays (with Pools) to a different Node: Apply Refresh Hardware after connecting to the new Node. All existing Pools are displayed in the Nodes page. Go to the Existing Pools section (at the bottom of the page) and then click the Clear link under the Clear column.
How do I configure partial failover?
CloudByte ElastiStor supports selective failover of Pools. If partial failover is enabled and a network failure occurs, the secondary node in the cluster takes over the affected Pools.
Partial failover is enabled by default. If it is not enabled, enable it in the Pool settings:
- In ElastiCenter, select Pools in the left pane.
- In the Pools page, select the Pool for which you want to enable partial failover.
- In the Actions pane, select Settings.
- Click Edit and then enable partial failover.
Can I set AD-level authentication for a VSM?
Yes. For details, see Active Directory integration with ElastiStor.