
Systems with two SATA-SSDs use the second drive for additional hot-tier NDFS data storage.
Protection Domain & Consistency Group.
Snapshot group: does not exist.
vDisk: Not relevant; you back up the full VM, not individual vDisks.
Time stream = local sync snapshots vs. site-to-site async snapshots.
When the guest VM sends a read request through the hypervisor, the Controller VM will read from the local copy first, if
present. If the host does not contain a local copy, then the Controller VM will read across the network from a host that
does contain a copy. As remote data is accessed, it will be migrated to storage devices on the current host, so that future
read requests can be local.
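A minimal sketch of this read-locality behavior (all names here are hypothetical, for illustration only, not Nutanix code):

    # Sketch of the NDFS read path described above; names are hypothetical.
    class Host:
        def __init__(self, name):
            self.name = name
            self.extents = {}  # extent_id -> data replicas held on this host

        def read(self, extent_id):
            return self.extents[extent_id]

        def store(self, extent_id, data):
            self.extents[extent_id] = data

    def read_extent(extent_id, local_host, hosts):
        """Serve a guest VM read, preferring the local replica."""
        if extent_id in local_host.extents:
            return local_host.read(extent_id)          # local copy: fast path
        remote = next(h for h in hosts if extent_id in h.extents)
        data = remote.read(extent_id)                  # read across the network
        local_host.store(extent_id, data)              # migrate so future reads are local
        return data

    a, b = Host("A"), Host("B")
    b.store("extent-1", b"vm data")
    print(read_extent("extent-1", a, [a, b]))          # remote read, then localized
    print("extent-1" in a.extents)                     # True: future reads are local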
Each node in a Nutanix cluster requires three IP addresses, one for each of the following components:
• IPMI interface
• ESXi host
• Nutanix Controller VM
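For illustration only, a hypothetical per-node address plan (every address below is a made-up example):

    # Hypothetical per-node IP plan; all addresses are example values.
    node_ips = {
        "ipmi":          "10.1.1.11",  # IPMI interface (out-of-band management)
        "esxi_host":     "10.1.2.11",  # ESXi host
        "controller_vm": "10.1.3.11",  # Nutanix Controller VM
    }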
Planned Failover or Failback
This operation does the following:
1. Creates and replicates a snapshot of the protection domain.
2. Shuts down VMs on the local site.
3. Creates and replicates another snapshot of the protection domain.
4. Unregisters all VMs and removes their associated files.
5. Marks the local site protection domain as inactive.
6. Restores all VM files from the last snapshot and registers them on the remote site.
7. Marks the remote site protection domain as active.
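A minimal sketch of that sequence, using a stub Site class (hypothetical, not a Nutanix API):

    # Hypothetical orchestration of the planned failover steps above.
    class Site:
        def __init__(self, name):
            self.name = name

        def do(self, action, pd):
            print(f"[{self.name}] {action}: {pd}")

    def planned_failover(pd, local, remote):
        local.do("snapshot + replicate", pd)                      # step 1
        local.do("shut down VMs", pd)                             # step 2
        local.do("snapshot + replicate", pd)                      # step 3
        local.do("unregister VMs, remove files", pd)              # step 4
        local.do("mark protection domain inactive", pd)           # step 5
        remote.do("restore VM files from last snapshot, register VMs", pd)  # step 6
        remote.do("mark protection domain active", pd)            # step 7

    planned_failover("pd-example", Site("local"), Site("remote"))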
E-mail notification of alerts is enabled by default and sends alert messages automatically to Nutanix customer support
through customer-opened ports 80 or 8443.
Redundancy Factor 3 means there will be three copies of data, and the cluster can tolerate the failure of any two nodes
or drives in different blocks. A redundancy factor of 3 requires that the cluster have at least five nodes, and it can be
enabled only when the cluster is created.
Replication Factor 3
In addition, containers must have replication factor 3 for guest VM data to withstand the failure of two nodes.
The data protection feature requires that a source site must be running the same or higher Nutanix OS (NOS) release as
the remote (target) site to function properly, so it may be necessary to upgrade a source site to use the selected remote
site. (A remote site can be running a lower NOS version than a source site but not the reverse.)
Nutanix recommends creating a single storage pool to hold all disks within the cluster.
Storage in a Nutanix cluster is organized into the following components:
• Storage Pools
• Containers
• vDisks
• Datastores/SMB Shares
Logging Into the Web Console
Enter http://management_ip_addr in the address field and press Enter. Replace management_ip_addr with the IP
address of any Nutanix Controller VM in the cluster.
If a cluster includes nodes with different license types, the cluster and each node in the cluster defaults to the minimum
feature set enabled by the lowest license type. For example, if two nodes in the cluster have Pro licenses and two nodes
in the same cluster have Starter licenses, all nodes will effectively have Starter licenses and access to that feature set
only. Attempts to access Pro features in this case result in a Warning in the web console.
Data Path Redundancy also responds when a local Controller VM is unavailable. To maintain the storage path, the
cluster automatically redirects the host to another Controller VM. When the local Controller VM comes back online, the
data path is returned to this VM.
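A minimal sketch of that redirection logic (hypothetical names; the real redirection is automatic cluster behavior, not user code):

    from dataclasses import dataclass

    @dataclass
    class CVM:
        name: str
        healthy: bool

    def storage_path(local_cvm, all_cvms):
        """Prefer the local Controller VM; fall back to any healthy CVM."""
        if local_cvm.healthy:
            return local_cvm
        return next(c for c in all_cvms if c.healthy)

    local, peer = CVM("cvm-1", healthy=False), CVM("cvm-2", healthy=True)
    print(storage_path(local, [local, peer]).name)  # cvm-2 while cvm-1 is down
    local.healthy = True
    print(storage_path(local, [local, peer]).name)  # back to cvm-1 once recovered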
When run from the Controller VM command line, NCC generates a log file with the output of the diagnostic commands
selected by the user.
NCC actions are grouped into plugins and modules.
• Plugins are objects that run the diagnostic commands.
• Modules are logical groups of plugins that can be run as a set.
A Nutanix Controller VM runs on each node, enabling the pooling of local storage from all nodes in the cluster.
There are separate colors for regular protection domains (green=active, gray=inactive) and protection domains created
by vStore protect (blue=active vStore, light blue=inactive vStore).
If a VM is migrated to another host, future read requests will be sent to a local copy of the data, if it exists. Otherwise,
the request is sent across the network to a host that does contain the requested data. As remote data is accessed, it will
be migrated to storage devices on the current host, so that future read requests can be local.
The first character indicates whether the log entry is an Info, Warning, Error, or Fatal. The next four characters indicate
the day on which the entry was made. For example, if an entry starts with F0820, it means that at some time on August
20th, the component had a failure.
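A minimal sketch of parsing that prefix (the fields after the first five characters are an assumption for illustration):

    import re

    # Severity letter (I/W/E/F) followed by MMDD, e.g. "F0820 ..."
    LOG_PREFIX = re.compile(r"^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})")
    SEVERITIES = {"I": "Info", "W": "Warning", "E": "Error", "F": "Fatal"}

    def parse_prefix(line):
        """Return (severity, month, day) from a log line, or None if no match."""
        m = LOG_PREFIX.match(line)
        if not m:
            return None
        return SEVERITIES[m.group("sev")], int(m.group("month")), int(m.group("day"))

    print(parse_prefix("F0820 12:01:44 stargate] disk failure"))  # ('Fatal', 8, 20)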
Check cluster status: Verify that all services are up on all Controller VMs.
Cluster Check (NCC): Run NCC before handing off the cluster to Nutanix Support.
Product Mixing Restrictions:
• All nodes in a cluster must be the same hypervisor type (ESXi, KVM, or Hyper-V).
• All Controller VMs in a cluster must have the same NOS version.
Configuring a Remote Site (Physical Cluster)
• vStore name mappings
• Compress on the wire
Stargate: Responsible for all data management and I/O operations; the main interface from the hypervisor (via NFS, iSCSI, or SMB).
Cassandra: Stores and manages all of the cluster metadata in a distributed ring-like manner, based upon a heavily modified Apache Cassandra.
Curator: Responsible for managing and distributing tasks throughout the cluster, including disk balancing, proactive scrubbing, and many more items.
Pithos: Responsible for vDisk (DSF file) configuration data. Pithos runs on every node and is built on top of Cassandra.
Prism: The management gateway for components and administrators to configure and monitor the Nutanix cluster. Interfaces include nCLI, the HTML5 UI, and the REST API.
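A minimal sketch of querying Prism's REST API with Python (the endpoint path, port 9440, and credentials are assumptions; confirm against your NOS version's API documentation):

    import requests
    from requests.auth import HTTPBasicAuth

    PRISM = "https://10.1.3.11:9440"  # example Controller VM address; 9440 is the usual Prism port
    AUTH = HTTPBasicAuth("admin", "password")  # placeholder credentials

    # The v1 cluster endpoint below is an assumption for illustration.
    resp = requests.get(
        f"{PRISM}/PrismGateway/services/rest/v1/cluster",
        auth=AUTH,
        verify=False,  # Prism often uses self-signed certs; verify properly in production
    )
    resp.raise_for_status()
    info = resp.json()
    print(info.get("name"), info.get("version"))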
When a protection domain is inactive, backups (scheduled snapshots) in that protection domain are disabled.
Single storage pool with multiple containers.
Reconfig
Fingerprinting is done during data ingest for writes with an I/O size of 64K or greater; smaller writes are not fingerprinted, so that data is not deduplicated on either tier.
• The task is a new feature that has not yet been incorporated into the web console.
• The task is part of an advanced feature that most administrators do not need to use.
The shortest possible snapshot frequency is once per hour.
Download Foundation (multi-node installation tool), Phoenix (Nutanix Installer ISO), and hypervisor ISO image files to a
workstation.
Time stream: snapshots are stored on the same cluster as the source VM.
Boot drive failure will eventually cause the Controller VM to fail. The host does not access the boot drive directly, so
other guest VMs can continue to run. Data Path Redundancy will redirect the storage path to another Controller VM.
Local datastore name: a nonconfigurable component.
Health/VM/Alerts
This feature invokes VSS to create an application-consistent snapshot for a VM and is limited to consistency groups with just a single VM.
• The cluster comprises three or more blocks (unless the cluster was created with redundancy factor 3, in which case it must comprise five or more blocks).
• Every storage tier in the cluster contains at least one drive on each block.
• Every container in the cluster has replication factor of at least 2.
• The storage tiers on each block in the cluster are of comparable size.
2 HDDs on different nodes
In a cluster with a replication factor 2, losing two drives on different nodes and in the same storage tier means that
some VM data extents could lose both replicas.
Cluster not started
Planned/Disaster/Failback
1&3
Consistency Group named after the VM
VMs / containers (metro)
VMs/Hosts
If compression is enabled, deduplication will not take effect.