ESXi Storage Introduction
ESXi hosts support host-level storage virtualization, which logically abstracts the physical storage layer from virtual machines.
The following storage technologies are supported by ESXi:
Direct-attached storage
- Internal hard disks or external storage systems attached to the ESXi host through a direct connection using protocols such as SAS or SATA.
- This type of storage does not require a storage network to communicate with your host.
- Does NOT support vSphere features that require shared storage, such as High Availability and vMotion (a quick way to spot local-only datastores is sketched after this list).
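One quick way to see this limitation in practice is to check which datastores can be reached by more than one host. The following is a minimal sketch in Python using the pyVmomi SDK, assuming a reachable vCenter (or ESXi host); the address and credentials are hypothetical lab values, not anything from this document.

```python
# Minimal pyVmomi sketch: list datastores and whether more than one host can access them.
# The vCenter address and credentials are hypothetical lab values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # multipleHostAccess is False (or unset) for local, direct-attached datastores
    print(f"{ds.summary.name:25} type={ds.summary.type:6} "
          f"shared={bool(ds.summary.multipleHostAccess)}")
view.Destroy()
Disconnect(si)
```

Datastores that report shared=False here are typically backed by local disks and cannot host VMs protected by HA or DRS.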
Fibre Channel
- A high-speed network technology used for storage area networks (SANs).
- It works by encapsulating SCSI commands and transmitting them between FC nodes.
- ESXi hosts should be equipped with Fibre Channel host bus adapters (HBAs); a short sketch for listing them follows.
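To confirm that a host actually has FC HBAs (and to collect their WWNs, for example for zoning), the following pyVmomi sketch walks each host's storage adapters; the vCenter address and credentials are again hypothetical.

```python
# pyVmomi sketch: list Fibre Channel HBAs and their WWNs on every host.
# Connection details are hypothetical lab values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelHba):
            # WWNs are exposed as integers; print the port WWN in hex
            print(f"{host.name}: {hba.device} ({hba.model}) "
                  f"WWPN={hba.portWorldWideName:016x}")
hosts.Destroy()
Disconnect(si)
```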
Fibre Channel over Ethernet
- A network technology that encapsulates Fibre Channel frames in Ethernet frames, so the same Ethernet link carries both FC and regular Ethernet traffic.
Internet SCSI
- iSCSI is a protocol used for encapsulating SCSI control and data in TCP/IP packets, enabling access to storage devices over standard TCP/IP networks.
- In Networks Pioneers, most of the datastore labs are based on iSCSI storage (see the sketch after this list for pointing a host at an iSCSI target).
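For the iSCSI-based labs, a software iSCSI adapter is typically pointed at the storage array using dynamic discovery (a send target). The sketch below shows roughly how that could be done with pyVmomi; the ESXi host, adapter name (`vmhba65`), target IP, and credentials are all assumptions for illustration, not values from this document.

```python
# pyVmomi sketch: add a dynamic-discovery (send) target to a software iSCSI
# adapter and rescan. The host, adapter name, target address, and credentials
# are assumptions for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]                       # single host when connecting to ESXi directly

storage = host.configManager.storageSystem
target = vim.host.InternetScsiHba.SendTarget(address="10.0.0.50", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba65", targets=[target])
storage.RescanAllHba()                         # discover LUNs behind the new target
host_view.Destroy()
Disconnect(si)
```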
Network-attached storage
- File-level storage shared over standard TCP/IP networks. Files are usually accessed using the NFS (Network File System) protocol; mounting an NFS export as a datastore is sketched below.
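Mounting an NFS export as a datastore illustrates the file-level nature of NAS. The following pyVmomi sketch mounts one export on a single host; the NAS hostname, export path, datastore name, and credentials are placeholders.

```python
# pyVmomi sketch: mount an NFS export as a datastore on one ESXi host.
# NAS address, export path, datastore name, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = host_view.view[0]

spec = vim.host.NasVolume.Specification(
    remoteHost="nas.lab.local",         # NFS server
    remotePath="/export/datastore01",   # exported path on the NAS
    localPath="NAS-datastore01",        # datastore name as seen by ESXi
    accessMode="readWrite")
host.configManager.datastoreSystem.CreateNasDatastore(spec)
host_view.Destroy()
Disconnect(si)
```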
Virtual machines use virtual disks to store their operating system, program files, and other data. Virtual disks are large files that can be copied, moved, deleted, and archived just like any other file. Each virtual disk resides on a datastore that is deployed on the physical storage. From the standpoint of a virtual machine, each virtual disk appears as if it were a SCSI drive connected to a SCSI controller. Whether the actual physical storage is accessed through storage or network adapters on the host is typically transparent to the guest operating system and its applications.
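Because a virtual disk is just a file on a datastore, you can enumerate it like any other virtual device. The sketch below (pyVmomi, with hypothetical connection details) prints each VM's virtual disks together with the .vmdk file that backs them.

```python
# pyVmomi sketch: list every VM's virtual disks and the .vmdk files backing them.
# The vCenter address and credentials are hypothetical lab values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vms.view:
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            size_gb = dev.capacityInKB / (1024 * 1024)
            # backing.fileName is the path of the .vmdk on its datastore
            print(f"{vm.name}: {dev.deviceInfo.label} {size_gb:.0f} GB -> {dev.backing.fileName}")
vms.Destroy()
Disconnect(si)
```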
ESXi Storage - Network Diagram
If you look at the network diagram above, you will see that each ESXi host has 4 datastores, as follows (a sketch that lists each host's datastores is shown after this list):
- Each ESXi host has a local (direct-attached) datastore of 400 GB.
- Each ESXi host can connect to the SAN storage [20 TB] through the network.
- Each ESXi host can connect to the iSCSI storage [15 TB] through the network.
- Each ESXi host can connect to the NAS storage [10 TB] through the network.
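A layout like this can be verified from the API: the sketch below (pyVmomi, with a hypothetical vCenter and credentials) prints every datastore each host can see, with its type and capacity, which should roughly match the four datastores per host in the diagram.

```python
# pyVmomi sketch: print every datastore each ESXi host can see, with type and size.
# The vCenter address and credentials are hypothetical lab values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    print(host.name)
    for ds in host.datastore:
        s = ds.summary
        print(f"  {s.name:20} {s.type:6} {s.capacity / 2**30:8.0f} GB")
hosts.Destroy()
Disconnect(si)
```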
ESXi Storage Protocols
| Storage type | Boot from SAN | vMotion | vSphere High Availability (HA) | Distributed Resource Scheduler (DRS) | Raw Device Mapping (RDM) |
|---|---|---|---|---|---|
| Fibre Channel | Yes | Yes | Yes | Yes | Yes |
| Fibre Channel over Ethernet (FCoE) | Yes | Yes | Yes | Yes | Yes |
| iSCSI | Yes | Yes | Yes | Yes | Yes |
| NFS (NAS) | No | Yes | Yes | Yes | No |
| Direct-attached storage | No | Yes | No | No | Yes |
Some Considerations with Storage Protocols
Direct-attached storage is sometimes used for the ESXi installation.
DAS can also be used in smaller environments that don't require shared SAN storage. Noncritical data is sometimes stored on direct-attached storage, for example CD-ROM ISO images, VM templates, decommissioned VMs, etc.
Shared storage enables you to use some advanced vSphere features such as vMotion, High Availability and Distributed Resource Scheduler.
It can also be used as a central datastore for VM files and templates, for clustering VMs across ESXi hosts, and for allocating large amounts of storage to the ESXi hosts.
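As a concrete example of what shared storage enables, the sketch below uses pyVmomi to move a VM's files onto a shared datastore with Storage vMotion; the VM name, target datastore name, and connection details are hypothetical.

```python
# pyVmomi sketch: move a VM's files onto a shared datastore (Storage vMotion).
# The VM name, target datastore, and connection details are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.Destroy()
    return obj

vm = find_by_name(vim.VirtualMachine, "web01")                # hypothetical VM
target_ds = find_by_name(vim.Datastore, "iSCSI-datastore01")  # hypothetical shared datastore

spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec))   # blocks until the migration completes
Disconnect(si)
```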