Introduction
DRS is configured at the cluster level.
In our pioneers.lab environment we have created a cluster called cluster16, which includes three ESXi hosts: ESXi151, ESXi152, and ESXi153.
Please refer back to our previous DRS introduction article here before configuring DRS.
DRS Network Diagram
As we usually do in all our configurations,
it's better to understand the network diagram first, to ease your configuration task.
We have cluster16, which contains three ESXi hosts: ESXi151, ESXi152, and ESXi153.
All three ESXi servers are connected to local storage in addition to shared storage [an NFS datastore and an iSCSI datastore],
and all belong to the same management subnet 172.16.x.x/16, as well as the same storage subnet 172.21.x.x/16.
Please see the network diagram below.
Configure DRS
Open vCenter > select the cluster > Configure > vSphere DRS.
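The same steps can also be scripted. Below is a minimal pyVmomi (the Python SDK for vSphere) sketch that connects to vCenter and locates cluster16; the vCenter address and credentials are placeholders for our lab, not values from this article:

```python
# Minimal pyVmomi sketch: connect to vCenter and find cluster16.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host='vcenter.pioneers.lab',  # placeholder vCenter FQDN
                  user='administrator@vsphere.local',
                  pwd='YourPassword',
                  sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory and pick up the cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'cluster16')
view.Destroy()
print('Found cluster:', cluster.name, 'with', len(cluster.host), 'hosts')
```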
Advanced DRS Configuration
If a VM has a large VMDK, it's NOT recommended to configure [SDRS] for it, since it will slow down performance.
DRS and SDRS work automatically, so we need to control DRS using [DRS rules]:
- Sometimes we need 2 VMs NOT to be on the same host: for example Exchange CAS 1 and CAS 2 [if that host goes down, then ALL CAS servers will be down and the Exchange environment is lost] → we need to configure a separate-VMs (anti-affinity) rule.
- Sometimes we need 2 VMs to be on the same host: for example a SharePoint front end and its SQL database [for best performance and faster IOPS transactions] → we need to configure a keep-VMs-together (affinity) rule.
Automation level: recommended to be Fully Automated (a scripted sketch of these settings follows this list).
Threshold: ranges from Conservative [which applies ONLY priority-1 recommendations] → to Aggressive [which applies DRS recommendations of priority 1, 2, 3, 4, and 5] → please note the default (middle) threshold is suitable for most environments.
Predictive DRS: if enabled → it relies on metrics history from VMware vROps [to be discussed later]. By default DRS acts after a host resource overload has happened, but with this feature DRS acts before the resource overload occurs.
VM Distribution: if enabled → spreads VMs evenly across hosts [NOT based on load, but based on VM quantity].
We can configure DRS load balancing based on active RAM or consumed RAM [rather than just assigned RAM].
The same idea applies for CPU [consumed rather than assigned CPU].
Distributed Power Management (DPM): mostly used by ISPs and other huge environments; it saves power by powering off underutilized hosts (see the DPM section below).
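As a rough scripted equivalent of the settings above, the sketch below enables DRS in fully automated mode with the default migration threshold, reusing the `cluster` object from the earlier sketch. The `TryBalanceVmsPerHost` key shown for VM Distribution is an assumption about the advanced option behind that checkbox:

```python
# Hedged sketch: enable DRS on the cluster (reuses `cluster` from the sketch above).
from pyVmomi import vim

drs = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior='fullyAutomated',   # or 'manual' / 'partiallyAutomated'
    vmotionRate=3,                        # migration threshold; 3 is the default level
    option=[vim.option.OptionValue(       # VM Distribution checkbox (assumed key)
        key='TryBalanceVmsPerHost', value='1')])

spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```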
How to create an [affinity rule]
- Hosts and Clusters → select the cluster → Configure → Configuration → VM/Host Rules → create a rule → select either:
- [keep VMs together]
- or [separate VMs]
- [keep VMs on a specific host]
- When creating a rule → we have to differentiate between (see the sketch after this list):
- should: meaning the rule is applied as much as possible, but the VM will still be moved if its host goes down
- must: meaning the rule is always enforced and there is NO override → so if the host goes down → the VM will NOT move and will go down as well
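For illustration, here is a hedged pyVmomi sketch that creates a separate-VMs (anti-affinity) rule for the two Exchange CAS servers; it reuses `content` and `cluster` from the first sketch, and the VM names CAS1 and CAS2 are placeholders:

```python
# Hedged sketch: create a "separate VMs" (anti-affinity) rule for two CAS VMs.
from pyVmomi import vim

def find_vm(content, name):
    """Look up a VM by display name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == name)
    view.Destroy()
    return vm

cas1, cas2 = find_vm(content, 'CAS1'), find_vm(content, 'CAS2')

# AntiAffinityRuleSpec = "separate VMs"; use AffinityRuleSpec to keep VMs together.
# For VM-to-host rules, vim.cluster.VmHostRuleInfo takes mandatory=True ("must")
# or mandatory=False ("should").
rule = vim.cluster.AntiAffinityRuleSpec(
    name='separate-exchange-cas', enabled=True, vm=[cas1, cas2])

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation='add')])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```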
Exclude a VM from DRS
Hosts and Clusters → select the cluster → Configure → Configuration → VM Overrides: this excludes a VM from DRS → and applies specific settings for that VM (a scripted sketch follows below).
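A hedged scripted version of the same override (the VM name is a placeholder, and `find_vm` comes from the rule sketch above):

```python
# Hedged sketch: exclude one VM from DRS with a per-VM override.
from pyVmomi import vim

vm = find_vm(content, 'important-vm')   # placeholder VM name

override = vim.cluster.DrsVmConfigInfo(key=vm, enabled=False)  # opt this VM out of DRS
spec = vim.cluster.ConfigSpecEx(
    drsVmConfigSpec=[vim.cluster.DrsVmConfigSpec(info=override, operation='add')])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```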
Please note: since we are using a lab and there is NO resource overload → DRS will NOT show us its added value here → but it's very important to configure it in a production environment.
DPM: Distributed Power Management
- DPM is a DRS “extension” that helps to save power (so we can be “green”).
- It recommends powering ESXi hosts off or on as CPU and memory utilization decreases or increases.
- VMware DPM also takes into consideration VMware HA settings and user-specified constraints.
- This means that, for example, if our HA tolerates one host failure → DPM will leave at least two ESXi hosts powered on.
- How does DPM communicate with an ESXi host?
- It uses wake-on-LAN (WoL) packets or these out-of-band methods: Intelligent Platform Management Interface (IPMI) or HP Integrated Lights-Out (iLO) technology.
- When DPM powers off an ESXi host, that host is marked as Standby (standby mode). A scripted sketch follows below.
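For completeness, a hedged sketch that enables DPM in automatic mode on the cluster, reusing `cluster` from the first sketch:

```python
# Hedged sketch: enable DPM in automatic mode on the cluster.
from pyVmomi import vim

dpm = vim.cluster.DpmConfigInfo(
    enabled=True,
    defaultDpmBehavior='automated')   # or 'manual' for recommendations only

spec = vim.cluster.ConfigSpecEx(dpmConfig=dpm)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```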
Monitor DRS
It's highly recommended to monitor DRS to see what is going on behind the scenes; in the UI, go to the cluster > Monitor > vSphere DRS to review recommendations, faults, and history. A scripted peek is sketched below.
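One simple scripted way to peek behind the scenes is to read the cluster's pending DRS recommendations; a hedged sketch, again reusing `cluster` from the first sketch:

```python
# Hedged sketch: print pending DRS recommendations for the cluster.
for rec in cluster.recommendation:
    print('Rating', rec.rating, '-', rec.reasonText)
    for action in rec.action:
        # e.g. a ClusterMigrationAction represents a proposed vMotion
        print('  proposed action:', type(action).__name__)
```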
Conclusion
DRS is a great feature of vSphere that balances resources in a vSphere environment automatically, without human intervention.
DRS is based on the vMotion feature, so it's highly recommended to fully understand vMotion before configuring DRS.