Breadcrumb Build – Region A VVD Virtual Infrastructure on VxRail Part 7: Adjusting the Shared Edge/Compute VxRail

Welcome to Part 7 of my VVD on VxRail Breadcrumb Build Series

As with the first VxRail we deployed for management use, the Shared Edge and Compute (SEC) VxRail also needs some minor tweaks before we can consider it VVD aligned. Thankfully, the list is shorter than it was for the management VxRail, so we will get through this relatively quickly.

Anything in the format [input_value] represents a value from your preparation that you should insert (without the square brackets, of course).

Set SDDC Deployment Details on the Shared Edge and Compute vCenter Server

  • Log into [sec-vcenter-fqdn]
  • Home -> Global Inventory Lists -> vCenter Servers -> Resources
  • Select [sec-vcenter-fqdn]
  • Configure -> Settings -> Advanced Settings -> Edit
  • Add
    • config.SDDC.Deployed.Type: VVD
    • config.SDDC.Deployed.Flavor: Standard
    • config.SDDC.Deployed.vvd_vc_conversion: 4.3.0
    • config.SDDC.Deployed.WorkloadDomain: SharedEdgeAndCompute
    • config.SDDC.Deployed.Method: DIY
    • config.SDDC.Deployed.InstanceId: [generated-random-uid]
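
If you would rather script this step, here is a minimal pyVmomi sketch that writes the same advanced settings. The hostname, credentials and instance ID below are placeholders, not real values; substitute your [sec-vcenter-fqdn] and [generated-random-uid].

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholder connection details - use your [sec-vcenter-fqdn] and credentials
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sec-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

# The same key/value pairs as the manual steps above
sddc_settings = {
    "config.SDDC.Deployed.Type": "VVD",
    "config.SDDC.Deployed.Flavor": "Standard",
    "config.SDDC.Deployed.vvd_vc_conversion": "4.3.0",
    "config.SDDC.Deployed.WorkloadDomain": "SharedEdgeAndCompute",
    "config.SDDC.Deployed.Method": "DIY",
    "config.SDDC.Deployed.InstanceId": "replace-with-your-generated-uid",
}

# vCenter advanced settings live in the OptionManager (si.content.setting)
changes = [vim.option.OptionValue(key=k, value=v) for k, v in sddc_settings.items()]
si.content.setting.UpdateOptions(changedValue=changes)

Disconnect(si)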

Add Shared Edge and Compute vSphere Licenses

Quite likely you already have the licenses added from earlier work, but just in case you are using a different license for the SEC VxRail, add it here.

  • VC -> Home -> Administration -> Licenses
  • On Licenses tab -> Create New Licenses
  • Add and Assign Licenses
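
Licenses can also be added and assigned programmatically. A rough pyVmomi sketch follows; the connection details and license key are placeholders, and for simplicity the key is assigned to the vCenter instance itself, so adjust the entity if you are licensing hosts instead.

from pyVim.connect import SmartConnect, Disconnect
import ssl

# Placeholder connection details - use your [sec-vcenter-fqdn] and credentials
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sec-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

license_key = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"   # placeholder vSphere license key

# Add the key to the vCenter license inventory
lm = si.content.licenseManager
lm.AddLicense(licenseKey=license_key)

# Assign it - here to the vCenter instance itself, identified by its instance UUID
lam = lm.licenseAssignmentManager
lam.UpdateAssignedLicense(entity=si.content.about.instanceUuid,
                          licenseKey=license_key)

Disconnect(si)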

Add the Shared Edge and Compute vCenter to the vCenter Servers VM Group

  • Select [mgmt-cluster] -> Configure -> VM/Host Group -> Select vCenter Servers group -> Edit -> Add [sec-vcenter-vmname]
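
Editing a DRS VM group is a little fiddly in the API, so here is a hedged pyVmomi sketch of the same change. It runs against the management vCenter, since the vCenter Servers group lives on [mgmt-cluster]; the cluster, group and VM names are placeholders for your own values.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholder connection details - this runs against the management vCenter
ctx = ssl._create_unverified_context()
si = SmartConnect(host="mgmt-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

def find_obj(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_obj(si.content, vim.ClusterComputeResource, "mgmt-cluster")   # [mgmt-cluster]
sec_vc_vm = find_obj(si.content, vim.VirtualMachine, "sec-vcenter-vmname")   # [sec-vcenter-vmname]

# Pull the existing 'vCenter Servers' group, append the SEC vCenter VM, push it back
group = next(g for g in cluster.configurationEx.group if g.name == "vCenter Servers")
group.vm.append(sec_vc_vm)

spec = vim.cluster.ConfigSpecEx(
    groupSpec=[vim.cluster.GroupSpec(operation="edit", info=group)])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)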

Rename Components (optional but handy)

  • Rename Distributed Switch to [sec-vds]
  • Rename vSAN data store to [sec-vsan-datastore]
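
Both renames are just a Rename_Task on the object. A small sketch, assuming a connected session (si) to [sec-vcenter-fqdn] as in the earlier sketches; the "current" names are placeholders for whatever VxRail called the switch and datastore.

from pyVmomi import vim

# Assumes a connected session (si) as in the earlier sketches
def find_obj(content, vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

# Replace the "current-..." names with whatever VxRail named the objects
find_obj(si.content, vim.DistributedVirtualSwitch, "current-vds-name").Rename_Task(newName="sec-vds")
find_obj(si.content, vim.Datastore, "current-vsan-datastore-name").Rename_Task(newName="sec-vsan-datastore")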

Configure the Shared Edge and Compute Cluster

  • Add all ESXi hosts to the [ad-domain] (a scripted version of this loop is sketched after this list)
    • Log into [sec-vcenter-fqdn]
    • Navigator -> Hosts and Clusters -> Select [sec-cluster]
    • For Each Host
      • Configure -> System -> Authentication Services -> Join Domain
        • [ad-domain]
        • [ad-psc-bind-username] / [ad-psc-bind-password]
  • System -> Security Profile -> Edit (next to Services)
    • Active Directory Service -> Startup Policy -> Start and stop with host
    • SSH -> Startup Policy -> Start and stop with host
  • Create resource pools (see the resource pool sketch after this list)
    • Right Click [sec-cluster] -> New Resource Pool
      • Name: regiona-sec-rp-sddc-edge
      • CPU-Shares: High
      • CPU-Reservation: 0
      • CPU-Reservation Type: Expandable
      • CPU-Limit: Unlimited
      • Memory-Shares: Normal
      • Memory-Reservation: 16GB
      • Memory-Reservation Type: Expandable
      • Memory-Limit: Unlimited
    • Right Click [sec-cluster] -> New Resource Pool
      • Name: regiona-sec-rp-user-edge
      • CPU-Shares: Normal
      • CPU-Reservation: 0
      • CPU-Reservation Type: Expandable
      • CPU-Limit: Unlimited
      • Memory-Shares: Normal
      • Memory-Reservation: 0
      • Memory-Reservation Type: Expandable
      • Memory-Limit: Unlimited
    • Right Click [sec-cluster] -> New Resource Pool
      • Name: regiona-sec-rp-user-vm
      • CPU-Shares: Normal
      • CPU-Reservation: 0
      • CPU-Reservation Type: Expandable
      • CPU-Limit: Unlimited
      • Memory-Shares: Normal
      • Memory-Reservation: 0
      • Memory-Reservation Type: Expandable
      • Memory-Limit: Unlimited
  • Configure the VDS
    • Log into [sec-vcenter-fqdn]
    • Select [sec-vds] -> Configure -> Advanced
    • MTU 9000
    • NSX Portgroups
    • Distributed Port Group -> New Distributed Port Group
      • Uplink01 Static Binding VLAN [sec-uplink01-vlan]
      • Uplink02 Static Binding VLAN [sec-uplink02-vlan]
  • Change Default for Main Portgroups
    • [sec-vds] -> Distributed Port Group -> Manage Distributed Port Groups -> Teaming and failover -> Next
    • All DPGs except Uplink01 and Uplink02 -> Next
    • Route based on physical NIC load -> Next -> Finish
  • Uplink01 -> Edit Settings -> Teaming and Failover
    • Move dvUplink2 to Unused uplinks -> OK
  • Uplink02 -> Edit Settings -> Teaming and Failover
    • Move dvUplink1 to Unused uplinks -> OK
  • Hosts & Clusters (each host)
    • Configure -> VMkernel Adapters -> vMotion Adapter -> Edit -> NIC Settings -> MTU -> 9000
  • Distributed Switch -> Configure -> Resource Allocation -> System Traffic
    • Virtual SAN Traffic: High
    • vMotion Traffic: Low
    • vSphere Replication (VR) Traffic: Low
    • Management Traffic: Normal
    • vSphere Data Protection Backup Traffic: Low
    • Virtual Machine Traffic: High
    • Fault Tolerance Traffic: Low
    • iSCSI Traffic: Low
    • NFS Traffic: Low
  • [sec-vds] -> Configure -> Health Check -> Edit
    • Enable both VLAN and MTU, and Teaming and failover
  • Modify vSphere HA (see the HA sketch after this list)
    • Hosts & Clusters -> Cluster -> Configure -> vSphere Availability -> Edit
    • Failures and Responses -> VM Monitoring -> VM Monitoring Only
  • Advanced Options on the ESXi Hosts (all hosts – see the advanced settings sketch after this list)
    • Configure -> System -> Advanced System Settings -> Edit
    • Filter – esxAdmins
    • Change Config.HostAgent.plugins.hostsvc.esxAdminsGroup to [sddc-admins]
    • Filter – vsan.swap
    • VSAN.SwapThickProvisionDisabled to 1
    • Filter – ssh
    • UserVars.SuppressShellWarning to 1
    • OK
  • Mount NFS Storage
  • Create any folders you want for VMs and Templates
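
A few of the cluster steps above lend themselves to scripting as well. First, the domain-join loop: a pyVmomi sketch that joins every host in [sec-cluster] to AD and sets the Active Directory and SSH services to start and stop with the host. The hostnames, domain and bind credentials are placeholders; the service keys (lwsmd for the AD service, TSM-SSH for SSH) are the usual ESXi identifiers, but verify them on your build.

from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

# Placeholder connection details - use your [sec-vcenter-fqdn] and credentials
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sec-vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

def find_obj(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_obj(si.content, vim.ClusterComputeResource, "sec-cluster")   # [sec-cluster]

for host in cluster.host:
    # Join the host to Active Directory ([ad-domain] / [ad-psc-bind-username])
    auth_mgr = host.configManager.authenticationManager
    ad_store = next(s for s in auth_mgr.supportedStore
                    if isinstance(s, vim.host.ActiveDirectoryAuthentication))
    ad_store.JoinDomain_Task(domainName="ad.example.local",
                             userName="ad-bind-user",
                             password="ad-bind-password")

    # Start and stop the Active Directory and SSH services with the host
    svc_system = host.configManager.serviceSystem
    for svc_key in ("lwsmd", "TSM-SSH"):           # verify the keys on your hosts
        svc_system.UpdateServicePolicy(id=svc_key, policy="on")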
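
Next, the three resource pools. This sketch reuses si and find_obj() from the join sketch and creates the pools on the cluster root with the share and reservation values listed above.

from pyVmomi import vim

def allocation(shares_level, reservation=0):
    """Build a ResourceAllocationInfo: expandable reservation, no limit."""
    return vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level=shares_level),
        reservation=reservation,
        expandableReservation=True,
        limit=-1)                                   # -1 = unlimited

# Pool name -> (CPU shares level, memory reservation in MB), matching the values above
pools = {
    "regiona-sec-rp-sddc-edge": ("high",   16 * 1024),
    "regiona-sec-rp-user-edge": ("normal", 0),
    "regiona-sec-rp-user-vm":   ("normal", 0),
}

cluster = find_obj(si.content, vim.ClusterComputeResource, "sec-cluster")
root_rp = cluster.resourcePool                      # the cluster's root resource pool

for name, (cpu_shares, mem_reservation) in pools.items():
    spec = vim.ResourceConfigSpec(
        cpuAllocation=allocation(cpu_shares, 0),
        memoryAllocation=allocation("normal", mem_reservation))
    root_rp.CreateResourcePool(name=name, spec=spec)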
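
The vSphere HA tweak is a one-line reconfigure. Again reusing si and find_obj(); passing modify=True means only the DAS setting in the spec is changed and the rest of the cluster configuration is left alone.

from pyVmomi import vim

cluster = find_obj(si.content, vim.ClusterComputeResource, "sec-cluster")

# Set VM Monitoring to "VM Monitoring Only"
ha_spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(vmMonitoring="vmMonitoringOnly"))
cluster.ReconfigureComputeResource_Task(spec=ha_spec, modify=True)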
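
Finally, the host advanced settings, with the same assumptions as above and [sddc-admins] as a placeholder for your AD ESXi admins group. One caveat: the two numeric options are integer-typed on the host, so depending on your pyVmomi version you may need to cast the values to match the option's declared type.

from pyVim.connect import Disconnect
from pyVmomi import vim

host_settings = {
    "Config.HostAgent.plugins.hostsvc.esxAdminsGroup": "sddc-admins",   # [sddc-admins]
    "VSAN.SwapThickProvisionDisabled": 1,
    "UserVars.SuppressShellWarning": 1,
}

cluster = find_obj(si.content, vim.ClusterComputeResource, "sec-cluster")

for host in cluster.host:
    changes = [vim.option.OptionValue(key=k, value=v)
               for k, v in host_settings.items()]
    host.configManager.advancedOption.UpdateOptions(changedValue=changes)

Disconnect(si)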

Right. That's that job done. Told you it was shorter than last time. On to NSX for the SEC vCenter in the next post.
