CCIE Data Center Passed

This blog was my personal study blog for the CCIE Data Center and may contain some mistakes. 😉

I passed the lab on June 27, 2013, and I'm not posting on this blog anymore.

If you want to know new stuff regarding Data Center or whatever, please have a look at:

BLOG.JOOSTVANDERMADE.COM

 

OTV and MTU

OTV Header Format

Some consideration must be given to the MTU across the transport infrastructure. Consider the OTV packet header layout.

A 42-byte OTV header is added, and the DF (Don't Fragment) bit is set on ALL OTV packets. The DF bit is set because the Nexus 7000 does not support fragmentation and reassembly. The source VLAN ID and the Overlay ID are set, and the 802.1p priority bits from the original Layer 2 frame are copied to the OTV header before the OTV packet is IP encapsulated. Increasing the MTU size of all transport interfaces is therefore required for OTV.
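A minimal sketch of the MTU increase on a transport-facing interface, assuming Ethernet 2/1 is the OTV join interface and jumbo frames are allowed end to end (the interface number and MTU value are examples, not from the source):

interface ethernet 2/1
  description OTV join interface towards the DCI transport
  ! MTU must accommodate the original frame plus the 42-byte OTV encapsulation
  mtu 9216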

 

http://routing-bits.com/2011/06/16/cisco-otv-part-i/

 

vPC Role

There are two defined vPC roles: primary and secondary.
The vPC role defines which of the two vPC peer devices processes Bridge Protocol Data Units (BPDUs) and responds to Address Resolution Protocol (ARP).
Use the role priority <value> command (under the vPC domain configuration context) to force the vPC role to primary on a dedicated peer device.
<value> ranges from 1 to 65535, and the lowest value dictates the primary peer device.
In case of a tie (the same role priority value defined on both peer devices), the lowest system MAC dictates the primary peer device.
To know which of the two peer devices is primary or secondary, use the show vpc role command.
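A minimal configuration sketch, assuming vPC domain 10 and example priority values (both are placeholders):

! On the peer that should become primary (lower value wins)
vpc domain 10
  role priority 100

! On the other peer (higher value, becomes secondary)
vpc domain 10
  role priority 200

Running show vpc role on either peer then confirms which device holds the primary role.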

iSCSI QoS on N5K

Identify iSCSI traffic:

! Classification: mark iSCSI traffic into qos-group 4 (FCoE stays in qos-group 1)
class-map type qos class-iscsi
  match protocol iscsi
  match cos 4
class-map type queuing class-iscsi
  match qos-group 4
policy-map type qos iscsi-in-policy
  class type qos class-fcoe
    set qos-group 1
  class type qos class-iscsi
    set qos-group 4

! Queuing: reserve bandwidth for iSCSI and FCoE in both directions
policy-map type queuing iscsi-in-policy
  class type queuing class-iscsi
    bandwidth percent 10
  class type queuing class-fcoe
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 80
policy-map type queuing iscsi-out-policy
  class type queuing class-iscsi
    bandwidth percent 10
  class type queuing class-fcoe
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 80

! Network-qos: make iSCSI (qos-group 4) no-drop with jumbo MTU
class-map type network-qos class-iscsi
  match qos-group 4
policy-map type network-qos iscsi-nq-policy
  class type network-qos class-iscsi
    set cos 4
    pause no-drop
    mtu 9216
  class type network-qos class-fcoe

! Apply the policies system-wide
system qos
  service-policy type qos input iscsi-in-policy
  service-policy type queuing input iscsi-in-policy
  service-policy type queuing output iscsi-out-policy
  service-policy type network-qos iscsi-nq-policy
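To check that the policies took effect, a couple of verification commands on the Nexus 5000 (the interface number is a placeholder):

show policy-map system
show queuing interface ethernet 1/1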

 

Source : http://d2zmdbbm9feqrf.cloudfront.net/2012/eur/pdf/BRKRST-2930.pdf

iSCSI Ports

iSCSI uses TCP (typically TCP ports 860 and 3260).
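If matching on the iSCSI protocol directly is not an option, a hedged alternative is to classify on these well-known TCP ports with an ACL (the ACL and class-map names are made up for this sketch):

ip access-list ISCSI-TRAFFIC
  permit tcp any any eq 3260
  permit tcp any any eq 860
  permit tcp any eq 3260 any
  permit tcp any eq 860 any
class-map type qos match-any class-iscsi-acl
  match access-group name ISCSI-TRAFFIC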

Nexus 1000v Config for UCS with iSCSI boot.

Inside UCS, all VLANs are trunked and not native, EXCEPT the iSCSI VLAN;

The iSCSI NICs needed to be set as the Native VLAN to allow the hardware to boot from the iSCSI devices, since there was no OS to allow us to assign a specific VLAN until after boot;

Inside the Standard vSwitches that carried the iSCSI traffic, the VLANs were not assigned, since they were the Native VLANs already assigned to the Hardware NICs;

Inside all Cisco devices, the native VLAN is always 1 if not specifically assigned inside the port-group. Once we set VLAN 166 as native on the uplink port-profile, connectivity commenced. Originally, the uplink port-profile was set the same as all other uplinks, with a VLAN assigned but not denoted as "Native".

port-profile type ethernet iSCSI-uplink-166
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 166
  switchport trunk native vlan 166
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 166
  state enabled

https://communities.cisco.com/thread/27668

The host's configuration will depend on which CNA is installed in the UCS blade. The Cisco M81KR Virtual Interface Card (VIC) can present many virtual interfaces to the host from a single card. With this capability, vNICs can be created specifically for the iSCSI traffic. In UCS Manager, when creating the vNIC or vNIC template, select the iSCSI VLAN and assign it as the native VLAN. Note that the VIC does not support non-native VLAN booting with iSCSI.

http://www.cisco.com/en/US/prod/collateral/ps10265/ps10276/whitepaper_c11-702584.html

N1KV Layer 3 mode

Change the svs mode from Layer 2 to Layer 3 in the Cisco Nexus 1000V.
Removing the control VLAN and packet VLAN (the first two commands below) is optional when changing the svs mode from Layer 2 to Layer 3.
switch(config)# svs-domain
switch(config-svs-domain)# no control vlan
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
switch(config-svs-domain)# no packet vlan
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
switch(config-svs-domain)# svs mode L3 interface mgmt0
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
switch(config-svs-domain)# show svs domain

SVS domain config
Domain id: 101
Control vlan: NA
Packet vlan: NA
L2/L3 Control mode: L3
L3 control interface: mgmt0
Status: Config push to VC successful.
switch(config-svs-domain)#
Note: The control VLAN and packet VLAN are not used in L3 mode.

N1Kv installation on a UCS System

If you are installing the Cisco Nexus 1000V in an environment where the upstream switch does not support static port channels, such as the Cisco Unified Computing System (UCS), you must use the channel-group auto mode on mac-pinning command instead of the channel-group auto mode command.
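A minimal uplink port-profile sketch using mac-pinning, assuming example VLANs 10 and 20 and a made-up profile name:

port-profile type ethernet ucs-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled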

N1K VEM Upstream Switch Prerequisites

The following spanning tree prerequisites apply to the ports of the upstream switch that connect to the Cisco Nexus 1000V VEM:

spanning-tree port type edge trunk

spanning-tree bpdufilter enable

spanning-tree bpduguard enable
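A minimal sketch of those commands applied on the upstream switch port facing the VEM (the interface number is a placeholder):

interface ethernet 1/10
  switchport mode trunk
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable
  spanning-tree bpduguard enable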

Nexus1000v VSM/VEM communication over Layer3

Over Layer 3 (recommended): Communication between the VSM and the VEM is done through Layer 3, using the management interface of the VSM and a VMkernel interface of the VEM. Layer 3 connectivity mode is the recommended mode.

The Layer 3 mode encapsulates the control and packet frames through User Datagram Protocol (UDP). This process requires configuration of a VMware vmkernel interface on each VMware ESX host, ideally the service console of the VMware ESX server. Using the ESX/ESXi management interface alleviates the need to consume another vmkernel interface for Layer 3 communication and another IP address. Configure the VMware VMkernel interface and attach it to a port profile with the l3control option.

Nexus1000V(config)# port-profile type vethernet L3vmkernel
Nexus1000V(config-port-profile)# switchport mode access
Nexus1000V(config-port-profile)# switchport access vlan <X>
Nexus1000V(config-port-profile)# vmware port-group
Nexus1000V(config-port-profile)# no shutdown
Nexus1000V(config-port-profile)# capability l3control
Nexus1000V(config-port-profile)# system vlan <X>
Nexus1000V(config-port-profile)# state enabled

Note: <X> is the VLAN number that will be used by the VMkernel interface.

The l3control configuration sets up the VEM to use this interface to send Layer 3 packets, so even though the Cisco Nexus 1000V Series is a Layer 2 switch, it can send IP packets.