
###############################################

1. Cisco Data Center Layer 2/Layer 3 Technologies


#################################################

1.1 - Fix Multicast and Configure a Redundant RP


Multicast is flowing through DC1-N7K-2. Configure a backup Phantom RP candidate on
DC1-N7K-1 using interface Loopback 1. Ensure multicast is configured consistently to
support VXLAN between DC1-N5K-1, DC1-N5K-2, and the Cisco Nexus switches DC1-N7K-1,
DC1-N7K-2, DC1-N7K-5, and DC1-N7K-6.
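A minimal NX-OS sketch of the backup candidate, assuming the design uses PIM BiDir with a phantom RP for the VXLAN underlay and that DC1-N7K-2 already advertises the RP subnet with a /30. All addresses and the group range here are invented for illustration:

```
! Sketch only - RP address, masks, and group range are assumed values.
! The phantom RP (10.254.254.1) is not assigned to any device. The router
! advertising the longest prefix for the RP subnet (DC1-N7K-2 with a /30)
! is preferred; DC1-N7K-1 (/29) takes over if that route disappears.
feature pim

interface loopback1
  ip address 10.254.254.3/29
  ip pim sparse-mode

ip pim rp-address 10.254.254.1 group-list 239.1.1.0/24 bidir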

1.2 - Configure Jumbo MTU


All Cisco Nexus switches in DC1 should allow jumbo frames on VLANs 101-103.
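A sketch of the platform difference, since NX-OS handles MTU differently: the Nexus 7000 takes a per-interface MTU, while the Nexus 5000 sets jumbo MTU system-wide through a network-qos policy (it cannot be scoped per VLAN). The interface and policy names are assumptions:

```
! N7K - per physical interface or SVI (interface is assumed):
interface Ethernet1/1
  mtu 9216

! N5K - system-wide via a network-qos policy:
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```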

1.3 - Configure VXLAN


Extend VLANs 101-103 between the N5K and N7K switches using the routed network as an
underlay. Use Loopback 0 as the source.
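A flood-and-learn VXLAN sketch for one VLAN, consistent with the multicast underlay from task 1.1. The VNI and multicast group numbers are assumed; repeat the VLAN-to-VNI mapping and NVE membership for VLANs 102 and 103:

```
! Sketch only - VNI and group values are assumed.
feature vn-segment-vlan-based
feature nv overlay

vlan 101
  vn-segment 10101

interface nve1
  no shutdown
  source-interface loopback0
  member vni 10101 mcast-group 239.1.1.101
```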

1.4 - Configure vPC


Configure 2 virtual port-channels towards the Fabric Interconnects. Use Port Channel
ID 100 for FI A and Port Channel ID 200 for FI B. Ensure that the vPC domain is
properly configured for VXLAN. Only trunk the required VLANs. Use interface VLAN 10
as needed. Refer to the topology diagram as required.
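A vPC skeleton for one peer, sketched under assumed values for the domain ID, keepalive addresses, and peer-link channel number; only the vPC IDs 100/200 come from the task:

```
! Sketch only - domain ID, keepalive addressing, and peer-link ID are assumed.
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
  peer-gateway

interface port-channel 10
  switchport mode trunk
  vpc peer-link

interface port-channel 100
  switchport mode trunk
  switchport trunk allowed vlan 101-103
  vpc 100

interface port-channel 200
  switchport mode trunk
  switchport trunk allowed vlan 101-103
  vpc 200
```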

1.5 - Configure HSRP with VXLAN on Cisco Nexus 7000


Configure an HSRP gateway on the Cisco Nexus 7000 for the servers to utilize. Ensure
that N7K-5 is always active. Put the newly created Layer 3 interfaces into VRF
"Tenant1".

1.6 - Configure a Custom CoPP

Configure a custom strict CoPP policy on DC1-N7K with a suffix of "ccie". Ensure
that this CoPP policy is applied.
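On the Nexus 7000 this is done from the default (admin) VDC by cloning the built-in strict profile. The generated policy-map name below assumes the usual copp-policy-strict-&lt;suffix&gt; naming convention; verify it with `show policy-map` after the copy:

```
! Clone the built-in strict profile with the required suffix.
copp copy profile strict suffix ccie

! Apply the cloned policy to the control plane.
control-plane
  service-policy input copp-policy-strict-ccie
```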

1.7 - Troubleshooting OTV


Troubleshoot the OTV configuration to ensure that servers in DC-1 can reach servers
in DC-2 over VLAN 100.
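A reference OTV skeleton to check the existing configuration against; interface numbers, site VLAN, site identifier, and group addresses are assumed. Typical faults in this scenario are VLAN 100 missing from the extend list, a site VLAN that is down, or mismatched control groups between sites:

```
! Sketch only - all values except VLAN 100 are assumed.
feature otv

otv site-vlan 99
otv site-identifier 0x1

interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.10.10.10
  otv data-group 232.1.1.0/28
  otv extend-vlan 100
  no shutdown
```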

######################################
2. Cisco Data Center Network Services
######################################

2.1 - ACI Service Graph Rendering

You must complete the deployment of an L4-L7 Service Graph on ACI within the tenant
named ASAv between the inside and outside EPGs. The ASAv has been deployed in routed
mode. When the service graph is deployed correctly, you must be able to ping from
the VM named SecureVM to the VM named UnsecureVM. You can use the VM consoles within
vCenter for testing. Use CCIE-ASAV for the service graph and contract name. Ensure
that the bridge domains are configured to meet these requirements.

2.2 - Implement ACI Shared Fabric Services

A customer is trying to implement a three-tier application on ACI. The web service
function resides in a tenant called web and the Applications and Database services
reside in a tenant called app-db. The web VM and app VM should be able to
communicate (ping) each other.

The ACI-Web VM resides on SRV-3, which is vPC connected to N2K-3 and N2K-4. The
ACI-App and ACI-DB VMs reside on SRV-5, which is directly connected to LEAF-3 and
LEAF-4. Correct the configuration.
Requirement: Do not create any new policies.

2.3 - Implement Shared L3 Out

Ensure the web services are made available to devices outside the fabric, using OSPF
for the L3 connection, and SVIs for connectivity. The ACI-Web VM (10.3.1.10) should
be able to communicate (ping) the Cisco Nexus 7000 OSPF peer device (10.3.6.1).

Correct all configuration as necessary for the ACI-Web VM to communicate with the L3
OSPF devices attached outside the fabric.

Requirement: Do not create any new policies.

############################################
3. Data Center Storage Networking and Compute
############################################

3.1 - Implement Cisco UCS Domain Infrastructure

Refer to Diagram 2 and configure these items:

• Discover all attached compute devices.


• Connections between chassis and fabric interconnects should be aggregated.
• Configure the system to use the highest redundancy level for power.
• If traffic is not received for 5 minutes, ensure that the MAC addresses are
removed from the MAC address table.
• Configure FCoE VSANs. The VSANs should be unique to each fabric interconnect. The
VLAN ID assigned should be the same as the VSAN ID.

3.2 - Configure Cisco UCS Infrastructure Connectivity


Refer to Diagram 2 and configure these items:
• With Cisco UCS in the default Fibre Channel mode, configure the FCoE uplink going
to the respective Cisco Nexus 5000.
a. On DC1-FI-A ensure the FCoE uplink is not aggregated into a port channel.
b. On DC1-FI-B ensure the FCoE uplink is aggregated into a port channel. The
channel ID should be 50.
c. Make any required changes on the Cisco Nexus 5000 to achieve this. The channel
IDs should match and the VFC ID used should be the same as the VSAN ID. Trunk only
the required VLANs and VSANs where applicable.
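A sketch of the Nexus 5000 side of the DC1-FI-B uplink. Only channel ID 50 comes from the task; the VSAN/VLAN number 800 is borrowed from task 3.3 as an assumption, and the VFC ID matches the VSAN ID as required:

```
! Sketch only - VSAN/VLAN 800 is assumed; channel ID 50 is from the task.
feature fcoe
feature lacp

vlan 800
  fcoe vsan 800

interface port-channel 50
  switchport mode trunk
  switchport trunk allowed vlan 800

interface vfc800
  bind interface port-channel 50
  switchport trunk allowed vsan 800
  no shutdown

vsan database
  vsan 800 interface vfc800
```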

3.3 - Configure Cisco Nexus Storage Features


• Your company requires that each server's PWWN is assigned a unique name. Configure
N5K-3 and N5K-4 so that when the following PWWNs log in, the appropriate name is
shown in the flogi table and fcns database.

N5K-3 -- VSAN 700
PWWN 00:20:23:43:54:23:54:54 -- Server1
PWWN 00:20:23:46:54:23:54:54 -- Server2

N5K-4 -- VSAN 800
PWWN 00:20:23:49:54:23:54:54 -- Server1
PWWN 00:20:23:52:54:23:54:54 -- Server2
• Your company also requires that a zone and zone set for VSAN 700 and 800 be
created. You must configure the members using the names configured earlier. Use the
naming schemes that follow when you create the zone and zone set. The configuration
should be active when complete.

N5K-3--VSAN 700
Zone set name DC_vsan_700
Zone name DC_vsan_700

N5K-4--VSAN 800
Zone set name DC_vsan_800
Zone name DC_vsan_800
Score: 2 points
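A sketch for N5K-3 (VSAN 700); N5K-4 is analogous with its own PWWNs and VSAN 800. Device-alias is used so the names appear in the flogi table and fcns database:

```
device-alias database
  device-alias name Server1 pwwn 00:20:23:43:54:23:54:54
  device-alias name Server2 pwwn 00:20:23:46:54:23:54:54
device-alias commit

zone name DC_vsan_700 vsan 700
  member device-alias Server1
  member device-alias Server2

zoneset name DC_vsan_700 vsan 700
  member DC_vsan_700

zoneset activate name DC_vsan_700 vsan 700
```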

3.4 - Configure Cisco UCS Infrastructure Network Connectivity

Refer to Diagram 2 and configure these items:

• Configure port channels from each fabric interconnect to DC1-N5K-1 and DC1-N5K-2.
Ensure that if LACP frames are not received from the N5Ks, the ports are put into a
suspended state.
• Configure the appropriate querier configuration for VLAN 104. The IP address
should be applied to both fabric interconnects, and the VLAN should not flood
multicast traffic on all ports by default.
• Configure VLAN 103 so that multicast traffic is flooded by default.

3.5 - Configure Cisco UCS Infrastructure Policies


• You notice that the time on the Cisco UCS Domain has a slight drift. Configure it
so it always syncs its time with a single source.
• You must allocate new local user accounts so that users can access only certain
portions of the domain.
o Configure CCIE-User1 to have access to full network functionality, as well as the
ability to create RBAC roles, but nothing else.
o Configure CCIE-User2 so that when they log in, they have access to create
storage-related policies and also modify the software configuration of the blade.
They should be denied access to everything else.
• Faults can be raised for numerous reasons in a Cisco UCS Domain. You notice that
when faults are cleared, they still show up in the Cisco UCS Manager GUI. Configure
the domain so that when faults are cleared, they are removed from the fault
database.

3.6 - Create and Implement Service Profile Features


As part of this question, you must create or modify service profile features. All
policies must be placed onto the service profile named UCSM-LSP2.
Configure these items:
• Create a local disk configuration policy called CCIE-local, and ensure these
things:
a. Assume there are two local disks. Configure it as such that if one disk fails,
the OS is still intact.
b. Ensure that if another service profile is associated with a different policy,
the data on the disks is not removed.
• Without creating new policies, ensure that the vNICs are configured to send and
receive LLDP frames.
• Without creating new policies, ensure that if the fabric interconnect uplinks go
down, the vNIC remains in an UP state.
• Without creating new policies, ensure that all vMedia transfers from the KVM are
encrypted.

3.7 - Troubleshoot SAN Boot

You must get a blade to boot from a Fibre Channel storage array using the service
profile named "UCSM-LSP1". Some of the configuration is in place, but you notice
that the blade is still unable to boot. Make any necessary changes to Cisco UCS
Manager and Cisco Nexus 5000 configurations to ensure that the blade boots up
successfully. If completed properly, the LUN should be visible in the KVM when the
VIC option ROM loads, and the blade should boot the vSphere operating system from
the storage device.

Do not create any new policies. If configuration must be modified, modify only
existing policy assigned to the profile.

UCSM-LSP1 should be associated to Chassis-1/Blade-1. Refer to Diagram 2 for naming
conventions.

3.8 - Cisco UCS Central Registration

You are going to register your Cisco UCS domain with Central. You have been
instructed to ensure that only global service profiles and resource pools are
managed by Cisco UCS Central. The Cisco UCS shared password has been set as
"C1scoucs". Ensure that the Cisco UCS domain is ever unregistered and that all
global policies are removed where possible.

Objectives: Register Cisco UCS domain with Central

3.9 - Cisco UCS Central Global Policies


To better manage Cisco UCS resources, you have decided to start leveraging Cisco UCS
Central. You start this process by creating new global resources pools for UUID,
MAC, WWNN, and WWPN. Use the information in this table.
Global Resources Pool Name    Pool Size
Central-UUID                  10
Central-MAC                   10
Central-WWPN                  10
Central-WWNN                  10
You will also be creating these VLANs to be available on Cisco UCS Manager. Use the
information in this table.
Global VLAN Name    VLAN ID    Fabric Visibility
Web                 101        Dual
App                 102        Dual
DB                  103        Dual

After these pools have been created, you must reconfigure the local service profile
"UCSM-LSP2" to use the global pools you created.

3.10 - Cisco UCS Central Global Service Profiles


To help improve management on Cisco UCS service profiles, your company has requested
that you begin using global service templates on Cisco UCS central. You must create
a global service profile template and then deploy the first global service profile
to your Cisco UCS domain in preparation for the server team to install an operating
system on. The requirements for the service profile are below. Use your own naming
convention where one is not provided to you.
• The global service profile template should use your previously created resource
pools.
• The service profile template should require the user to acknowledge any applied
profile changes.
• Service profiles cloned from the global template should be dynamically linked,
allowing any changes to the template to propagate to the global service profiles.
• The global service profile should leverage redundant network interfaces based on
existing vNIC templates.
• The global service profile you create from the template should be named UCSM-GSP1
and deployed to the Cisco UCS compute node matching the serial number of blade 1/2.

##########################################
4. Data Center Automation and Orchestration
###########################################
4.1 - Understanding Basic Scripting Used with ACI

As the data center expert, you must analyze the provided script,
createStudentTenants.py and the newStudentTenants.json data file, and make this
change for all student EPGs that the script creates when it is executed.

Original Physical Domain Name: OldStudent-Domain
New Physical Domain Name: NewStudent-Domain

Your task is to make the necessary changes to the domain name in the script, then
execute the script against ACI to create the necessary tenant configuration.
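Without seeing the contents of createStudentTenants.py, one minimal approach is a plain string substitution of the domain name before running the script. The helper below and the data-structure assumptions in its comments are hypothetical; if the domain name lives in the JSON data file rather than the script, apply the same replacement there:

```python
OLD_DOMAIN = "OldStudent-Domain"
NEW_DOMAIN = "NewStudent-Domain"

def rename_domain(text: str, old: str = OLD_DOMAIN, new: str = NEW_DOMAIN) -> str:
    """Replace every reference to the old physical domain name."""
    if old not in text:
        raise ValueError(f"{old!r} not found; check the data file as well")
    return text.replace(old, new)

# Applied to the script itself (file name from the task; path is assumed):
#   from pathlib import Path
#   script = Path("createStudentTenants.py")
#   script.write_text(rename_domain(script.read_text()))
print(rename_domain("physDomain name: OldStudent-Domain"))
```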

4.2 - Programmability Using Python


Currently, the VLAN range defined in the createStudentTenants.json data file is
1-1000 for all student EPGs. Modify the VLAN range used from 1-1000 to 1501-2500.
Example:
VLAN 1 should be changed to VLAN 1501
VLAN 2 should be changed to VLAN 1502
VLAN 201 should be changed to VLAN 1701
etc.
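The mapping in the example is a constant offset of 1500, which can be sketched as below. The shape of the data file (a list of EPG entries with a "vlan" key) is an assumption for illustration; adapt the loop to the actual JSON structure:

```python
import json

VLAN_OFFSET = 1500  # 1 -> 1501, 201 -> 1701, 1000 -> 2500

def shift_vlan(vlan_id: int) -> int:
    """Map a VLAN ID from the old 1-1000 range into 1501-2500."""
    if not 1 <= vlan_id <= 1000:
        raise ValueError(f"VLAN {vlan_id} outside the expected 1-1000 range")
    return vlan_id + VLAN_OFFSET

# Hypothetical data-file shape: each student EPG entry carries a "vlan" key.
data = {"epgs": [{"name": "Student1", "vlan": 1},
                 {"name": "Student201", "vlan": 201}]}
for epg in data["epgs"]:
    epg["vlan"] = shift_vlan(epg["vlan"])
print(json.dumps(data))
```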

4.3 - Device Discovery with Cisco UCS Director


Your task is to ensure that all Nexus, UCS, and APIC devices in both data centers
have been discovered by Cisco UCS Director.

4.4 - Basic Provisioning with Cisco UCS Director


Provision a new VLAN throughout your data center. Leverage some of the workflows
located in the CCIE-DC orchestration folder, and deploy VLAN 105 on devices in DC1.
Build from the workflow found in the UCSD Orchestration folder labeled CCIE-DC, and
add the compound tasks Add_N5K-VLAN and Add_N7K-VLAN.
Validate the flow and execute it using these variables:
VLAN = 105
VLAN Name = ACI-DCI
Server Profile = UCSM-LSP1
vNIC = eth0, eth1
Org = UCSM-10.1.1.40
Admin = UCSM-10.1.1.40
N7K VDCs = 3 & 4

4.5 - Advanced Orchestration with Cisco UCS Director

Add VLAN 105 to the N7K devices in DC2 and to the Cisco UCS B-Series domain. VLAN
105 must be added to the Cisco UCS B-Series domain, extended through the OTV link,
and onto the DC2 switches, using existing Cisco UCS Director workflows.

###################################
5. Data Center Fabric Infrastructure
####################################

5.1 - ACI Fabric Policies

Several ACI fabric policies are already deployed, but they do not appear to be
working correctly. You have been tasked with correcting the BGP route reflector and
NTP fabric policies. The NTP source for the fabric is 10.1.1.254 with no
authentication. Both spines should be configured as route reflectors.
You have also been instructed to configure ACI such that acknowledged faults do not
adversely affect the fabric health score.
Only default policies should be used. Do not create any new policies or use any
custom policy for NTP and the BGP route reflector other than default or existing
policies.
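These fabric policies can also be set through the APIC REST API rather than the GUI. The sketch below only builds the POST body that adds spine nodes as route reflectors under the default BGP instance policy; the spine node IDs are assumptions, and the class layout (bgpInstPol / bgpRRP / bgpRRNodePEp) should be verified against the APIC object model before use:

```python
import json

SPINE_NODE_IDS = [201, 202]  # assumed spine node IDs - confirm on the fabric

def bgp_rr_payload(node_ids):
    """Build the POST body that adds spine nodes as BGP route reflectors
    under the default BGP instance policy."""
    return {
        "bgpInstPol": {
            "attributes": {"dn": "uni/fabric/bgpInstP-default"},
            "children": [{
                "bgpRRP": {
                    "attributes": {},
                    "children": [
                        {"bgpRRNodePEp": {"attributes": {"id": str(n)}}}
                        for n in node_ids
                    ],
                }
            }],
        }
    }

# POST this (after authenticating) to:
#   https://<apic>/api/mo/uni/fabric/bgpInstP-default.json
print(json.dumps(bgp_rr_payload(SPINE_NODE_IDS)))
```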

5.2 - ACI eBGP Peering Over vPC in Common Tenant


You have been informed of an issue with the ACI L3 Out BGP connection between LEAF-1
and LEAF-4 to DC2-N7K-2. Your goal is to correct the issue without making any
changes to the DC2-N7K-2 device or to the ACI fabric pod policies.
When complete, active BGP sessions should be between DC2-N7K-2 and ACI LEAF-1 and
ACI LEAF-4. The VM named DCI should be able to ping the external IP address
10.3.9.1. The L3 Out domain must remain in tenant Common. The endpoint must remain
in the tenant named "dci".

Use SVI interfaces for L3 Out from the ACI fabric. The BGP connection is over vPC
between the ACI fabric and DC2-N7K-2.
The "DCI" test VM can be accessed via vCenter console.

vCenter- 10.1.1.215 (admin/Cisco!123)


DCI (VM)- 10.3.8.10 (root/Cisco!123)

5.3 - ACI Extension Over OTV

As part of your company's migration strategy to ACI, your manager has requested that
the EPG "dci" be extended over OTV using VLAN 105. Some of the ACI-related
configuration has already been completed.
Requirements:
You may use only existing ACI fabric access policies for this task.
DC2-N7K-2 configuration should not be altered for this task.
Upon correct configuration, the VM DCI-VM1 hosted on Cisco UCS blade 1/1 should be
able to reach the DCI-VM2 VM that is hosted on SRV-3.

Refer to Diagram 1 for additional details.


DC1 Test VM: 10.3.9.10 "DCI-VM1"
DC2 Test VM: 10.3.9.11 "DCI-VM2"

5.4 - ACI APIC Maintenance


You have been asked to restore the ACI APIC cluster to three operational nodes.
APIC3 has failed, and the replacement has been cabled up and powered on. Your task
is to configure it to join the fabrc and bring the cluster to three nodes that are
in the Fully Fit health state.

5.5 - VMM Networking

Your customer wants to use VMM integration to push an ACI DVS to the vCenter
located at 10.1.1.215, and wants Server-5 in Data Center 2 to be attached to the
DVS.

Some of the configuration is in place, but it does not appear to be working
correctly. Correct the existing configuration such that the CCIE-DVS domain has no
faults and the inventory shows all hypervisors and port groups. Add any VMM
configuration that is needed; use only existing fabric access policies, changing
them if needed.
Access Information for vCenter:
Domain name: CCIE-DVS
vCenter Controller: vCenter
Username: admin
Password: Cisco!123
vCenter IP address: 10.1.1.215
DVS Version is 6.0
Data Center: CCIE-DC
Use an existing VLAN pool.

5.6 - Fabric Monitoring and Collector Policies

The customer wants to make sure that any audit logs, events, and faults on the
leaves, spines, and APICs are preserved. The messages are used to perform root cause
analysis for problems encountered during the day-to-day operations on the ACI
fabric. The customer has a server (IP address 1.1.1.1) installed to collect these
messages. Create the ACI policy configurations as needed.
Use default settings unless previously provided.
Make sure audit logs, events, and faults that are part of the default and custom
configured policies are collected.
