
Welcome, and thanks for purchasing my ICND2 Study Guide!

You’re about to benefit from the same clear, comprehensive CCENT and CCNA instruction that thousands of students around the world have used to earn their certifications. They’ve done it, and you’re about to do the same!

On the next page or two, I’ve listed some additional free resources that will definitely help you on your way to the CCENT, the CCNA, and to real- world networking success.

Use them to their fullest, and let’s get started on your exam pass!

Chris Bryant

“The Computer Certification Bulldog”

Udemy:

Over 38,000 happy students have made me the #1 individual instructor on Udemy, and that link shows you a full list of my free and almost-free Video Boot Camps! (Use the discount code BULLDOG60 to join my 27-hour CCNA Video Boot Camp for just $44!)

YouTube:

(Over 325 free training videos!)

Website:

(New look and easier-to-find tutorials in Dec. 2013!)

Facebook:

Twitter:

See you there!

Chris B.

Copyright © 2013 The Bryant Advantage, Inc.

All rights reserved. This book or any portion thereof may not be reproduced or used in any manner whatsoever without the express written permission of the publisher except for the use of brief quotations in a book review.

No part of this publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but not limited to photocopy, photograph, magnetic, or other record, without the prior agreement and written permission of the publisher.

The Bryant Advantage, Inc., has attempted throughout this book to distinguish proprietary trademarks from descriptive terms by following the capitalization style used by the manufacturer. Copyrights and trademarks of all products and services listed or described herein are property of their respective owners and companies. All rules and laws pertaining to said copyrights and trademarks are inferred.

Printed in the United States of America

First Printing, 2013

The Bryant Advantage, Inc.

9975 Revolutionary Place

Mechanicsville, VA 23116

Contents

The Spanning Tree Protocol

HDLC, PPP, and Frame Relay (Plus A Few Cables!)

Routing And IP Addressing Fundamentals

The Wildcard Mask

OSPF and Link-State Protocols

EIGRP

Intro To Network Management and Licensing

Intro To VPNs and Tunnels

1st-Hop Redundancy Protocols

IP Version 6

Mastering Binary Math and Subnetting

The Spanning Tree Protocol

I’ve said it before and I’ll say it again — in networking, as in life, we’ll take all the backup plans we can get!

In our networks, that “Plan B” takes the form of redundancy, and in switching, that redundancy takes the form of having multiple paths available between any two given

endpoints in the network. That helps us avoid the single point of failure, which in today’s networks is totally unacceptable.

(A single point of failure is a point in the network where if something goes down, the entire network comes to a standstill.)

Those additional paths do carry some danger, though. If all the paths in the following diagram were available at all times, switching loops could form.

What we need is for one path between any two endpoints to be available, while stopping the other paths from being used unless the primary path goes down.

Then, of course, we want that backup path to become available ASAP.

The Spanning Tree Protocol (STP), defined by IEEE 802.1d, does this for us by placing ports along the most desirable path into forwarding mode, while ports along less-desirable paths are placed into blocking mode. Once STP converges, every port on these paths is in either forwarding or blocking mode. At that point, only one path is available between any two

destinations, and a switching loop literally cannot occur.

Note: You’re going to hear about routing loops later in your studies. Those happen at Layer 3. STP has nothing to do with routing loops. STP is strictly a Layer 2 protocol and is used to prevent switching loops. Watch that on your exam.

If a problem arises with the open path, STP will run the spanning-tree algorithm to recalculate the available paths and determine the best path.

Ports along the new best path will be brought out of blocking mode and into forwarding mode, while ports along less-desirable paths will remain in blocking mode. Once again, only one path will be available between any two endpoints.

Ports do not transition from blocking to forwarding mode immediately. These built-in delays help guard against switching loops during the transition. More about those timers later in this section. Let’s say STP has decided the

best path from SW1 to SW3 is the most direct path. (This is not always the case, as you’ll see.) Logically, SW1 sees only one way to get to SW3.

If that path becomes unavailable, STP will recalculate its available paths. When that recalculation ends, STP will begin to bring the appropriate ports out of blocking mode and into forwarding mode.

Switching loops cause several problems:

  • Frames can't reach their intended destination, either totally or in part, due to MAC address table entries that continually change.

  • Unnecessary strain is put on switch CPUs.

  • Continually flooded frames end up causing a broadcast storm.

  • Bandwidth is used unnecessarily.

Luckily for us, switching loops just don't occur that often, because STP does a great job of preventing them before they can start.

The benefits of STP begin with the exchange of BPDUs and the root bridge election.

The Root Bridge Election

STP must first determine a root bridge for every Virtual LAN (VLAN). And yes, your root bridges will be switches. The term “root bridge” goes back to STP’s pre-switch days, and the term stuck even after the move away from bridges to switches. Just one of those things!

Speaking of “one of those things”, the root bridge election is one of those things that can be confusing at first, since you’re reading about the theory

and you may not have seen these terms before. Don’t worry about it. Following the description of the process, I have two fully-illustrated examples for you that are both packed with readouts from live Cisco switches. So hang in there and you’ll knock this stuff out like a champ on exam day!

Now on to the election….

When people are born, they act like they are the center of the universe. They yell, they scream, they expect to have their every desire carried out

immediately. (Some grow out of this; some do not.)

In a similar fashion, when a switch is first powered on, it believes it is the root bridge for every VLAN on your network. There must be a selection process to determine the true root bridge for each VLAN, and our selection process is an election process.

The election process is carried out by the exchange of BPDUs (Bridge Protocol Data Units). Switches are continually sending or forwarding BPDUs,

but hubs, repeaters, routers, and servers do not send BPDUs.

Real-world note: There are different types of BPDUs, and the one we talk about 99% of the time is technically called a Hello BPDU. This BPDU type is

often simply referred to as “BPDU”, and that’s the way I refer to it as well.

The Hello BPDU contains a lot of important info…

The root bridge's Bridge ID (BID). The BID is a combination of the bridge's priority and MAC address. The format of the BID puts the priority in front of the MAC address, so the only way the MAC address comes into play during the election is when the contending switches' priority is exactly the same.

The bridge with the lowest BID will be elected root bridge. The default priority value is:

32768 + The Sys-Id-Ext, which just happens to be the VLAN number.

For example, here’s SW1’s priority for VLAN 1:

Bridge ID    Priority    32769

SW1’s priority for VLAN 100:

Bridge ID    Priority    32868

I know you see the pattern.

Since the lowest BID wins, the switch with the lowest MAC address will become the root bridge for all VLANs in your network unless the priority is changed.

Cost To Reach Root From This Bridge: The path with the lowest overall cost to the root is the best path. Every port is assigned a cost relative to its speed. The higher the speed, the lower the port cost.

BID Of The BPDU's Sender: This simply identifies which switch sent the BPDU.

The election proceeds as the BPDUs make their way amongst the switches….

When a switch receives a BPDU, the switch compares the root bridge BID contained in the BPDU against its own BID.

If the incoming root bridge BID is lower than that of the switch receiving it, the switch starts announcing that device as the root bridge. The BPDU carrying this winning BID is called a superior BPDU, a term we’ll revisit later in this section.

If the incoming BID is higher

than that of the receiver, the receiver continues to announce itself as the root. A BPDU that carries a non-winning BID is an inferior BPDU.

This process continues until every switch has agreed on the root bridge. At that point, STP has reached a state of convergence. “Convergence” is just a fancy way of saying “everybody’s agreed on something.”

Once all switches agree on the root bridge, every port on every path will be in blocking or

forwarding mode. There are intermediate STP port states you should be aware of:

BLOCKING: Frames are not forwarded, but BPDUs are accepted.

LISTENING: Frames are not forwarded, and we’re doing some spring cleaning on the MAC address table, as entries that aren’t heard from during this time are cleared out.

LEARNING: Frames are not forwarded, but fresh MAC entries are being placed into the MAC table as frames enter the switch.

FORWARDING: Frames are forwarded, MAC addresses are still learned.

There is a fifth STP state, disabled, and it’s just what it sounds like. The port is actually disabled, and disabled ports cannot accept BPDUs.

We’re going to take two

illustrated looks at STP in action, the first with two switches and the second with three. In the first example, there are two separate crossover cables connecting the switches. It’s important to note that once STP has converged in this network, one port — and only one port — will be in blocking mode, with the other three in forwarding mode.

I haven't configured anything on these switches beyond a hostname and the usual lab commands, so what VLANs, if any, will be running on these switches?

We have five default VLANs, and only one is populated. You may never use those bottom four VLANs, but I’d have those

numbers memorized for the exam.

SW1#show vlan brief

VLAN  Name                  Status    Ports
1     default               active    Fa0/5, Fa0/6, Fa0/7, Fa0/8,
                                       Fa0/9, Fa0/10, ...
1002  fddi-default
1003  token-ring-default
1004  fddinet-default
1005  trnet-default

I’ll edit those four bottom VLANs out for the rest of this section, so note them now.

All ports belong to VLAN 1 by

default. There’s something missing, though… notice the ports used to connect the switches, Fa0/11 and Fa0/12, don’t show up in show vlan brief?

That’s because they’re trunk ports, ports connected directly to other switches. You can see what ports are trunking with the show interface trunk command.

SW1#show interface trunk

Port      Mode         Encapsulation
Fa0/11    desirable    802.1q
Fa0/12    desirable    802.1q

Port      Vlans allowed on trunk
Fa0/11    1-4094
Fa0/12    1-4094

Port      Vlans allowed and active
Fa0/11    1
Fa0/12    1

Port      Vlans in spanning tree
Fa0/11    1
Fa0/12    none

Running both show vlan brief and show interface trunk is a great way to start the L2 troubleshooting process.

Now back to our network….

To see each switch's STP values for VLAN 1, we'll run show spanning-tree vlan 1. First, we'll take a look at SW1's output for that command.

(By the way, we’re running PVST, or “Per-VLAN Spanning Tree”, which is why we have to put the VLAN number in. With PVST, each VLAN will run an

independent instance of STP.)

SW1#show spanning-tree vlan 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     000b.be2c.5180
             Cost        19
             Port        11 (FastEthernet0/11)
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769 (priority 32768 sys-id-ext 1)
             Address     000f.90e2.25c0
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300

Interface        Role Sts Cost      Prio.Nbr
Fa0/11           Root FWD 19        128.11
Fa0/12           Altn BLK 19        128.12

The Root ID is the BID info for the root bridge, and the Bridge ID is the BID info for the local switch. Since the addresses are different for the Root and Bridge ID, this switch is definitely not the root switch. If they’re the same, you’re on the root switch!

The BID of any switch is the priority followed by the MAC address, so let's compare the two values:

Root ID BID: 32769:00-0b-be-2c-51-80

Bridge ID BID: 32769:00-0f-90-e2-25-c0

The device with the lowest BID will be elected root. Since both devices have the exact same priority, the switch with the lowest MAC address is named the root switch, and that’s exactly what happened here.

On SW1, Fa0/11 is in FWD status, short for forwarding. This port is marked Root,

meaning this port will be used by SW1 to reach the root switch. Fa0/11 is SW1’s root port for VLAN 1.

Fa0/12 is in BLK status, short for blocking. How did the switch decide to put Fa0/11 into forwarding mode while 0/12 goes into blocking? The switch first looked at the path cost, but that’s the same for both ports (19). The tiebreaker is the port priority, found under the “prio.nbr” field. Fa0/11’s port priority is lower, so it’s chosen as the root port.
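By the way, if you ever need to influence that tiebreaker yourself, port priority is configurable per interface. Here's a minimal sketch (the interface and the value 64 are purely for illustration, and where you'd actually apply it depends on your topology):

SW1(config)#int fast 0/12
SW1(config-if)#spanning-tree port-priority 64

Lower values win, just like everything else in STP.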

Let's mark that on our exhibit and then move on to SW2. Here's the output of show spanning-tree vlan 1 on SW2.

SW2#show spanning-tree vlan 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     000b.be2c.5180
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769 (priority 32768 sys-id-ext 1)
             Address     000b.be2c.5180
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  15

Interface        Role Sts Cost
Fa0/11           Desg FWD 19
Fa0/12           Desg FWD 19

We have two really big hints that SW2 is the root switch for

VLAN 1. The first is really, really big — the phrase “This bridge is the root”!

The next isn’t quite as obvious. Both Fa0/11 and Fa0/12 are in FWD status. A root bridge will have all of its ports in forwarding mode.

It would be easy to look at this simple network and say that two ports need to be blocked to prevent switching loops, but blocking one is actually enough to do the job.

Here’s how our switches look now:

It's a common misconception that the Fa0/12 port on both switches would be blocked in this situation, but now we know that just isn't the case.

Now we’ll take a look at a three-switch example from a live Cisco switching network

and bring another port type into the discussion.

We have a three-switch full mesh topology. I’ll post the MAC addresses and BIDs of the switches below the diagram. We’ll follow that with a look at the election from each switch’s point of view and decide what we think should have happened in the root bridge election.

Then we’ll see what happened in the root bridge election!

This is an excellent practice exam question. You must be able to look at a diagram such

as this, along with the addresses, and be able to answer the following questions:

Which bridge is the root? Which ports will the non-root bridges select as their root ports? Where are the designated ports?

How many ports will STP block once convergence is reached?

All questions we’re about to answer with configs from live

Cisco switches! The switch MAC addresses:

SW1: 000f.90e2.2540

SW2: 0022.91bf.5c80

SW3: 0022.91bf.bd80

The priorities and port speeds have all been left at the default.

Priority    32769 (priority 32768 sys-id-ext 1)

The resulting BIDs:

SW1: 32769:000f.90e2.2540 SW2: 32769:0022.91bf.5c80 SW3: 32769:0022.91bf.bd80

Here’s what happened during the election, assuming all three switches were turned on at the same time. SW1 sees BPDUs from SW2 and

SW3, both announcing they’re the root. From SW1’s point of view, these are inferior BPDUs; they contain BIDs that are higher than SW1’s. For that reason, SW1 continues to announce via BPDUs that it is the root.

SW2 sees BPDUs from SW1 and SW3, both announcing they're the root. SW2 sees the BIDs in them, and while SW3's BPDU is an inferior BPDU, SW1's is a superior BPDU, since SW1's BID is lower than that of SW2. SW2 will now forward BPDUs it

receives announcing SW1 as the root.

SW3 is about to start developing a massive inferiority complex, since the BPDUs coming at it from SW1 and SW2 are both superior BPDUs. Since the BPDU from SW1 has the lowest BID of those two BPDUs, SW3 recognizes SW1 as the root and will forward BPDUs announcing that information.

As the root switch, SW1 will have both ports placed into

Forwarding mode, as verified by the edited output of show spanning vlan 1. Note that both of these ports are designated ports.

SW1#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Desg FWD 19
Fa0/12           Desg FWD 19

SW2 and SW3 now need to select their root port. Each non-root bridge has two different ports that it can use to reach the root bridge, but the cost is lower for the port that is physically closer to the root bridge (we're assuming all port speeds are the same). Those ports will now be selected as the root port on their respective switches, verified by show spanning vlan 1.

SW2#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Root FWD 19

SW3#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Root FWD 19

We're almost done! Either SW2 or SW3 must be elected the designated bridge of their common segment. The switch that advertises the lowest cost to the root bridge will be the designated bridge, and that switch's port on the shared segment will be the designated port (DP).

In this network, SW2 and SW3 will advertise the same cost to each other over the shared segment. In that case, the switch with the lowest BID will be the designated bridge, and we know that's SW2. SW2's Fa0/12 port will be put into forwarding mode and named the DP for that segment.

SW3's Fa0/12 port will be put into blocking mode and will be that segment's non-designated port (NDP). The DP is always in

forwarding mode and the NDP will always be in blocking mode.

All forwarding ports on the root switch are considered DPs. A root switch will not have root ports. It doesn’t have a specific port to use to reach the root, it is the root!

We’ll verify the DP and NDP port selection with show spanning vlan 1.

SW2#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Root FWD 19
Fa0/12           Desg FWD 19

SW3#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Root FWD 19
Fa0/12           Altn BLK 19

Now that STP has converged and all switches agree on the root, only the root will originate BPDUs. The other switches receive them, read them, update the port costs, and then forward them. Nonroot switches do not originate BPDUs.

The amazing thing about that topology is that only one port ended up being put into blocking mode and five ports in forwarding mode!

In the previous examples, the speed of both links between switches was the same. What if

the speeds were different?

In our earlier two-switch example, fast0/11 was chosen as the root port on SW1. The port cost was the same (19), so the port priority was the tiebreaker. In this scenario, the speeds of the links are not the same. The faster the port, the

lower the port cost, so now fast0/12 would be chosen as the RP on SW1.

Here are some common port speeds and their associated STP port costs:

  • 10 Mbps: 100

  • 100 Mbps: 19

  • 1 Gbps: 4

  • 10 Gbps: 2

You must keep those costs in mind when examining a network diagram to determine

root ports, because it’s our nature to think the physically shortest path is the fastest path. STP does not see things that way. Consider:

At first glance, you’d think that SW B would select Fa0/1 as its root port. Would it?

The BPDU carries the Root Cost, and this cost increments as the BPDU is forwarded throughout the network. An individual port’s STP cost is locally significant only and is unknown by downstream switches.

The root bridge will originate a BPDU with the Root Cost set to zero. When a neighboring switch receives this BPDU, that switch adds the cost of the port

the BPDU was received on to the incoming Root Cost.

Root Cost increments as BPDUs are received, not sent. That new value will be reflected in the outgoing BPDU that switch forwards.

Let's look at the network again, with the port costs listed. 100 Mbps ports have a port cost of 19, and 1000 Mbps ports have a port cost of 4.

Reviewing two very important points regarding port cost:

The root switch originates the BPDU with a cost of zero

The root port cost increments as BPDUs are received

When SW A sends a BPDU directly to SW B, the root path cost is zero. That will increment to 19 as it’s received by SW B.

When SW A sends a BPDU to SW C, the root path cost is zero. That will increment to 4 as it’s received by SW C. That BPDU is then forwarded to SW B, which then adds 4 to that cost as it’s received on Fa0/2. That results in an overall root path cost of 8, which will result

in SW B naming Fa 0/2 as the root port.

The moral of the story: The physically shortest path is not always the logically shortest

path. Watch for that any time you see different link speeds in a network diagram!

You Might Be A Root Switch If….

I’m going to quickly list four ways you can tell if you’re on the root, and four ways you can tell if you’re NOT on the root.

I recommend you check out my free videos on my YouTube channel on this subject. The videos are free and on exam day, you’ll be VERY glad you watched them!

Four tip-offs you’re NOT on the root bridge:

No “this bridge is the root” message

The MAC address of the Root ID and Bridge ID are different

The bridge has a root port

There's a port in blocking mode

Four hints you ARE on the root bridge:

There’s a “this bridge is the root” message

The MAC of the Root ID and Bridge ID are the same

There are no root ports

No ports in blocking mode

Changing The Root Bridge Election Results (How and Why)

If STP were left to its own devices, a single switch would end up as the root bridge for every single VLAN in your network. That switch wins simply because it has a lower MAC address than every other switch, which isn't exactly the criterion you want to use to select your root bridge.

The time will definitely come when you want a particular switch to be the root bridge for your VLANs, or when you want to spread the root bridge workload across switches. You can make this happen with the spanning-tree vlan root command.

In our previous two-switch example, SW1 is the root bridge of VLAN 1. We can create three more VLANs, and SW1 will still be the root bridge for every one of them. Why? Because its BID will always be lower than SW2's.

For this demo, I've created VLANs 10, 20, and 30. The edited output of show spanning-tree vlan shows that SW1 is the root bridge for all these new VLANs.
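If you're following along on your own equipment, creating the VLANs only takes a moment. A quick sketch (nothing fancy here, and you'd repeat it on the other switch unless VTP is handling that for you):

SW1(config)#vlan 10
SW1(config-vlan)#exit
SW1(config)#vlan 20
SW1(config-vlan)#exit
SW1(config)#vlan 30
SW1(config-vlan)#exit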

SW1#show spanning-tree vlan 10

VLAN0010
  Spanning tree enabled protocol ieee
  Root ID    Priority    32778
             Address     000f.90e1.c240
             This bridge is the root

SW1#show spanning-tree vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     000f.90e1.c240
             This bridge is the root

SW1#show spanning-tree vlan 30

VLAN0030
  Spanning tree enabled protocol ieee
  Root ID    Priority    32798
             Address     000f.90e1.c240
             This bridge is the root

We'd like SW2 to act as the root bridge for VLANs 20 and 30 while leaving SW1 as the root for VLANs 1 and 10. To make this happen, we'll go to SW2 and use the spanning-tree vlan root primary command.

SW2(config)#spanning-tree vlan 20 root primary
SW2(config)#spanning-tree vlan 30 root primary

SW2#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     000f.90e2.1300
             This bridge is the root

SW2#show spanning vlan 30

VLAN0030
  Spanning tree enabled protocol ieee
  Root ID    Priority    24606
             Address     000f.90e2.1300
             This bridge is the root

SW2 is now the root bridge for both VLANs 20 and 30. Note the priority value, which we did not configure manually. More on that in a moment!

This command has another great option:

SW2(config)#spanning-tree vlan 20 root ?
  primary     Configure this switch as primary root for this spanning tree
  secondary   Configure switch as secondary root

You can configure a switch to be the standby root bridge with the secondary option. This changes the priority just enough that the secondary root doesn't take over immediately, but it will become the primary if the current primary goes down.

Let’s take a look at root secondary in action. We have a three-switch topology for this example. We’ll use the root primary command to make SW3 the root of VLAN 20. Which switch would become the root if SW3 went down?

SW3#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     0011.9375.de00
             This bridge is the root

  Bridge ID  Priority    24596 (priority 24576 sys-id-ext 20)
             Address     0011.9375.de00

SW2#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     0011.9375.de00

  Bridge ID  Priority    32788 (priority 32768 sys-id-ext 20)
             Address     0018.19c7.2700

SW1#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    32788
             Address     0011.9375.de00

  Bridge ID  Priority    32788 (priority 32768 sys-id-ext 20)
             Address     0019.557d.8880

SW2 and SW1 have the same default priority, so the switch with the lowest MAC address will be the secondary root, and that’s SW2. Let’s use the root secondary command to make SW1 the secondary root switch for VLAN 20.

SW1(config)#spanning vlan 20 root secondary

SW1#show spanning vlan 20

VLAN0020
  Spanning tree enabled protocol ieee
  Root ID    Priority    24596
             Address     0011.9375.de00

  Bridge ID  Priority    28692 (priority 28672 sys-id-ext 20)
             Address     0019.557d.8880

SW1 now has a priority of 28672, making SW1 the root if SW3 goes down. A priority value of 28672 is an excellent tipoff the root secondary command is in use. The config shows this as well:

spanning-tree mode pvst
spanning-tree vlan 20 priority 28672

The big question at this point:

Where is STP coming up with these priority settings? We’re getting the desired effect, but it would be nice to know where the numbers are coming from. And by a strange coincidence, here’s where they’re coming from!

If the current root bridge’s priority is greater than 24,576, the switch sets its priority to 24576 in order to become the root. You saw that in the previous example.

If the current root bridge’s

priority is less than 24,576, the switch subtracts 4096 from the root bridge’s priority in order to become the root. If that’s not enough to get the job done, another 4096 will be subtracted.

If you don’t like those rules or you’ve just gotta set the values manually, the spanning-tree vlan priority command will do the trick. I personally prefer the spanning-tree vlan root command, since that command ensures that the priority on the

local switch is lowered sufficiently for it to become the root.

With the spanning-tree vlan priority command, you have to make sure the new priority is low enough for the local switch to become the root switch. As you’ll see, you also have to enter the new priority in multiples of 4096.

SW2(config)#spanning-tree vlan 20 priority ?
  <0-61440>  bridge priority in increments of 4096
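For example, handing SW2 the VLAN 20 root role by hand might look like this (a sketch; 24576 is just one legal multiple of 4096, and it's on you to verify that it's actually low enough to win the election):

SW2(config)#spanning-tree vlan 20 priority 24576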

The STP Timers

Once these elections have taken place, the root bridge will begin sending a Hello BPDU out all its ports every two seconds. This Hello BPDU serves as the heartbeat of STP. As long as the non-root bridges receive it, they know the path to the root is unchanged and stable.

Once that heartbeat disappears, it's an indication of a failure somewhere along the path. STP will run the spanning-tree algorithm to determine the

best available path, and ports will be brought out of blocking mode as needed to build this path.

The Hello BPDUs carry values for three timers:

Hello Time: Time between Hello BPDUs. Default: 2 seconds.

Max Age: The bridge should wait this amount of time after not hearing a Hello BPDU before running the STP algorithm. Default: 20 seconds.

Forward Delay: The amount of time a port should stay in the listening and learning stages as it changes from blocking to forwarding mode. Default: 15 seconds.

Two important notes regarding changing these timers:

These timer values weren’t pulled out of the sky. Cisco has them set at these values to prevent

switching loops during STP recalculations. Change them at your peril.

To change these timers, do so only on the root. You can change them on a non-root, but the changes will not be advertised to the other switches!

You can change these timers with the spanning-tree vlan command, but if you have any funny ideas about disabling them by setting them to zero,

forget it! (I already tried.) Here are the acceptable values according to IOS Help, along with a look at the commands used to change these timers:

Switch(config)#spanning vlan ?
  WORD  vlan range, example: 1,3-5,7,9-11

Switch(config)#spanning vlan 1 ?
  forward-time  Set the forward delay for the spanning tree
  hello-time    Set the hello interval for the spanning tree
  max-age       Set the max age interval for the spanning tree
  priority      Set the bridge priority for the spanning tree
  root          Configure switch as root
  <cr>

Switch(config)#spanning vlan 1 forward-time ?
  <4-30>  number of seconds for the forward delay timer

Switch(config)#spanning vlan 1 hello-time ?
  <1-10>  number of seconds between generation of config BPDUs

Switch(config)#spanning vlan 1 max-age ?
  <6-40>  maximum number of seconds the information in a BPDU is valid

Even if you try to sneak a zero past the router — forget it, the router sees that fastball coming!

Switch(config)#spanning vlan 1 hello-time 0

% Invalid input detected at '^' marker.
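Within those ranges, though, the commands are painless. A quick sketch, run on the root bridge and using values that are legal but otherwise arbitrary:

Switch(config)#spanning vlan 1 hello-time 3
Switch(config)#spanning vlan 1 max-age 25
Switch(config)#spanning vlan 1 forward-time 20

Again, think twice before straying far from the defaults.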

The STP Interface States

The transition from blocking to forwarding is not instantaneous. STP has interfaces go through two intermediate states between blocking and forwarding -- listening and learning.

A port coming out of blocking first goes into listening. The port is listening for Hello BPDUs from other possible root switches, and also takes this opportunity to do some spring cleaning on its MAC table. (If a

MAC entry isn’t heard from in this time frame, it’s thrown out of the table.)

This state’s length is defined by the Forward Delay timer, 15 seconds by default.

The port will then go into learning state. During this state, the switch learns the new location of switches and puts fresh-baked entries into its MAC table. Ports in learning state do not forward frames.

Learning state also lasts the duration of the Forward Delay timer.

To review the order and timers involved:

Switch waits 20 seconds without a Hello before beginning the transition process.

Port comes out of blocking, goes into listening for 15 seconds.

Port transitions from listening to learning, stays in learning for 15 seconds.

Port transitions from

learning to forwarding.

The one STP state not mentioned here is disabled. Some non-Cisco documentation does not consider this an official STP state, but since the CCNA is a Cisco exam, we certainly should! Ports in disabled mode are not learning MAC addresses, and they’re not accepting or sending BPDUs. They’re not doing anything!

Those timers are there for a reason, but they’re still a pain in the butt on occasion. Let’s talk about one of those times

and what we can do about it!

Portfast

Consider the amount of time a port ordinarily takes to go from blocking to forwarding when it stops receiving Hello BPDUs:

Port stays in blocking mode for 20 seconds before beginning the transition to listening (as defined by the MaxAge value)

Port stays in listening mode for 15 seconds before transition to

learning (as defined by the Forward Delay value)

Port stays in learning mode for 15 seconds before transition to forwarding mode (also as defined by Forward Delay)

That’s 50 seconds, or what seems like 50 hours in networking time.

In certain circumstances, we can avoid these delays with Portfast.

Portfast allows a port to bypass the listening and learning

stages of this process, but is only appropriate to use on switch ports that connect directly to an end-user device, such as a PC.

Using portfast on a port leading to another networking device can lead to switching loops. That threat is so serious that Cisco even warns you about it on the switch when you configure Portfast.

SW2(config)#int fast 0/6
SW2(config-if)#spanning portfast

%Warning: portfast should only be enabled on ports connected to a single host. Connecting hubs, concentrators, switches, bridges, etc... to this interface when portfast is enabled, can cause temporary bridging loops. Use with CAUTION

%Portfast has been configured on FastEthernet0/6 but will only have effect when the interface is in a non-trunking mode.

That’s a pretty serious warning! I love the mention of “temporary bridging loops”. All pain is temporary, but that doesn’t make it feel good at the time!

Portfast can be a real help in the right circumstances….

… and a real hazard in the wrong circumstances.

Make sure you know which is which!

One excellent real-world application for portfast is configuring it on end-user ports that are having a little trouble getting IP addresses via DHCP. Those built-in delays can on

occasion interfere with DHCP. I’ve used it to speed up the IP address acquisition process more than once, and it works like a charm.
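If you've got a whole block of end-user ports, the interface range command makes quick work of it. A sketch, assuming fast 0/1 through 0/8 all connect to PCs (adjust the range to your own access ports):

SW2(config)#interface range fast 0/1 - 8
SW2(config-if-range)#spanning-tree portfast

Expect the same kind of warning, which is fine as long as nothing but end-user devices ever plugs into those ports.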

Per-VLAN Load Balancing And Etherchannels

STP brings a lot of good to our network, but on occasion, it gives us a bit of a kick in the butt.

The kick here is that STP will leave only one trunk open between any two given switches, even if we have multiple crossover cables connecting them. While we obviously need STP to help us out with switching loop prevention, we’d really like to

use all of our available paths and bandwidth.

Two ways to make that happen are per-VLAN load balancing and Etherchannels.

Per-VLAN Spanning Tree (PVST) makes the load balancing option possible. Waaaay back in this section, I mentioned that every VLAN is running its own instance of STP in PVST. Now we’re going to see that in action!

Let's say we're running VLANs 1 through 50 in our production network. We know that whether we have two switches or ten, by default one single switch will be the root for all VLANs.

We know we’ll have one root bridge selected; we’ll assume it’s the one on the right. We also know that the non-root bridge will select one root port,

and the other port leading to the root bridge will go into blocking mode. If we have 50 VLANs in this network, traffic for all 50 VLANs will go over one of the two available links while the other remains totally idle.

That's not an efficient use of available resources! With PVST load balancing, we can fine-tune the port costs on a per-VLAN basis to enable one port to be selected as the root port for half of the VLANs, and the other port to be selected as the root port for the other half. That's per-VLAN load balancing!

I want you to see this feature in action, and I want you to see a classic “gotcha” in this config, so let’s head for the live equipment.

We're working with VLANs 1 and 100 in this lab, with SW1 the root of both VLANs, as well as

any future VLANs.

For clarity, I'm going to edit the Root ID and Bridge ID info out of the output of show spanning vlan in this section, since we're primarily concerned with the port role, status, and cost.

We’ll run show spanning vlan 1 and show spanning vlan 100 on both switches.

SW1#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Desg FWD 19
Fa0/12           Desg FWD 19

SW1#show spanning vlan 100

Interface        Role Sts Cost
Fa0/11           Desg FWD 19
Fa0/12           Desg FWD 19

SW2#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Root FWD 19
Fa0/12           Altn BLK 19

SW2#show spanning vlan 100

Interface        Role Sts Cost
Fa0/11           Root FWD 19
Fa0/12           Altn BLK 19

With SW1 as the root of both VLANs, both ports on that switch are forwarding. There’s a blocked port on SW2 courtesy of STP, which is preventing switching loops AND preventing us from using that second trunk. It’s just sitting there!

With per-VLAN load balancing, we can bring VLAN 100’s traffic over the currently unused link. It’s as simple as lowering the blocked port’s cost for VLAN 100 below that of the currently forwarding port!

They’re both Fast Ethernet interfaces, so they each have a cost of 19. Let’s lower the cost on fast 0/12 for VLAN 100 to 12 and have a look around with IOS Help!

SW2(config)#int fast 0/12
SW2(config-if)#spanning ?
  bpdufilter     Don't send or receive BPDUs on this interface
  bpduguard      Don't accept BPDUs on this interface
  cost           Change an interface's spanning tree port path cost
  guard          Change an interface's spanning tree guard mode
  link-type      Specify a link type for spanning tree protocol use
  mst            Multiple spanning tree
  port-priority  Change an interface's spanning tree port priority
  portfast       Enable an interface to move directly to forwarding on link up
  stack-port     Enable stack port
  vlan           VLAN Switch Spanning Tree

SW2(config-if)#spanning cost ?
  <1-200000000>  port path cost

SW2(config-if)#spanning cost 12

The result is immediate. When I ran show spanning vlan 100 just seconds later…

SW2#show spanning vlan 100

Interface        Role Sts Cost
Fa0/11           Altn BLK 19
Fa0/12           Root LIS 12

… and shortly after, fast 0/12 is now the forwarding port for

VLAN 100.

SW2#show spanning vlan 100

Interface        Role Sts Cost
Fa0/11           Altn BLK 19
Fa0/12           Root FWD 12

VLAN 100 traffic will now go over fast 0/12 instead of fast 0/11. Pretty cool!

To verify our load sharing, let’s run show spanning vlan 1 and be sure the traffic for that vlan is still going over fast 0/11.

Interface        Role Sts Cost
Fa0/11           Altn BLK 19
Fa0/12           Root FWD 12

Hmm. All traffic for VLAN 1 is also going over fast 0/12. We’re not load balancing — we just changed the link all of the traffic is now using.

Why?

Here’s that gotcha I hinted about earlier. This particular command looks like the one you want, but the spanning cost command changes the port

cost for all VLANs. We need to remove that command and use the VLAN-specific version:

SW2(config)#int fast 0/12
SW2(config-if)#spanning ?
  bpdufilter     Don't send or receive BPDUs on this interface
  bpduguard      Don't accept BPDUs on this interface
  cost           Change an interface's spanning tree port path cost
  guard          Change an interface's spanning tree guard mode
  link-type      Specify a link type for spanning tree protocol use
  mst            Multiple spanning tree
  port-priority  Change an interface's spanning tree port priority
  portfast       Enable an interface to move directly to forwarding on link up
  stack-port     Enable stack port
  vlan           VLAN Switch Spanning Tree

SW2(config-if)#spanning vlan ?
  WORD  vlan range, example: 1,3-5,7,9-11

SW2(config-if)#spanning vlan 100 ?
  cost           Change an interface's per VLAN spanning tree path cost
  port-priority  Change an interface's per VLAN spanning tree port priority

SW2(config-if)#spanning vlan 100 cost ?
  <1-200000000>  Change an interface's per VLAN spanning tree path cost

SW2(config-if)#spanning vlan 100 cost 12

That’s what we needed! A minute or so later, I ran show spanning vlan 1 and show spanning vlan 100 on SW2. Notice the port blocked in each VLAN as well as the port costs.

SW2#show spanning vlan 100

Interface        Role Sts Cost
Fa0/11           Altn BLK 19
Fa0/12           Root FWD 12

SW2#show spanning vlan 1

Interface        Role Sts Cost
Fa0/11           Root FWD 19
Fa0/12           Altn BLK 19

It’s business as usual for VLAN 1 on fast 0/11, but VLAN 100 traffic is now using the fast 0/12 link. Just watch your commands and per-VLAN load balancing is easy!
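To recap the config that finally did the job, here it is in one place (a sketch that also removes the all-VLAN cost command from the first attempt):

SW2(config)#interface fast 0/12
SW2(config-if)#no spanning-tree cost 12
SW2(config-if)#spanning-tree vlan 100 cost 12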

Per-VLAN load balancing is one great solution for those unused links, and here's another one!

Etherchannels

An Etherchannel is the logical bundling (aggregation) of two to eight parallel Ethernet trunks. This provides greater throughput, and is another effective way to avoid the 50-second wait between blocking and forwarding states in case of a link failure.

How do we avoid the delay entirely? STP considers an Etherchannel to be one logical link. If one of the physical links making up the logical

Etherchannel should fail, there’s no process of opening another port and the timers don’t come into play. STP sees only the Etherchannel as a whole.

In this example, we have two switches connected by three separate crossover cables.

We'll verify the connections with show interface trunk and then run show spanning-tree vlan 1.

SW1#show interface trunk

Port      Mode         Encapsulation
Fa0/10    desirable    802.1q
Fa0/11    desirable    802.1q
Fa0/12    desirable    802.1q

SW1#show spanning-tree vlan 1

Interface        Role Sts Cost
Fa0/10           Root FWD 19
Fa0/11           Altn BLK 19
Fa0/12           Altn BLK 19

We know this is not the root switch, because…

there’s no “this bridge is the root” message

there is a root port, which is forwarding

We have three physical connections between the two switches, and only one of them is in use. That’s a waste of bandwidth! Additionally, if the root port on SW1 goes down, we’re in for a delay while one of the other two ports comes out of blocking mode and through listening and learning mode on the way to forwarding.

That’s a long time for a trunk to be down (50 seconds).

Both of these issues can be addressed by configuring an Etherchannel. By combining the three physical ports into a single logical link, not only is the bandwidth of the three links combined, but the failure of a single link will not force the STP timers to kick in.

Ports are placed into an Etherchannel with the channel-group command. The channel-group number doesn't have to match across the trunk, but it does have to match between interfaces on the same switch that will be part of the same Etherchannel.

Here’s the configuration, and this is a great chance to practice our interface range command! Nothing wrong with configuring each port individually, but this command saves time — on the job and in the exam room!

To verify that the channel-group number doesn't have to match between switches, I'll use group 1 to bundle the ports on SW1 and group 5 to bundle the ports on SW2.

SW1(config)#interface range fast 0/10 - 12
SW1(config-if-range)#channel-group 1 mode ...
Creating a port-channel interface Port-channel 1

00:33:57: %LINK-3-UPDOWN: Interface Port-channel1, changed state to up
00:33:58: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-channel1, changed state to up

SW2(config)#int range fast 0/10 - 12
SW2(config-if-range)#channel-group 5 mode ...
Creating a port-channel interface Port-channel 5

00:47:36: %LINK-3-UPDOWN: Interface Port-channel5, changed state to up
00:47:37: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-channel5, changed state to up
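Since those captures are cut off at the edge, here's roughly what the full commands look like. Consider this a sketch: the port range matches our three trunks, but the channel-group mode keyword is an assumption on my part, since it's truncated above (PAgP's desirable mode is just one of your options):

SW1(config)#interface range fast 0/10 - 12
SW1(config-if-range)#channel-group 1 mode desirable

SW2(config)#interface range fast 0/10 - 12
SW2(config-if-range)#channel-group 5 mode desirable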

After configuring an Etherchannel on each switch with the interface-level command channel-group, the output of show interface trunk and show spanning vlan 1 verifies that STP now sees the three physical links as one logical link -- the virtual interface port-channel 1 ("Po1").

Note the Etherchannel’s cost is 9 instead of 19. This lower cost reflects the increased bandwidth of the Etherchannel as compared to a single FastEthernet physical connection.

SW1#show interface trunk

Port      Mode         Encapsulation
Po1       desirable    802.1q

SW1#show spanning vlan 1

Interface        Role Sts Cost
Po1              Root FWD 9

We’ll go to SW2 to use some other Etherchannel verification tools.

You can use show interface port-channel to see the same info you’d see on a physical port. I’ll show you only the first two lines of output:

SW2#show int port-channel 5
Port-channel5 is up, line protocol is up
Hardware is EtherChannel, address is ...

With all this talk of channel- groups and port-channels, you may wonder if the word “Etherchannel” ever makes an appearance on the switch. Believe it or not, there is a show etherchannel command!

SW2#show etherchannel ?
  <1-6>         Channel group number
  detail
  load-balance
  port
  port-channel
  protocol
  summary
  |
  <cr>

Frankly, these aren’t commands you’re going to run often. show etherchannel summary gives you some good info to get started with troubleshooting:

SW2#show etherchannel summary

Flags:  D - down        P - bundled in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        M - not in use, minimum links not met
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port

Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol
5      Po5(SU)

I also like show etherchannel port, since it shows you how long each port in the Etherchannel has been in that state. Here’s the info I received on all three ports (I’m showing you only port 0/10):

SW2#show etherchannel port

Channel-group listing:
Group: 5
Ports in the group:

Port: Fa0/10

Port state    = Up Mstr In-Bndl
Channel group = 5     Mode =          GC = -      Load =
Port-channel  = Po5
Port index    = 0

Age of the port in the current state: ...

Let’s see how STP reacts to losing one of the channels in our Etherchannel.

Before configuring the Etherchannel, closing fast0/10 would have resulted in an STP recalculation and a temporary loss of connectivity between the switches. Now that the channels are bundled, I’ll close that port and immediately run

show spanning vlan 1.

SW1(config)#int fast 0/10

SW1(config-if)#shut

SW1#show spanning vlan 1

Interface        Role Sts Cost
Po1              Root FWD 12

STP does recalculate the cost of the Port-Channel interface. The cost is now higher since there are only two physical channels bundled instead of three, but the truly important point is that STP does not consider the Etherchannel to be down and

there’s no loss of connectivity between our switches.

BPDU Guard

Remember that warning from the router when configuring PortFast?

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree portfast

%Warning: portfast should only be enabled on ports connected to a single host. Connecting hubs, concentrators, switches, bridges, etc... to this interface when portfast is enabled, can cause temporary bridging loops. Use with CAUTION

%Portfast has been configured on FastEthernet0/5 but will only have effect when the interface is in a non-trunking mode.

You’d think that would be enough of a warning, but there is a chance that someone is going to manage to connect a switch to a port running Portfast, which in turn creates the possibility of a switching loop.

BPDU Guard protects against

this possibility. If any BPDU, superior or inferior, comes in on a port that’s running BPDU Guard, the port will be shut down and placed into error disabled state, shown on the switch as err-disabled.

To configure BPDU Guard on a specific port only:

SW1(config)#int fast 0/5
SW1(config-if)#spanning-tree bpduguard
% Incomplete command.

SW1(config-if)#spanning-tree bpduguard ?
  disable  Disable BPDU guard for this interface
  enable   Enable BPDU guard for this interface

SW1(config-if)#spanning-tree bpduguard enable

To configure BPDU Guard on all ports running portfast on the switch:

SW1(config)#spanning-tree portfast bpduguard default

Note this command is a variation of the portfast command.

There’s another guard, Root Guard, that is not on the CCNA exam but is perilously close in operation to BPDU guard. I want to clarify the difference:

Root Guard will bring a port down if a superior BPDU is received on that particular port. You’re guarding the local switch’s role as the root, since a superior BPDU would mean another switch would become the root.

BPDU Guard brings a port down if any BPDU is received on that port. This helps prevent switching loops, and can also be used as a security feature by enabling it on unused switch ports.
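For example, to use it as that security feature on a block of ports nothing should ever be plugged into (a sketch; the port range is just an example):

SW1(config)#interface range fast 0/20 - 24
SW1(config-if-range)#spanning-tree bpduguard enable

Any switch that gets patched into one of those ports and starts talking STP will have the port err-disabled on the spot.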

Let’s see BPDU Guard in action!

In this lab, SW2 is receiving BPDUs from SW1 on fast 0/10, 11, and 12. Let’s see what happens when we enable BPDU Guard on fast 0/10.

SW2(config)#int fast 0/10
SW2(config-if)#spanning bpduguard enable

*Mar 1 02:19:26.604: %SPANTREE-2-BLOCK_BPDUGUARD: Received BPDU on port Fa0/10 with BPDU Guard enabled. Disabling port.
*Mar 1 02:19:26.604: %PM-4-ERR_DISABLE: bpduguard error detected on Fa0/10, putting Fa0/10 in err-disable state

show int fast 0/10 verifies the port is in err-disabled state:

SW2#show int fast 0/10
FastEthernet0/10 is down, line protocol is down (err-disabled)

To put things right, we’ll remove BPDU Guard from port 0/10, and then reset it as required by the err-disabled message. After that, all is well!

SW2(config)#int fast 0/10
SW2(config-if)#no spanning bpduguard enable

(I could have used the spanning bpduguard disable command for the same end result.)

SW2(config-if)#shut

SW2(config-if)#no shut

SW2#show int fast 0/10
FastEthernet0/10 is up, line protocol is up (connected)

That’s enough switching and Etherchanneling for now, but we’re not done at Layer 2. Next up, L2 WAN work, including Frame Relay!

HDLC, PPP, and Frame Relay (Plus A Few Cables)

Here’s the deal with this section….

I’m going to discuss some Layer 1 WAN topics with you at the end of this section. That’ll include how I simulate a WAN in the labs you’ll see here, and some info on how I created a

frame relay cloud in my practice lab for us to use.

Before we get to that info, I’d like you to see the actual labs, and there are plenty of them in this section! This is just a note not to skip the Physical layer info at the end of the discussion and labs involving HDLC, PPP, and Frame Relay — there’s some VERY important information regarding Layer 1 at the end of this section.

With no further ado (whatever that is), let’s hit HDLC and PPP!

HDLC And PPP

With a point-to-point WAN link, we have two options for encapsulation: HDLC and PPP. During our discussion of these protocols, we’ll be running a couple of labs with the following PTP link.

Cisco actually has its own HDLC variation, known technically as cHDLC, which sounds more like a chemical element than a protocol. I strongly doubt you'll see the term "cHDLC" on your exams, as Cisco's own books and webpages refer to this protocol as "HDLC".

Why did Cisco develop their own HDLC? The original HDLC didn’t have the capabilities for multiprotocol support.

A couple of notes about Cisco HDLC:

Cisco added the TYPE field to allow that multiprotocol support.

Cisco’s version of HDLC is not Cisco-proprietary.

This is the default encapsulation on Cisco router serial interfaces.

Let’s get started with some lab work! We’ll assign IP addresses, open the interfaces, wait 30 seconds, and verify our config with show interface serial.

R1(config)#int s1
R1(config-if)#ip address 172.12.13.1 ...
R1(config-if)#no shut

R3(config)#int s1
R3(config-if)#ip address 172.12.13.3 ...
R3(config-if)#no shut

R1#show int s1
Serial1 is up, line protocol is down

R3#show int s1
Serial1 is up, line protocol is down

The combination “serial1 is up, line protocol is down” means everything’s fine physically, but there’s a logical issue. As we saw earlier in this section, a PTP link in a lab is going to have a DTE on one end and a DCE on the other, and the DCE

must supply clockrate to the DTE. To see which is which, just run show controller serial on one router.

R1#show controller serial 1
HD unit 1, idb = 0x1DBFEC, driver structure at ...
buffer size 1524  HD unit 1, V.35 DTE cable

If you see DTE on R1, you know R3 has to be the DCE end!

R3#show controller serial 1
HD unit 1, idb = 0x11B4DC, driver structure at ...
buffer size 1524  HD unit 1, V.35 DCE cable

Put the clockrate on the DCE end and the line protocol

comes up in half a minute or so. We’ll again verify with show interface serial, and now I’ll show you where you can see the encapsulation that’s running on the interface — in this case, the default, which is HDLC.

R3(config)#int s1
R3(config-if)#clockrate 56000

19:13:42: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to up

R1#show int s1
Serial1 is up, line protocol is up

R3#show int s1
Serial1 is up, line protocol is up
  Hardware is HD64570
  Internet address is 172.12.13.3 ...
  MTU 1500 bytes, BW 1544 Kbit ...
  reliability 255/255, txload ...
  Encapsulation HDLC, loopback not set

At this point, each partner in the PTP link can ping the other.

R1#ping 172.12.13.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.13.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

R3#ping 172.12.13.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.12.13.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)

The endpoints of a PTP link must agree on the encapsulation type. If one end is running HDLC, the other end must run HDLC as well or the line protocol will go down.

If one of the routers is running another encapsulation type, the physical interfaces will still be up, but the line protocol will go

down and IP connectivity will be lost. To illustrate, I’ll change the encapsulation type on R3’s Serial1 interface to the Point- To-Point Protocol (PPP).

I'll use IOS Help to illustrate the three encap types we'll work with in this section. I've edited out the other, less popular choices.

R3(config)#int s1
R3(config-if)#encapsulation ?
  frame-relay  Frame Relay networks
  hdlc         Serial HDLC synchronous
  ppp          Point-to-Point protocol

R3(config-if)#encapsulation ppp

A few seconds later, the line protocol goes down on R3.

19:18:11: %SYS-5-CONFIG_I: Configured from console by console
19:18:12: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to down

The encapsulation mismatch has brought the line protocol down, and to bring it back up, we simply need to make the encapsulation type match

again. Before doing so, let’s take a detailed look at PPP.

PPP Features

The default setting of a Cisco serial interface is to use HDLC encapsulation, but you’re generally going to change that encap type to PPP.

Why, you ask? Because PPP offers many features that HDLC does not, including:

Authentication through the use of the Password Authentication Protocol (PAP) and the Challenge-Handshake Authentication Protocol (CHAP)

Support for error detection and error recovery features

Multiprotocol support (which Cisco’s HDLC does offer, but the original HDLC does not)

We can authenticate over PPP with either PAP or CHAP, and when you have two choices for the same task, you just know you’re going to see a lot of those two choices on your exams. Let’s discuss both of

them while seeing both in action on live Cisco routers!

But before that… just a quick word!

The authentications and labs you’ll see in this section are two-way authentications, where each router is actively authenticating the other. This gives us plenty of practice with our commands, including show and debug commands, but authentication isn’t required to be two-way.

The two authentications are separate operations — they're not tied to each other. For example, if we wanted R1 to authenticate R3 in any of the following labs, but not have R3 authenticate R1, that's no problem.
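As a sketch of what that one-way setup looks like, the ppp authentication command simply goes on R1 only (both routers still need the matching username / password entries we'll build in a moment):

R1(config)#int s1
R1(config-if)#ppp authentication chap

R3 never issues the command, so it never challenges R1.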

PAP And / Or / Vs. CHAP

First things first — we need to have PPP running over our PTP link before we can even start examining PAP and CHAP. When last we left our routers, R3 was running PPP and R1 was running HDLC, so let’s config R1 for PPP and then verify both interfaces.

R1(config)#int s1
R1(config-if)#encap ppp

19:37:20: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial1, changed state to up

R1#show int s1
Serial1 is up, line protocol is up
  Encapsulation PPP, loopback not set

R3#show int s1
Serial1 is up, line protocol is up
  Encapsulation PPP, loopback not set

There’s a lot going on behind the scenes with CHAP and PAP, so we’ll run some debugs during these labs to see exactly how these protocols operate.

One major difference between the two -- CHAP is much more aggressive than PAP. Assume R1 is authenticating R3. With PAP, R1’s just going to sit there and wait for R3 to present a

password.

With CHAP, R1 challenges R3 to prove its identity. (To use the dreaded Buzzword Bingo word, CHAP is much more proactive than PAP.)

We'll start our CHAP config by creating a username / password database. If you haven't done that at this point in the course, you skipped something. ; ) No worries, it's easy! On R3, we'll create a database with R1's name and the password CCNA, and on R1 we'll create an entry with R3's name and the same password.

R3(config)#username R1 password CCNA
R1(config)#username R3 password CCNA

Now we'll apply CHAP with the ppp authentication chap command on both R1 and R3's serial interfaces. To watch the authentication process, we'll run debug ppp authentication on R3 before finishing the config.

R1(config)#int s1
R1(config-if)#ppp authen chap

R3#debug ppp authentication
PPP authentication debugging is on
R3(config)#int s1
R3(config-if)#ppp authentication chap

20:21:06: Se1 CHAP: O CHALLENGE
20:21:06: Se1 CHAP: I CHALLENGE
20:21:06: Se1 CHAP: O RESPONSE
20:21:06: Se1 CHAP: I RESPONSE
20:21:06: Se1 CHAP: O SUCCESS
20:21:06: Se1 CHAP: I SUCCESS

Success!

When all is well with CHAP authentication, this is the debug output. First, a set of challenges from each router, then a set of responses from each, and then two success messages.
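Here's the whole two-way CHAP setup in one place, a minimal sketch assuming the hostnames R1 and R3, the password CCNA, and the Serial1 interfaces from this lab:

R1(config)#username R3 password CCNA
R1(config)#interface serial1
R1(config-if)#encapsulation ppp
R1(config-if)#ppp authentication chap

R3(config)#username R1 password CCNA
R3(config)#interface serial1
R3(config-if)#encapsulation ppp
R3(config-if)#ppp authentication chap

Note that each username command names the remote router, and the password has to match on both ends.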

Now that we know what the debug output is when things

are great, let’s see what happens when the authentication’s off a bit. I’ll remove the database entry from R1 and replace it with one using ccna for the password instead of the upper-case CCNA. I’ll then reset the interface to trigger authentication.

R1(config)#no username R3 pas R1(config)#username R3 passwo R1(config)#int s1

R1(config-if)#shut

20:30:40: %LINK-5-CHANGED: In 20:30:41: %LINEPROTO-5-UPDOWN R1(config-if)#no shut

20:30:49: %LINK-3-UPDOWN: Int 20:30:49: Se1 CHAP: O CHALLEN 20:30:49: Se1 CHAP: I CHALLEN 20:30:49: Se1 CHAP: O RESPONS 20:30:49: Se1 CHAP: I RESPONS 20:30:49: Se1 CHAP: O FAILURE

The phrase “MD/DES compare failed” is a huge tipoff there’s an issue with the password.

You’re going to see a full set of these messages every 2 seconds with that debug, so while you troubleshoot, you might want to turn the debug off. You may also see the physical state of the interface begin to flap — that is, go up

and down every few seconds.

20:31:43: %LINK-3- UPDOWN: Interface Serial1, changed state to down 20:31:45: %LINK-3- UPDOWN: Interface Serial1, changed state to up 20:31:57: %LINK-3- UPDOWN: Interface Serial1, changed state to down 20:31:59: %LINK-3- UPDOWN: Interface Serial1, changed state to up

If you see that, I would shut

the interface down completely while you fix the config.

This debug illustrates an important point. Your CHAP and PAP passwords are case-sensitive, so “ccna” and “CCNA” are not the same password.

After replacing the new database entry with the original and reopening the interface, the debug shows our link is again working properly.

R1(config)#no username R3 pas R1(config)#username R3 passwo R1(config)#int s1 R1(config-if)#no shut

20:38:09: %LINK-3-UPDOWN: Int 20:38:09: Se1 CHAP: O CHALLEN 20:38:09: Se1 CHAP: I CHALLEN 20:38:09: Se1 CHAP: O RESPONS 20:38:09: Se1 CHAP: I RESPONS 20:38:09: Se1 CHAP: O SUCCESS 20:38:09: Se1 CHAP: I SUCCESS 20:38:10: %LINEPROTO-5-UPDOWN

Success!

That’s why you want to practice with debugs in a lab environment when things are working properly. You see exactly what’s going on “behind the command” and it gives you a HUGE leg up when real-world troubleshooting time comes

around.

If you get the username wrong, the output of that debug will be slightly different. I’ll remove the working username/password entry and replace it with one that has the right password but a mistyped username.

R1(config)#no username R3 pas R1(config)#username R33 passw

After resetting the interface, this is the output of debug ppp authentication.

20:41:35: Se1 CHAP: O CHALLEN

20:41:35: Se1 CHAP: I CHALLEN 20:41:35: Se1 CHAP: Username 20:41:35: Se1 CHAP: Unable to

That output is doing everything except fixing the problem for you! If the username isn’t found, that means there’s no entry for that username in the username/password database. Put one there and the problem is solved.

R1(config)#no username R33 pa R1(config)#username R3 passwo 20:47:52: Se1 CHAP: O CHALLEN 20:47:52: Se1 CHAP: I CHALLEN 20:47:52: Se1 CHAP: O RESPONS 20:47:52: Se1 CHAP: I RESPONS

20:47:53: Se1 CHAP: O SUCCESS 20:47:53: Se1 CHAP: I SUCCESS

The commands for PAP are much the same. PAP requires a username/password database exactly like the one we’ve already built, so we’ll continue to use that one. We’ll remove the CHAP configuration with no ppp authentication chap on both routers’ Serial1 interfaces. (There are exceptions, but you can usually negate a Cisco command simply by repeating the command with the word no in front of it.)

R1(config)#int s1 R1(config-if)#no ppp authenti R3(config)#int s1 R3(config-if)#no ppp authenti

Now we’ll put PAP into action on R1 first, and then run debug ppp authentication while configuring PAP on R3.

R1(config)#int s1 R1(config-if)#ppp authenticat
R3(config)#int s1 R3(config-if)#ppp authenticat

Here's the result of the debu

2d05h: Se1 PAP: I AUTH-REQ i 2d05h: Se1 PAP: O AUTH-REQ i 2d05h: Se1 PAP: Authenticatin 2d05h: Se1 PAP: O AUTH-ACK i 2d05h: Se1 PAP: I AUTH-ACK i

With PAP, there is no series of challenges.
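Here's a minimal sketch of the PAP config on both routers, reusing the username database we already built and the same Serial1 interfaces:

R1(config)#interface serial1
R1(config-if)#ppp authentication pap

R3(config)#interface serial1
R3(config-if)#ppp authentication pap

Depending on your IOS version, you may also need ppp pap sent-username <name> password <password> on each interface so the router knows which credentials to present to its peer.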

I’m always reminding you to use IOS Help even when you don’t need to, just to see what other options a given command has. I used it at the end of ppp authentication pap, and here are the results:

R3(config-if)#ppp authenticat
  callback  Authenticate remote
  callin    Authenticate remote on
  callout   Authenticate remote
  chap      Challenge Handshake Aut
  ms-chap   Microsoft Challenge
  optional  Allow peer to refus
  <cr>

According to IOS Help, we can still enter CHAP in this command, even though we’ve already specified PAP as the authentication protocol to use.

Now that’s interesting!

Both of the following commands are actually legal:

R1(config-if)#ppp authenticat

R3(config-if)#ppp authenticat

This option allows the local router to attempt a secondary authentication protocol if the

primary one (the first one listed) is not in use by the remote router.

This does not mean the second protocol will be used if authentication fails via the first protocol. For example, if we configured the following on

R3….

R3(config-if)#ppp authenticat

… here are the possible results.

If R3’s remote partner is not using PAP, R3 will then send CHAP messages.


If R1 does respond to the PAP messages and the result is failed authentication, R3 will *not* try CHAP.
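To put the full command in one place, here's a minimal sketch of that fallback config on R3, assuming the same Serial1 interface:

R3(config)#interface serial1
R3(config-if)#ppp authentication pap chap

The order matters: the first protocol listed is the one tried first, and the second only comes into play if the remote router isn't running the first one at all.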

Why CHAP Over PAP?

The drawback with PAP: The username and password are sent over the WAN link in clear text. If a potential network intruder intercepts that information, they’re going to become an actual network intruder in no time, since they can easily read the username and password.


Both routers have to know the password in CHAP, but neither will ever send the actual password over the link. Earlier, we saw a CHAP router challenge the other router to prove its identity.


This challenge takes the form of a three-way handshake, but it’s not the TCP three-way handshake! Here’s the overall process:

The authenticating router challenges the peer via a CHALLENGE packet, as discussed previously. Contained in that

challenge is a random number.

The challenged router runs a hash algorithm against its password, using that random number as part of the process. The challenged router passes that value back to the authenticating router in a RESPONSE packet.

The authenticating router looks at the algorithm result, and if it matches the answer the

authenticating router came up with using the same algorithm and the same random number, authentication has succeeded! The authenticating router sends an ack to the challenged router in the form of a SUCCESS message.

In earlier labs, we had R3 authenticating R1 and R1 authenticating R3. When authentication was properly configured, we saw the

CHALLENGE and RESPONSE packets, followed by SUCCESS!

22:11:22: Se1 CHAP: O CHALLEN 22:11:22: Se1 CHAP: I CHALLEN 22:11:22: Se1 CHAP: O RESPONS 22:11:22: Se1 CHAP: I RESPONS 22:11:22: Se1 CHAP: O SUCCESS 22:11:22: Se1 CHAP: I SUCCESS

“Who’s Causin’ All This?”

A better way to ask this question is “Who’s handling all of these PPP capabilities?” The answer — the Link Control Protocol (LCP).

Just as the Session layer is the “manager” of the entire OSI model, LCP is really the manager of PPP — the “control protocol”, technically.

LCP handles the configuration, maintenance, and eventual teardown of any PPP

connection. All the features that make PPP so attractive to network admins — looped link detection, PAP and CHAP authentication, PPP multilink (load balancing), and error detection — are negotiated and handled by LCP.

When a PPP link is up and running, both physically and logically, you’ll see “LCP Open” in the output of show interface serial.

R3#show int serial 1 Serial1 is up, line protocol Hardware is HD64570

Internet address is 172.12.1

MTU 1500 bytes, BW 1544 Kbit reliability 255/255, txload 1 Encapsulation PPP, loopback Keepalive set (10 sec)

LCP Open

Just to cause trouble, I configured ppp authentication chap on R3’s S1 interface without doing so on R1. Note the “LCP TERMsent” message. When you see LCP TERMsent or LCP Closed there, you’ve got a problem. Of course, line protocol is down tells us there’s a problem as well!

R3(config)#int s1 R3(config-if)#ppp authenticat

R3(config-if)#^Z

R3#

1w0d: %LINEPROTO-5-UPDOWN: Li 1w0d: %SYS-5-CONFIG_I: Config R3#show int s1 Serial1 is up, line protocol Hardware is HD64570 Internet address is 172.12.1

MTU 1500 bytes, BW 1544 Kbit Encapsulation PPP, loopback Keepalive set (10 sec)

LCP TERMsent

Let me introduce you to debug ppp authentication’s talkative relative, debug ppp negotiation. You’ll still

see the authentication output, it’ll just be in the middle of the entire negotiation output. I’m showing you this large debug output primarily so you can see how busy LCP is during the entire PPP negotiation process, starting with the 3rd line after the debug is turned on.

R3#debug ppp negotiation PPP protocol negotiation debu 22:11:22: Se1 PPP: Phase is E 22:11:22: Se1 LCP: O CONFREQ 22:11:22: Se1 LCP: AuthProto

22:11:22: Se1 LCP: MagicNumbe 22:11:22: Se1 LCP: I CONFREQ 22:11:22: Se1 LCP: AuthProto 22:11:22: Se1 LCP: MagicNumbe 22:11:22: Se1 LCP: O CONFACK 22:11:22: Se1 LCP: AuthProto 22:11:22: Se1 LCP: MagicNumbe 22:11:22: Se1 LCP: I CONFACK 22:11:22: Se1 LCP: AuthProto 22:11:22: Se1 LCP: MagicNumbe 22:11:22: Se1 LCP: State is O 22:11:22: Se1 PPP: Phase is A < CHAP authentication is then

There’s even more output after the authentication, but you get the point. LCP’s a busy protocol!

Keep the Link Control Protocol separate in your mind from another set of protocols that run over PPP, the Network Control Protocols (NCPs). While both run at Layer 2, NCP does the legwork of negotiating options for our L3 protocols to run over the PPP link. For example, IP’s options are negotiated by the Internet Protocol Control Protocol.

Now on to Frame Relay!

Frame Relay

Point-to-point networks are nice, but there’s a limit to scalability. It’s just not practical to build a dedicated PTP link between every single router in our network, nor is it cost-effective. It would be a lot easier (and cheaper) to share a network that’s already in place, and that’s where Frame Relay comes in!

A frame relay network is a nonbroadcast multi-access (NBMA) network.

“nonbroadcast” means that broadcasts are not transmitted over frame relay by default, not that they cannot be sent. “multiaccess” means the frame relay network will be shared by multiple devices.

The frame provider’s collection of frame relay switches has a curious name — frame relay cloud. You’ll often see the frame provider’s switches represented with a cloud drawing in network diagrams, much like this:


We have two kinds of equipment in this network:

The Frame Relay switches, AKA the Data Communications Equipment (DCE). These belong to the frame relay provider, and we don’t have anything to do with their configuration.

The routers, AKA the Data

Terminal Equipment. We have a lot to do with their configuration!

Each router will be connected to a Frame Relay switch via a Serial interface connected to a leased line, and the DCE must send a clockrate to that DTE. If the clockrate isn’t there, the line protocol will go down.


Those two frame switches are not going to be the only switches in that cloud. Quite the contrary, there can be hundreds of them! For simplicity’s sake, the following diagram will have less than that.


You and I, the network admins, don’t need to list or even know

every possible path in that cloud. Frankly, we don’t care. The key here is to know that not only will there be multiple paths through that cloud from Router A to Router B, but data probably will take different paths through that cloud.

That’s why we call this connection between the routers a virtual circuit. We can send data over it anytime we get ready, but data will not necessarily take the same path through the provider’s switches every time.

Frame relay is a packet-switching protocol. The packets may take different physical paths to the remote destination, at which point they will be reassembled and will take the form of the original message. In contrast, circuit-switching protocols have dedicated paths for data to travel from one point to another.

There are two types of virtual circuits, one much more popular than the other. A permanent virtual circuit (PVC)

is available at all times, where a switched virtual circuit (SVC) is up only when certain criteria are met. You’re going to see PVCs in most of today’s networks, and that’s the kind of virtual circuit we’ll work with throughout this section.

An SVC can be appropriate when data is rarely exchanged between two routers. For example, if you have a remote site that only needs to send data for 5 minutes every week, an SVC may be more cost-effective than a PVC. An SVC is

really an “on-demand” VC, as it’s built when it’s needed and torn down when that need ends.

A PVC can be used to build a full-mesh or partial-mesh network. A full mesh describes a topology where every router has a logical connection to every other router in the frame relay network.


The problem with full-mesh networks is that they’re simply not scalable. As the network grows, it becomes less and less feasible to maintain a full mesh. If we added just a single

router to the above network, we’d have to configure each router to have a VC to the new router.

Stepping back to dedicated leased lines for a moment — if full-mesh networks aren’t terribly scalable, dedicated lines are even worse! Can you imagine putting in a dedicated line between every router in a 20-router network? Forget it!

More common is the partial-mesh topology, where a single router (the hub) has a logical connection to every other

router (the spokes). The spokes do not have a logical connection to each other. Communication between spokes will go through the hub.


You can see where this would beat the heck out of dedicated lines, especially as your network grows. Imagine the cost if you add seven more routers to that network and then try to connect them all to each other with dedicated lines!

With PVCs, particularly in a hub-and-spoke network, you could quickly have that network up and running in minutes once your Frame Relay provider gives you the information you need to create your mappings.

We’ll get to that info and those mappings soon. Right now, let’s talk about the keepalive of our Frame Relay network!

The LMI: The Heartbeat Of Frame Relay

Local Management Interface (LMI) messages are sent between the DCE and the DTE. The “management” part of the message refers to PVC management, and information regarding multicasts, addressing, and VC status is contained in the LMI.

A particular kind of LMI message, the LMI Status message, serves as a keepalive for the logical connection

between the DTE and DCE. If these keepalives are not continually received by both the DCE and DTE, the line protocol will drop. The LMI also indicates the PVC status to the router, reflected as either active or inactive.

The LMI types must match on the DTE and DCE for the PVC to be established. There are three types of LMI:

Cisco (the default, AKA the “Gang Of Four” LMI)

ansi

q933a

The “Gang Of Four” refers to the four vendors involved in its development. (Cisco, StrataCom, DEC, NorTel)

The LMI type can be changed with the frame lmi-type command. Before doing anything with the frame relay commands, we have to enable frame relay on the interface with the encapsulation frame-relay command. Remember, the default encapsulation type on a Cisco Serial interface is HDLC.

R1(config)#interface serial0 R1(config-if)#encapsulation ?

  atm-dxi      ATM-DXI encapsulation
  frame-relay  Frame Relay netwo
  hdlc         Serial HDLC synchronous
  lapb         LAPB (X.25 Level 2)
  ppp          Point-to-Point protocol
  smds         Switched Megabit Data Se
  x25          X.25

R1(config-if)#encapsulation

R1(config-if)#frame-relay lmi

cisco

ansi

q933a

LMI Autosense will take effect when you don’t specify an LMI

type manually. When you open that interface, LMI Autosense has the router send out an LMI Status message for all three LMI types.


The router then waits for a response for one of those LMI types from the DCE. When the router sees the response to its

LMI Autosense messages, the router will then send only the same LMI type it received from the DCE.


The Frame Relay LMI isn’t exactly something we change on a regular basis, so once it’s up and running, mismatches

between the DTE and DCE are rare.

To be sure we can spot one, and to be fully prepared for exam success, we’ll create an LMI mismatch between the DTE and DCE in our lab, and follow that with some debugging and troubleshooting.

We’ll go through several full Frame Relay labs in this section, including some topics we haven’t covered here yet, but I want you to see the LMI info now. To that end, I’ve configured a working Frame

Relay network, which we’ll soon make not work.

Our router is R1, and show frame lmi verifies it’s running Cisco LMI. The top line of output tells us both the interface and the LMI running on that interface.

R1#show frame lmi
LMI Statistics for interface
Invalid Unnumbered info 0
Invalid dummy Call Ref 0
Invalid Status Message 0
Invalid Information ID 0
Invalid Report Request 0
Num Status Enq. Sent 1390

The fields we’re most interested in are “Num Status Enq. Sent”, “Num Status Msgs Rcvd”, and the “Num Status Timeouts” value. As the LMIs continue to be exchanged, the “Enq Sent” and “Msgs Rcvd” should continue to increment and the Timeouts value should remain where it is. Let’s take another look at this output just a few minutes later. (From this point forward, I’ll cut the “invalid” fields out of this output.)

R1#show frame lmi
LMI Statistics for interface
Num Status Enq. Sent 64
Num Update Status Rcvd 0

show interface serial 0 verifies the interface is physically up and the line protocol (the logical state of the interface) is up as well. The keepalive for Frame Relay is set to 10 seconds — that’s how often LMI messages are going out.

R1#show int s0

Serial0 is up, line protocol Internet address is 172.12.1 MTU 1500 bytes, BW 1544 Kbit reliability 255/255, txload 1

Encapsulation FRAME-RELAY, l Keepalive set (10 sec)

Now that we know how things look when the LMI matches, let’s set the LMI type on the router to ansi and see what happens.


R1(config)#int serial0
R1(config-if)#frame lmi-type

About 30 seconds later, the l

R1(config-if)#

3d04h: %LINEPROTO-5-UPDOWN: L

R1#show int s0 Serial0 is up, line protocol

You and I know why the line protocol is down, since we did it deliberately. But what if you had just walked into a client site and their Frame Relay link is down? The first step in Frame troubleshooting is show interface serial, which we just ran. We see the line protocol is down and the interface is running Frame Relay.

The “Serial0 is up” part of the

show int s0 output tells us that everything is fine physically, but there is a logical problem. Let’s run show frame lmi twice, a few minutes apart, and see what we can see.

R1#show frame lmi
LMI Statistics for interface
Num Status Enq. Sent 121
Num Update Status Rcvd 0

R1#show frame lmi
LMI Statistics for interface
Num Status Enq. Sent 134
Num Update Status Rcvd 0

LMI messages are still going out, so that’s good. The bad

part is the timeout counter incrementing while the msgs rcvd counter stands still. Let’s dig a little deeper and run debug frame lmi.

R1#debug frame lmi Frame Relay LMI debugging is Displaying all Frame Relay LM 3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE32 3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 0

3d04h:

3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE244 3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 0 3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE245

3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 0

R1#undebug all All possible debugging has be

When myseq continues to increment but yourseen does not, that’s another indicator of an LMI mismatch. I’ll turn the debug back on, change the LMI type back to Cisco, and we’ll see the result. Warning: A lot of info ahead!

R1#debug frame lmi Frame Relay LMI debugging is Displaying all Frame Relay LM R1#conf t

Enter configuration commands, R1(config)#int s0 R1(config-if)#frame lmi-type

R1(config-if)#

3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE018 3d04h: FR encap = 0x00010308 3d04h: 00 75 95 01 01 00 03 0

3d04h:

R1(config-if)#

3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE01A 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 00 03 02 4

3d04h:

3d04h: Serial0(in): Status, m 3d04h: RT IE 1, length 1, typ 3d04h: KA IE 3, length 2, you 3d04h: PVC IE 0x7 , length 0x 3d04h: PVC IE 0x7 , length 0x

R1(config-if)#

3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE01C 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 01 03 02 4

3d04h:

3d04h: Serial0(in): Status, m 3d04h: RT IE 1, length 1, typ 3d04h: KA IE 3, length 2, you

R1(config-if)#

3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE23B 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 01 03 02 4

3d04h:

3d04h: Serial0(in): Status, m 3d04h: RT IE 1, length 1, typ 3d04h: KA IE 3, length 2, you 3d04h: PVC IE 0x7 , length 0x 3d04h: PVC IE 0x7 , length 0x 3d04h: %LINEPROTO-5-UPDOWN: L

R1(config-if)#^Z

R1#

3d04h: Serial0(out): StEnq, m 3d04h: datagramstart = 0xE23 3d04h: FR encap = 0xFCF10309 3d04h: 00 75 01 01 01 03 02 4

3d04h:

3d04h: Serial0(in): Status, m 3d04h: RT IE 1, length 1, typ 3d04h: KA IE 3, length 2, you R1#undebug all All possible debugging has be

As yourseq and yourseen begin to increment, the line protocol comes back up. Once you see that, you should be fine, but always stick around for a minute or so and make sure the line protocol stays up.

Verify the line protocol with show interface serial. Note you can see other information relating to the LMI in this output.

R1#show int s0 Serial0 is up, line protocol Internet address is 172.12.1 Encapsulation FRAME-RELAY, l Keepalive set (10 sec) LMI enq sent 180, LMI stat LMI enqrecvd 0, LMI stat sen LMI DLCI 1023 LMI type is CIS

Before you leave the client site, turn off your debugs, either individually or with the

undebug all command.

All possible debugging has be

The LMI must match in order for our line protocol to stay up, but so must the Frame encapsulation type. The encapsulation type must be agreed upon by the DTEs at each end of the connection; the DCE does not care which Frame encap type is used.


We have two Frame encapsulation choices:

Cisco (the default)

IETF (the industry standard)

Interestingly enough, IOS Help does not mention the Cisco default, only the option to

change the Frame encap to IETF.

R1(config)#int s0 R1(config-if)#encap frame ? ietf Use RFC1490/RFC2427 enca <cr>

DLCIs, Frame Maps, and Inverse ARP

Frame Relay VCs use Data-Link Connection Identifiers (DLCIs) as their addresses. A DLCI is simply a Frame Relay Layer 2 address, but it’s a bit different from other addresses in that DLCIs can be reused from one

router to another in the same network.

The reason DLCIs have local significance only is that DLCIs are not advertised to other routers.

I know this sounds odd, but it will become clearer after we work through some examples of Frame Relay mapping, both dynamic and static.

On that topic, stick with me while I tell you a short story.

Years ago, my girlfriend-now-wife and I decided to take in a

movie. This being the 80s, we had to refer to an ancient information-gathering document called a newspaper to see what time the movie started. We saw the time the next show started, figured we had just enough time to make it, and hit the road.

(This was very unusual for me. I’m one of those people who feels he’s late for something if he’s not at least 15 minutes early.)

We walk in, there’s hardly anyone in the lobby, and I walk

up to the box office and ask for two tickets for that show. The fellow behind the counter tells me the movie started 20 minutes ago, which was 20 minutes earlier than the newspaper said it would start.

I informed him of this.

He looked me dead in the eye and said, “The paper ain’t always right.”

Hmpf.

What does this have to do with frame relay mapping, you ask?

Just as the paper ain’t always

right, the theory ain’t always right.

You know I’m all for letting the routers and switches do their work dynamically whenever possible. Not only does that save our valuable time, but using dynamic address learning methods is usually much more effective than static methods.

Without the right frame map statements, the rest of our frame relay work is useless, and we have two choices when it comes to Frame mapping:

Inverse ARP, the protocol that enables dynamic mapping

Static frame map statements, which you and I have to write

We’re going to continue this discussion as we build our first frame relay network. This network will be a hub-and- spoke setup.

The hub router, R1, has two DLCIs. DLCI 122 will be used for mapping a PVC to R2, and DLCI 123 will be used for

mapping a PVC to R3.

The subnet used by all routers is 172.12.123.0 /24, with the router number as the last octet. This lab contains no subinterfaces and all routers are using their Serial0 interfaces.

We have to get this L2 network up and running, because it’s the same network we’ll use as a foundation for our static routing, OSPF, and EIGRP labs, and you can’t have a successful L3 lab if L2 isn’t working perfectly!

Inverse ARP

Inverse ARP is enabled by default on a Cisco interface running Frame Relay. When you enter the encapsulation frame- relay command and then open the interface, you’re running Inverse ARP. It’s that easy!

What’s supposed to happen next: Each router sends an Inverse ARP packet announcing its IP address. The receiving router opens the packet, sees the IP address and a DLCI, which will be one of the local

DLCIs on the receiving router. The receiving router then maps that remote IP address to the local DLCI, and puts that entry in its Frame Relay mapping table.

That entry will be marked “dynamic”.

That’s great if it works, but Inverse ARP can be quirky and tough to work with. Many network admins chose a long time ago to put static frame relay map statements in their networks, and once those static entries go in, they tend to stay

there.

Again, nothing against Inverse ARP or the admins who use it. Theoretically, it’s great. In the real world, it doesn’t always work so well and you’ll wish you knew how to use static map statements.

And after this next section, you will!

I’ve removed all earlier configurations from the routers, so let’s configure R1 for frame encapsulation and then open the interface.

R1#conf t Enter configuration commands, R1(config)#int s0 R1(config-if)#ip address 172.

R1(config-if)#encapsulation

R1(config-if)#no shutdown R1 00:10:43: %SYS-5-CONFIG_I: Co 00:10:45: %LINK-3-UPDOWN: Int 00:10:56: %LINEPROTO-5-UPDOWN

The line protocol’s up, so we’re looking good. Let’s see if Inverse ARP has done anything by running show frame map. (This command displays both static and dynamic mappings.)

R1#show frame map
Serial0 (up): ip 0.0.0.0 dlci
              broadcast, CISCO, status defined, inac
Serial0 (up): ip 0.0.0.0 dlci
              broadcast, CISCO, status defined, inac

This mapping to “0.0.0.0” occasionally happens with Inverse ARP. These mappings don’t really hurt anything (except in the CCIE lab, of course), so if you want to leave them there, leave ’em. The only way I’ve ever seen to get rid of them is to disable Inverse ARP and reload the router.

You can turn Inverse ARP off

with the no frame-relay inverse-arp command.

R1(config)#int s0 R1(config-if)#no frame-relay

If you decide to turn it back on, use the frame-relay inverse-arp command.

R1(config)#int s0 R1(config-if)#frame inverse-a

It won’t surprise you to learn that we’ll use the frame map command to create frame maps, but you must be careful with the syntax of this

command. That goes for the exam room and working with the real thing!

Let’s take another look at the network.


The key to writing successful frame map statements is simple and straightforward:

Always map the local DLCI to the remote IP address.

When you follow that simple rule, you’ll always write correct frame map statements in the field and nail every Frame Relay question in the exam room. There are a few more details you need to learn about these statements, but the above rule is the key to success with the frame map command.

Now let’s write some static frame maps! I’ve removed all previous configurations, so we’re starting totally from scratch. We’ll start on R1 and use IOS Help to continually view our options with the frame map command. I have not opened this interface, and all Cisco router interfaces are closed by default.

R1(config)#int s0
R1(config-if)#ip address 172.
R1(config-if)#encap frame
R1(config-if)#no frame invers
R1(config-if)#frame map ?
  appletalk  AppleTalk
  bridge     Bridging
  decnet     DECnet
  ip         IP
  ipx        Novell IPX
  llc2       llc2

The first option is to enter the protocol we’re using, and that’s IP. Simple enough!

R1(config-if)#frame map ip ? A.B.C.D Protocol specific a

“protocol specific address” isn’t much of a hint, so we better know that we need to enter the remote IP address we’re mapping to. We’ll create this

map to R2’s IP address,

172.12.123.2.

R1(config-if)#frame map ip ?
  A.B.C.D  Protocol specific a

R1(config-if)#frame map ip 17

The next value needed is the

R1(config-if)#frame map ip 17
  <16-1007>  DLCI

… and we’re not given much of a hint as to which DLCI we’re supposed to enter — the one on R1 or on R2!

Following our simple DLCI rule, we know to enter a local DLCI here. Never enter the remote router’s DLCI. The router will

accept the command, but the mapping will not work.

R1(config-if)#frame map ip 17
  broadcast            Broadcasts should
  cisco                Use CISCO Encapsulati
  compress             Enable TCP/IP and
  ietf                 Use RFC1490/RFC2427 Enc
  nocompress           Do not compress T
  payload-compression  Use payl
  rtp                  RTP header compression p
  tcp                  TCP header compression
  <cr>

We’re getting somewhere, since we see a <cr> at the bottom, telling us what we’ve entered to this point is a legal command. Let’s go with this

command as it is, and write a similar map to R3 using DLCI

123.

R1(config-if)#frame map ip 17 R1(config-if)#frame map ip 17 R1(config-if)#no shut 00:14:32: %SYS-5-CONFIG_I: Co 00:14:33: %LINK-3-UPDOWN: Int 00:14:44: %LINEPROTO-5-UPDOWN

After opening the interface, we’ll check our mappings with show frame map.

R1#show frame map Serial0 (up): ip 172.12.123.2 CISCO, status deleted Serial0 (up): ip 172.12.123.3

CISCO, status deleted

Note static in this output. Mappings created with the frame map command will be denoted as static in the output of show frame map. If these mappings had been created by Inverse ARP, we’d see the word dynamic there.

We also see status deleted, and that doesn’t sound good! In this case, we’re seeing that because we haven’t configured the spokes yet. IP addresses haven’t even been assigned to those routers yet, so let’s do

that and configure the appropriate mappings at the same time.

R2(config)#int s0 R2(config-if)#ip address 172. R2(config-if)#encap frame

R2(config-if)#no frame invers R2(config-if)#frame map ip 17 R2(config-if)#frame map ip 17 R2(config-if)#no shutdown 00:21:27: %SYS-5-CONFIG_I: Co 00:21:28: %LINK-3-UPDOWN: Int

00:21:38: %FR-5-DLCICHANGE: I

00:21:39: %LINEPROTO-5-UPDOWN

There’s a message about DLCI 221 changing to ACTIVE, so let’s run show frame map and

see what’s going on.

R2#show frame map Serial0 (up): ip 172.12.123.1 CISCO, status defined, active Serial0 (up): ip 172.12.123.3 CISCO, status defined, acti

Looks good! Let’s configure R3 and then see where things stand.

R3(config)#int serial0 R3(config-if)#ip address 172. R3(config-if)#encap frame R3(config-if)#no frame inver R3(config-if)#frame map ip 17 R3(config-if)#frame map ip 17 R3(config-if)#no shutdown

00:24:38: %LINEPROTO-5-UPDOWN R3#show frame map Serial0 (up): ip 172.12.123.1 CISCO, status defined, acti Serial0 (up): ip 172.12.123.2

The mappings on both spokes are showing as active. Let’s check the hub!

R1#show frame map Serial0 (up): ip 172.12.123.2 Serial0 (up): ip 172.12.123.3

Each router can now ping the other, and we have IP connectivity. I’m showing only the pings from the hub to both

spokes, but I did go to each router and make sure I could ping the other two routers.

R1#ping 172.12.123.2 Type escape sequence to abort Sending 5, 100-byte ICMP Echo !!!!! Success rate is 100 percent R1#ping 172.12.123.3 Type escape sequence to abort Sending 5, 100-byte ICMP Echo !!!!! Success rate is 100 percent

If I have 100% connectivity, why did I make kind of a big deal of leaving the broadcast option off the frame map

statements? Let’s configure OSPF on this network and find out.

If you don’t know anything about OSPF yet, that’s fine --

you will by the end of this course. All you need to know for now is that OSPF-enabled interfaces will send Hello packets in an attempt to create neighbor relationships with downstream routers, and those Hello packets are multicast to

224.0.0.5.


The key word there is “multicast”. Frame Relay treats a multicast just like a broadcast — these traffic types can only be forwarded if the broadcast option is configured on the frame map statements. Pings

went through because they’re unicasts, but routing protocol traffic can’t operate over Frame Relay if the broadcast option is left off the map statements.

R1(config-if)#frame map ip 17

broadcast Broadcasts should

<cr> R3(config-if)#frame map ip 17

If you’re having trouble with routing protocol Hellos or other multicasts and broadcasts not being received by routers on a Frame Relay network, I can practically guarantee you the problem is a missing broadcast

statement.

You’ll usually see the broadcast statement on the end of all frame map statements. It’s so common that many admins think it’s required!
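To pull it all together, here's a minimal sketch of the hub's map statements with the broadcast option, using R1's local DLCIs of 122 and 123 and the spoke addresses from this lab:

R1(config)#interface serial0
R1(config-if)#frame-relay map ip 172.12.123.2 122 broadcast
R1(config-if)#frame-relay map ip 172.12.123.3 123 broadcast

Remote IP address first, then the local DLCI, then the broadcast option.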

You don’t have to put the broadcast option on spoke-to- spoke mappings, since all spoke-to-spoke traffic goes through the hub, and the hub will not forward those broadcasts. In our lab, R2’s mapping to R3 doesn’t require broadcast, and vice versa. It doesn’t hurt anything, but it’s

not a requirement.

Subinterfaces And Frame Relay

Up to now, we’ve used physical Serial interfaces for our Frame Relay networks. Using a physical Serial interface can lead to some routing complications, particularly on the hub router. One of those complications is split horizon.

If we’re running OSPF on our network, there’s no problem. On EIGRP networks, split horizon can be a problem, as illustrated by the following

network topology.

(I know we haven’t hit EIGRP in this course yet. No advance knowledge of EIGRP is needed to understand this lab.)


The three routers are using

their physical interfaces for Frame Relay, and each router is running EIGRP on that same physical interface. R2 is advertising its loopback address via EIGRP. Does R1 have the route?

R1#show ip route eigrp
     2.0.0.0/32 is subnetted, 1 s
D       2.2.2.2 [90/2297856] via

Yes! R3 is receiving EIGRP packets from R1 — does R3 have the route?

R3#show ip route eigrp

R3#

As I often say, “When a show command doesn’t show you anything, it has nothing to show you!” R3 has no EIGRP routes.

The reason R3 doesn’t have that route is split horizon. This routing loop prevention feature prevents a router from advertising a route back out the same interface that will be used by that same router as an exit interface to reach that route.

Or as I’ve always put it, “A

router can’t advertise a route out the same interface that it used to learn about the route in the first place.” Since R1 will send packets out Serial0 to reach the next-hop address for 2.2.2.2, it can’t send advertisements for that route out Serial0.

R1#show ip route eigrp
     2.0.0.0/32 is subnetted, 1 s
D       2.2.2.2 [90/2297856] via


We have three solutions to this problem:

Create a logical full mesh between all routers

Use the interface-level command no ip split-horizon

Use multipoint and/or point-to-point subinterfaces

With three solutions, you just know there have to be at least two with some shortcomings!

A logical full mesh wouldn’t be so bad between three routers, but not many production networks are made up of three routers. As you add dozens and/or hundreds of routers to this, you quickly understand that a logical full mesh is simply not a scalable solution.

The second solution, disabling split horizon, is simple enough in theory. We do that at the interface level in an EIGRP config with the no ip split-horizon eigrp command.

R1(config)#int s0 R1(config-if)#no ip split-ho
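The full form of that command includes the EIGRP autonomous system number. Assuming an AS of 100 for this network, it would look like this:

R1(config)#interface serial0
R1(config-if)#no ip split-horizon eigrp 100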

As a result, R1 advertises the missing route to R3, and it appears in R3’s route table.

R3#show ip route eigrp 2.0.0.0/32 is subnetted, 1 s

Simple enough, right? Welll….

Split horizon is enabled by default for a reason, and even though you may get the route advertisement that you do want after disabling it, you may quickly find routing loops that you don’t want. Should you ever disable SH in a production network, be ready for unexpected routing issues to pop up.

Using subinterfaces is a better solution, since those subinterfaces are seen by split horizon as totally separate

interfaces. It also gives us a chance to practice using subinterfaces for our exam success, and I’ll also use this lab to introduce you to the frame interface-dlci command.

We’ll start by re-enabling SH with the ip split eigrp command.

R1(config)#int s0 R1(config-if)#ip split eigrp

We’re going to assign a different subnet to each of our subinterfaces on R1, and change the addressing on R2

and R3 accordingly.


We have two choices for Frame subinterfaces, multipoint and point-to-point. Since both of our subinterfaces on R1 are going to communicate with one and

only one other router, we’ll make these point-to-point links. You must define the interface type when you create the interface.

R1(config)#int s0.12 ?
  multipoint      Treat as a multip
  point-to-point  Treat as a po

Here’s the configuration for R1. All frame relay commands from earlier labs have been removed. Note encapsulation frame-relay is still configured on R1’s Serial0 physical interface and the frame

interface-dlci command is used on point-to-point links.

I’m disabling Inverse ARP at the interface level, so it’ll be disabled on all subinterfaces as well.

R1:

R1(config)#int s0 R1(config-if)#encap frame R1(config-if)#no frame invers

R1(config)#int s0.12 point-to R1(config-subif)#ip address 1

R1(config-subif)#frame-relay

R1(config)#int s0.13 point-to R1(config-subif)#ip address 1

R1(config-subif)#frame-relay
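Here's that same hub config as a single sketch, assuming the PVCs still use DLCI 122 toward R2 and DLCI 123 toward R3; the /30 subnets shown are placeholders, since the exact subinterface addressing isn't spelled out in this lab:

R1(config)#interface serial0
R1(config-if)#encapsulation frame-relay
R1(config-if)#no frame-relay inverse-arp
R1(config)#interface serial0.12 point-to-point
R1(config-subif)#ip address 172.12.12.1 255.255.255.252
R1(config-subif)#frame-relay interface-dlci 122
R1(config)#interface serial0.13 point-to-point
R1(config-subif)#ip address 172.12.13.1 255.255.255.252
R1(config-subif)#frame-relay interface-dlci 123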

Don’t try to use the frame map command on a point-to-point interface — the router will not accept the command. The router will even tell you the right command to use on a PTP interface, but it’s a safe bet the exam isn’t gonna tell you!

R1(config)#int s0.12 R1(config-subif)#frame map ip

FRAME-RELAY INTERFACE-DLCI co

The configurations on R2 and R3 are not using subinterfaces, so we’ll use frame map

statements.

R2:

R2(config)#int s0 R2(config-if)#ip address 172. R2(config-if)#encap frame R2(config-if)#no frame invers R2(config-if)#frame map ip 17

R3:

R3(config)#int s0

R3(config-if)#ip address 172.

R3(config-if)#encap frame

R3(config-if)#no frame invers

R3(config-if)#frame map ip 17

Off screen, I’ve configured all three routers with EIGRP, including a loopback of 2.2.2.2 /32 on R2. R1 has the route and can ping 2.2.2.2, as verified by show ip route eigrp and ping.

R1#show ip route eigrp 2.0.0.0/32 is subnetted, 1 s D 2.2.2.2 [90/2297856] via 17 R1#ping 2.2.2.2 Type escape sequence to abort Sending 5, 100-byte ICMP Echo !!!!! Success rate is 100 percent

What about R3? Let’s check R3’s EIGRP table and find out!

R3#show ip route eigrp
     2.0.0.0/32 is subnetted, 1 s
D       2.2.2.2 [90/2809856] via 17
     172.12.0.0/30 is subnetted,
D       172.12.123.0 [90/2681856] v

R3#ping 2.2.2.2

Type escape sequence to abort Sending 5, 100-byte ICMP Echo !!!!! Success rate is 100 percent

R3 has the route and can ping 2.2.2.2. R1 has no problem

advertising the route to R3, because split horizon never comes into play.

The route came in on R1’s s0.12 subinterface and then left on s0.13. Split horizon considers subinterfaces on the same physical interface to be totally separate interfaces, so there’s no reason for split horizon to prevent R1 from receiving a route on one subinterface and then advertising it back out another subinterface.


Whew! To recap, we have three ways to circumvent the rule of Split Horizon:

Create a logical full mesh.

Disable split horizon at the interface level with no ip split-horizon.

Use subinterfaces, either point-to-point or multipoint.

Generally, you’ll use the last method, but it’s always a good idea to know more than one way to do things in CiscoLand!

Configuring Multipoint Subinterfaces

Had I chosen to configure multipoint subinterfaces in that lab, I would have configured them with the same command I use with physical interfaces — frame map. I’ll create an additional subinterface to illustrate:

R1(config)#int s0.14 multipoi R1(config-subif)#ip address 1 R1(config-subif)#frame map ip
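As a fuller (and purely hypothetical) version of that multipoint example, assume a new 172.12.124.0 /24 subnet, a remote router at 172.12.124.4, and a local DLCI of 124; none of those values exist in our lab, they're just placeholders:

R1(config)#interface serial0.14 multipoint
R1(config-subif)#ip address 172.12.124.1 255.255.255.0
R1(config-subif)#frame-relay map ip 172.12.124.4 124 broadcast

It's frame map on a multipoint subinterface, just as it is on a physical interface.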

When it comes to deciding

whether a subinterface should be point-to-point or multipoint, it really depends on the network topology and the number of remote routers a subinterface will be communicating with. There’s no “one size fits all” answer to that question, but for both exam room and server room success, it’s vital to know:

Subinterfaces are often used to work around split horizon.

You have to define subinterfaces as multipoint or point-to-point.

Always, always, always use the frame interface-dlci command with ptp subinterfaces.

Frame Relay Congestion Notification Techniques (With Bonus Acronyms!)

Frame Relay uses two different values to indicate congestion:

FECN — Forward Explicit Congestion Notification

BECN — Backward Explicit Congestion Notification

As I’m sure you can guess by the names, the main difference between the two is the direction! But what direction?

Glad you asked!


The frame relay cloud shown consists of multiple Frame Switches, but for clarity’s sake, I’ll only illustrate one. If that switch encounters transmission delays due to network congestion, the switch will set the FECN bit on the frames heading for Router B, since

that’s the direction in which the frames are traveling. The BECN bit will be set on frames being sent back to Router A.


When a frame arrives at a router with the FECN bit set, that means congestion was encountered in the direction in which the frame was traveling.

When a frame arrives at a router with the BECN bit set, congestion was encountered in the direction opposite to the one in which the frame was traveling.

The Discard Eligible bit is considered a Frame Relay congestion notification bit, but the purpose is a bit different from the BECN and FECN. Frames are sometimes dropped as a result of congestion, and frames with the DE bit set will be dropped before frames without that bit set. Basically, setting the DE bit on a frame

indicates data that’s considered less important than data without the DE bit set.

The FECN, BECN, and DE values can be seen with show frame pvc.

R1#show frame pvc
PVC Statistics for interface

            Active   Inactive
  Local        2         0
  Switched     0         0
  Unused       0         0

DLCI = 122, DLCI USAGE = LOCA

  input pkts 30          output pkt
  out bytes 0            dropped p
  in BECN pkts 0         out FECN p
  in DE pkts 0           out DE pkts 0
  out bcast bytes 0
  pvc create time 00:07:45, las

And speaking of PVC Status messages….

It’s Your Fault (Or Possibly Yours, But It Sure Ain’t Mine)

When you check PVCs with show frame-relay pvc, you’ll see one of three status messages for each PVC:

active

inactive

deleted

Active is what we’re after, and that’s what we saw in the previous example. But what’s

the difference between inactive and deleted? I’ll close R3’s Serial0 interface and see the result on R1. For clarity, I’m removing the information regarding the DLCI to R2.

R3(config)#int s0

R3(config-if)#shut

R1#show frame pvc
PVC Statistics for interface

            Active   Inactive
  Local        1         1
  Switched     0         0

DLCI = 123, DLCI USAGE = LOCA

  input pkts 159         output pkts
  out bytes 0            dropped p
  in BECN pkts 0         out FECN p
  in DE pkts 0           out DE pkts
  out bcast bytes 0
  pvc create time 00:38:46, las

The DLCI to R3 has gone inactive because there’s a problem on R3 — in this case, the Serial interface is administratively down. On the other hand, deleted means the PVC isn’t locally present.

Personally, I’ve always kept those two straight like this:

inactive means it’s the other guy’s fault (the problem is remote)

deleted means it’s your fault (the problem is local)

And You Thought I Had Forgotten

Now about those cables….

I mentioned “leased lines” early in this section, and this is one of those terms that has about

47 other names. I usually call them “serial lines”, “serial links”, or if I’m tired and can’t spare the extra word, “serial”.

Others call them T1s, T1 links, or just plain WAN links. One name or the other, they’re still leased lines.

To get those leased lines to work in a production network, we need a device to send clocking to our router (the DTE). That device is going to

be the CSU/DSU, which is generally referred to as “the CSU”. Collectively, the DTE and CSU make up the Customer Premise Equipment (CPE).

Your network may not have an external CSU. Many of today’s Cisco routers use WAN Interface Cards with an embedded CSU/DSU, which means you don’t need the external CSU. Believe me, that’s a good thing — it’s one less external device that could go down.

Here’s where acronym confusion comes in on occasion: The CSU/DSU acts as a DCE (Data Circuit-terminating Equipment; also called Data Customer Equipment on occasion). The DCE supplies clocking to the DTE, and in doing so tells the DTE — our Cisco router — when to send data and how fast it can do so.

The DCE basically says “When I say JUMP, you’re gonna say HOW HIGH?”

In this case, it’s really “HOW FAST?”, and that depends on how much money we’re giving the provider. There are three Digital Speed values you should know, since they might show up on your exams and will show up during conversations with your provider:

Digital Signal Zero (DS0) channels are 64Kbps each. According to Wikipedia, that’s enough for one digitized phone call, the purpose for which this channel size was originally

designed.

Digital Signal One (DS1) channels run at 1.544 Mbps, and if that sounds familiar, that’s because we usually refer to DS1 lines as T1 lines.

Digital Signal Three (DS3) channels run at 44.736 Mbps (sometimes rounded up to 45 Mbps in sales materials). T3 lines can carry 28 DS1 channels or 672 DS0 channels.

We’re not locked into these three speeds. If we need more than DS0 but less than DS1, we can buy speed in additional units of 64Kbps. Since we’re buying a fraction of T1 speed, this is called fractional T1.
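For example, if you need 512 Kbps, that's simply eight 64 Kbps DS0 channels bundled together on a fractional T1.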

If we need more speed than a T1 line offers, but don’t need or want to pay for T3 speed, we can purchase additional speed in units of 1.536 Mbps. It’s no surprise this is called fractional

T3.

The info you’ll find on the Wikipedia link below is beyond the scope of the CCENT and CCNA exams, but it does have important information on other Tx options (including T2) and the international differences between the channels and their speeds. You’ll also find excellent info on the overhead involved and much more detail on how these lines work.

It’s worth a read!

I Like EWANs. I Hate EWOKs.

That’s strictly an editorial comment. If you like both, that’s fine with me.

What’s that? You’ve never heard of an EWAN? That’s an Ethernet WAN, and according to Cisco, it’s a pretty sweet deal!

“Ethernet has evolved from just a LAN technology to a scalable, cost-effective and manageable WAN solution for businesses of all sizes. Ethernet offers numerous cost and operational advantages over conventional WAN solutions. An EWAN offers robust and extremely scalable high-quality services that are superior to any traditional WAN technology.”

Try getting an Ewok to do that!

The connection to an EWAN is similar to connecting to our Ethernet LAN, really. We’ll use an Ethernet interface to connect rather than the Serial interface we used in this section for our HDLC, PPP, and Frame Relay WANs.

Here’s the source of that quote from earlier in this section, and an excellent guide on choosing the right router and/or switch for your EWAN:

Those router choices include the popular Integrated Services Router (ISR):

Neither of those links are required reading for the CCENT or CCNA exams, but it’s good material to have handy when you’re the one making these

choices!

A (Very) Little About MPLS

Multiprotocol Label Switching (MPLS) is a complex topic, and we’re not going to go very far into it here. I do want to point out that where Frame Relay and EWANs run at Layer 2, MPLS VPNs can run at Layer 2 or 3, but when you hear someone mention “MPLS VPN”, they mean the Layer 3 variety.

Our MPLS VPN endpoints and midpoints consist of Customer Edge, Provider Edge, and Provider devices, all sending and forwarding IP packets to their proper destination. (We hope!)


There’s just a wee bit more to

this process, but we’ll save that for your future studies.

By the way, I receive messages regularly from students telling me how popular MPLS is getting in their networks, so this is a topic well worth studying on your own when you’re done with your CCENT and CCNA!

Next up, we’ll review important IP addressing and routing concepts from your ICND1

studies before tackling OSPF and EIGRP!

Routing And IP Addressing Fundamentals:

A Review

Before we head into our OSPF and EIGRP studies, spend some time with this chapter from my ICND1 Study Guide. When you’re comfortable with the routing fundamentals in this section, charge forward!

For one host to successfully send data to another, the sending host needs two destination addresses:

destination MAC address (Layer 2)

destination IP address (Layer 3)


In this section, we’re going to concentrate on Internet Protocol (IP) addressing. IP addresses are often referred to as “Network addresses” or “Layer 3 addresses”, since that is the OSI layer at which these addresses are used.

The IP address format you’re familiar with — addresses such as “192.168.1.1” — are IP version 4 addresses. That address type is the focus of this section. IP version 6 addresses are now in use, and they’re radically different from IPv4 addresses. I’ll introduce you to IPv6 later in this course, but unless I mention IPv6 specifically, every address you’ll see in this course is IPv4.

The routing process and IP both operate at the Network layer of the OSI model, and the routing

process uses IP addresses to move packets across the network in the most effective manner possible. In this section, we’re going to first take a look at IP addresses in general, and then examine how routers make a decision on how to get a packet from source to destination.

The routing examples in this section are not complex, but they illustrate important fundamentals that you must have a firm grasp on before moving on to more complex

examples. To do any routing, we’ve got to understand IP addressing, so let’s start there!

IP Addressing And An Introduction To Binary Conversions

If you’ve worked as a network admin for any length of time, you’re already familiar with IP addresses. Every PC on a network will have one, as will other devices such as printers. The term for a network device with an IP address is host, and I’ll try to use that term as often as possible to get you used to

it!

The PC…err, the host I’m creating this document on has an IP address, shown here with the Microsoft command ipconfig.

C:\>ipconfig

Windows IP Configuration

Ethernet adapter Local Area C
   IP Address: 192.168.1.100
   Subnet Mask: 255.255.255.0
   Default Gateway: 192.168.1.1

All three values are important, but we’re going to concentrate on the IP address and subnet mask for now. We’re going to

compare those two values, because that will allow us to see what network this particular host belongs to. To perform this comparison, we’re going to convert both the IP address and the subnet mask to binary strings.

You’ll find this to be an easy conversion with practice.

First we’ll convert the IP address 192.168.1.100 to a binary string. The format that we’re used to seeing IP addresses take, like the 192.168.1.100 shown here, is a

dotted decimal address.

Each one of those numbers in the address is a decimal representation of a binary string, and a binary string is simply a string of ones and zeroes.

Remember — “it’s all ones and zeroes”!

We’ll convert the decimal 192 to binary first. All we need to do is use the following series of numbers and write the decimal that requires conversion on the left side:

        128   64   32   16   8   4   2   1
192

All you have to do now is work from left to right and ask yourself one question:

“Can I subtract this number from the current remainder?”

Let’s walk through this example and you’ll see how easy it is! Looking at that chart, ask yourself “Can I subtract 128 from 192?” Certainly we can. That means we put a “1” under

“128”.

 
        128   64   32   16   8   4   2   1
192      1

Subtract 128 from 192 and the remainder is 64. Now we ask ourselves “Can I subtract 64 from 64?” Certainly we can! Let’s put a “1” under “64”.

 
 
        128   64   32   16   8   4   2   1
192      1     1

Subtract 64 from 64, and you have zero. You’re practically done with your first binary conversion. Once you reach zero, just put a zero under

every other remaining value, and you have your binary string!

 

        128   64   32   16   8   4   2   1
192      1     1    0    0   0   0   0   0

The resulting binary string for the decimal 192 is 11000000. That’s all there is to it!

If you know the basics of binary and decimal conversions, AND practice these skills diligently, you can answer any subnetting question Cisco asks you.
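Want one more rep before we move on? Here's the same process for the decimal 100, the last octet of our host's address. Can we subtract 128 from 100? No, so 128 gets a 0. Can we subtract 64 from 100? Yes, so 64 gets a 1 and the remainder is 36. Can we subtract 32 from 36? Yes, another 1, and the remainder is 4. 16 and 8 are both too big, so they get 0s. 4 comes out of 4 perfectly for a 1 and a remainder of zero, so 2 and 1 both get 0s. The result is 01100100, and you'll see that exact string in just a moment.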

I’ll go ahead and show you the

entire binary string for 192.168.1.100, and the subnet mask is expressed in binary directly below it.

192.168.1.100 = 11000000 10101000 00000001 01100100

255.255.255.0 = 11111111 11111111 11111111 00000000

The subnet mask indicates where the network bits and host bits are. The network bits of the IP address are indicated by a "1" in the subnet mask, and the host bits are where the subnet mask has a "0". This address has 24 network bits, and the network portion of this address is 192.168.1 in decimal.
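If you want to see that comparison in code, here's a rough Python sketch that ANDs each octet of the address with the matching octet of the mask. The octet lists are just the example values from above.

    ip   = [192, 168, 1, 100]
    mask = [255, 255, 255, 0]
    # Network bits survive the AND; host bits are zeroed out.
    network = [i & m for i, m in zip(ip, mask)]
    print(".".join(str(octet) for octet in network))   # prints 192.168.1.0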

Any IP addresses that have the exact same network portion are on the same subnet. If the network is configured correctly, hosts on the same subnet should be found on one “side” of the router, as shown below.

Assuming a subnet mask of 255.255.255.0 for all hosts, we have two separate subnets, 192.168.1.x and 192.168.4.x. What you don't want is the following:


This could lead to a problem, since hosts in the same subnet are separated by a router. We’ll see why this could be a problem when we examine the routing process later in this section, but for now keep in mind that having hosts in the same subnet separated by a

router is not a good idea!

The IP Address Classes

Way back in the ancient times of technology — September 1981, to be exact — IP address classes were defined in RFC 791.

RFCs are Requests For Comments, which are technical proposals and/or documentation. Not always the most exciting reading in the world, but it’s well worth reading the RFC that deals with the subject you’re studying. Technical exams occasionally

refer to RFC numbers for a particular protocol or network service.

To earn your CCENT and CCNA certifications, you must know these address classes and be able to quickly identify what class an IP address belongs to. Here are the three ranges of addresses that can be assigned to hosts:

Class A: 1 — 126
Class B: 128 — 191
Class C: 192 — 223

The following classes are reserved and cannot be assigned to hosts:

Class D: 224 — 239. Reserved for multicasting, a topic not covered on the CCENT or CCNA exams, although you will need to know a few reserved addresses from that range. You’ll find those throughout the course.

Class E: 240 — 255. Reserved for future use, also called “experimental addresses”.

Any address with a first octet of 127 is reserved for loopback interfaces. This range is *not* for Cisco router loopback interfaces.

For your exams, I strongly recommend that you know which ranges can be assigned to hosts and which ones cannot. Be able to identify which class a given IP address belongs to. It’s straightforward, but I guarantee those skills will serve you well on exam day!
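One way to drill that skill is a quick Python sketch like the one below; the ranges simply mirror the lists above, and the function name is mine.

    def address_class(first_octet):
        if 1 <= first_octet <= 126:
            return "A"
        if first_octet == 127:
            return "loopback"
        if 128 <= first_octet <= 191:
            return "B"
        if 192 <= first_octet <= 223:
            return "C"
        if 224 <= first_octet <= 239:
            return "D (multicast)"
        return "E (experimental)"

    print(address_class(10))    # A
    print(address_class(150))   # B
    print(address_class(200))   # C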

The rest of this section concentrates on Class A, B, and C networks. Each class has its own default network mask, default number of network bits, and default number of host bits. We’ll manipulate these bits in the subnetting section, and you must know the following values in order to answer subnetting questions successfully — in the exam room or on the job!

Class A:

Network mask: 255.0.0.0
Number of network bits: 8
Number of host bits: 24

Class B:

Network mask: 255.255.0.0
Number of network bits: 16
Number of host bits: 16

Class C:

Network mask: 255.255.255.0
Number of network bits: 24
Number of host bits: 8
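Those host-bit counts also tell you how many usable host addresses each default network supports: 2 raised to the number of host bits, minus the two reserved addresses (network and broadcast). Here's a quick sanity check in Python:

    for cls, host_bits in (("A", 24), ("B", 16), ("C", 8)):
        print("Class", cls, ":", 2 ** host_bits - 2, "usable hosts")
    # Class A : 16777214 usable hosts
    # Class B : 65534 usable hosts
    # Class C : 254 usable hosts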

The RFC 1918 Private Address Classes

If you’ve worked on different production networks, you may have noticed that the hosts at different sites use similar IP addresses. That’s because certain IP address ranges have been reserved for internal networks — that is, networks with hosts that do not need to communicate with other hosts outside their own internal

network.

Address classes A, B, and C all have their own reserved range of addresses. You should be able to recognize an address from any of these ranges immediately.

Class A: 10.0.0.0 — 10.255.255.255
Class B: 172.16.0.0 — 172.31.255.255
Class C: 192.168.0.0 — 192.168.255.255

You should be ready to identify

those ranges in that format, with the dotted decimal masks, or with prefix notation. (More about prefix notation later in this section.)

Class A: 10.0.0.0 255.0.0.0, or 10.0.0.0 /8

Class B: 172.16.0.0 255.240.0.0, or 172.16.0.0 /12

Class C: 192.168.0.0 255.255.0.0, or 192.168.0.0 /16
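To check an address against those three ranges programmatically, a rough sketch using Python's standard ipaddress module works well; the addresses tested here are just examples.

    import ipaddress

    rfc1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def is_rfc1918(address):
        return any(ipaddress.ip_address(address) in net for net in rfc1918)

    print(is_rfc1918("172.20.3.9"))    # True  (inside 172.16.0.0 /12)
    print(is_rfc1918("172.32.0.1"))    # False (just outside that range)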

You may already be thinking

“Hey, we use some of those addresses on our network hosts and they get out to the Internet with no problem at all.” (It’s a rare network that bans hosts from the Internet today — that approach just isn’t practical.)

The network services NAT and PAT (Network Address Translation and Port Address Translation) make that possible, but these are not default behaviors. We have to configure NAT and PAT manually. We’re going to do

just that later in this course, but for now, make sure you know those three address ranges cold!

Introduction To The Routing Process

Before we start working with routing protocols, we need to understand the very basics of the routing process and how routers decide where to send packets.

We’ll take a look at a basic network and follow the decision-making process from the point of view of the host, then the router. We’ll then examine the previous example in this section to see why it’s a

bad idea to have hosts from the same subnet separated by a router.

Let’s take another look at a PC’s ipconfig output.

C:\>ipconfig

Windows IP Configuration

Ethernet adapter Local Area C

IP Address: 192.168.1.100
Subnet Mask: 255.255.255.0
Default Gateway: 192.168.1.1

When this host is ready to send packets, there are two and only two possibilities regarding the destination address:

It’s on the 192.168.1.0 255.255.255.0 network.

It’s not.

If the destination is on the same subnet as the host, the packet's destination IP address will be that of the destination host. In the following example, this PC is sending packets to 192.168.1.15, a host on the same subnet, so there is no need for the router to get involved. In effect, those packets go straight to the destination host.

192.168.1.100 now wants to send packets to the host at 10.1.1.5, and 192.168.1.100 knows it’s not on the same subnet as 10.1.1.5. In that case, the host will send the packets to its default gateway − in this case, the router’s

ethernet0 interface. The transmitting host is basically saying “I have no idea where this address is, so I’ll send it to my default gateway and let that device figure it out. In Cisco Router I trust!”
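Here's a minimal Python sketch of that decision, assuming the /24 mask from the ipconfig output above; the addresses are the ones used in these examples.

    def same_subnet(ip_a, ip_b, mask):
        # Two hosts share a subnet when their network bits match under the mask.
        return all((a & m) == (b & m) for a, b, m in zip(ip_a, ip_b, mask))

    host    = [192, 168, 1, 100]
    mask    = [255, 255, 255, 0]
    gateway = [192, 168, 1, 1]

    for dest in ([192, 168, 1, 15], [10, 1, 1, 5]):
        next_hop = dest if same_subnet(host, dest, mask) else gateway
        print(dest, "-> next hop", next_hop)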

When a router receives a packet, there are three possibilities regarding its destination:

Destined for a directly connected network.

Destined for a non-directly connected network that the router has an entry for in its routing table.

Destined for a non-directly connected network that the router does not have an entry for.

Let’s take an illustrated look at each of these three possibilities.

How A Router Handles A Packet Destined For A Directly Connected Network

We’ll use the following network in this section:


The router has two Ethernet interfaces, referred to in the rest of this example as "E0" and "E1". The switch ports will not have IP addresses, but the router's Ethernet interfaces will — E0 is 10.1.1.2, E1 is 20.1.1.2.

Host A sends a packet destined for Host B at 20.1.1.1. The router will receive that packet on its E0 interface and see the destination IP address of 20.1.1.1.


The router will then check its routing table to see if there’s an entry for the 20.0.0.0 255.0.0.0 network. Assuming no static routes or dynamic routing protocols have been configured, the router’s IP routing table will look like this:

R1#show ip route
Codes: C — connected, S — sta

Gateway of last resort is not

C 20.0.0.0/8 is directly co
C 10.0.0.0/8 is directly co

See the “C” and the “S” next to

the word “codes”? You’ll see anywhere from 15 — 20 different types of routes listed there, and I’ve removed those for clarity’s sake.

You don’t see the mask expressed as “255.0.0.0” — you see it as “/8”. This is prefix notation, and the number simply represents the number of 1s at the beginning of the network mask when expressed in binary. That “/8” is pronounced “slash eight”.

255.0.0.0 = binary string 11111111 00000000 00000000 00000000 = /8
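Counting those leading 1s is all prefix notation is. If you want to verify a conversion, a short Python sketch does it (the mask is given as a list of octets):

    def prefix_length(mask):
        # For a contiguous mask, the number of 1 bits equals the prefix length.
        return sum(bin(octet).count("1") for octet in mask)

    print(prefix_length([255, 0, 0, 0]))       # 8
    print(prefix_length([255, 255, 255, 0]))   # 24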

The “C” indicates a directly connected network, and there is an entry for 20.0.0.0. The router will then send the packet out its E1 interface and Host B will receive it.


Simple enough, right? Of course, the destination network

will not always be directly connected. We’re not getting off that easy!

How The Router Handles A Packet Destined For A Remote Network That Is Present — Or Not — In The Routing Table

Here’s the topology for this example:


If Host A wants to transmit packets to Host B, there’s a problem. The first router that packet hits will not have an entry for the 30.0.0.0 /8 network, will have no idea how to route the packets, and the packets will be dropped.

There are no static routes or dynamic routing protocols in action on a Cisco router by default. Once we apply those IP addresses and then open the interfaces, there will be a connected route entry for each

of those interfaces with IP addresses, but that’s it.

When R1 receives the packet destined for 30.1.1.2, R1 will perform a routing table lookup to see if there’s a route for 30.0.0.0. The problem is that there is no such route, since R1 only knows about the directly connected networks 10.0.0.0 and 20.0.0.0.

R1#show ip route
Codes: C — connected, S — sta
Gateway of last resort is not

C 10.0.0.0/8 is directly co

Without some kind of route to 30.0.0.0, the packet will simply be dropped by R1.
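To make the lookup concrete, here's a toy Python sketch of R1's situation. Real routers perform a longest-prefix match; this version only checks the first octet, since the example uses classful /8 networks, and the table contents are just the two connected networks.

    routes = {10: "E0", 20: "E1"}   # directly connected networks only

    def forward(destination):
        first_octet = destination[0]
        # No matching route means the packet is dropped.
        return routes.get(first_octet, "drop")

    print(forward([20, 1, 1, 1]))   # E1
    print(forward([30, 1, 1, 2]))   # drop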


We can use a static route or a dynamic routing protocol to resolve this. Let’s go with static

routes, which are created with the ip route command. The interface named at the end of the command is the local router’s exit interface. (Plenty more on this command coming in a later section!)

R1(config)#ip route 30.0.0.0

The routing table now displays a route for the 30.0.0.0 /8 network. The letter “S” indicates a static route.

R1#show ip route
Codes: C — connected, S — sta

C 10.0.0.0/8 is directly co
S 30.0.0.0/8 is directly co

R1 now has an entry for the 30.0.0.0 network, and sends the packet out its E1 interface. R2 will have no problem forwarding the packet destined for 30.1.1.2, since R2 is directly connected to that network.


If Host B wants to respond to Host A’s packet, there would be a problem at R2, since the incoming destination address of the reply packet would be 10.1.1.1, and R2 has no entry for that network. A static route or dynamic routing protocol would be needed to get such a route into R2’s routing table.

The moral of the story: Just because “Point A” can get packets to “Point B”, it doesn’t mean B can get packets back to A!

Why We Want To Keep Hosts In One Subnet On One Side Of The Router

Earlier in this section, the following topology served as an example of how not to configure a network.


Now that we’ve gone through some routing process examples, we can see why this is a bad setup. Let’s say a packet destined for 192.168.1.17 is coming in on another router interface.


The router receives that packet and performs a routing table lookup for 192.168.1.0 255.255.255.0, and sees that network is directly connected via interface E0.

The router will then send the packet out the E0 interface, even though the destination IP address is actually found off the E1 interface!

In future studies, you'll learn ways to get the packets to 192.168.1.17. For your CCENT and CCNA exams, keep in mind that it's a good practice to keep all members of a given subnet on one side of a router. It's

good practice for production networks, too!

Now that we have a firm grasp on IP addressing and the overall routing process, let’s move forward and tackle wildcard masking and OSPF!

The Wildcard Mask

ACLs use wildcard masks to determine what part of a network number should and should not be examined for matches against the ACL.

Wildcard masks are written in binary, and then converted to dotted decimal for router configuration. Zeroes indicate to the router that this particular bit must match, and ones are

used as “I don’t care” bits — the ACL does not care if there is a match or not.

In this example, all packets that have a source IP address on the 196.17.100.0 /24 network should be allowed to enter the router’s Ethernet0 interface. No other packets should be allowed to do so.

We need to write an ACL that allows packets in if the first 24 bits match 196.17.100.0 exactly, and does not allow any other packets regardless of source IP address.

1st Octet — All bits must match:  00000000
2nd Octet — All bits must match:  00000000
3rd Octet — All bits must match:  00000000
4th Octet — "I don't care":       11111111

Resulting Wildcard Mask: 00000000 00000000 00000000 11111111

Use this binary math chart to convert from binary to dotted decimal:

 

              128  64  32  16   8   4   2   1
1st Octet:      0   0   0   0   0   0   0   0
2nd Octet:      0   0   0   0   0   0   0   0
3rd Octet:      0   0   0   0   0   0   0   0
4th Octet:      1   1   1   1   1   1   1   1

Converted to dotted decimal, the wildcard mask is 0.0.0.255. Watch that on your exam. Don’t choose a network mask of 255.0.0.0 for an ACL when you mean to have a wildcard mask of 0.0.0.255.

I grant you that this is an easy wildcard mask to determine without writing everything out. You’re going to run into plenty of wildcard masks that aren’t as obvious, so practice this method until you’re totally comfortable with this process.
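If it helps to see the matching rule itself, here's a rough Python sketch: a packet matches when every bit the wildcard marks with a 0 is identical between the packet's source address and the address in the ACL. The function name and test addresses are mine.

    def acl_match(addr, acl_addr, wildcard):
        # Bits where the wildcard is 0 must match; bits where it's 1 are ignored.
        return all(((a ^ n) & (~w & 0xFF)) == 0 for a, n, w in zip(addr, acl_addr, wildcard))

    acl_network = [196, 17, 100, 0]
    wildcard    = [0, 0, 0, 255]

    print(acl_match([196, 17, 100, 42], acl_network, wildcard))   # True
    print(acl_match([196, 17, 101, 42], acl_network, wildcard))   # False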

We also use wildcard masks in

EIGRP and OSPF configurations. Consider a router with the following interfaces:

serial0: 172.12.12.12 /28 (or in dotted decimal, 255.255.255.240)
serial1: 172.12.12.17 /28

The two interfaces are on different subnetworks. Serial0 is on the 172.12.12.0 /28 subnet, while Serial1 is on the 172.12.12.16 /28 subnet. If we wanted to run OSPF on serial0 but not serial1, using a wildcard mask makes this possible.
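You can confirm those two subnet assignments with Python's standard ipaddress module; the interface addresses below are the ones from this example.

    import ipaddress

    s0 = ipaddress.ip_interface("172.12.12.12/28")
    s1 = ipaddress.ip_interface("172.12.12.17/28")

    print(s0.network)   # 172.12.12.0/28
    print(s1.network)   # 172.12.12.16/28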

The wildcard mask will require the first 28 bits to match 172.12.12.0; the mask doesn’t care what the last 4 bits are.

1st Octet: All bits must match.                              00000000
2nd Octet: All bits must match.                              00000000
3rd Octet: All bits must match.                              00000000
4th Octet: First four bits must match, last four are
"I don't care" bits.                                         00001111

Resulting Wildcard Mask: 00000000 00000000 00000000 00001111

Converted to dotted decimal, the wildcard mask is 0.0.0.15.
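Since a wildcard mask is simply the bitwise inverse of the network mask, you can sanity-check a /28 with the ipaddress module's netmask and hostmask attributes:

    import ipaddress

    net = ipaddress.ip_network("172.12.12.0/28")
    print(net.netmask)    # 255.255.255.240
    print(net.hostmask)   # 0.0.0.15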

Let’s tackle and conquer OSPF!

OSPF And Link-State Protocols

Link-State Protocol Concepts

A major drawback of distance vector protocols is their transmission of full routing tables far too often. When a RIP router sends a routing update packet, that packet contains every single RIP route that router has!

This takes up valuable bandwidth and puts an unnecessary drain on the receiving router’s CPU.

Sending full routing updates on a regular basis is unnecessary. You’ll see very few networks that have a change in their topology every 30 seconds, but that’s how often a RIP-enabled interface will send a full routing update.