DAY ONE: INSIDE JUNOS NODE SLICING

With the introduction of Junos node slicing, Juniper Networks offers the best virtualization solution for both control and data planes. Indeed, it's quite well known that X86 CPUs are the best for routing protocol implementations, but they are very limited in terms of network packet processing input/output performance. With Junos node slicing, network engineers can take advantage of X86 capabilities for control plane applications while, at the same time, continuing to enjoy the benefits of TRIO programmable network processors in terms of packet processing performance. This book goes inside to show you how to create and operate a Junos node slicing setup.

“Change begins at the end of the comfort zone and the Junos node slicing engineering team stepped
out of their comfort zone and had no fear taking MX technology to the next level. Massimo did the
same in this book. He stepped out of the comfort zone and had no fear in writing this excellent book
that will help customers understand and deploy Junos node slicing technology. A great example of
leadership and hard work.”
Javier Antich Romaguera, Product Line Manager, Automation Software Team, Juniper Networks

“You true believers in the no-compromise, brute performance, and power efficiency built on network
ASICs may have felt a disturbance in the Force with “network slicing,” a wavering in your faith. Worry
not! This book shows you how to surgically carve a physical router into “node slices,” each running its
own version of the OS, administered by its own group, running its own protocols – advanced “CPU
virtualization” for networking devices. Massimo Magnani makes the process easy with this step-by-
step Day One book!”
- Dr. Kireeti Kompella, SVP and CTO Engineering, Juniper Networks

IT’S DAY ONE AND YOU HAVE A JOB TO DO, SO LEARN ABOUT:
- Junos node slicing architecture and how it works
- How to install and set up a new Junos node slicing solution
- Advanced topics such as Abstracted Fabric Interfaces and inter-GNF connectivity
- How to correctly set up MX chassis and server to deploy Junos node slicing in a lab or a production environment
- How to migrate an existing chassis to a Junos node slicing solution
- How to design, deploy, and maintain Junos node slicing
By Massimo Magnani

ISBN 978-1941441923

Day One Books are focused on network reliability and efficiency.

Peruse the complete library at www.juniper.net/books.



Day One: Inside Junos® Node Slicing
by Massimo Magnani

Chapter 1: Introducing Junos Node Slicing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Chapter 2: Junos Node Slicing, Hands On. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Chapter 3: GNF Creation, Bootup, and Configuration. . . . . . . . . . . . . . . . . . . . . . . . 67

Chapter 4: GNF AF Interfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

Chapter 5: Lab It! EDGE and CORE Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Chapter 6: From Single Chassis to Junos Node Slicing. . . . . . . . . . . . . . . . . . . . . . . 137

Appendix: Node Slicing Lab Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163



© 2019 by Juniper Networks, Inc. All rights reserved. Juniper Networks and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. The Juniper Networks Logo and the Junos logo are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

Published by Juniper Networks Books
Author: Massimo Magnani
Editor in Chief: Patrick Ames
Copyeditor: Nancy Koerbel
Illustrator: Karen Joice

ISBN: 978-1-941441-93-0 (print)
Printed in the USA by Vervante Corporation.
ISBN: 978-1-941441-92-3 (ebook)

Version History: v1, May 2019
2 3 4 5 6 7 8 9 10

http://www.juniper.net/dayone

About the Author
Massimo Magnani is a consulting engineer at Juniper Networks, where he works with many Italian operators on the design, testing, and implementation of their networks. He started his IT career as a Linux system administrator for the Tetrapak Sweden group in 1998, then fell in love with IP networking and started working for Blixer S.p.A., the first Italian IP covered provider, in 2000, and then worked as a system integrator in the Hewlett-Packard consulting department before joining Juniper Networks in 2008. Massimo then worked at Lutech S.p.A., a Cisco Gold Partner and Juniper Networks Elite Partner, as a network engineer providing consulting services to many large enterprises and service providers. During this time he gained deep knowledge and experience in IP/MPLS, security, and switching technologies. He has always been a Linux enthusiast and an insane retro-gamer. He had the chance to be a Junos node slicing pioneer, and decided to become a Day One author.

Author’s Acknowledgments
I would like to thank my wife, Katiuscia, and my daughter, Sara, for their lovely support and for all the time taken away from the family because of this book. A big thank you also to Javier Antich Romaguera, Junos node slicing Product Manager, for supporting my idea to write this book. Many thanks to Jan van der Galien and Sean Clarke, from the Juniper Networks Amsterdam PoC Lab, for maintaining my testbed for so long. A special thank you also goes to Shraman Adhikari and Abhinav Tandon, from Juniper Networks Engineering, for replying to all my annoying questions. And last, but absolutely not least, my warmest thank you to Patrick Ames, Juniper Networks Books Editor in Chief, for all his wise advice, which I haven’t always followed, then regretted later, and for his guidance and support during the whole writing process. Thank you to my copyeditor, Nancy Koerbel, and my illustrator, Karen Joice, for their time, patience, and artistic passion.

Feedback? Comments? Error reports? Email them to dayone@juniper.net.

Welcome to Day One


This book is part of the Day One library, produced and published by Juniper Networks Books. Day One books cover the Junos OS and Juniper Networks network administration with straightforward explanations, step-by-step instructions, and practical examples that are easy to follow. You can obtain the books from various sources:
- Download a free PDF edition at http://www.juniper.net/dayone.
- PDF books are available on the Juniper app: Junos Genius.
- Ebooks are available at the Apple iBooks Store.
- Ebooks are available at the Amazon Kindle Store.
- Purchase the paper edition at Vervante Corporation (www.vervante.com) for between $15-$40, depending on page length.

A Key Junos Node Slicing Resource


The Juniper TechLibrary has been supporting Junos node slicing since its inception, and it features the Junos® Node Slicing Feature Guide, with all the material you need to get started with Junos node slicing. This book is not a substitute for that body of work, so you need to take the time to review the documentation: https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.html.

What You Need to Know Before Reading This Book


The author is assuming you have knowledge and experience in the following networking fields:
- Basics of Linux system administration
- Network operational experience with the Junos CLI
- Some understanding of virtualization technologies such as hypervisors, virtual switches, and virtual network interface cards (NICs): what they are and how they work
- Some understanding of the taxonomy of Juniper Networks routers
- Basic understanding of main routing and switching concepts such as next hops and VLANs
- A basic knowledge of subscriber management and the BGP protocol, which may be helpful to better grasp the use cases illustrated in Chapter 3
- And, familiarity with the technical documentation about the MX Series, Junos node slicing, and routing in general, available at https://www.juniper.net/documentation.

MORE? The Day One library has dozens of books on working with Junos:
http://www.juniper.net/books.

What You Will Learn by Reading This Book


You’ll learn:
- All about Junos node slicing architecture and how it works
- How to install and set up a new Junos node slicing solution
- Advanced topics such as AF Interfaces and inter-GNF connectivity
- How to correctly set up MX chassis and server to deploy Junos node slicing in a lab or a production environment
- How to migrate an existing chassis to a Junos node slicing solution
- How to design, deploy, and maintain Junos node slicing


About This Book


Welcome to Junos node slicing. This book provides the reader with useful information to successfully deploy node slicing in a Junos production environment, exploiting all the benefits that the solution can provide:
- Network slicing and Junos node slicing, how they are crucial to modern networking infrastructures, and why they are enabling upcoming technological revolutions, such as 5G and IoT.
- Junos node slicing, its building blocks, the hardware and software prerequisites it requires, and the components necessary to deploy it.
- A deep dive into a practical implementation, showing how to activate the feature, how to set up new partitions on a physical chassis, and how to manage it all.
- A practical use case in detail, common in today’s networks, on how to turn an existing MX Series router into a Junos node slicing device while minimizing network outages and the impacts on operator administration.

Junos Node Slicing Terms and Concepts


Base System (B-SYS): When the node slicing feature is configured on any MX Series router, it becomes a Base System (B-SYS); when the set chassis node-slicing command is used in Junos, the MX Chassis is turned into a B-SYS. The B-SYS holds the MX line cards that constitute the Junos node slicing data plane, as well as all the hardware components of the MX Chassis.

System Control Board Enhanced 2 (SCBe2): The third-generation fabric board on MX960/480/240 routers; it also provides the physical slot where the Routing Engine is installed.

Control Board – Routing Engine (CB-RE): The module that constitutes the host subsystem hardware function on MX2000 series router chassis; CB-REs are installed in pairs in the chassis for redundancy and provide the Routing Engine slots as well.

Junos Device Manager (JDM): The orchestrator that manages the virtual network functions running on the Junos node slicing external servers.

Virtual Network Functions (VNFs): Virtual machines acting as virtual Routing Engines; they are configured by the JDM and provide the Junos node slicing control plane. At the time of this writing, VNFs run on external servers, although they will also be supported, on a lesser scale, on the MX internal Routing Engine in upcoming Junos releases.

Guest Network Function (GNF): Represents a full slice, composed of a control plane element (the virtual Routing Engine) and a data plane element constituted by one, or a set, of line cards.

Abstracted Fabric Interfaces (AF Interfaces): Pseudo-interfaces modeled as plain Ethernet interfaces, used to interconnect different GNFs without wasting precious revenue ports to make communication between GNFs possible. AF Interfaces use the MX Series router fabric infrastructure to forward traffic, so they share the very same non-blocking performance of the crossbar itself.
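To make these terms concrete, here is a hedged sketch of how a B-SYS might tie GNFs to line cards and interconnect them over AF Interfaces. The statement names follow the Junos Node Slicing Feature Guide at the time of this writing, but the GNF IDs, FPC slots, and afN names are illustrative only; verify the exact syntax against your Junos release and the Feature Guide before using it:

```
# On the B-SYS: assign line cards to two GNFs (IDs and FPC slots are examples)
set chassis network-slices guest-network-functions gnf 1 fpcs 1
set chassis network-slices guest-network-functions gnf 2 fpcs 6

# Interconnect the two GNFs across the fabric with an Abstracted Fabric interface
set chassis network-slices guest-network-functions gnf 1 af2 peer-gnf id 2 af2
```

Each GNF can then configure its afN interface like any plain Ethernet interface (for example, `set interfaces af2 unit 0 family inet address ...` on the GNF side).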
Chapter 1

Introducing Junos Node Slicing

Before starting, it’s important to understand why Junos node slicing is so remarkable and what benefits it can bring to a modern network infrastructure.

5G and IoT: How Will the Network Cope with Them?


Juniper Networks has been active in networking for more than 20 years and has witnessed many transformations in the market. But to be sure, the transformations happening today, related to the adoption of massive virtualization technologies, the Internet of Things (IoT), and the upcoming 5G, are the most exciting and game-changing developments in a long time!

The combination of IoT and 5G will produce unprecedented changes in the way networks are consumed: for the first time in history, a relatively “small” share of connected devices will be operated by humans (on the order of billions of devices), while most of the devices (on the order of trillions) will be machines of some kind, providing the most heterogeneous set of services ever seen. For instance, think smart farming, smart cities, weather monitoring, human health monitoring, life-saving and life-support services, transportation, and many more examples. The only limit is the human imagination! Along with these new applications, humans will use their devices more intensively and perform their entire day-to-day tasks relying on the telecommunication infrastructure.

But all of this diversity in services and applications will work properly if, and only if, the underlying network infrastructure is able to strictly provide the required characteristics (latency, bandwidth, and reliability) to each of the services, as if they were running on a dedicated infrastructure. The higher the service sensitivity, the more reliable the resource allocation machinery must be.

Of course, building physically separated networks for each service, or even for groups of bandwidth- and latency-homogeneous services, is neither physically, practically, nor economically sustainable.

So how can a service provider segment its physical infrastructure to provide strict resource allocation and protection, the availability demanded by sensitive services, and economic sustainability? The 3GPP committee explicitly created the concept of “network slicing” to solve all these problems. Network slicing, according to 3GPP, is a set of technologies that allows an operator to partition its physical network into multiple virtual networks, providing the needed resource protection, availability, and reliability to each service to make it act like it has a dedicated network all to itself.

To further reinforce this same concept, but at the most granular level, the single node, Juniper Networks introduced Junos node slicing on its flagship MX Series line of products.

Junos Virtualization Toolkit: From the Beginning to Node Slicing


Since its earliest release, Junos has always provided a rich toolkit to create multiple, logically separated instances that allow network operators to tailor routing in the most flexible ways. The very first mechanism, introduced in the late nineties, was the “virtual-router.” It basically created a new routing table inside the same routing protocol process (RPD) running on the OS. This is a very powerful routing tool, as each virtual router resolves its routing information separately from the others, but it doesn’t provide any sort of resource reservation, management separation, fate sharing protection, or scale-up benefits.

The next router virtualization step in Junos was the creation of the so-called “Logical Systems.” This was a disruptive functionality because, for the first time in networking history, a router control plane could be partitioned to look like different devices. Indeed, with Logical Systems, different instances of the RPD (up to 16 running concurrently) were spawned, and each of them could provide the network engineer with an (almost) full set of Junos features, as would happen on a separate router. For instance, it was possible to create virtual routers inside each Logical System as if they were running on different nodes. In subsequent releases, file system and command-line interface (CLI) localizations were added, providing the Logical System with the capability to have different management contexts. Despite the Logical System tool providing a better resiliency solution when compared to simple virtual routers (for example, a fault inside a single RPD instance doesn’t affect the others in the same node), it can neither solve fate sharing inside the same Routing Engine (as all the RPDs run on top of the very same FreeBSD kernel) nor let the control plane scale out. Moreover, even in the latest Junos release, the management separation and resource protection characteristics are a lot lighter than the ones offered by different devices.

To solve these two problems, back in 2007 Juniper Networks released a solution, available on T Series routers only, named the “Juniper Control System” (JCS). The JCS was a custom chassis, connected to the T Series router through redundant Ethernet links, hosting up to twelve X86 physical boards that could be configured as standalone or Master/Backup Routing Engines to control up to eight partitions (if Routing Engine redundancy was not a strict requirement, otherwise up to six were achievable) inside the same T Series router. In this way, the scale-out, resource reservation and protection, and single Routing Engine fate sharing challenges were perfectly addressed. The major drawbacks of the solution were that the hardware was completely proprietary and very different from standard X86 servers, the blades were quite expensive, and overall the TCO of the solution, once all twelve slots were populated, was quite high due to the high total power consumption of the external chassis. Moreover, to interconnect different partitions (named Protected System Domains, or PSDs, just for the record), the customer had to use either physical ports or logical tunnel interfaces that needed dedicated tunnel cards, which wasted slots on the Flexible PIC Concentrators (FPCs) and had some performance limitations in terms of available throughput. JCS provisioning was another weak spot because it had to be performed using a sort of offline, BIOS-like software that was not integrated in any way with Junos. Bottom line: the solution was not fully optimized.

JCS, on paper, was already quite impressive at that time, but the real problem was very simple: the technology needed for the underlay wasn’t ready. With the introduction of multi-core X86 CPUs, memory price drops, increased storage sizes with very low cost per stored bit, the adoption of modern virtualization technologies on the control plane side, and the advancements in programmability of network processors on the data plane side, all the technological shortcomings JCS had to fight are now solved, and the time has finally come to implement Junos node slicing.
With Junos node slicing, all the challenges addressed by the JCS are solved, with none of the downsides that afflicted it. Indeed, Junos node slicing:
- Provides control plane virtualization using modern virtualization machinery running over commercial off-the-shelf (COTS) X86 servers running standard Linux distributions (Ubuntu and Red Hat Enterprise Linux at the moment);
- Runs the Routing Engines as virtual machines (VMs) on top of the Linux OS, not as dedicated hardware blades;
- Provides resource reservation and protection using full-fledged VMs;
- Composes a slice of a virtualized control plane (two virtual Routing Engines in a Master/Backup configuration) and one or more line cards physically hosted by the router chassis but logically paired with the external, instead of the internal, virtual Routing Engines;
- Achieves scale-out because each slice inherits the scale and performance of a single chassis;
- Ensures that a failure in any part of a slice can’t affect the others in any way; the perceived fault on a remote slice will be the same as if a remote node goes down;
- Provides a fully integrated VM management orchestrator named Junos Device Manager (JDM), offering northbound APIs and a Junos-like CLI to easily handle all the operations needed during the whole lifecycle of a VM;
- Interconnects slices using AF Interfaces, which are logical interfaces created leveraging the MX Chassis fabric; no revenue ports are lost, and performance is the same as the underlying crossbar when different partitions are connected.
With the introduction of Junos node slicing, Juniper Networks offers the best virtualization solution for both control and data planes. Indeed, it’s quite well known that X86 CPUs are the best for routing protocol implementations, but they are very limited in terms of network packet processing input/output performance. With Junos node slicing, network engineers can take advantage of X86 capabilities for control plane applications, while at the same time continuing to enjoy TRIO programmable network processor benefits in terms of packet processing performance.

Juniper Networks Node Slicing


Let’s lift the curtain on MX Series Junos node slicing, the star of this book! From a physical standpoint, today’s Junos node slicing is just an MX Chassis connected to two X86 servers. Simple, isn’t it?

But hardware is just one side of the coin: the other, and most interesting, one is the software, as is usual in this software-defined world! Nevertheless, the right approach is to fully understand both sides, so let’s start with what a node slice physically looks like, then we’ll be able to delve into the software implementation.

IMPORTANT Juniper Networks has already released a well-written Junos Node Slicing Feature Guide that explains both the hardware and software requirements in great detail: https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.pdf. Therefore, this Day One book will highlight the hardware and software requirements only where they are important and beneficial to its context.

Node Slicing — Physical High-Level View


To deploy the node slicing solution, an MX Series router must be connected to two external X86 bare metal servers, which will run the external virtual Routing Engines powering the GNF control plane, as illustrated in Figure 1.1.

To connect the servers to the router, two 10 Gigabit Ethernet links from each server will attach to the ports available on the MX SCBe2 or CB-RE boards. For this reason, these boards are a fundamental hardware prerequisite to running Junos node slicing on the MX480/960 and the MX2008/2010/2020, respectively. The two X86 servers will be designated as “Server0” and “Server1” when JDM is started for the first time, and they will run virtual Routing Engines 0 and 1, respectively.

Figure 1.1 MX Junos Node Slicing High-Level Hardware Scheme

Figure 1.2 shows each X86 server connected to both MX SCBe2s to provide the needed redundancy to the most important underlay control plane component. In fact, these 10GbE links have the responsibility of delivering all the control plane signaling (for example, the routing protocol PDUs), configuration commands, and syslog messages between the Base System and the bare metal servers running the virtual Routing Engine instances. A single point of failure is just not an option!

It’s very important to understand that, from a purely logical standpoint, there is no difference between how an internal Routing Engine and the external X86 servers are connected to the data plane component of a router chassis. In fact, one of the architectural pillars of Juniper Networks devices is that control plane and data plane components are physically separated from each other. Routing Engines handle control traffic, while Packet Forwarding Engines forward transit traffic. Nevertheless, among many other duties, the control plane must handle routing information to calculate the routing information base (RIB, also known as the routing table) and then download it into the forwarding information base (FIB, also known as the forwarding table) to provide the Packet Forwarding Engines all the correct next hops used to forward transit traffic.
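The RIB-to-FIB split just described can be observed directly from the Junos CLI with two standard operational commands (output omitted here; the annotations are this book's, not CLI syntax):

```
show route                     # the RIB: routes learned and selected by the Routing Engine
show route forwarding-table    # the FIB: the next hops pushed down to the Packet Forwarding Engines
```

Comparing the two outputs for the same prefix is a quick way to verify that the control plane has actually programmed the forwarding plane.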
But if there is no physical connection between the Packet Forwarding Engines and the Routing Engines inside a chassis, how could this goal be achieved? The answer is quite simple: inside every Juniper Networks device chassis, a fully dedicated infrastructure takes care of all the control plane traffic and the communications between the Routing Engines and the line cards. The term line cards is used because this information exchange doesn’t happen between the Routing Engines and the various Packet Forwarding Engines, but between the Routing Engines and a dedicated control CPU installed on each line card; in fact, this CPU is in charge of exchanging control plane information with the Routing Engines and of programming the local Packet Forwarding Engines for traffic forwarding.

It’s understood that in order for this information exchange to happen, Routing Engines and line card control CPUs need some kind of network infrastructure; indeed, a port-less switching component is installed in the chassis to provide the interconnection between all the control plane CPUs, either on the line cards or on the Routing Engines. In all MX chassis, this switching chip is physically installed on the Control Board (SCBe2 for MX240/480/960 or CB-RE on MX2000 series), so it is pretty straightforward to provide the two physical Ethernet ports on the CBs themselves to extend this infrastructure outside the chassis.

To better understand how this layer works, you can check how all the internal components are connected through this management switch by issuing the show chassis ethernet-switch command:

magno@MX960-4-RE0> show chassis ethernet-switch 

Displaying summary for switch 0
---- SNIP ----

Link is good on GE port 1 connected to device: FPC1
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

---- SNIP ----

Link is good on GE port 6 connected to device: FPC6
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

<SNIP>

Link is good on GE port 12 connected to device: Other RE
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

Link is good on GE port 13 connected to device: RE-GigE
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

<SNIP>

Link is good on XE port 24 connected to device: External-Ethernet
  Speed is 10000Mb
  Duplex is full
  Autonegotiate is Disabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

Link is good on XE port 26 connected to device: External-Ethernet
  Speed is 10000Mb
  Duplex is full
  Autonegotiate is Disabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

NOTE This output comes from an MX960 B-SYS chassis; for the sake of brevity, only the connected ports are shown.

You can see in the output that in this particular MX960 chassis:
- Ports 1/6 are connected to FPC1/6, respectively.
- Port 12 is connected to Other RE, meaning the Routing Engine physically installed on the other SCBe2, namely RE1 in this case, as the output comes from RE0 (please note the CLI prompt above).
- Port 13 is connected to RE-GigE, which means the Routing Engine physically installed on the local SCBe2, namely RE0.
- Ports 24 and 26 are connected to External-Ethernet devices; these two ports are used by the external servers to connect to the MX Chassis through the external SFP+ plugs installed on the SCBe2 cards.

NOTE On MX2020/2010/2008 chassis, the ports connected to External-Ethernet devices are 26 and 27.

The CLI outputs just observed are depicted in Figure 1.2.

Figure 1.2 Management Physical Infrastructure

From a pure line card perspective, there is no difference in physical connections between an internal Routing Engine and an external one besides the different connection speeds, which are 1Gbps and 10Gbps, respectively.

NOTE In the lab setup used to write this Day One book, RE-S-1800x4-32G Routing Engines were installed, so the connection speed on the management switch is still 1Gbps. Newer Routing Engines, such as the RE-S-X6 (MX480/960) or the RE-S-X8 (MX2020/2010/2008), have 10Gbps links instead.

Traditionally, it is through this control plane component that all the needed information is exchanged between the Routing Engine and the line cards installed in the chassis. These communications have been happening over our well-known and beloved IP protocol since Junos 10.2R1, so it was straightforward to simply connect some more ports to this very same switch and ‘extend’ the control plane connectivity outside a physical chassis.

IMPORTANT Even though there are no real technical constraints when deploying 10Gbps Ethernet links using copper or fiber cables, Juniper Networks strongly suggests deploying the links using optical media. Juniper Networks systems engineers themselves use fiber connections to perform node slicing quality assurance tests. Whichever media is chosen, it’s mandatory that all the connections be delivered using the same media: mixed fiber and copper setups are not supported.

Choose the Right X86 Servers


Choosing the right base system to deploy node slicing is pretty straightforward, as
it is supported only on the following MX Series:
 MX480 Series Routers with SCBe2 cards;

 MX960 Series Routers with SCBe2 cards;

 MX2008 Series Routers;

 MX2010 Series Routers;

 MX2020 Series Routers.

But when it comes to choosing the X86 bare metal servers, the options are almost endless, so it may be beneficial to clearly understand all the characteristics a server must provide to host the Junos node slicing control plane. The two main criteria are:
 Hardware Characteristics

 Scaling requirements

Let’s discuss each one separately.

Mandatory X86 External Servers Features


In order to be suitable for becoming a bare metal server for Junos node slicing, some mandatory hardware features must be provided by the chosen X86 servers, especially relating to CPU, storage, and network interface cards. Let’s examine them and also check to see if your lab setup has all the needed components to deploy the Junos node slicing feature.

NOTE The two external servers should have similar or, even better, identical technical specifications.

X86 CPUs: These must be Intel Haswell EP microarchitecture (Xeon E5-1600 v3 /


Xeon E5-2600 v3) or newer.

WARNING When CPU cores are accounted for, only physical cores are consid-
ered. Indeed, hyperthreading must be deactivated in the X86 server BIOS because,
at the moment, JDM is hyperthreading unaware, hence vCPUs belonging to the
same physical core might be pinned to different virtual Routing Engine instances,
eventually causing suboptimal performance.

Performance mode and virtualization hardware acceleration capabilities (Intel


VT-x / VT-d) must be activated in the BIOS.
To maintain steady and predictable CPU core performances, all power manage-
ment features must be disabled in BIOS, and C-State reports should be set to only
C0/C1 states.
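Before diving into the full lscpu output below, a couple of quick reads from procfs/sysfs can confirm that the BIOS settings took effect. This is a minimal sketch assuming a typical Linux installation; the cpufreq path depends on the scaling driver in use and may be absent on some systems:

```shell
#!/bin/sh
# Count the logical CPUs exposing the Intel VT-x flag (vmx); a result of 0
# means the virtualization extensions are disabled or not exposed by the BIOS.
grep -c vmx /proc/cpuinfo || true

# Show the frequency scaling governor of CPU 0; "performance" keeps the core
# at its nominal frequency (the path is driver dependent and may be absent).
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null \
    || echo "cpufreq interface not available"
```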
To ensure that the server CPU is performing as expected, let’s collect some information from the Linux CLI:
1. Check the CPU model you have in your server:
administrator@server-6d:~$ lscpu 
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    1
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
Stepping:              1
CPU MHz:               2099.479
CPU max MHz:           2100.0000
CPU min MHz:           1200.0000
BogoMIPS:              4201.82
Virtualisation:        VT-x
………………………………………
<SNIP>

You can get many useful insights from using this command:
 The CPU model is a Xeon E5-2683 v4 @ 2.10GHz, based on the Broadwell microarchitecture, which is a successor to Haswell.
 This CPU model has 16 cores per socket.
 Just a single thread per core is reported to the OS, so hyperthreading is disabled.

 The server has two NUMA nodes; in other words, two physical CPUs are installed on the server motherboard.
 As the server has two NUMA nodes and each of them can provide 16 cores, a total of 32 cores is available.
 Intel VT-x virtualization extensions are enabled.

2. To ensure the power management features are not impairing CPU performance, check that all the cores run at (around) the expected nominal frequency, in this case 2.10GHz:
administrator@server-6d:~$ cat /proc/cpuinfo | grep MHz
cpu MHz         : 2100.432
cpu MHz         : 2100.432
cpu MHz         : 2099.808
cpu MHz         : 2100.427
cpu MHz         : 2101.213
cpu MHz         : 2100.391
cpu MHz         : 2100.058
cpu MHz         : 2100.693
cpu MHz         : 2098.522
cpu MHz         : 2101.095
cpu MHz         : 2099.541
cpu MHz         : 2101.460
cpu MHz         : 2103.776
cpu MHz         : 2100.058
cpu MHz         : 2098.889
cpu MHz         : 2104.051
cpu MHz         : 2100.680
cpu MHz         : 2101.299
cpu MHz         : 2100.111
cpu MHz         : 2100.059
cpu MHz         : 2100.458
cpu MHz         : 2106.318
cpu MHz         : 2109.576
cpu MHz         : 2105.722
cpu MHz         : 2101.276
cpu MHz         : 2100.212
cpu MHz         : 2100.083
cpu MHz         : 2100.277
cpu MHz         : 2101.365
cpu MHz         : 2101.076
cpu MHz         : 2098.523
cpu MHz         : 2103.841

Perfect! From a CPU perspective we’re all set!


3. Storage must be locally attached and based on solid-state drives (SSD). Storage space is allocated per-GNF from two main mount points:
 / (root) – It must have at least 50GB of space allocated.
 /vm-primary – This is the mount point where all the virtual Routing Engine images and files will be stored; it must have at least 350GB of space allocated.

NOTE A RAID-1 (mirroring) configuration is recommended if storage resiliency is required.

Setting up a dedicated /vm-primary partition is also suggested, although not mandated.
Let’s check to see if the lab servers have the right storage. As usual, we’ll rely on
Linux CLI to perform some checks. First of all, let’s check if the disk is an SSD or
not:
administrator@server-6d:~$ lsblk -d -o name,rota
NAME ROTA
sda     0

The server has a single disk whose attribute ROTA (which means rotational device) is 0. So, it’s an SSD.

WARNING With older disk controllers, or if any virtualized storage is in place, this Linux command might report wrong information. Nevertheless, with modern disk controllers and direct attached storage, it should be reliable. It’s also possible to check the kernel boot messages using “dmesg” to find the exact disk model/manufacturer and then search the web for it. For example:
administrator@server-6d:~$ dmesg | grep -i -e scsi | grep ATA
[   10.489298] scsi 5:0:0:0: Direct-Access     ATA      Micron_1100_MTFD U020 PQ: 0 ANSI: 5

4. Check the disk size using familiar Linux commands such as df:
administrator@server-6d:~$ df -H
Filesystem                    Size  Used Avail Use% Mounted on
udev                          169G     0  169G   0% /dev
tmpfs                          34G   27M   34G   1% /run
/dev/mapper/server1--vg-root  503G   36G  442G   8% /
tmpfs                         170G     0  170G   0% /dev/shm
tmpfs                         5.3M     0  5.3M   0% /run/lock
tmpfs                         170G     0  170G   0% /sys/fs/cgroup
/dev/sda1                     495M  153M  317M  33% /boot
cgmfs                         103k     0  103k   0% /run/cgmanager/fs
tmpfs                          34G     0   34G   0% /run/user/1000

In this setup, the disk size used is ~512GB.


5. Network Interface Cards
The node slicing networking is built over Linux bridges and VIRTIO technology at the moment, so any network interface card supported by the X86 server’s underlying Linux distribution should be just fine.
Nevertheless, even though VIRTIO is the main I/O platform in KVM, its forwarding performance can’t match other technologies such as Intel SR-IOV. To support future scaled scenarios involving dozens of GNFs, VIRTIO might not be powerful enough to deliver the required bandwidth between the X86 external servers and the B-SYS, hence SR-IOV might be implemented instead. Juniper Networks therefore suggests equipping X86 external servers with NICs based on the Intel X710 chipset because these will be the only ones officially supported with SR-IOV. Of course, if Junos node slicing is deployed today using VIRTIO, it will be perfectly supported to replace the existing network cards with Intel X710-based ones in the future in order to use the SR-IOV-based solution.
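To verify which driver (and therefore which chipset family) each NIC is actually using — the Intel X710 family is typically served by the i40e kernel driver — the kernel’s sysfs view of the network devices can be inspected. A minimal sketch assuming a standard sysfs layout; interfaces without a backing PCI device (such as lo) simply print no driver:

```shell
#!/bin/sh
# List every network interface together with the kernel driver bound to it.
for dev in /sys/class/net/*; do
    drv=$(readlink "$dev/device/driver" 2>/dev/null)
    printf '%s: %s\n' "${dev##*/}" "${drv##*/}"
done
```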
6. RAM Memory
There is no specific requirement about RAM besides its size. Indeed, we will see in the next section that RAM is one of the fundamental factors to properly dimension the X86 servers.

How to Dimension X86 External Servers


Now that all the mandatory X86 requirements are clear, let’s understand the correct way to dimension Junos node slicing external servers.
Each X86 server must provide enough CPU cores, memory, storage, and network
resources to satisfy two main requirements:
 Shared hardware resources: These are the hardware resources that must be reserved for the underlying operating system and shared between all the GNFs running on the server;
 Per-GNF hardware resources: These are the resources that must be reserved for each GNF to properly run as a VM on the X86 server.

Table 1.1 Shared Hardware Resources

Shared Hardware Resources  Minimum Requirements
CPU                        Four cores allocated for JDM and Linux host OS
RAM Memory                 32GB DRAM minimum for JDM and Linux host OS
Storage                    Minimum 64GB storage for JDM and Linux host OS
Network Interfaces         • Two 10Gb Ethernet interfaces for X86 server to B-SYS
                             connections; minimum one NIC card, recommended two
                             cards to achieve per-card link redundancy
                           • Three 1Gbps ports for management purposes: one port
                             for Linux host OS management, one port for JDM direct
                             management access, one port for GNF direct management
                             access
                           • An out-of-band server access such as iDRAC/IPMI

Each virtual Routing Engine running on the X86 external server is built according to a template dictating which hardware resources will be required by the virtual instances. At the time of this writing, four virtual Routing Engine resource templates exist and they mimic real 64-bit MX Series Routing Engines already released by Juniper Networks. The four virtual Routing Engine templates are summarized in Table 1.2.

Table 1.2 The Four Virtual Routing Engine Templates

Resource Template Name  CPU Cores  Memory    MX RE Equivalent
2core-16G               2 Cores    16GB RAM  RE DUO 16G
4core-32G               4 Cores    32GB RAM  RE 1800X4 32G
6core-48G               6 Cores    48GB RAM  RE X6 64G
8core-64G               8 Cores    64GB RAM  RE X8 64G

NOTE Each virtual Routing Engine, regardless of its resource template, will also
need about 55GB of storage space, so do account for this as well.

A Practical Example
Now that all the pieces of the puzzle have come together, it’s time to apply what we
have just learned with a real-world example. Assume that customer ACME has
three aging devices and they want to consolidate all of them into a single MX Se-
ries chassis using Junos node slicing. The three devices provide:
 BNG Services

 Business Edge Services

 Internet Peering GW

First of all, let’s choose the resource template that best fits our use cases:
 8core-64G for BNG, as it is the most control plane intensive service.

 After an assessment, considering the expected scale of the business edge service, a 4core-32G template was chosen.
 For the Internet peering GW, the number of routes and peerings suggests that a 4core-32G template should just fit the bill.
Let’s now calculate what the minimum requirements for our server will look like:

Minimum Requirements = GNF dedicated resources + Shared hardware resources
GNF Dedicated Resources:
GNF vRE CPUs = GNF BNG (8) + GNF BE (4) + GNF GW (4) Cores = 16;
GNF vRE GB RAM = GNF BNG (64) + GNF BE (32) + GNF GW (32) RAM = 128G;
Total Storage Needed by vREs = 55G * 3 = 165G;
Shared HW Resources:
Linux OS & JDM Cores = 4;
Linux OS & JDM RAM = 32GB;
Linux OS & JDM Storage = 64GB;
B-SYS – vRE connectivity = 2 x 10GE Ports
Management (Server + JDM + GNFs) = 3 x 1 GE ports
Total Minimum X86 Server Resources:
CPU Cores: 16 + 4 = 20
RAM Memory = 128GB + 32GB = 160GB RAM
Storage = 165GB + 64GB = 229 GB
Networking = 2 x 10GE Ports + 3 x 1GE ports;
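The arithmetic above can also be captured in a small script. This is just an illustrative sketch: the per-template values are copied from Table 1.2, the shared baseline from Table 1.1, and the 55GB per-vRE storage figure from the note above:

```shell
#!/bin/sh
# Per-GNF vRE templates chosen above, written as CORES:RAM_GB pairs
# (BNG = 8core-64G, Business Edge = 4core-32G, Peering GW = 4core-32G).
SHARED_CORES=4      # Table 1.1 baseline for JDM and the Linux host OS
SHARED_RAM=32       # GB
SHARED_STORAGE=64   # GB
VRE_STORAGE=55      # GB needed by each virtual Routing Engine

cores=0; ram=0; gnfs=0
for tmpl in 8:64 4:32 4:32; do
    cores=$(( cores + ${tmpl%:*} ))
    ram=$(( ram + ${tmpl#*:} ))
    gnfs=$(( gnfs + 1 ))
done

echo "CPU cores:    $(( cores + SHARED_CORES ))"                 # 20
echo "RAM (GB):     $(( ram + SHARED_RAM ))"                     # 160
echo "Storage (GB): $(( gnfs * VRE_STORAGE + SHARED_STORAGE ))"  # 229
```

The totals match the hand calculation above; changing the template list is enough to re-size the server for a different GNF mix.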

As you can see, calculating an X86 server’s minimum requirements is a simple, straightforward, and predictable process. Nevertheless, it’s also important to underline that, in real world scenarios, the minimum requirements calculation only provides a starting baseline for X86 server dimensioning. Accommodating future upgrades, or exploiting virtualization benefits (for example, spinning up another virtual Routing Engine with a newer OS version while keeping the old one around to achieve a very quick and simple rollback escape strategy if anything goes wrong), may require spawning new GNFs, so you should account for additional spare resources for unforeseen uses.
As a piece of advice, a rule of thumb may be to use the pre-configured virtual Routing Engine templates to decide how many additional cores and how much RAM would best optimize the X86 server resources. For instance, leaving eight cores and 32GB RAM as spare resources would allow you to spin up a 4core-32G virtual Routing Engine, but the remaining four cores would stay unused.

Software Requirements
Now let’s discuss the software side. There are four main software components re-
quired to deploy Junos node slicing:
 Bare Metal Server host Operating System

 The Junos Device Manager (JDM) package

 Junos OS on the B-SYS

 Junos OS Image for the virtual Routing Engines

We will also discuss the multi-version feature, which provides the capability to de-
ploy different Junos versions between the B-SYS and the Junos node slicing GNFs
(and between different GNFs as well).

Bare Metal Server Host Operating System


The operating system that must run on the external X86 servers is Linux because
the virtualization technology used by the Junos node slicing solution is KVM.
At the time of this writing, two Linux distributions are officially fully supported:
 Ubuntu server 16.04

 Red Hat Enterprise Linux (RHEL) 7.3

In both cases, the virtualization packages must be installed.

NOTE For detailed information about host OS installation requirements, please refer to the Junos Node Slicing Feature Guide: https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.pdf.

The servers should have Internet connectivity to download Linux distros updates
and to eventually install additional packages if needed after the initial server setup.

NOTE Although there are no technical constraints preventing the server from correctly operating even without an Internet connection, it’s mandatory to have one so that the host OS is always kept updated, especially with important updates that may fix security flaws.

Junos Device Manager Package


The Junos Device Manager (JDM) is the orchestrator that handles the whole life cycle of the VMs powering the Junos node slicing solution control plane. Its purpose is to provide the users with a Junos-like CLI (and all the related APIs, just as it happens on the full Junos OS) to manage all the stages of the VM life.
The JDM runs inside a Linux container, which is automatically installed by a normal Linux package that exists in two flavors, depending on the Linux distribution of choice, either Ubuntu (.deb package format) or Red Hat Enterprise Linux (.rpm package format).
The JDM version numbering is just borrowed from the Junos OS, so the very first
version of JDM is 17.2 as Junos node slicing was introduced starting with that ver-
sion. The package file naming convention is the following:
jns-jdm-$VERSION.$ARCHITECTURE.[deb|rpm]
where:
VERSION=Junos OS Version (e.g.: 18.1-R1.7)
ARCHITECTURE=x86_64 (JDM is compiled for 64 bit only X86 CPUs)

For instance, in our lab, we are going to use JDM version 18.3-R1.9 on Ubuntu,
hence the JDM file will be:
jns-jdm-18.3-R1.9.x86_64.deb
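The naming convention can be sketched as a tiny script; the version and architecture values here are simply the ones from our lab:

```shell
#!/bin/sh
# Build the expected JDM package filename for a given Junos version and
# architecture, in both packaging flavors (deb for Ubuntu, rpm for RHEL).
VERSION="18.3-R1.9"
ARCHITECTURE="x86_64"
for fmt in deb rpm; do
    echo "jns-jdm-${VERSION}.${ARCHITECTURE}.${fmt}"
done
# Prints:
#   jns-jdm-18.3-R1.9.x86_64.deb
#   jns-jdm-18.3-R1.9.x86_64.rpm
```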

Junos OS on the B-SYS


The Junos OS on the B-SYS is the same Junos software that operates on a standalone MX Series router chassis. Junos node slicing was introduced in 17.2R1, so this is the minimum version that must be installed to convert an MX chassis into a B-SYS. It’s worth noting that some features intimately connected to the B-SYS hardware need precise Junos versions in order to be supported. Table 1.3 lists some of the most important Junos versions.

Table 1.3 B-SYS Junos Versions

B-SYS Junos Version  Feature                       HW Supported              Notes
17.2R1               Node slicing                  MX2020/MX2010/MX960
17.3R1               Node slicing chassis added    MX480
17.4R1               AF Interfaces                 MPC2e-NG/MPC3e-NG/        AF Interfaces feature parity
                                                   MPC7e/MPC8e/MPC9e         with Junos 17.2
18.1R1               Node slicing chassis added;   MX2010; MS-MPC can be     No AF Interface support on
                     Service Card support          installed inside a GNF    MS-MPC
18.3R1               Node slicing chassis added;   MX2008; MPC5e/MPC6e       AF Interfaces feature parity
                     AF Interfaces                                           with Junos 17.2

WARNING Beware that a mix of AF Interface-capable and non-capable MPCs is not supported in the same GNF. Indeed, if a line card is AF Interface non-capable, all the next hops pointing to such an interface will be downloaded to that line card’s FIB with a ‘discard’ forwarding next hop, causing blackholing of all the traffic pointing to any AF Interface.

NOTE It’s important to state that starting with Junos 19.2R1, all the TRIO MPCs, including the Multiservice MPC, will be made AF Interface capable, so the aforementioned limitation will go away.

Junos OS Image for the Virtual Routing Engine


As already mentioned, Junos node slicing requires the control plane component to run as a VM on external X86 servers. To initially spin up these virtual Routing Engines, a Junos OS image must be deployed on the bare metal servers.

NOTE The image file is only needed for the first virtual Routing Engine spin up.
Once the Routing Engine is running, the Junos OS file and the software upgrade
procedure will be exactly the same as on a traditional MX Series router.

When working with the Junos image for the B-SYS and the Junos image for virtual Routing Engines, there’s an important point to note regarding naming conventions. The same version of the B-SYS and virtual Routing Engine Junos files contains the very same software, just packaged in two different ways. In fact, whereas the B-SYS Junos is delivered as a package that can be installed on top of an already running operating system, the node slicing Junos contains a full disk image that will be used by the KVM infrastructure to spin up a new virtual machine. Because of that, to distinguish one version from the other, a different naming convention is adopted.

NOTE A full explanation about Junos file naming conventions is outside the
scope of this book, so it will only cover what is relevant to distinguish a Junos for
B-SYS from a Junos for virtual Routing Engine file.

B-SYS Junos file name:


junos-install-mx-x86-64-$Junos_VERSION.tgz

Junos node slicing virtual Routing Engine Junos file name:


junos-install-ns-mx-x86-64-$Junos_VERSION.tgz

So, by looking at the ns, which of course stands for node slicing, it’s possible to distinguish which kind of image is contained in the package.

A Practical Example
In our Day One book lab we used Junos 18.3R1, therefore we will install the following packages:
 BSYS: junos-install-mx-x86-64-18.3R1.9.tgz
 Node slicing virtual Routing Engines: junos-install-ns-mx-x86-64-18.3R1.9.tgz

NOTE For the sake of completeness, it must be noted that the B-SYS filename can change if the Routing Engines installed in the MX chassis are the new RE-X6 (MX960/480/240) or RE-X8 (MX2020/2010/2008). In this case, the Junos OS runs as a VM over a Linux host OS, and the file name becomes:
junos-vmhost-install-mx-x86-64-$Junos_VERSION.tgz.

As in our node slicing setup two RE-S-1800X4s are installed in the MX chassis, we are using the bare metal Junos version, and therefore it’s the only one explicitly mentioned.
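The three filename patterns just described can be told apart with a simple shell function. This is only an illustrative sketch of the naming convention, not a Juniper tool:

```shell
#!/bin/sh
# Classify a Junos install package by its filename prefix; the "ns" marker
# identifies the node slicing virtual Routing Engine image.
classify_image() {
    case "$1" in
        junos-install-ns-mx-*)     echo "virtual Routing Engine image" ;;
        junos-vmhost-install-mx-*) echo "B-SYS image (VM host REs)" ;;
        junos-install-mx-*)        echo "B-SYS image (bare metal REs)" ;;
        *)                         echo "unknown package" ;;
    esac
}

classify_image junos-install-ns-mx-x86-64-18.3R1.9.tgz  # virtual Routing Engine image
classify_image junos-install-mx-x86-64-18.3R1.9.tgz     # B-SYS image (bare metal REs)
```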

WARNING Both servers must run the same JDM version. Nevertheless, running different JDM versions during upgrade activities is supported as long as the commit synchronization feature is disabled until both servers run the updated software. Once both are upgraded, the commit sync can be re-enabled. Be aware that even in case synchronization is not turned off, the software checks whether the JDM version of the incoming change request matches the one running on the local server, and if it does not, the request is refused.

Multi-Version Consideration
One of the most value-added features that Junos node slicing brings to network
administrators is the inherent possibility to run different Junos versions between
B-SYS and GNFs and amongst different GNFs. Nevertheless, when Junos node
slicing was first introduced, all the solution components had to run the very same
Junos OS version. Starting from Junos 17.4R1, a new feature called multi-version
was added to actually allow to run different Junos software versions between the
B-SYS and the GNFs; and between different GNFs.

NOTE Starting with Junos 17.4R1, the multi-version feature is activated by


default, hence no special commands must be configured anywhere.

Despite the introduction of the multi-version capability, not all the version combi-
nations are supported; in fact, for a certain combination to be supported, it needs
to satisfy some rules implemented by the multi-version feature itself, which will be
explained in a minute.

WARNING The aforementioned rules are enforced: the software will perform a series of checks that could point out compatibility issues, which will prevent the new GNFs from being successfully activated.

Multi-Version Design Principles


The multi-version feature is designed around some general rules that will deter-
mine whether certain combinations of software versions between B-SYS and GNFs
are supported or not.
The starting point is the B-SYS Junos version. As previously mentioned, the multi-
version feature was introduced in release 17.4R1, therefore the B-SYS must run at
least this version or newer.
Once the B-SYS version is chosen, depending on which Junos version runs on the
GNFs, two main scenarios can happen:
 “+” (Plus) Support: if the GNF runs a Junos version higher than the B-SYS;
 “-” (Minus) Support: if the GNF runs a Junos version lower than the B-SYS.

NOTE It’s implied that there are no limitations whatsoever if B-SYS and GNFs
run the same Junos OS version!

The multi-version feature always allows:
 +2 Support: the GNFs can run Junos up to two major releases higher than the one running on the B-SYS;
 +/-2 Support on one Junos version per year: the GNFs can run Junos up to two major releases higher or lower than the one running on the B-SYS.

NOTE At the time of this writing, the 2018 Junos version chosen to allow the
“+ / - 2 Support” is Junos 18.2.

Let’s explain this concept with a couple of examples:

Standard “+2” Support
 Junos B-SYS version = 17.4
 Junos virtual Routing Engine versions supported = 17.4 + [0-2], that is 17.4, 18.1, or 18.2

Extended “+/-2” Support
 Junos B-SYS version = 18.2
 Junos virtual Routing Engine versions supported = 18.2 +/- [0-2], that is 17.4, 18.1, 18.2, 18.3, or 18.4

The multi-version rules only account for major version numbers. Any ‘R’ release combination can be supported. For instance:
 B-SYS Junos 17.4R1 -> virtual Routing Engine Junos 17.4R2/18.1R2 are allowed combinations.
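The baseline “+2” rule can be sketched as a small shell helper. This models only the always-allowed “+2” window (the yearly “+/-2” release is not covered here), and assumes the usual cadence of four major Junos releases per year:

```shell
#!/bin/sh
# Map a Junos major release such as "18.2" to a sequential index
# (four major releases per year), so release distances can be compared.
ver_index() {
    echo $(( ${1%%.*} * 4 + ${1##*.} ))
}

# Baseline "+2" rule only: a GNF may run the B-SYS release itself or up to
# two major releases higher than it.
check_multiversion() {
    diff=$(( $(ver_index "$2") - $(ver_index "$1") ))
    if [ "$diff" -ge 0 ] && [ "$diff" -le 2 ]; then
        echo supported
    else
        echo unsupported
    fi
}

check_multiversion 17.4 18.2   # supported: 18.2 is "+2" from 17.4
check_multiversion 18.3 18.1   # unsupported: the GNF is older than the B-SYS
```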

NOTE Different GNFs can run different Junos versions as long as they satisfy the multi-version rule.

Multi-version is not just a set of ‘nice to have’ rules to fulfill to successfully deploy Junos node slicing: it is enforced during GNF deployments. These B-SYS/GNF Junos version compatibility checks are performed:
 When the user configures a new GNF, or upgrades a running GNF;
 When the user upgrades the B-SYS Junos version;
 As JDM launches GNFs, during the GNF bring up process.
It’s important to underline that there are no special constraints related to the JDM software version. Indeed, this component implements the Junos node slicing management/orchestration plane only, which is completely orthogonal to the Junos OS running on both the B-SYS and the GNFs. On one hand JDM has no relationship with the B-SYS at all and, on the other, it only uses the GNF Junos OS as a vehicle to spin up the virtual Routing Engines.

NOTE There is no limit on the deviation of the JDM version from the B-SYS one, and JDM can be upgraded completely independently without affecting the GNF and B-SYS operation in any way.

How to Check if Multi-Version is Affecting GNF Creation


If the installation doesn’t comply with the multi-version rules, the GNF will not be able to successfully complete the activation process. To troubleshoot the issue there is a very simple and straightforward Junos command, show chassis alarms, available on the GNF:
FreeBSD/amd64 (Amnesiac) (ttyu0)

login: root
Last login: Wed Nov 21 22:25:36 from 190.0.4.5

--- JUNOS 18.1R1.9 Kernel 64-bit  JNPR-11.0-20180308.0604c57_buil
root@:~ # cli
root> show chassis alarms 
bsys-re0:
--------------------------------------------------------------------------
1 alarm currently active
Alarm time               Class  Description
2018-11-21 21:19:53 UTC  Minor  GNF 4 Not Online

gnf4-re0:
--------------------------------------------------------------------------

1 alarms currently active
Alarm time               Class  Description
2018-11-21 21:19:49 UTC  Major  System Incompatibility with BSYS

root> show version bsys | grep Junos:   
Junos: 18.3R1.9

This example shows a typical problem with multi-version, that is, a Junos version incompatibility between the GNF and the B-SYS: in fact, the B-SYS is running Junos 18.3R1 while the GNF is running 18.1R1. As Junos 18.3 only allows “+2” support, the GNFs must run at least Junos 18.3 or up to two releases newer, otherwise the most important multi-version rule is violated, and the GNF can’t transition to online status.

NOTE The careful reader may have noticed the show chassis alarms command output is composed of two sections, namely “bsys-re0” and “gnf4-re0”. The first section actually shows the same output as if the command had been typed on the B-SYS itself, while the second is the local output received by the GNF. In Chapter 2 you’ll see that there are indeed GNF CLI commands that also show B-SYS information, and how this is made possible.
Chapter 2

Junos Node Slicing, Hands-On

Now that everything about what is needed to deploy Junos node slicing is (hopefully) clear, it’s finally time to touch the real thing and start setting it up in our lab. Our objective is to create the first two slices, namely EDGE-GNF and CORE-GNF, and connect them using AF Interfaces. And, as we set up, you can look more closely at how Junos node slicing works under the hood.
Let’s start the lab!

The Physical Lab


Before deploying the first GNFs, Figure 2.1 shows the lab setup.

Figure 2.1 Junos Node Slicing Lab Setup



The lab is composed of one MX960 with a hostname of MX960-4. Its hardware
components are:
 Two SCBe2s

 Two RE 1800x4 32G Ram

 An MPC7e 40x10GE Line Card (FPC1)
 An MPC5e-Q 2x100GE + 4x10GE Line Card (FPC6)

NOTE The MX960 needs three SCBe2s to run the MPC7e at full rate; nevertheless, to deploy a fully redundant node slicing solution only two are strictly required. As this is a lab and we’re not performance testing, this hardware configuration is suitable for our purposes.

There are two X86 servers with the hostnames of jns-x86-0 and jns-x86-1. The
jns-x86-0 will become the JDM Server0 and the jns-x86-1 will become the JDM
Server1. The main hardware components of the servers are:
 Two Xeon E5-2683 v4 CPUs @ 2.10GHz (16 cores each)

 128 GB RAM DDR4

 Two SSD drives – 500GB each

 Two 10GE Intel 82599ES NIC

 Four 1GE Intel I350 NIC

Each device in the lab has two sets of connections (see Figure 2.1):
 Junos node slicing infrastructure links

 Management links

Node Slicing Servers to MX 10GE Links


Each server must be connected to both SCBe2s on the MX chassis. This require-
ment is strict because these links carry all the control plane messaging between the
MX chassis and the servers themselves. The three main communication streams
that use these links are:
1. Communications between the virtual Routing Engines running on the X86 serv-
ers and the MX chassis, such as all the traffic punted to the control plane, as well
as configuration updates.
2. Redundancy machinery between virtual Routing Engines such as control plane
keepalives, non-stop routing, and bridging messaging, as well as configuration
synchronization between master and backup virtual Routing Engine pairs.

3. Communications between the JDM Servers 0 and 1 are needed for configuration
synchronization, file transfers during GNF instantiation and, generally speaking,
to collect information from the remote JDM server every time the commands are
executed with the all server or server # keywords.
WARNING The correct connection scheme for the server to MX chassis 10GE links pivots on the server number and the SCBe2 10GE port number, which must match. So, Server0’s 10GE ports must be connected to Port 0 on both SCBe2s, while Server1’s must be connected to Port 1 on both SCBe2s.

Management Links
Some of the Junos node slicing management infrastructure components should be
familiar to the reader, as they are present in traditional environments, too.

NOTE In this book’s lab, the management for all the components of the solution will be out-of-band. This term, in Juniper Networks jargon, refers to a configuration where all the management interfaces are connected to ports that have no access to the Packet Forwarding Engine transit path. An example of an out-of-band port is interface fxp0 on the Routing Engines.

Each component of the solution will be equipped with an out-of-band management interface, which will be used to carry all the management traffic, as well as to access the CLIs of all of the components. Table 2.1 summarizes, for each component, the physical and logical management interfaces and addresses involved.

Table 2.1 Lab Management IFs and Addresses

Component Physical IF Logical IF MGT IP


MX960-4 RE0 MX960-4 RE0 fxp0 MX960-4 RE0 fxp0 172.30.178.71
MX960-4 RE1 MX960-4 RE1 fxp0 MX960-4 RE1 fxp0 172.30.178.72
MX960-4 Master RE MX960-4 RE Master fxp0 MX960-4 RE Master fxp0 172.30.177.196
jns-x86-0 MGT jns-x86-0 eno1 jns-x86-0 eno1 172.30.200.218
jns-x86-1 MGT jns-x86-1 eno1 jns-x86-1 eno1 172.30.181.159
JDM 0 jns-x86-0 eno2 JDM0 jmgmt0 172.30.181.173
JDM 1 jns-x86-1 eno2 JDM1 jmgmt0 172.30.181.174
EDGE-GNF RE0 jns-x86-0 eno3 EDGE- GNF RE0 fxp0 172.30.181.176
EDGE-GNF RE1 jns-x86-1 eno3 EDGE-GNF RE1 fxp0 172.30.181.177
EDGE-GNF Master RE jns-x86-0 eno3 EDGE-GNF RE0 fxp0 172.30.181.175
CORE-GNF RE0 jns-x86-0 eno3 CORE-GNF RE0 fxp0 172.30.181.179
CORE-GNF RE1 jns-x86-1 eno3 CORE-GNF RE1 fxp0 172.30.181.180
CORE-GNF Master RE jns-x86-1 eno3 CORE-GNF RE1 fxp0 172.30.181.178

Logical LAB Details


Now that the physical setup is clear, let’s examine what the final result will look like from a logical standpoint. As mentioned before, the ultimate goal of our lab is to create two GNFs, composed of a single MX line card each, and to connect them using an AF Interface as depicted in Figure 2.2.

Figure 2.2 Two Connected GNFs Using an AF Interface

The control and data plane of the two new GNFs will be modeled as:
 EDGE-GNF:

„ Control Plane: vRE 8 Cores – 64 GB RAM

„ Data Plane: FPC6 - MPC5e-Q 2CGE-4XGE

 CORE-GNF:

„ Control Plane: vRE 4 Cores – 32 GB Ram

„ Data Plane: FPC1 – MPC7e 40XGE

The lab is quite simple; there are just a few important things to keep in mind:
- When a line card becomes a member of a GNF, it does not change its numbering
scheme. This property is particularly useful when an MX router already running
in the field as a single chassis is reconfigured as a node slicing B-SYS:
by maintaining the same line card numbering scheme, no modifications
to the original configuration are needed.
- When a line card joins a GNF, it needs to reboot to attach to the new virtualized
control plane running on the external servers. Be aware of this behavior
so you can correctly calculate the expected downtime during the conversion process.
- Last but not least, the AF interface behaves like a plain core-facing Ethernet
interface and will be configured accordingly, as you will see in the next sections
of this chapter.
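For instance, because the AF interface is configured like any core-facing Ethernet interface, its GNF-side configuration can be sketched as follows. This is only an illustrative assumption, not the lab's actual configuration: the unit number, address, and protocol families are placeholders.

```
set interfaces af0 unit 0 family inet address 10.255.0.0/31
set interfaces af0 unit 0 family iso
set interfaces af0 unit 0 family mpls
```

The same families you would enable on a physical core link apply here.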

Node Slicing Lab Deployment


Now it's time to start fiddling with our devices to create the final Junos node slicing
lab deployment. The whole process is performed in steps, from the installation
of the required software to the sanity checks verifying that everything works.
The five main steps needed to complete the process are:
1. MX standalone device to B-SYS initial conversion.
2. X86 servers preliminary preparation and JDM software installation.
3. JDM first run and initial configuration.
4. GNF creation, boot up, and configuration.
5. Final sanity checks.
Before starting the process, some prerequisites must be double-checked
on both the MX standalone device and the X86 servers, to be sure all the required
software and initial configurations are already available on both sides of the
solution.

MX Standalone Device and X86 Servers Prerequisite Checks


The first step of the Junos node slicing deployment is to install the software
that provides all the features needed to turn a plain-vanilla MX chassis into a
B-SYS device, and the external X86 machines into JDM servers and virtual Routing Engines.

NOTE The MX960 chassis and X86 servers are already running the base software
needed to deploy Junos node slicing. In particular:

The MX960 is running Junos 18.3R1.9 and the redundancy mechanisms are already
configured (graceful Routing Engine switchover, nonstop active routing, nonstop
bridging, commit synchronization);

NOTE To activate these redundancy features, the following commands should be
already present in the configuration:
set chassis redundancy graceful-switchover
set routing-options nonstop-routing
set protocols layer2-control nonstop-bridging
set system commit synchronize

Note that jns-x86-0 and jns-x86-1 are running Ubuntu Linux 16.04.5 (updated
to the latest patches available at the time of writing) with the "Virtual Machine
Host" software package installed.
Once both checks are passed, the Junos node slicing installation can begin.

MX960 Standalone Chassis to B-SYS Initial Conversion


Turning a plain-vanilla MX into a B-SYS is a really simple task: it's sufficient to
configure a single command, set chassis network-slices guest-network-functions,
and then commit.

This command does not affect service; it can be committed without any impact on
existing traffic.

WARNING If Junos node slicing is deployed on an MX960 or MX480, be sure that
the set chassis network-services enhanced-ip command is also configured. Indeed,
the chassis must run in enhanced mode to support not only Junos node slicing but
the SCBe2s as well. The MX2K only runs in enhanced mode, so this step may not
look strictly necessary there; nevertheless, some Junos commit checks
look for the network-services command to be explicitly configured, therefore
it's advised to configure it on the MX2K as well. To check if the chassis is running
in enhanced mode, run the show chassis network-services command.

WARNING Beware! If the network-services mode is changed, the whole chassis must
be rebooted to apply the change!

Let’s check network-services and configure the MX to act as a B-SYS:


Last login: Thu Jan  3 14:18:35 2019 from 172.30.200.218
--- Junos 18.3R1.9 Kernel 64-bit  JNPR-11.0-20180816.8630ec5_buil
{master}
magno@MX960-4-RE0> show chassis network-services
Network Services Mode: Enhanced-IP

{master}
magno@MX960-4-RE0> edit 
Entering configuration mode

{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions

{master}[edit]
magno@MX960-4-RE0# commit 
re0: 
configuration check succeeds
re1: 
configuration check succeeds
commit complete
re0: 
commit complete

{master}[edit]
magno@MX960-4-RE0#

Easy, isn't it? But how can you check whether the command worked? Let's examine what's
happening under the hood. As explained previously, Junos node slicing leverages
the internal management switch installed on each MX SCBe2 (or MX2K SFB2) to
extend the links outside the chassis. So the management switch is the right place
to investigate for clues about the effects triggered by the previous commit.

NOTE To keep the reading fast and easy, and to reduce the amount of logging,
all the CLI snippets are taken from the master Routing Engine only and trimmed
to show just the relevant information.

Before the chassis network-slices command:


{master}
magno@MX960-4-RE1> show chassis ethernet-switch 

Displaying summary for switch 0
--- SNIP ---

Link is good on GE port 1 connected to device: FPC1
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

--- SNIP ---
Link is good on GE port 6 connected to device: FPC6
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

--- SNIP ---

Link is good on GE port 12 connected to device: Other RE
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

Link is good on GE port 13 connected to device: RE-GigE
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

--- SNIP ---

Link is down on XE port 24 connected to device: External-Ethernet

Link is down on XE port 25 connected to device: External-Ethernet

Link is down on XE port 26 connected to device: External-Ethernet

Link is down on XE port 27 connected to device: External-Ethernet

Before the command is applied, the only ports that are up are the ones connected
to the existing internal components, such as the two Flexible PIC Concentrators
(FPCs) in slots 1 and 6 and the two Routing Engines. All the other ports are not
connected and are in the down state. Pay attention to XE ports 24 and 26, which
should be the ones connected to the external SFP+ plugs on the SCBe2: despite the
fact that their physical cabling is in place, they are still reported in the link
down state.
Now, let’s check the internal VLAN configured on the management switch:
{master}
magno@MX960-4-RE0> test chassis ethernet-switch shell-cmd “vlan show” 

vlan 1 ports cpu,ge,xe,hg (0x000000000000f81ffc0fc0ff), untagged ge,xe (0x000000000000f81f5c0fc0fe)


MCAST_FLOOD_UNKNOWN

As expected, all the ports are assigned to VLAN1, the native and untagged VLAN
that provides the internal messaging channel to carry all the communications
among chassis components.

WARNING The test chassis ethernet-switch shell-cmd "vlan show" command
shown above is for clarity and explanation purposes only. It should be used by
JTAC, not by end customers on field devices!

And now, let’s re-check after the command is applied:


{master}
magno@MX960-4-RE0> show chassis ethernet-switch

Displaying summary for switch 0
--- SNIP ---

Link is good on GE port 1 connected to device: FPC1
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

--- SNIP ---

Link is good on GE port 6 connected to device: FPC6
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

--- SNIP ---

Link is good on GE port 12 connected to device: Other RE
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

Link is good on GE port 13 connected to device: RE-GigE
  Speed is 1000Mb
  Duplex is full
  Autonegotiate is Enabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

--- SNIP ---

Link is good on XE port 24 connected to device: External-Ethernet
  Speed is 10000Mb
  Duplex is full
  Autonegotiate is Disabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

Link is down on XE port 25 connected to device: External-Ethernet

Link is good on XE port 26 connected to device: External-Ethernet
  Speed is 10000Mb
  Duplex is full
  Autonegotiate is Disabled
  Flow Control TX is Disabled
  Flow Control RX is Disabled

Link is down on XE port 27 connected to device: External-Ethernet

Clearly, the command had the effect of enabling the SCBe2 10GE physical ports. Indeed,
as the fiber cables were already connected, once the commit took place
the management switch brought up the 10GE links, which are now in the UP state.
Now, let's also check what happened to the internal VLAN scheme:
{master}
magno@MX960-4-RE0> test chassis ethernet-switch shell-cmd “vlan show”    

vlan 1 ports cpu,ge,xe,hg (0x000000000000f81ffc0fc0ff), untagged ge,xe (0x000000000000f81f5c0fc0fe)


MCAST_FLOOD_UNKNOWN
vlan 4001 ports ge0-ge13,xe (0x0000000000000001540fc0fc), untagged ge0-ge13
(0x0000000000000001040fc0fc) MCAST_FLOOD_UNKNOWN

Another interesting thing happened: VLAN 4001 was created and all the ports
were added to it. This is known as the B-SYS master VLAN; it provides
the medium for GNF-to-B-SYS and B-SYS-to-GNF communications. Hence, all the internal
components (the B-SYS Routing Engines, the FPCs, and the Switch Processor Mezzanine
Board (SPMB) on the MX2020/2010/2008) and all the GNFs are members of this
VLAN.

NOTE While all the traffic originated and received by the internal Routing
Engines is untagged, the external GNF Routing Engines send tagged traffic. The
tagging operations are performed by the Ethernet management switch on the
B-SYS side, and by the Junos network stack on the external virtual
Routing Engines.

VLAN 4001 is needed because there are circumstances where the B-SYS may need to
communicate with all the GNFs. There are four use cases here:
1. SNMP and trap forwarding;
2. Syslog message forwarding;
3. Chassis alarms;
4. CLI command forwarding.
For the first three use cases, if a common chassis component breaks, for instance a
power supply, SNMP traps and syslog messages are sent to all the GNFs because
this kind of failure affects the whole solution. At the same time, a chassis alarm is
raised by the B-SYS chassis daemon and forwarded to all the chassis daemons
(chassisd) running on the different GNF virtual Routing Engines. The principle is
always the same: the alarm is related to the chassis, which is the only resource
shared among all the GNFs, therefore it is mandatory to display it on all of them.

NOTE At the moment, the chassis alarms are not filtered. This means that all the
alarms forwarded by the B-SYS will be visible to all the GNF users, regardless of
whether the resource is shared (for example, a power supply, a fan, or the SCBe2s) or
dedicated (a line card or PIC assigned to a specific GNF).

For the fourth use case, think about a command such as show chassis hardware: this
command would be issued from the GNF CLI, then forwarded to the mgd daemon
running on the B-SYS, which would return the output to the GNF mgd.
To clarify this important topic as much as possible, Figure 2.3 depicts the software
architecture related to the communications between the base system and the
external servers.

Figure 2.3 Junos Node Slicing Basic Software Architecture

As explained, some daemons, such as chassisd, mgd, and syslogd, run
on the B-SYS as well as on the external virtual Routing Engines. The communication
between the B-SYS daemons and all the GNFs happens on VLAN 4001,
while the command forwarding between the external and internal Routing Engine
mgd daemons uses a per-GNF dedicated VLAN. As mentioned, the control
board internal management switch provides the Layer 2 infrastructure for all of
them.

NOTE The VLAN numbering scheme is valid at the time of this writing, but it
may change, even drastically, during the development cycle of the Junos node
slicing technology.

Now that the basic communication channel between the B-SYS and the external
servers is ready, let’s start the JDM installation!

X86 Servers Preliminary Preparation and JDM Software Installation


Now that the B-SYS is configured, you need to deploy the second component of
the solution, namely the JDM, an all-in-one virtual Routing Engine orchestrator
running on top of the X86 external Linux servers.

The servers are already running a plain-vanilla Ubuntu Server 16.04.5 Linux
installation (updated to the latest patches available). You now need to perform all
the activities required to make these servers suitable for deploying JDM and the
virtual Routing Engines.

NOTE This section can't provide all the in-depth information that is already
available in the Junos Node Slicing Feature Guide: https://www.juniper.net/
documentation/en_US/junos/information-products/pathway-pages/junos-node-
slicing/junos-node-slicing.pdf; rather, it is a quick setup guide with some tricks
learned during various Junos node slicing implementations. So read the
excellent TechLibrary Junos node slicing docs, too!

Before installing the JDM package, you need to perform some activities on both
Linux servers to properly prepare them to work with the JDM itself.
The process is quite straightforward and organized in steps that could be
executed in different orders; nevertheless, executing them in the order shown
here keeps the whole set of activities as short as possible.

NOTE All the steps are documented below, but the outputs are taken from a single
server to avoid unnecessary duplication. Keep in mind that all these
activities must be performed on both X86 servers.

Okay, let's start the process by checking that the distribution version is up to date.
Check the Ubuntu version:
administrator@jns-x86-0:~$ lsb_release  -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.5 LTS
Release:        16.04
Codename:       xenial
Check if any packages need updates:
administrator@jns-x86-0:~$ sudo apt update
Hit:1 http://nl.archive.ubuntu.com/ubuntu xenial InRelease
Get:2  http://nl.archive.ubuntu.com/ubuntu  xenial-updates  InRelease  [109  kB]                                  
Hit:3  http://security.ubuntu.com/ubuntu  xenial-security  InRelease                                            
Get:4 http://nl.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]      
Fetched 216 kB in 0s (729 kB/s)                             
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.

Check. Now, it’s time to double-check that hyperthreading is not active in the
BIOS. To do that, input the following:
administrator@jns-x86-0:/etc/default$ lscpu | grep Thread
Thread(s) per core:    1

Because just a single thread per core is present, it’s all set.

NOTE If the output reads 2, it means the server BIOS (or UEFI) must be config-
ured to disable hyperthreading. Refer to your X86 server documentation about
how to do that.

As mentioned before, you need to reserve some cores, namely core 0 and core 1, for
the host Linux OS. To do this, you must configure the kernel boot parameter
isolcpus so that the OS scheduler does not allocate user space threads on the
remaining cores, leaving them free to be dedicated to the virtual machines. The
parameter is added to the GRUB boot loader configuration file; then the new GRUB
configuration is installed and the server is rebooted.
First of all, check whether the isolcpus boot parameter is present or not:
administrator@jns-x86-0:~$ cat /sys/devices/system/cpu/isolated

administrator@jns-x86-0:~$ 

If the output is empty, no CPU cores are removed from the OS scheduling
activities, which means the isolcpus parameter was not passed to the kernel
during the boot process. To add it properly, edit the /etc/default/grub file,
using a text editor of your choice, and add the following statement:
GRUB_CMDLINE_LINUX="isolcpus=2-31"

NOTE The GRUB_CMDLINE_LINUX statement is normally already present
in the aforementioned file, but it's empty (""). So it's normally sufficient to add the
isolcpus parameter between the quotes. If the GRUB_CMDLINE_LINUX
statement is missing altogether, the whole line must be added.
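If you prefer to script the edit, here is a minimal sketch. It operates on a temporary copy so it can be dry-run safely; on a real server you would point it at /etc/default/grub (after making a backup), and it assumes the variable is present and empty, as the note describes:

```shell
# Demonstrate the isolcpus edit on a scratch copy of the GRUB defaults file.
f=$(mktemp)
echo 'GRUB_CMDLINE_LINUX=""' > "$f"
# Add isolcpus=2-31 inside the (currently empty) quotes:
sed -i 's/^GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="isolcpus=2-31"/' "$f"
grep '^GRUB_CMDLINE_LINUX' "$f"   # GRUB_CMDLINE_LINUX="isolcpus=2-31"
rm -f "$f"
```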

NOTE The X86 servers used in this lab have dual-socket motherboards,
with two Xeon CPUs installed, each providing 16 cores, for a total of 32
cores numbered from 0 to 31. As two cores must be reserved for the host, cores
2 through 31 are removed from the normal user-space scheduling activities by
setting the isolcpus parameter to start from core number 2, leaving cores 0 and
1 to the host OS. Therefore, in this lab setup, the correct parameter to pass to
the kernel is 2-31. Please tune yours according to your setup.
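The range can also be derived from the server's core count. A small sketch, assuming the lab's 32 cores (substitute the output of nproc --all on your server):

```shell
# Reserve cores 0-1 for the host OS and isolate the rest for the vREs.
total=32                    # lab value; use $(nproc --all) on your server
last=$((total - 1))
echo "isolcpus=2-${last}"   # isolcpus=2-31
```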

Once the change is saved, run the following command to reflect the modification
to the actual Grub boot configuration:
administrator@jns-x86-0:/etc/default$ sudo update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-141-generic
Found initrd image: /boot/initrd.img-4.4.0-141-generic
Found linux image: /boot/vmlinuz-4.4.0-131-generic
Found initrd image: /boot/initrd.img-4.4.0-131-generic
Done

NOTE Your output might be different. These are the most recent kernels released
by Ubuntu at the time of this writing.

At the next reboot, the isolcpus parameter will take effect. But it's not yet
time to reboot the server: you can optimize the preparation process by rebooting
just once, after the AppArmor service has been disabled, so let's move to the next
step.
Check that the AppArmor service is deactivated and disabled. First, check the
current status:
administrator@jns-x86-0:~$ systemctl is-active apparmor
active

By default, apparmor is enabled in Ubuntu 16.04, so let’s stop it and check its new
status:
administrator@jns-x86-0:~$ sudo systemctl stop apparmor
administrator@jns-x86-0:~$ sudo systemctl is-active apparmor
inactive

Now that it is stopped, let’s disable apparmor so that Linux will not start it during
the next reboot and check the new state:
administrator@jns-x86-0:~$ sudo systemctl disable apparmor
apparmor.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install disable apparmor
insserv: warning: current start runlevel(s) (empty) of script `apparmor’ overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (S) of script `apparmor’ overrides LSB defaults (empty).
administrator@jns-x86-0:~$ sudo systemctl is-enabled apparmor
apparmor.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install is-enabled apparmor
Disabled

Now that AppArmor is deactivated and disabled, you can reboot the server.
Once it has come back online, check both the AppArmor status and the isolcpus status:
remember, this single reboot applies both configurations.

WARNING It is absolutely paramount to follow these steps and then reboot the
server. Indeed, if the installation is performed by just disabling AppArmor
without a server reboot, nasty things may happen. Some of the most common
symptoms that the installation was not performed after a full reboot are: the JDM CLI
can't be run, the JDM initial configuration DB can't be opened, and any configuration
manipulation (show, display set, compare, or commit) fails. So, if you see the
following behavior:
root@jdm:~# cli
error: could not open database: /var/run/db/juniper.data: No such file or directory
or any command on the JDM trying to manipulate the configuration ends with an error complaining about /
config/juniper.conf opening problems:
error: Cannot open configuration file: /config/juniper.conf
or any commit operation fails with the following:

root@jdm# commit
jdmd: jdmd_node_virt.c:2742: jdmd_nv_init: Assertion `available_mem > 0’ failed.
error: Check-out pass for Juniper Device Manager service process (/usr/sbin/jdmd) dumped core (0x86)
error: configuration check-out failed

the installation was probably made without a full server reboot after the AppArmor
deactivation. To solve all these problems, uninstall the JDM package,
reboot the server, and then reinstall the JDM package.
So, let's reboot the server now:
administrator@jns-x86-0:~$ sudo reboot

Great, the first part of the process is now complete. You can have a cup of coffee, or
whatever you'd like, and wait for the server to boot up…
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-141-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

0 packages can be updated.
0 updates are security updates.

New release ‘18.04.1 LTS’ available.
Run ‘do-release-upgrade’ to upgrade to it.

Last login: Mon Jan  7 12:03:18 2019 from 172.29.87.8
administrator@jns-x86-0:~$

Let's check whether our configuration took effect, starting with the isolcpus parameter.
By issuing the same command used before the reboot, you can verify
that the output now shows the parameter set to the 2-31 value:
administrator@jns-x86-0:~$ cat /sys/devices/system/cpu/isolated 
2-31

Perfect! As expected, the Linux OS will now schedule user space tasks on cores 0
and 1 only, leaving the isolated cores free for the virtual Routing Engines! Now
let's check on the AppArmor service, using the familiar systemctl command:

administrator@jns-x86-0:~$ sudo systemctl is-active apparmor
inactive
administrator@jns-x86-0:~$ sudo systemctl is-enabled apparmor
disabled

Great. Everything is fine. From a kernel and operating system standpoint, the server
is correctly configured to host both JDM and the virtual Routing Engines that
will power the node slicing solution. Before starting the JDM installation, there are
a few more activities to perform, related to storage and distribution
package installation and configuration.

Let's begin with the storage.

As you may recall, solid state drives are mandatory to install JDM and the virtual
Routing Engines. Moreover, you need at least ~350GB of available space to host all
the images needed for the Junos node slicing control plane component. Considering
that 50GB of space is also needed for the operating system, each server
needs at least a single 400GB SSD.
Moreover, even though it is not strictly needed, it is absolutely advisable to create
a separate partition as dedicated storage for the virtual Routing Engine images and
configurations. This lab follows that suggestion: the lab's X86 servers are
equipped with two SSD units, used as follows:
- A 500GB SSD unit hosting the root and /boot file system mount points, and a
swap file;
- A 2000GB SSD unit hosting the dedicated 1000GB partition mounted on the
/vm-primary path, which will host all the GNF images.

NOTE Despite the 2TB storage size, only 1TB is used for the /vm-primary path, as this
amount of storage is more than enough to host up to 10 GNFs, the maximum
number supported at the time of this writing.

The storage is configured as logical volumes using the native Linux/Ubuntu
LVM2 storage management suite. How this configuration is performed is beyond
the scope of this book, but there are very easy-to-follow tutorials on the Internet
that show how to perform a smooth installation. Moreover, Ubuntu Server
16.04 is already LVM2-capable, therefore no extra packages are required and the
boot-up scripts are already configured to automatically detect, activate, and mount
LVM-based block devices.
Here is the X86 server storage configuration:
administrator@jns-x86-0:~$ df -h -t ext4 -t ext2
Filesystem                                   Size  Used Avail Use% Mounted on
/dev/mapper/jatp700--3--vg-root              491G  6.7G  459G   2% /
/dev/sda1                                    720M  108M  575M  16% /boot
/dev/mapper/vm--primary--vg-vm--primary--lv 1008G   72M  957G   1% /vm-primary
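A side note on the odd-looking /dev/mapper paths in the output above: device-mapper doubles every dash that is part of a volume group or logical volume name, then joins the two names with a single dash. A quick sketch of that mangling (the names are the lab's; the snippet itself is just our illustration, not an LVM tool):

```shell
# Rebuild the /dev/mapper path from the VG and LV names by applying
# device-mapper's escaping rule (each "-" inside a name becomes "--").
vg="vm-primary-vg"; lv="vm-primary-lv"
dm="/dev/mapper/$(printf '%s' "$vg" | sed 's/-/--/g')-$(printf '%s' "$lv" | sed 's/-/--/g')"
echo "$dm"   # /dev/mapper/vm--primary--vg-vm--primary--lv
```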

The configuration reflects what's expected, so let's move on and install the Junos
Device Manager!
In this lab, as already mentioned, the JDM version used is 18.3R1. Since the
underlying Linux distribution is Ubuntu Server, the package is a .deb file, stored
in this setup under /home/administrator/Software:
administrator@jns-x86-0:~$ ls Software/jns-jdm*
jns-jdm-18.3-R1.9.x86_64.deb

The installation process is exactly the same as for any other Ubuntu .deb package,
and it's done with the apt command (which drives dpkg under the hood).

NOTE The jns-jdm package has two mandatory package dependencies: qemu-kvm
and libvirt-bin. These packages may or may not be part of the Ubuntu Server
installation; indeed, if the "Virtual Machine Host" box is not explicitly checked
during the initial installation, these two packages will not be installed. How can
you check if they are already present on Ubuntu Server? The dpkg command is your
savior:
administrator@jns-x86-0:~/Software$ dpkg -l | grep qemu-kvm
ii qemu-kvm 1:2.5+dfsg-5ubuntu10.33 amd64 QEMU Full
virtualization
administrator@jns-x86-0:~/Software$ dpkg -l | grep libvirt-bin
ii libvirt-bin 1.3.1-1ubuntu10.24 amd64 programs for the
libvirt library

If the output returns the information about these packages, with "ii" in
the desired/status columns, they are correctly installed. If the status is different
from "ii", you will have to troubleshoot the issue and possibly reinstall
the packages; if the output is empty, they must be added before attempting
the JDM installation, using the following statements:
administrator@jns-x86-0:~/Software$ sudo apt install libvirt-bin
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libvirt-bin is already the newest version (1.3.1-1ubuntu10.24).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
administrator@jns-x86-0:~/Software$ sudo apt install qemu-kvm
Reading package lists... Done
Building dependency tree       
Reading state information... Done
qemu-kvm is already the newest version (1:2.5+dfsg-5ubuntu10.33).
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

The outputs show they are already present in the system. If they are not, apt
will prompt for confirmation and then install them along with all the needed
dependencies.
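The "ii" check lends itself to a tiny helper. This is just a sketch of ours, not a dpkg feature: it classifies a dpkg -l line by its desired/status prefix:

```shell
# Classify a "dpkg -l" line: "ii" means desired=install, status=installed.
pkg_line_ok() {
  case "$1" in
    "ii "*) echo "OK" ;;       # fully installed
    *)      echo "CHECK" ;;    # anything else needs troubleshooting
  esac
}
# Sample line copied from the lab output above:
pkg_line_ok "ii qemu-kvm 1:2.5+dfsg-5ubuntu10.33 amd64 QEMU Full virtualization"   # OK
```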

WARNING It is very important to check for the presence of the SNMP tools on the Linux
servers. Sometimes they are not installed by default on Ubuntu 16.04, so the
following packages should be installed:

- snmpd (Linux SNMP daemon implementation)

- snmp (Linux SNMP client)

- snmp-mibs-downloader (SNMP MIB auto-downloader tool)

The last two are not mandatory: they implement the SNMP client side and are
useful in case you want to perform any SNMP testing.
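A quick way to check all three at once; this loop is our sketch (it only reads package status, it does not install anything):

```shell
# Report which of the SNMP-related packages are present on the server.
for p in snmpd snmp snmp-mibs-downloader; do
  if dpkg -s "$p" >/dev/null 2>&1; then
    echo "$p: installed"
  else
    echo "$p: missing"
  fi
done
```

Any package reported missing can then be added with sudo apt install followed by the package name.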

It’s now time to install JDM, so let’s do it right away:


administrator@jns-x86-0:~/Software$ sudo apt install ./jns-jdm-18.3-R1.9.x86_64.deb 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Note, selecting ‘jns-jdm’ instead of ‘./jns-jdm-18.3-R1.9.x86_64.deb’
The following NEW packages will be installed:
  jns-jdm
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 0 B/291 MB of archives.
After this operation, 294 MB of additional disk space will be used.
Get:1 /home/administrator/Software/jns-jdm-18.3-R1.9.x86_64.deb jns-jdm amd64 18.3-R1.9 [291 MB]
(Reading database ... 95451 files and directories currently installed.)
Preparing to unpack .../jns-jdm-18.3-R1.9.x86_64.deb ...
Detailed log of jdm setup saved in /var/log/jns-jdm-setup.log
Unpacking jns-jdm (18.3-R1.9) ...
Setting up jns-jdm (18.3-R1.9) ...
Setup host for jdm...
Launch libvirtd in listening mode
Done Setup host for jdm
Installing /juniper/.tmp-jdm-install/juniper_ubuntu_rootfs.tgz...
Configure /juniper/lxc/jdm/jdm1/rootfs...
Configure /juniper/lxc/jdm/jdm1/rootfs DONE
Setup Junos cgroups...Done
Created symlink from /etc/systemd/system/multi-user.target.wants/jdm.service to /lib/systemd/system/
jdm.service.
Done Setup jdm

Perfect! The installation went through flawlessly. If you're interested in more detail
about the installation process, a full log is stored in the /var/log/jns-jdm-setup.log file.
Before proceeding, let's illustrate the JDM software architecture in Figure 2.4
and describe its main components.

Figure 2.4 JDM Software Architecture



JDM is composed of two main software components:

- MGD: This daemon provides all the configuration capabilities and
APIs, as on a regular Junos device. It makes the CLI and NETCONF available for
VM management;
- JDMD: This is the JDM daemon. It receives its inputs from MGD, and creates
and manages the GNF VMs using the libvirt toolkit.
Back to business: JDM is now installed and ready to go! Let's proceed right away!

JDM First Run and Initial Configuration


Now that the JDM installation is finished, let’s check if it’s active…
administrator@jns-x86-0:~/Software$ sudo systemctl is-active jdm
inactive

Surprise, surprise, it’s not! Why? Is that correct? Is there any step or activity we
missed? No worries, it’s all absolutely working as expected!
JDM is still inactive because, during its first start, the end user must configure the
server identity on both servers: the two X86 devices must be identified as
Server0 and Server1, and this identifier is assigned by specifying it in the JDM
first-run start command.
In this lab, server jns-x86-0 will act as Server0, while jns-x86-1 will act as
Server1. This step is only required during the first JDM run, because the identity
configuration is stored and reused at every subsequent JDM restart.
As this is a fundamental step, both servers' CLI outputs are shown to highlight
the different startup IDs used. Let's start JDM, assigning the correct identities
to both servers by appending the server=[0|1] statement to the jdm start
command:
JNS-X86-0
administrator@jns-x86-0:~$ sudo jdm start server=0

Starting JDM

administrator@jns-x86-0:~$

JNS-X86-1
administrator@jns-x86-1:~$ sudo jdm start server=1

Starting JDM

administrator@jns-x86-1:~$

As explained, both servers started JDM with different server IDs, as shown in the
CLI output. JDM runs as a container, so it's possible to access its console with
sudo jdm console. To exit from the console prompt, use the CTRL + ] key combination.
Here's a sample JDM console output:

administrator@jns-x86-0:~$ sudo jdm console
Connected to domain jdm
Escape character is ^]
---- SNIP ----
 * Check if mgd has started (pid:1960)                                   [ OK ]
 * Check if jdmd has started (pid:1972)                                  [ OK ]
 * Check if jinventoryd has started (pid:2021)                           [ OK ]
 * Check if jdmmon has started (pid:2037)                                [ OK ]
Device “jmgmt0” does not exist.
Creating new ebtables rule
Done setting up new ebtables rule
 * Stopping System V runlevel compatibility                              [ OK ]

Ubuntu 14.04.1 LTS jdm tty1

jdm login: 

As is clearly visible, JDM is running inside an Ubuntu 14.04.1 container, using a
single core and up to 2GB of RAM:
administrator@jns-x86-0:~$ sudo virsh --connect lxc:/// dominfo jdm
Id:             8440
Name:           jdm
UUID:           c57b23c8-6286-4c87-a6f0-c358d2c07a53
OS Type:        exe
State:          running
CPU(s):         1
CPU time:       22.4s
Max memory:     2097152 KiB
Used memory:    150856 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: none
Security DOI:   0

This container must also be attached to some sort of connectivity in order to be
able to reach the outside world. At the moment, it is attached to the local host only
by a virtual tap interface, named vnet0, and 'plugged' into a virtual bridge named
"virbr0" created by the libvirt machinery during the first JDM start-up phase:
administrator@jns-x86-0:~$ sudo virsh --connect lxc:/// domiflist jdm
Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet0      bridge     virbr0        -           52:54:00:ec:ff:a1
vnet2      bridge     bridge_jdm_vm -           52:54:00:73:c6:c2

The start-up process has also automatically set up the network connectivity on the
Linux Ubuntu host side. Let's check it out:
administrator@jns-x86-0:~$ sudo brctl show
bridge name             bridge id               STP enabled     interfaces
bridge_jdm_vm           8000.761a02971f95       no              vnet2
virbr0                  8000.5254005ac9e3       yes             virbr0-nic
                                                                vnet0
51 JDM First Run and Initial Configuration

The virbr0 provides IP connectivity acting like an integrated routing and bridging
(IRB) interface, using the IP address 192.168.2.254/30 as shown here:
administrator@jns-x86-0:~$ sudo ip addr list virbr0
11: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:5a:c9:e3 brd ff:ff:ff:ff:ff:ff
inet 192.168.2.254/30 brd 192.168.2.255 scope global virbr0
valid_lft forever preferred_lft forever

As the netmask is /30, the other side of the link can't be anything other than
192.168.2.253. Let's try to ping it:
administrator@jns-x86-0:~$ ping 192.168.2.253 -c 1
PING 192.168.2.253 (192.168.2.253) 56(84) bytes of data.
64 bytes from 192.168.2.253: icmp_seq=1 ttl=64 time=0.060 ms

--- 192.168.2.253 ping statistics ---


1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
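The /30 arithmetic above can be double-checked with Python's standard ipaddress module (a quick sketch for illustration, not part of the JDM toolchain):

```python
# A /30 network has four addresses: network, two usable hosts, broadcast.
# Given virbr0 holds 192.168.2.254/30, the only possible peer is .253.
import ipaddress

iface = ipaddress.ip_interface("192.168.2.254/30")
hosts = list(iface.network.hosts())            # the two usable addresses
peer = [h for h in hosts if h != iface.ip][0]  # everything but our own IP

print(iface.network)  # 192.168.2.252/30
print(peer)           # 192.168.2.253
```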

To see who the owner of this IP address is, first of all, check the MAC address:
administrator@jns-x86-0:~$ arp -n 192.168.2.253
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.2.253            ether   52:54:00:ec:ff:a1   C                     virbr0

It looks familiar, doesn’t it? It’s indeed the JDM vnet0 MAC Address as seen in the
output above. To confirm it, there are two more clues:

- the file /etc/hosts was modified and now contains an entry named 'jdm' with IP
address 192.168.2.253:

administrator@jns-x86-0:~$ sudo cat /etc/hosts | grep 192.168.2.253
192.168.2.253   jdm

- if you SSH as root to this IP address (or better, using the JDM hostname) you
can reach the JDM container as expected:
administrator@jns-x86-0:~$ sudo ssh jdm
****************************************************************************
* The Juniper Device Manager (JDM) must only be used for orchestrating the *
* Virtual Machines for Junos Node Slicing                                  *
****************************************************************************
Last login: Tue Jan  8 23:16:58 2019 from 192.168.2.254
root@jdm:~# ifconfig bme1
bme1      Link encap:Ethernet  HWaddr 52:54:00:ec:ff:a1  
          inet addr:192.168.2.253  Bcast:192.168.2.255  Mask:255.255.255.252
---- SNIP ---

NOTE You may have noticed that the network interface on the container is
named bme1 and not vnet0. This is worth explaining: the container runs a
dedicated NIC driver (namely veth in this case) that exposes the network interface
as 'bme1' to the container's Linux kernel. The vnet0 interface acts as a tunnel
interface that receives the raw network frames and delivers them to the host
network stack.

To better grasp the whole picture, let’s illustrate what has been observed using the
CLI:

Figure 2.5 Linux Host to JDM Container Connectivity

NOTE All the information provided is accurate as of today's Junos node slicing
with external control plane productization. Juniper Networks may change the
underlying implementation during the solution lifecycle without notice. Of course,
the fundamental pillars of the solution will not change: it will keep providing a
very smart and flexible way to partition MX Series routers while maintaining the
same user experience offered by a standalone router.

You'll see in a moment that all this technology is used to implement Junos node
slicing; therefore, exploring in detail the simplest use case, the host to JDM
container connectivity, will help you better understand the more complicated ones.
Let’s move on and perform the first JDM configuration.
Once logged in to the JDM container, a Junos-like CLI is available to perform the
initial setup and create the management plane between the JDM servers and the
B-SYS. Once this task is completed, the Junos node slicing solution will be up and
running, ready to start hosting our GNFs.
To access the JDM Junos CLI, simply type “cli” at the login prompt:
administrator@jns-x86-0:~$ sudo su -
root@jns-x86-0:~# ssh jdm
****************************************************************************
* The Juniper Device Manager (JDM) must only be used for orchestrating the *
* Virtual Machines for Junos Node Slicing                                  *
****************************************************************************
Last login: Tue Jan  8 23:34:53 2019 from 192.168.2.254
root@jdm:~# cli
root@jdm>

The prompt looks familiar, eh? The first configurations must be performed on
both JDM Server0 and Server1 because the replication machinery is not working
yet. But after the first commit, which creates the IP management infrastructure
between the two JDM servers, it will be possible to configure everything on a
single server and have the configuration automatically synchronized between the
JDMs at commit time.

NOTE Despite the mandatory requirement to perform the initial configuration
tasks on both Server0 and Server1, for the sake of brevity only Server0 will appear
in the following paragraphs. Indeed, the configurations are exactly the same on
both servers, so duplicating the outputs would be unnecessary. Moreover, only the
final configuration in "set" format will be shown, to save space and to let you
easily cut and paste the configuration onto your setup.

The first JDM configuration will set up:

- the links between the JDMs and the MX B-SYS;
- the management infrastructure providing IP connectivity between the JDM
servers;
- the underlying connectivity infrastructure used by the external virtual Routing
Engines to connect to the B-SYS component once the GNFs are configured and
booted up;
- the configuration synchronization machinery.

Let's examine what the first configuration looks like and how all the different
commands come together to provide all the aforementioned capabilities.

NOTE As a gentle reminder, the table below recaps which interfaces will be used
to perform the different roles in the configuration. Remember these are exactly
the same on both Server0 and Server1 JDMs:

Table 2.2 Linux to JDM IF Mapping

X86 Linux IF Name    Junos JDM IF Name    Role
enp4s0f0             cb0                  Link to CB0 Port#
enp4s0f1             cb1                  Link to CB1 Port#
eno2                 jmgmt0               JDM Management IF
eno3                 N/A                  GNF virtual Routing Engine fxp0

NOTE Remember the X86 to CB link rule: # = 0 for Server0, 1 for Server1.

NOTE No JDM interface is assigned to host eno3. Indeed, eno3 will be attached
to a bridge together with each virtual Routing Engine fxp0 interface, to provide
management connectivity to all of them.

This is what the first configuration looks like on Server0:


[edit]
root@jdm# show | no-more | display set    
set version 18.3R1.9
set groups server0 system host-name JDM-SERVER0
set groups server0 interfaces jmgmt0 unit 0 family inet address 172.30.181.173/24
set groups server0 routing-options static route 0.0.0.0/0 next-hop 172.30.181.1
set groups server0 server interfaces cb0 enp4s0f0
set groups server0 server interfaces cb1 enp4s0f1
set groups server0 server interfaces jdm-management eno2
set groups server0 server interfaces vnf-management eno3
set groups server1 system host-name JDM-SERVER1
set groups server1 interfaces jmgmt0 unit 0 family inet address 172.30.181.174/24
set groups server1 routing-options static route 0.0.0.0/0 next-hop 172.30.181.1
set groups server1 server interfaces cb0 enp4s0f0
set groups server1 server interfaces cb1 enp4s0f1
set groups server1 server interfaces jdm-management eno2
set groups server1 server interfaces vnf-management eno3
set apply-groups server0
set apply-groups server1
set system commit synchronize
set system login user magno class super-user
set system login user magno authentication encrypted-password “$6$AYC3.$qcf0.cDc9GcU4FuO.VeU9NGrBBje
j70NPyWE2C.03rzgimtW8sctZLxbGU8zpub62LB7Q/8DU28zLzYEngBBa/”
set system root-authentication encrypted-password “$6$Cw.E1$rWs6rnBn0vxrSHgvHT.YFpXUltRnpEnJ9V.
vDKwcbV7l11vA0VCWCDoKpae3.Lu72mHQ1Ra4oD4732T0MT5Lc/”
set system services ssh root-login allow
set system services netconf ssh
set system services netconf rfc-compliant
set system services rest http

Let's examine the different blocks in detail. The 'server0' and 'server1' are two
reserved group names that identify on which server the configuration should be
applied. This allows you to keep command separation between the two servers if
needed. This is exactly the same machinery used when two Routing Engines are
present in the same chassis; in that case the reserved group names are "re0" and
"re1", but the principle doesn't change.

So, when the configuration is committed on Server0, only the commands listed
under the 'group server0' stanza are executed and, of course, the same applies to
Server1, too. All the commands outside of these two stanzas will be applied to
both JDM servers at the same time.

NOTE Because of the ‘server0’ and ‘server1’ group machinery, initial configura-
tions will be exactly the same on both servers as the commands will be selectively
applied based on the server identity configured when JDM was initially started.

Now let’s examine relevant configurations under server0 group stanza (all the con-
siderations will apply to server1 group as well):
set groups server0 server interfaces cb0 enp4s0f0
set groups server0 server interfaces cb1 enp4s0f1
set groups server0 server interfaces jdm-management eno2
set groups server0 server interfaces vnf-management eno3

With these commands, Linux physical ports are mapped to JDM interfaces
according to what was summarized in Table 2.2.
The cb0 and cb1 interfaces, as per their names, identify the links to the MX B-SYS
control boards. These interfaces will carry all the control plane and management
traffic between the virtual Routing Engines and the B-SYS, as well as the JDM
Server0 to JDM Server1 traffic.
With the jdm-management command, the Linux physical server interface named
"eno2" becomes the JDM management interface. Indeed, a new interface,
"jmgmt0", appears in the JDM container, used as the out-of-band management
port to connect to JDM directly. It can be compared to the fxp0 port found on
Juniper Networks devices.
In the same way, with the vnf-management statement, the Linux server physical
interface eno3 becomes the out-of-band management port shared amongst all the
GNFs. All the traffic sourced by the fxp0 interfaces on each virtual Routing Engine
will be transmitted using the eno3 port.
Despite the homogeneous CLI configuration, the service interfaces (cb0 and cb1)
and the management interfaces trigger very different settings in the JDM container
configuration. Indeed, the former are configured in virtual Ethernet port
aggregator (VEPA) mode, while the latter are configured in bridge mode. This
means that all traffic carried by the cb0 and cb1 interfaces, regardless of its actual
destination, is forced to pass through the external switch, while the bridged
interfaces can leverage the virtual switch when the destination container is on the
same host compute node. Long story short, all the control traffic between the
JDMs and, above all, between the virtual Routing Engines and the line cards on
the B-SYS, will transit through the management switch.

NOTE The X86 servers deployed in this lab use the very same hardware, hence
all the interfaces share the same names. Therefore, in this particular scenario,
these commands might have been configured outside of the server group stanzas.
Nevertheless, the best practice is to keep these statements within each server
group stanza, to achieve the same configuration style regardless of the kind of
servers used. Indeed, having two twin servers is recommended but not strictly
mandatory to deploy Junos node slicing, as long as both of them provide the
required hardware technical specifications.

The next configuration block sets up the Server0 JDM hostname, the management
interface IP address, and the default route to the IP gateway:
set groups server0 system host-name JDM-SERVER0
set groups server0 interfaces jmgmt0 unit 0 family inet address 172.30.181.173/24
set groups server0 routing-options static route 0.0.0.0/0 next-hop 172.30.181.1

The configuration should be self-explanatory.

NOTE You may have noticed the IP gateway address is the same for both JDM
Server0 and Server1. Therefore, the routing-options static route 0.0.0.0/0 next-hop
172.30.181.1 command might have been configured outside the group configuration
stanzas. Nevertheless, placing it inside the server0 and server1 group stanzas
provides a more ordered and linear style, where all the management reachability
statements are grouped per server, at the very low cost of an additional line of
text.

Then the groups are applied to the candidate configuration by using the following
commands:
set apply-groups server0
set apply-groups server1

Again, this is a well-known Junos configuration. Without the apply-groups
statements, all the commands sitting under the group stanzas wouldn't have any
effect on the runtime configuration. So, if for some reason nothing seems to happen
after committing the Junos configuration, check to be sure that the groups are applied!
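To make the selection logic concrete, here is a tiny Python model of the group machinery. It is an illustration only (the dictionary keys and values are made up for the sketch); JDM's real implementation is Junos' own configuration group inheritance:

```python
# Illustrative model: every statement is loaded on both servers, but only
# the group whose reserved name matches the local identity takes effect,
# together with everything configured outside any group.
CONFIG_GROUPS = {
    "server0": {"host-name": "JDM-SERVER0", "jmgmt0": "172.30.181.173/24"},
    "server1": {"host-name": "JDM-SERVER1", "jmgmt0": "172.30.181.174/24"},
}
GLOBAL_STATEMENTS = {"commit-synchronize": True}
APPLY_GROUPS = ["server0", "server1"]

def effective_config(identity: str) -> dict:
    """Merge global statements with the group matching this server's ID."""
    effective = dict(GLOBAL_STATEMENTS)
    for group in APPLY_GROUPS:          # without apply-groups, nothing merges
        if group == identity:
            effective.update(CONFIG_GROUPS[group])
    return effective

print(effective_config("server0")["host-name"])  # JDM-SERVER0
print(effective_config("server1")["host-name"])  # JDM-SERVER1
```

Remove "server0" from APPLY_GROUPS in the model and the Server0-specific statements vanish from the effective configuration, which mirrors what happens when the apply-groups statements are missing.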
With the next block, one login user and the root password are set.

NOTE Although the JDM root login happens through the use of SSH keys (more
on that soon), it's mandatory to configure a root password before committing any
initial Junos configuration, otherwise a commit error complaining about this
missing statement will occur.
set system commit synchronize

As it happens on the Junos we all know, this command activates the synchronization
feature between the JDMs running on both servers. It is mandatory to activate this
command to automatically synchronize the JDM configurations:
set system login user magno class super-user
set system login user magno authentication encrypted-password “$6$AYC3.$qcf0.cDc9GcU4FuO.VeU9NGrBBje
j70NPyWE2C.03rzgimtW8sctZLxbGU8zpub62LB7Q/8DU28zLzYEngBBa/”
set system root-authentication encrypted-password “$6$Cw.E1$rWs6rnBn0vxrSHgvHT.YFpXUltRnpEnJ9V.
vDKwcbV7l11vA0VCWCDoKpae3.Lu72mHQ1Ra4oD4732T0MT5Lc/”

NOTE Non-root users are available starting with JDM 18.3.



And, last but not least, the next commands enable some system services needed by
the JDM to work properly:
set system services ssh root-login allow
set system services netconf ssh
set system services netconf rfc-compliant
set system services rest http

The first three statements enable SSH and NETCONF access (over SSH, in its
RFC 4741-compliant version: https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/rfc-compliant-edit-system-services-netconf.html)
to the JDM servers. NETCONF is widely used in JDM to apply configurations
between the JDM servers (synchronization and remote commands) and between
the JDM and the B-SYS (command forwarding). The last command enables
RESTful HTTP APIs, which offer a very simple and straightforward northbound
interface to automate JDM operations. This command is not strictly mandatory to
successfully configure the JDM.
Now that the initial configuration is loaded on both servers, it’s time to commit:
[edit]
root@jdm# commit 
commit complete
root@jdm#

The commit process went through, but apparently nothing happened. Let's take a
closer look and check the JDM interfaces:
root@jdm# run show interfaces | match Physical 
Physical interface: lo , Enabled, Physical link is Up
Physical interface: cb0.4002, Enabled, Physical link is Up
Physical interface: cb1.4002, Enabled, Physical link is Up
Physical interface: bme1, Enabled, Physical link is Up
Physical interface: bme2, Enabled, Physical link is Up
Physical interface: cb0, Enabled, Physical link is Up
Physical interface: cb1, Enabled, Physical link is Up
Physical interface: jmgmt0, Enabled, Physical link is Up

root@jdm#
Wow, great! It looks like something has definitely happened during the commit!

NOTE You may have noticed the CLI prompt didn’t change despite the hostname
configuration. That’s because the user must disconnect, and connect back, for this
to happen. Let’s do that!
root@jdm# exit 

root@jdm> exit

root@jdm:~# cli
root@JDM-SERVER0>

As expected, once you disconnect from the JDM CLI and connect back, the
hostname changes!

The configuration commit produced the desired effects, but you still need to
perform deeper checks to verify that everything is working properly. Before doing
this, one of the most important tasks to perform is the mutual authentication of
the JDM servers. The configuration synchronization and remote command
machineries (that is, the possibility to run commands from one JDM server on the
other) use NETCONF over SSH as their communication channel. To allow these
communications to happen, the servers must learn each other's SSH public key
and store it in the authorized_keys file. To automatically perform this task, the
operational request server authenticate-peer-server command must be executed
on both JDM servers.

NOTE During the automated procedure, the user will be prompted for a root
password; use the one configured with the system root-authentication plain-text-password
command:

SERVER0:
root@JDM-SERVER0> request server authenticate-peer-server 
The authenticity of host ‘192.168.2.245 (192.168.2.245)’ can’t be established.
ECDSA key fingerprint is d0:06:39:fa:02:1b:c3:b2:1a:e9:ed:0b:9e:02:4e:1d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.245’s password: 

Number of key(s) added: 1

Now try logging in to the machine with ’ssh root@192.168.2.245’ and check to
make sure that only the key(s) you wanted were added:
The authenticity of host ‘192.168.2.249 (192.168.2.249)’ can’t be established.
ECDSA key fingerprint is d0:06:39:fa:02:1b:c3:b2:1a:e9:ed:0b:9e:02:4e:1d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

root@JDM-SERVER0>

SERVER1:
root@JDM-SERVER1> request server authenticate-peer-server 
The authenticity of host ‘192.168.2.246 (192.168.2.246)’ can’t be established.
ECDSA key fingerprint is 08:db:90:22:3e:65:88:3e:9a:95:c4:e5:78:36:d9:1b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.246’s password: 

Number of key(s) added: 1

Now try logging in to the machine with ssh root@192.168.2.246 and check to make
sure that only the key(s) you wanted were added:
The authenticity of host ‘192.168.2.250 (192.168.2.250)’ can’t be established.
ECDSA key fingerprint is 08:db:90:22:3e:65:88:3e:9a:95:c4:e5:78:36:d9:1b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

root@JDM-SERVER1>

The outputs are quite self-explanatory: the two JDM servers exchange their SSH
public keys with each other. From now on, all SSH-based communication between
the JDMs can be authenticated using SSH keys, which is particularly relevant for
NETCONF as it is carried over the secure shell.
To double-check that the key exchange happened correctly, it's sufficient to inspect
/root/.ssh/authorized_keys:
root@JDM-SERVER0> file show /root/.ssh/authorized_keys | match jdm 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDxBxKEco9LoVjk/
j8DY9maeIXJ96Vwvzm6Ye6OEmKgpVYEfmJBbcsrpmw3ye6x7ncyo1z+TaSlgoEQ9BdG1enhBsKSVZJx/
f+5IKrxJOJl3tg+huV50CUNwlm5M3wjdUJV7/1pAR/8Ki3IKtrHHtudM7RzLskcheMPoI4ZS2Gd2UwPHiDwB2Ap7aWS/
ZIYJpPvQazfyHOiy/l9vTtRwKtY6lUScehfq97XHBEftJblCdenyr2KJ6ucf6RqzgKm51FEghmrbJTORRG4BRZYr+vaQiof3DCLE/
ZfvWqH6b38XNOzttLxDIxPG7456eZ/exOthTjjDAr2QMLsD+lDHzzz jdm
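The same presence check can be scripted. Here's a minimal sketch; the key material and the "jdm" comment field are illustrative stand-ins modeled on the output above:

```python
# Check whether an authorized_keys blob contains a key whose trailing
# comment field matches the peer's identity ("jdm" in the output above).
# Key lines look like: <type> <base64-key> <comment>
def peer_key_authorized(authorized_keys: str, comment: str = "jdm") -> bool:
    for line in authorized_keys.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0].startswith("ssh-") and fields[-1] == comment:
            return True
    return False

sample = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDxEXAMPLEKEYONLY jdm\n"
print(peer_key_authorized(sample))            # True
print(peer_key_authorized(sample, "other"))   # False
```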

Now it's time to perform the last sanity check to verify the environment's health. A
very useful command for this task is show server connections.
Let's kill two birds with one stone and use this activity to also experiment, for the
first time, with the command forwarding machinery between the two JDM servers.
Indeed, from the JDM Junos CLI, it's possible to invoke commands not only
locally, but also on the remote server, or on both servers at the same time. To
forward a command to a specific server, the server [0|1] statement must be
appended. When a command is executed without the server statement, it runs on
the local server.
If you want a command to be executed automatically on both servers, append the
all-servers keyword to the original command. Let's try the all-servers suffix, so
that with a single command you can check both that the JDM environment is
working properly on both servers, and that NETCONF over SSH is working as
expected. From Server0:
root@JDM-SERVER0> show server connections all-servers 
server0:
--------------------------------------------------------------------------
Component               Interface                Status  Comments
Host to JDM port        virbr0                   up     
Physical CB0 port       enp4s0f0                 up     
Physical CB1 port       enp4s0f1                 up     

Physical JDM mgmt port  eno2                     up     
Physical VNF mgmt port  eno3                     up     
JDM-GNF bridge          bridge_jdm_vm            up     
CB0                     cb0                      up     
CB1                     cb1                      up     
JDM mgmt port           jmgmt0                   up     
JDM to HOST port        bme1                     up     
JDM to GNF port         bme2                     up     
JDM to JDM link0*       cb0.4002                 up      StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002                 up      StrictKey peer SSH - OK

server1:
--------------------------------------------------------------------------
Component               Interface                Status  Comments
Host to JDM port        virbr0                   up     
Physical CB0 port       enp4s0f0                 up     
Physical CB1 port       enp4s0f1                 up     
Physical JDM mgmt port  eno2                     up     
Physical VNF mgmt port  eno3                     up     
JDM-GNF bridge          bridge_jdm_vm            up     
CB0                     cb0                      up     
CB1                     cb1                      up     
JDM mgmt port           jmgmt0                   up     
JDM to HOST port        bme1                     up     
JDM to GNF port         bme2                     up     
JDM to JDM link0*       cb0.4002                 up      StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002                 up      StrictKey peer SSH - OK

That's exactly the output we were looking for! Everything is up on both servers.
Pay particular attention to the last two lines of the output: these lines certify
that the two JDM servers can exchange messages between themselves using SSH!
As is clearly visible from the output, this command is quite informational. All the
physical and logical interfaces are explicitly shown, providing a comprehensive
picture of the JDM connectivity machinery.
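Since the output has a fixed shape, a short script can turn it into an automated health check. Here's a sketch; the column handling is deliberately naive and tailored to this sample, and the "down" row is invented for illustration:

```python
# Parse "show server connections"-style rows and report any component
# whose Status column is not "up". Component names contain spaces, so we
# locate the status token ("up"/"down") and take the words before the
# preceding interface token as the component name.
def failed_components(output: str) -> list:
    bad = []
    for row in output.splitlines()[1:]:          # skip the header row
        tokens = row.split()
        for status in ("up", "down"):
            if status in tokens:
                if status == "down":
                    idx = tokens.index(status)
                    bad.append(" ".join(tokens[:idx - 1]))  # drop interface
                break
    return bad

sample = """Component               Interface                Status  Comments
Host to JDM port        virbr0                   up
CB0                     cb0                      up
JDM to JDM link1        cb1.4002                 down    StrictKey peer SSH - FAIL
"""
print(failed_components(sample))  # ['JDM to JDM link1']
```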
To better understand what each component provides to the solution, let's examine
each line of the command output.

NOTE As usual, only one server’s output, namely Server0, will be analyzed to
avoid unnecessary redundant outputs. And of course, everything applies to both
servers:
Host to JDM port        virbr0                   up     

You should already be familiar with virbr0. That's the bridge locally connecting
the Host OS (Linux Ubuntu) to the JDM container. Its link is up and working as
expected, and it is monitored by pings sent by both servers every second.

NOTE No big deal here, as we used exactly that bridge to connect to the JDM
CLI; if we were able to start the SSH session, it means this bridge is working
properly:

Physical CB0 port       enp4s0f0                 up     
Physical CB1 port       enp4s0f1                 up     
Physical JDM mgmt port  eno2                     up     
Physical VNF mgmt port  eno3                     up     

As explained during the configuration stage, the physical Linux Ubuntu 10GE
interfaces, namely enp4s0f0 and enp4s0f1, are configured as links to the MX
B-SYS control boards 0 and 1, respectively. The software automatically creates
two new interfaces, named cb0 and cb1, which represent these connections inside
JDM. The same concept applies to the eno2 and eno3 Linux interfaces, which are
mapped, as per our configuration, to the JDM (jmgmt0) and GNF management
interfaces inside JDM.

NOTE Please keep in mind these entries only display the link’s physical status:
JDM-GNF bridge          bridge_jdm_vm            up     

The JDM-GNF bridge, as you will see in the next few pages, provides Layer 2
connectivity between all the virtual Routing Engines and JDM. This bridge is in
charge of carrying all the messaging between these two components, such as
virtual machine API calls (create, reboot, shutdown, etc.) and liveness detection
traffic. This bridge's status must be up, otherwise no communication between the
JDM and the virtual Routing Engines can happen. Nevertheless, it's very important
to consider that if the virtual Routing Engines are already running and the status is
down, only communication between the JDM and the virtual Routing Engines is
affected; production services provided by any GNF (which is a virtual Routing
Engine plus line cards inside the B-SYS) are not affected at all:
JDM to HOST port        bme1                     up     
JDM to GNF port         bme2                     up     

These lines show the physical status of the JDM to host and JDM to GNF ports.
They are working as expected.

NOTE The bme2 port is not yet in use, as no GNFs are configured:


JDM to JDM link0*       cb0.4002                 up      StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002                 up      StrictKey peer SSH - OK

This output displays the logical status of the JDM to JDM links between the two
containers running on Server0 and Server1. As clearly displayed, the two
connections between the local JDM (remember, these are the outputs from
Server0) and the remote JDM (in this case Server1) are configured. Their status is
up, which means there is IP connectivity between the two JDMs on both links. The
connectivity is monitored by the JDM monitoring daemon (jdmmon), which
maintains an open TCP connection, sending empty TCP keepalives every second.
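The liveness idea can be sketched with plain sockets. This is a toy model only: the "peer" here is a local echo server standing in for the remote JDM, and jdmmon's actual wire format is internal to JDM:

```python
# Toy liveness probe: hold a TCP connection open and periodically confirm
# the peer still responds. A local echo server plays the remote JDM.
import socket
import threading

def echo_peer(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(64)
        while data:
            conn.sendall(data)  # echo every probe back to the sender
            data = conn.recv(64)

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=echo_peer, args=(listener,), daemon=True).start()

probe = socket.create_connection(("127.0.0.1", port), timeout=2)
alive = True
for _ in range(3):                 # jdmmon would loop once per second
    try:
        probe.sendall(b"ka")
        alive = probe.recv(64) == b"ka"
    except OSError:
        alive = False
    if not alive:
        break
probe.close()
print("peer alive:", alive)
```

If the peer closes the connection or stops answering, the probe's send or receive fails and the link would be declared down, which is when the standby link takes over.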

NOTE The “*” (asterisk) displayed with the JDM to JDM link0 line means this is
the active link. Should link0 fail, link1 will take over automatically. The behavior
is preemptive, so when link0 comes up again, it will assume the active role again.

There are two other interesting items displayed by these lines:

- the interface name is cb[0|1].4002;
- the message StrictKey peer SSH - OK in the Comments column.

The cb0.4002 and cb1.4002 are the logical interfaces seen by each JDM server
to connect to the remote peer. They have IP addresses automatically configured
by the JDM initial setup, so let's check them:
root@JDM-SERVER0> show server connections extensive server 0 | match “JDM to JDM link”   
JDM to JDM link0*       cb0.4002                 up      192.168.2.246    StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002                 up      192.168.2.250    StrictKey peer SSH - OK

root@JDM-SERVER0> show server connections extensive server 1 | match “JDM to JDM link”   
JDM to JDM link0*       cb0.4002                 up      192.168.2.245    StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002                 up      192.168.2.249    StrictKey peer SSH - OK

You can see that both commands were executed on Server0, but by specifying the
server keyword, the last output was collected on Server1. This is a good example
of how convenient the command forwarding feature can be. As shown, Server0
and Server1 share two IP point-to-point connections, one for each control board.
These are the communication channels used by the two JDMs to connect to each
other. The suffix .4002 indicates the packets are tagged with this VLAN ID. But
let's check using some of our Linux Kung Fu:
root@JDM-SERVER0:~# ip addr list | grep 'cb[0-1].4002'
root@JDM-SERVER0:~#

Hmmm, an empty output was not expected. Let's check what's wrong; cb0 must
be there, after all. The mystery is soon unveiled: to further separate the JDM
connectivity, a dedicated network namespace holds the two CB interfaces!

NOTE A full explanation of Linux namespaces is outside the scope of this book.
At a very high level, they can be thought of as machinery to create separate
contexts that isolate the resources dedicated to containers from the host. There
are seven types of namespaces in the current Linux kernel implementation, each
one providing separation for different entities. We will check the only one relevant
to us, the network namespace (netns).

Let’s check it out:


root@JDM-SERVER0:~# ip netns
jdm_nv_ns
host
root@JDM-SERVER0:~#

So, two network namespaces are present on JDM: host is the default, while
jdm_nv_ns is the one dedicated to the JDM. Let's try the command again to inspect
the cb0 and cb1 interfaces, but this time inside the appropriate namespace:
root@JDM-SERVER0:~# ip netns exec jdm_nv_ns ip addr list | grep cb[0-1].4002
2: cb0.4002@cb0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_
UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.2.246/30 scope global cb0.4002
3: cb1.4002@cb1: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_
UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.2.250/30 scope global cb1.4002

Examine the namespace more closely, and you’ll find that the bme2 interface also
belongs to jdm_nv_ns:
root@JDM-SERVER0:~# ip netns exec jdm_nv_ns ip addr list | grep '@if' -A2 | grep bme
14: bme2@if15: <BROADCAST,MULTICAST,UP,LOWER_
UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.2.14/28 scope global bme2

This interface connects JDM to all the GNFs, and you’ll soon see which features it
provides.
Let’s go back and examine that StrictKey peer SSH - OK comment! As already
explained, JDM configuration synchronization and remote command execution
leverage NETCONF over an SSH communication channel. That’s why we had to exchange
the public RSA keys between the two JDMs. It’s mandatory for the status to be OK;
otherwise, neither of the aforementioned features will work.
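Because a broken channel silently disables synchronization and command forwarding, it’s worth spotting it quickly. Here is a rough, hypothetical grep sketch over a captured sample of the output format shown above (the sample variable is ours, not a JDM artifact):

```shell
# Hypothetical check: flag any JDM-to-JDM link whose peer SSH comment
# is not OK. $sample stands in for "show server connections" output.
sample='JDM to JDM link0*       cb0.4002   up      StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002   up      StrictKey peer SSH - Failed'

printf '%s\n' "$sample" | grep 'JDM to JDM' | grep -v 'SSH - OK' \
    || echo 'all peer SSH links OK'
```

On a healthy setup both lines carry SSH - OK, so the filter matches nothing and the fallback message is printed instead.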
If for any reason the SSH RSA key changes, this communication channel will
break. Let’s look at an example and, more importantly, how to fix it.
Let’s suppose JDM Server 0 has its RSA key changed after the two servers had al-
ready exchanged their SSH keys. Let’s check the status now:
root@JDM-SERVER0> show server connections    
Component               Interface                Status  Comments
Host to JDM port        virbr0                   up     
Physical CB0 port       enp4s0f0                 up     
Physical CB1 port       enp4s0f1                 up     
Physical JDM mgmt port  eno2                     up     
Physical VNF mgmt port  eno3                     up     
JDM-GNF bridge          bridge_jdm_vm            up     
CB0                     cb0                      up     
CB1                     cb1                      up     
JDM mgmt port           jmgmt0                   up     
JDM to HOST port        bme1                     up     
JDM to GNF port         bme2                     up     
JDM to JDM link0*       cb0.4002                 up      StrictKey peer SSH - Failed
JDM to JDM link1        cb1.4002                 up      StrictKey peer SSH - Failed
64 Chapter 2: Junos Node Slicing, Hands-On

If the status is failed, the NETCONF over SSH communication channel is gone
and it must be fixed at once! So, let’s re-run the request server authenticate-peer-
server command:

root@JDM-SERVER0> request server authenticate-peer-server 
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.245's password: 

Number of key(s) added: 1

Now try logging in to the machine with SSH as root@192.168.2.245 and check to
make sure that only the key(s) you want were added:
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

Pay attention to the WARNING here: because a key for Server0 already exists on the
remote Server1, the new one is skipped. So first you need to get rid of the old key;
then you’ll be able to run the server authenticate command to restore the situation.
Let’s delete the old key from Server1, then execute the command again on Server0:

NOTE To remove the Server0 key from Server1, you need to delete it from the file
/root/.ssh/authorized_keys using a text editor. Look for the line starting with
“ssh-rsa” and ending with “jdm”, something like the following example:
ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCk7AAnIeHGZEh2EGc33Bjb82MrM9nrB6O/y1CXPA77dVn9wAUoJdQGTNFDZh1gjddf
r69PlJY4FQvgVH2dLMzRrnoBwXl9kPX2avvOPBBgeIS2WSphdufuQGV4rQnq62FLu94Z8BLevTjyYMfXEKnh3aVVpJUhajsd
vrrWyz4Xnb9xQmiWF+x7JiR8Ab3JPmq00dmUaaKvGELB07soGF8+GyIsJdTC9uCxvmhiQn7+UCDwHeJNLFSNhEoX48K7jfMt
VxlGDYPPW+TH4jYjR2s8S8eRLspKhy3FHZHVLGIBt
jdm
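The same cleanup can also be done non-interactively. The following is a hypothetical sketch (demonstrated on a scratch file; on the real Server1 the target would be /root/.ssh/authorized_keys, and you should double-check what you delete before running anything like this on a live server):

```shell
# Work on a scratch copy for illustration; the trailing "jdm" comment
# follows the key example above (key material shortened, hypothetical).
auth_keys=$(mktemp)
printf 'ssh-rsa AAAAB3NzaC1yc2E...stale jdm\n' >> "$auth_keys"
printf 'ssh-rsa AAAAB3NzaC1yc2E...keep admin@lab\n' >> "$auth_keys"

# Drop every key whose trailing comment is exactly "jdm":
sed -i '/ jdm$/d' "$auth_keys"
cat "$auth_keys"
```

After the deletion only the admin@lab entry survives, which is exactly the state needed before re-running request server authenticate-peer-server.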

Once the line is removed, get back to Server0 and re-execute the request server au-
thenticate-peer-server command:

root@JDM-SERVER0> request server authenticate-peer-server 
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-
id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.245's password: 

Number of key(s) added: 1

Log back in to the machine with 'ssh root@192.168.2.245' and check to make sure
that only the key(s) you wanted were added:
/usr/bin/ssh-copy-
id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

root@JDM-SERVER0>

This time, the old key had already been deleted, so the new key was correctly
added. Indeed, re-checking the connection status clearly shows we are back in the
game:
root@JDM-SERVER0> show server connections 
Component               Interface                Status  Comments
Host to JDM port        virbr0                   up     
Physical CB0 port       enp4s0f0                 up     
Physical CB1 port       enp4s0f1                 up     
Physical JDM mgmt port  eno2                     up     
Physical VNF mgmt port  eno3                     up     
JDM-GNF bridge          bridge_jdm_vm            up     
CB0                     cb0                      up     
CB1                     cb1                      up     
JDM mgmt port           jmgmt0                   up     
JDM to HOST port        bme1                     up     
JDM to GNF port         bme2                     up     
JDM to JDM link0*       cb0.4002                 up      StrictKey peer SSH - OK
JDM to JDM link1        cb1.4002                 up      StrictKey peer SSH - OK

Great, the JDM to JDM communication channel has been successfully restored!
Figure 2.6 illustrates all the detailed connections so you can understand the envi-
ronment before starting to create your first GNFs.

Figure 2.6 JDM to JDM to B-SYS Connection Scheme



WARNING On some installations, not all of the server interfaces may come up
automatically. This can cause problems in B-SYS to X86 server connectivity;
therefore, it’s highly advisable to configure the related bootup scripts to
explicitly set all used interfaces to up. On Ubuntu, this is accomplished by
adding the following lines (for each interface) to the “/etc/network/interfaces”
configuration file:
auto $IF_NAME
iface $IF_NAME inet manual
up ifconfig $IF_NAME up

After the needed additions, the “/etc/network/interfaces” configuration file used
on both lab servers looks like this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eno1
iface eno1 inet static
        address 172.30.181.171
        netmask 255.255.255.0
        network 172.30.181.0
        broadcast 172.30.181.255
        gateway 172.30.181.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 172.30.181.2
        dns-search poc-nl.jnpr.net

### Management Interfaces
auto eno2
iface eno2 inet manual
up ifconfig eno2 up

auto eno3
iface eno3 inet manual
up ifconfig eno3 up

### 10GE Interfaces
auto enp4s0f0
iface enp4s0f0 inet manual
up ifconfig enp4s0f0 up

auto enp4s0f1
iface enp4s0f1 inet manual
up ifconfig enp4s0f1 up

So far the focus has been on the connectivity aspect of the solution. The next pages
go inside the storage configuration to extend our deep dive!
Chapter 3

GNF Creation, Bootup, and Configuration

Time to create the first guest network functions (GNFs)! The purpose of this Day
One book on Junos node slicing is to create, configure, interconnect, and test one
Edge and one Core GNF. And, as previously explained, the GNF is a logically sep-
arated partition of a single MX Chassis composed of two main components:
- A dedicated virtual control plane (aka virtual Routing Engine) running on two
X86 external servers;
- One or more dedicated data plane components, that is, line cards running
inside the MX Chassis.
Each of these components must be configured and ‘stitched’ together to create a
working GNF. As a rule of thumb, the control plane component is configured us-
ing JDM while the data plane element is configured on the B-SYS. Although the
order in which these two elements are configured is not governed by any strict
technical guidelines, it is best practice to start with the virtual Routing Engine set
up on JDM and then proceed with the data plane component on the B-SYS.

WARNING When an MPC is configured to become part of a new GNF, it will be
reloaded automatically upon configuration commit. This is the only stage of the
GNF creation process that impacts services. The line card must reboot because it
must ‘attach’ to the new control plane component, that is, the GNF virtual Routing
Engines, and receive the boot image to properly start up.

So, following our own advice, let’s create our first GNF, namely EDGE-GNF,
beginning with the control plane component. As explained, JDM is the orchestrator
that manages the full lifecycle of our virtual Routing Engine, therefore it’s
a pretty straightforward choice as the starting point for our first GNF control
plane. But before logging in and starting the virtual Routing Engine spin-up
process, let’s examine how it is architected.
The virtual Routing Engines can be thought of as KVM virtual machines running
Junos to provide control plane features to the GNF data plane component, so
they obviously need a boot image to start from. Adding a suitable image to our
virtual network function is therefore the first step of the new virtual Routing
Engine creation. If we try to configure all our desired parameters before adding
an image, the JDM commit will not complete and will return the following error:
 [edit]
root@JDM-SERVER0# commit check 
[edit]
  ‘virtual-network-functions EDGE-GNF’
    Adding Image is mandatory for EDGE-GNF
error: configuration check-out failed

[edit]
root@JDM-SERVER0#

Once the boot image is correctly installed, the real VM configuration takes
place under the JDM CLI ‘virtual-network-functions’ stanza.
Now let’s log in to JDM and perform the two-step creation process to start our
first virtual Routing Engine:
JDM Login (using the JDM Server0 Management IP Address) and start the cli
****************************************************************************
* The Juniper Device Manager (JDM) must only be used for orchestrating the *
* Virtual Machines for Junos Node Slicing                                  *
****************************************************************************
Last login: Thu Jan 24 16:43:20 2019 from 192.168.2.254
root@JDM-SERVER0:~# cli
root@JDM-SERVER0>

Once the Junos-like JDM CLI prompt is received, it’s time to add the image to the
new GNF. Because JDM runs in a Linux container, the image file must be made
available to it. The file was copied using Secure Copy Protocol (SCP) from the
original location (server0 /home/administrator/Software/ directory) to JDM /var/
tmp:
root@JDM-SERVER0:~# scp administrator@172.30.181.171:~/Software/junos*-ns-* /var/tmp
The authenticity of host '172.30.181.171 (172.30.181.171)' can't be established.
ECDSA key fingerprint is 6a:29:60:18:35:9f:0e:ba:cf:30:98:3e:c2:b4:10:ba.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.30.181.171' (ECDSA) to the list of known hosts.
junos-install-ns-mx-x86-64-18.3R1.9.tgz                                                                                                               
                                             100% 2135MB 112.4MB/s   00:19    
root@JDM-SERVER0:~#

Create the EDGE-GNF Virtual Routing Engine and Add the Boot Image
The mandatory step to assign a valid software image to a virtual network function
achieves the following goals:
1. Provides a valid software image to boot from;
2. Assigns the desired name to the new VNF;
3. Creates the directory structure under the ‘vm-primary’ mount point to store all
the files related to a specific VNF.
As you will see, the chosen name will be used in the JDM CLI to perform configu-
ration or operative tasks related to the VM itself. So let’s go to the JDM CLI and
assign the desired boot image to our soon-to-be-created EDGE-GNF virtual Rout-
ing Engine:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF add-image /var/tmp/junos-install-ns-mx-
x86-64-18.3R1.9.tgz all-servers   
server0:
--------------------------------------------------------------------------
Added image: /vm-primary/EDGE-GNF/EDGE-GNF.img

server1:
--------------------------------------------------------------------------
Added image: /vm-primary/EDGE-GNF/EDGE-GNF.img

root@JDM-SERVER0>

There are some things worth noting here:
- Do you recall the file naming convention? As already explained, the -ns- in the
filename identifies that it contains a node slicing boot image;
- The all-servers command is shining in this use case! Indeed, it automatically
copies the Junos image and creates the needed file hierarchy on the remote
server.
- The image was installed into the /vm-primary/ path; to be more accurate, a new
directory, whose name was taken directly from the GNF name, was specifically
created to store all the files related to this particular instance.
- As always, the magic word to perform operative tasks in Junos is request; used
with the statement virtual-network-functions, it’s possible to manage the whole
life cycle of a certain virtual machine, which is identified by its name! All the
available request virtual-network-functions options, along with their functions,
are listed in Table 3.2.

Table 3.2 Request virtual-network-functions Options

Statement      Function
add-image      Add a boot image to a VNF
console        Provide console access to a VNF
start          Start a VNF boot process
stop           Stop a running VNF
restart        Perform a stop-start cycle of a running VNF
delete-image   Delete the whole directory structure of a stopped VNF

As usual, let’s also take a look at what has happened under the hood! You know
that all the VNF-related files are stored under the /vm-primary path, so we should
expect to find something interesting in that place:
root@JDM-SERVER0> file list /vm-primary/ detail 

/vm-primary/:
total blocks: 56
drwxr-xr-x  2 root  root        4096 Feb 6  19:25 EDGE-GNF/
drwx------  2 root  root       16384 Jan 7  13:11 lost+found/
total files: 0

root@JDM-SERVER0>

As explained before, a new directory named EDGE-GNF was created, and it’s
now time to check what is stored inside:
root@JDM-SERVER0> file list /vm-primary/EDGE-GNF/ detail 

/vm-primary/EDGE-GNF/:
total blocks: 9145016
-rw-r--r--  1 root  root   18253676544 Feb 6  19:25 EDGE-GNF.img
-rw-r--r--  1 930   930    2350645248 Sep 21 03:09 EDGE-GNF.qcow2
-rw-r--r--  1 930   930            3 Sep 21 03:11 smbios_version.txt
total files: 3

root@JDM-SERVER0>
And there are three files present as the result of the boot-image installation
process. Let’s see what they do:
- The EDGE-GNF.img file represents the virtual SSD drive of the virtual Routing
Engine;
- The EDGE-GNF.qcow2 file contains the image the virtual Routing Engine will use
to boot up;
- smbios_version.txt contains a parameter passed to the VM BIOS (at the time of
this writing, the version is v1) to correctly identify the booting Junos VM.

NOTE At the time of this writing, only v1 is supported. Should new capabilities
that rely on the underlying virtual hardware be added to the Junos VM, this
version may change.

It’s now time to configure and start our new GNF! As usual, the JDM CLI
provides the needed interface to perform the required configuration. The EDGE-
GNF VM will be modeled as follows:
- Eight-core, 64 GB RAM Routing Engine
- Chassis-type mx960
- ID = 1
- No starting configuration
- No autostart
Let’s start the real configuration using the JDM CLI:


root@JDM-SERVER0> edit 
Entering configuration mode

[edit]
root@JDM-SERVER0# set virtual-network-functions EDGE-GNF id 1 resource-template 8core-64g chassis-
type mx960 no-autostart

[edit]
root@JDM-SERVER0# show virtual-network-functions 
EDGE-GNF {
    no-autostart;
    id 1;
    chassis-type mx960;
    resource-template 8core-64g;
}

[edit]
root@JDM-SERVER0#

[edit]
root@JDM-SERVER0# commit 
commit complete

[edit]
root@JDM-SERVER0#

Isn’t that easy? Just a single command and a commit! Et voilà, the new GNF is
ready to go! Before starting it, let’s examine each command and what it achieves:
- no-autostart: by default, the virtual Routing Engines spawn on commit; using
this command, this behavior can be changed and the VM will not boot up
until the operational request virtual-network-functions $GNF_NAME start command
is submitted;
- id: the “id” command identifies the virtual Routing Engine with a numeric
value; today, the ID value can range from 1 to 10, as the maximum number of
supported GNFs is 10. This value is very important, as it will also identify on
the B-SYS the line cards assigned to a certain pair of virtual Routing Engines.
And not only that: we’ll see in the next pages how this ID value is also
involved in other parts of the Junos node slicing implementation;

- chassis-type: This identifies the B-SYS type of chassis. The value must match
the real B-SYS MX model, otherwise the line cards will not correctly attach
to the virtual Routing Engines.
- resource-template: This defines which kind of virtual Routing Engine is required
from a resource reservation standpoint.
There are some other notable commands not used in this configuration:
- base-config: This allows the user to provide a custom Junos startup configu-
ration from which the VNF will get its initial settings.
- physical-cores: This provides the capability to statically define which CPU
cores are bound to the virtual-network-function; there are commit checks that
prevent the user from setting a lower or a higher number of cores than the ones
specified in the resource-template, and from reserving cores that are already in
use by other VNFs.
Let’s now go to JDM CLI operational mode and check if our VNFs are actually
ready to start on both servers:
[edit]
root@JDM-SERVER0# exit 
Exiting configuration mode

root@JDM-SERVER0> show virtual-network-functions all-servers 
server0:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
--------------------------------------------------------------------------------
1        EDGE-GNF                                          Shutdown   down

server1:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
--------------------------------------------------------------------------------
1        EDGE-GNF                                          Shutdown   down

root@JDM-SERVER0>

The command output looks promising: indeed, on both JDM servers, you can see
the VNFs are created and, as expected, they have not booted yet, thus the liveness
status is obviously down. So it’s time to start our virtual Routing Engines up by us-
ing our beloved JDM CLI:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF start all-servers 
server0:
--------------------------------------------------------------------------
EDGE-GNF started

server1:
--------------------------------------------------------------------------
EDGE-GNF started

root@JDM-SERVER0>

Once again, the all-servers knob eased our sys-admin’s life by booting both virtual
Routing Engines at the same time, brilliant!

NOTE Because we are spawning two Routing Engines, they will run the well-
known master/backup election process, just as their physical counterparts do
every time an MX chassis starts up. The JDM server number matches the
Routing Engine number, so RE0 runs on Server0 while RE1 runs on Server1. By
default, RE0 has the higher priority to become the master. The set chassis redun-
dancy routing-engine command is supported in the GNF’s Junos to change this
default behavior if needed. All the well-known chassis redundancy machinery
available on a standalone dual Routing Engine MX chassis is also implemented in
a Junos node slicing dual virtual Routing Engine setup: Graceful Routing Engine
Switchover (GRES) can be configured, and the periodic keepalives handled by
chassisd can be activated to detect master reachability so that, in case of loss of
connectivity, the backup Routing Engine takes over automatically.
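As a sketch, the redundancy knobs mentioned here would look like the following on the GNF’s Junos (assuming the standard chassis redundancy syntax; verify each statement against your release):

```
set chassis redundancy routing-engine 0 master
set chassis redundancy routing-engine 1 backup
set chassis redundancy graceful-switchover
set chassis redundancy failover on-loss-of-keepalives
```

The first two statements make the default priorities explicit, the third enables GRES, and the last triggers a switchover when the chassisd keepalives are lost.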

While the Routing Engine is booting, it’s possible to use the request
virtual-network-functions $VNF-NAME console command to connect to the QEMU
emulated serial port. Let’s try it:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF console 
Connected to domain EDGE-GNF
Escape character is ^]
@ 1549558470 [2019-02-07 16:54:30 UTC] verify pending ...
Verified jfirmware-x86-32-18.9 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jdocs-x86-32-20180920 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jinsight-x86-32-18.9 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jsdn-x86-32-18.9 signed by PackageProductionEc_2018 method ECDSA256+SHA256
Verified jpfe-common-x86-32-20180920 signed by PackageProductionEc_2018 method ECDSA256+SHA256
---- SNIP ----
FreeBSD/amd64 (Amnesiac) (ttyu0)
login:

Perfect! Our virtual Routing Engine has finally booted and it’s ready to receive our
configuration. As is clearly visible, the look and feel is exactly the same as on a
Junos-powered hardware Routing Engine.

TIP To exit from the QEMU console use the “CTRL + ]” key combination.

Let’s take a quick look at the VM status:


root@JDM-SERVER0> show virtual-network-functions all-servers 
server0:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
--------------------------------------------------------------------------------
1        EDGE-GNF                                          Running    up

server1:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
-----------------------------------------------------------------------------
1        EDGE-GNF                                          Running    up

root@JDM-SERVER0>

We can conclude that the VNFs are running properly! But let’s keep up with good
habits and peek under the hood. First of all, let’s see how the new VMs are
connected to the B-SYS and to JDM by checking the network interfaces on the new
RE0 instance, using JDM console access:
root@JDM-SERVER0> request virtual-network-functions EDGE-GNF console 
Connected to domain EDGE-GNF
Escape character is ^]

FreeBSD/amd64 (Amnesiac) (ttyu0)

login: root

--- JUNOS 18.3R1.9 Kernel 64-bit  JNPR-11.0-20180816.8630ec5_buil
root@:~ # cli
root> show interfaces terse 
Interface               Admin Link Proto    Local                 Remote
dsc                     up    up
fxp0                    up    up
--- SNIP ---
vtnet0                  up    up
vtnet0.32763            up    up   inet     190.0.1.1/2     
                                            190.0.1.4/2     
                                   tnp      0x3e000104      
vtnet0.32764            up    up   inet     190.0.1.1/2     
                                            190.0.1.4/2     
                                   tnp      0x3e000104      
vtnet1                  up    up
vtnet1.32763            up    up   inet     190.0.1.1/2     
                                            190.0.1.4/2     
                                   tnp      0x3e000104      
vtnet1.32764            up    up   inet     190.0.1.1/2     
                                            190.0.1.4/2     
                                   tnp      0x3e000104      
vtnet2                  up    up
vtnet2.0                up    up   inet     192.168.2.1/2

The interfaces relevant to our investigation are shown in the output above: one of
them should be quite familiar, fxp0, while the other three may look new but they
really are not:
- fxp0: Most readers may have already guessed that it’s the out-of-band manage-
ment interface of the virtual Routing Engine.
- vtnet0 / vtnet1: It’s useful to first clarify that these are created by the FreeBSD
VirtIO kernel driver; as the examined instance is virtual, that shouldn’t be a sur-
prise. Recall that inside a single MX960 chassis there are two interfaces
connecting the Routing Engines to the SCB management switch. Those interfaces
are named ‘em0’ and ‘em1’ and, besides the different names, which depend on the
underlying FreeBSD kernel driver (em interfaces are created by the Intel NIC
module), the ending numbers might ring a bell: bottom line, vtnet0 and vtnet1
are the replacements for em0 and em1, respectively. They achieve exactly the
same goals as their physical counterparts: to interconnect the virtual Routing
Engines to the Junos node slicing forwarding plane, and to attach the virtual
instances to the master VLAN shared by all the B-SYS control components, such
as the Routing Engine CPUs and the line card control CPUs. Moreover, we’ll see
that vtnet0 uses the cb0 10GE ports, while vtnet1 uses the cb1 ones.
Okay, you may have already noticed that each vtnet0/1 interface has two logical
units, namely 32763 and 32764, sharing the same IP address. Let’s take a closer
look:
root> show interfaces vtnet0.32763    
  Logical interface vtnet0.32763 (Index 3) (SNMP ifIndex 504)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4001 ]  Encapsulation: ENET2
    ----- SNIP -----
    Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
      Addresses
        Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
----- SNIP -----

root> show interfaces vtnet1.32763 
  Logical interface vtnet1.32763 (Index 5) (SNMP ifIndex 507)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4001 ]  Encapsulation: ENET2
    ----- SNIP ------
        Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
      Addresses
        Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
    ------ SNIP ------

root> show interfaces vtnet0.32764    
  Logical interface vtnet0.32764 (Index 4) (SNMP ifIndex 505)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4011 ]  Encapsulation: ENET2
    -----SNIP-----
        Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
      Addresses
        Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
----- SNIP ----

root> show interfaces vtnet1.32764    
  Logical interface vtnet1.32764 (Index 6) (SNMP ifIndex 508)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.4011 ]  Encapsulation: ENET2
    ----- SNIP -----
        Destination: 128/2, Local: 190.0.1.1, Broadcast: 191.255.255.255
      Addresses
        Destination: 128/2, Local: 190.0.1.4, Broadcast: 191.255.255.255
    ---- SNIP ----

Despite the fact that both logical interfaces have the same IP address, they are
connected to different VLANs. As previously explained, VLAN 4001 is the B-SYS
Master VLAN, whereas VLAN 4011 is a VLAN dedicated to the VNF itself.
Indeed, despite the undeniable fact that the Junos node slicing architecture is close
to that of a single chassis MX router, there is one main and fundamental differ-
ence: while on a standalone router there is a one-to-one relationship between a
pair of physical Routing Engines and the chassis they control, with Junos node
slicing this is no longer true. At the time of this writing, it’s possible to have up to
twenty virtual Routing Engines controlling up to ten physical chassis partitions
composed of one or more MPCs, therefore there must be machinery that provides
the right isolation, so that a line card belonging to a certain GNF can join only the
correct virtual Routing Engine pair. As always, the simpler the better, so a
dedicated VLAN is created for each pair of VNFs.

NOTE All the VLAN tagging and untagging operations for the B-SYS are
performed at the control board management switch; there is no VLAN awareness
in any chassis component involved (B-SYS RE CPUs, line card control CPUs). On
the external virtual Routing Engines, instead, they are performed by the Junos
network stack.

There is a very simple algorithm behind how the VLAN ID for each pair of GNF
Routing Engines is calculated, where the GNF-ID comes into play again, as shown
by the following formula:
vRE VLAN = 4010 + GNF-ID

Indeed, in our case the GNF-ID is 1, so the dedicated VLAN is 4011, as displayed
by the outputs just shown above.
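The formula is easy to sanity-check with plain shell arithmetic (nothing node slicing specific here, just the calculation described above):

```shell
# vRE VLAN = 4010 + GNF-ID, with supported GNF IDs ranging from 1 to 10.
for gnf_id in 1 2 10; do
    echo "GNF-ID ${gnf_id} -> vRE VLAN $((4010 + gnf_id))"
done
```

For our EDGE-GNF (ID 1) this yields VLAN 4011, matching the 0x8100.4011 tag seen on the vtnet logical units.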

NOTE These implementation details are meant to show how close the Junos
node slicing feature is to what has already been implemented for two decades
inside every Juniper Networks device, changing only where it was absolutely
necessary. It’s important to understand that, despite the flexibility and how
innovative it can look, it is still that beloved Juniper Networks router, just on
steroids!

The addressing scheme always uses the 128/2 subnet. The easy way to distinguish
a physical component from a virtual one is that the physical components are
numbered starting with 128, while the virtual ones start with 190.
For instance, let’s examine the Address Resolution Protocol (ARP) cache on our
virtual RE0:
root> show arp vpn __juniper_private1__ 
MAC Address       Address         Name                      Interface               Flags
02:00:00:00:00:04 128.0.0.1       bsys-master               vtnet0.32763            none
02:00:00:00:00:04 128.0.0.4       bsys-re0                  vtnet0.32763            none
02:01:00:00:00:05 128.0.0.5       bsys-re1                  vtnet1.32763            none
02:01:00:00:00:05 128.0.0.6       bsys-backup               vtnet1.32763            none
02:01:00:3e:01:05 190.0.1.5       190.0.1.5                 vtnet1.32764            none
02:01:00:3e:01:05 190.0.1.6       190.0.1.6                 vtnet1.32764            none

You can see that all the B-SYS REs are visible on the Master VLAN in the vRE0
ARP cache, while only the remote virtual RE1 is present on the dedicated VLAN.
Let’s use the same command but disable name resolution:

root> show arp vpn __juniper_private1__ no-resolve 
MAC Address       Address         Interface                Flags
02:00:00:00:00:04 128.0.0.1       vtnet0.32763             none
02:00:00:00:00:04 128.0.0.4       vtnet0.32763             none
02:01:00:00:00:05 128.0.0.5       vtnet1.32763             none
02:01:00:00:00:05 128.0.0.6       vtnet1.32763             none
02:01:00:3e:01:05 190.0.1.5       vtnet1.32764             none
02:01:00:3e:01:05 190.0.1.6       vtnet1.32764             none
Total entries: 6

As expected, physical Routing Engines are numbered starting with 128.

NOTE It’s perfectly normal that each Routing Engine inside an MX chassis has
two IP addresses. They are used to identify RE0 (.4) / RE1 (.5) and RE-MASTER
(.1) / RE-BACKUP (.6). Indeed, also the remote virtual RE1 has exactly the same
addresses. This is more proof that all the Junos underlying machineries work
exactly the same on an external virtual Routing Engine in a Junos node slicing
environment.

• vtnet2: This connects to the JDM bme2 interface in the 'jdm_nv_ns' namespace and is
used to perform GNF liveness detection. Every time the show virtual-network-
functions command is invoked on the JDM CLI, the status of the VM is checked
through the libvirt APIs; if the returned value is "isActive", then five ICMP echoes
are triggered from the JDM to the GNF over this connection. If the GNF answers
the probes, the reported liveness status will be Up; otherwise it will be Down and
a more in-depth investigation will be needed to understand why.
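The liveness logic just described can be sketched in a few lines of shell. The two checks are stubbed out here (on a real JDM they would be the libvirt isActive query and the five ICMP probes over vtnet2/bme2), so this is a mock of the decision flow, not the actual JDM implementation:

```shell
# Mock of the JDM liveness decision flow. On a real system, is_active would
# query libvirt (e.g. via virsh domstate) and probe would send the five ICMP
# echoes over the vtnet2/bme2 link; both are stubbed here.
is_active() { echo "running"; }   # stub: libvirt reports the VM as active
probe()     { return 0; }         # stub: the GNF answered the ICMP echoes

liveness() {
  if [ "$(is_active)" = "running" ] && probe; then
    echo "up"
  else
    echo "down"
  fi
}

liveness   # -> up
```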

WARNING The VLAN and addressing schemes explained here can change
without warning at any time during the node slicing feature development, as these
details are completely transparent and (almost) invisible to the end users. Moreover,
there are also algorithms to create the MAC addresses used by each interface
of the VNF, as well as a kernel parameter passed during the virtual Routing Engine
boot sequence to correctly identify the nature of the VNF, and other details, but
these are outside the scope of this book and hence not covered.

Now that the connectivity on the virtual Routing Engine side is clear, it's also
important to double-check the other end of the connection. The virtual Routing
Engines are KVM-powered VMs, so the correct place to investigate where each
interface is plugged in (virtually speaking, of course!) is the host operating system.
Looking at the Linux Ubuntu host interfaces, you can see that some macvtap
interfaces are now up and running in our setup.
78 Chapter 3: GNF Creation, Bootup, and Configuration

NOTE A detailed explanation of Linux macvtap interfaces is outside the scope of
this book, but there are many very good explanations available on the Internet.
For the sake of brevity, let's simply say they are kernel interfaces similar to a tun/
tap, but directly usable by QEMU to practically extend a physical interface
installed on the Linux host to the KVM virtual instance. The Linux host macvtap
interfaces share the same MAC address with the virtual ones they are connected
to.

So, let’s find the MAC addresses of all the VM interfaces:


root> show interfaces fxp0 media | match Hardware      
  Current address: 02:ad:ec:d0:83:0a, Hardware address: 02:ad:ec:d0:83:0a
root> show interfaces vtnet0 media | match Hardware 
  Current address: 02:00:00:3e:01:04, Hardware address: 02:00:00:3e:01:04
root> show interfaces vtnet1 media | match Hardware    
  Current address: 02:00:01:3e:01:04, Hardware address: 02:00:01:3e:01:04

Now, let’s check which macvtap interface is connected to each of them on the
Linux Host:
VTNET0:
root@jns-x86-0:~# ifconfig  | grep 02:00:00:3e:01:04
macvtap0  Link encap:Ethernet  HWaddr 02:00:00:3e:01:04  

VTNET1:
root@jns-x86-0:~# ifconfig  | grep 02:00:01:3e:01:04
macvtap1  Link encap:Ethernet  HWaddr 02:00:01:3e:01:04

FXP0:
root@jns-x86-0:~# ifconfig  | grep 02:ad:ec:d0:83:0a
macvtap2  Link encap:Ethernet  HWaddr 02:ad:ec:d0:83:0a  

Great findings! So our vtnet0, vtnet1, and fxp0 are connected to macvtap0,
macvtap1, and macvtap2, respectively.
The last piece of the connectivity puzzle is to find which physical interface is
connected to which macvtap. Still using the Linux bash shell, let's examine the
active links in the kernel:
root@jns-x86-0:~# ip link list | grep macvtap
19: macvtap0@enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_
fast state UNKNOWN mode DEFAULT group default qlen 500
20: macvtap1@enp4s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_
fast state UNKNOWN mode DEFAULT group default qlen 500
22: macvtap2@eno3: <BROADCAST,MULTICAST,UP,LOWER_
UP> mtu 1500 qdisc htb state UNKNOWN mode DEFAULT group default qlen 500

Exactly! Each of these macvtap interfaces is extending the physical interfaces we
have configured on the JDM to act as CB0, CB1, and GNF management interfaces.
Of course, two logical units run inside each of vtnet0/macvtap0@enp4s0f0 and
vtnet1/macvtap1@enp4s0f1, each with its own VLAN, and because the macvtap
interfaces simply tunnel traffic between the VM and the physical interfaces, the
VLAN tagging is passed as-is to the B-SYS.
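The MAC-matching exercise above can also be automated. The sketch below replays it against a captured `ip -o link` listing (embedded as sample data mirroring the outputs shown earlier), so the parsing logic is self-contained; on a live server you would feed it the real `ip -o link` output instead:

```shell
# Map a VM interface MAC to its host macvtap by searching a (captured)
# "ip -o link" listing; the sample data mirrors the earlier outputs.
map_mac_to_macvtap() {
  awk -v m="$1" 'tolower($0) ~ tolower(m) { sub(/@.*/, "", $2); print $2 }' <<'EOF'
19: macvtap0@enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 link/ether 02:00:00:3e:01:04 brd ff:ff:ff:ff:ff:ff
20: macvtap1@enp4s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 link/ether 02:00:01:3e:01:04 brd ff:ff:ff:ff:ff:ff
22: macvtap2@eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 link/ether 02:ad:ec:d0:83:0a brd ff:ff:ff:ff:ff:ff
EOF
}

map_mac_to_macvtap 02:00:00:3e:01:04   # vtnet0 -> macvtap0
map_mac_to_macvtap 02:ad:ec:d0:83:0a   # fxp0   -> macvtap2
```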

NOTE Both the enp4s0f0 and enp4s0f1 interfaces are configured in "VEPA"
mode, hence all the traffic is forwarded to the physical ports and bridged by
the control board management switch; no hair-pinning through the virtual
switching layer is performed.
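For the curious, a VEPA-mode macvtap can also be created by hand with iproute2. The session below is purely illustrative on a scratch interface (the name mvtest is hypothetical), not something JDM requires; JDM provisions these interfaces automatically:

```
root@jns-x86-0:~# ip link add link enp4s0f0 name mvtest type macvtap mode vepa
root@jns-x86-0:~# ip -d link show mvtest
root@jns-x86-0:~# ip link del mvtest
```

The `-d` (details) flag makes `ip link show` print the macvtap mode, so you can verify the interface is indeed in VEPA mode before deleting the test interface.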

The last interface to check is vtnet2. It uses an addressing scheme in the
192.168.2.0/28 network, where .1 is the virtual Routing Engine and .14 is
configured on the JDM side. Let's take a look by logging on to the JDM Server0
and issuing the following command:
root@JDM-SERVER0:~# ip netns exec jdm_nv_ns ip addr list | grep -B2 192.168.2.14
13: bme2@if14: <BROADCAST,MULTICAST,UP,LOWER_
UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:b8:8d:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.14/28 scope global bme2

Let’s check the connectivity with the virtual RE0:


root@JDM-SERVER0:~# ip netns exec jdm_nv_ns ping 192.168.2.1 -c 1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.599 ms

--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms

As this is self-contained connectivity between the local virtual Routing Engine and
JDM, there is no need to create a macvtap interface to extend it outside of the local
device.
Now that everything should be quite clear, let’s illustrate the connectivity big pic-
ture as shown in Figure 3.1.

Figure 3.1 Virtual Routing Engine Connectivity Schematic

Let's continue our first GNF creation by adding its second building block: the
forwarding plane. This time the configuration takes place on the B-SYS. Our
EDGE-GNF will be composed of a single line card, namely an MX MPC5e-Q
(2x100GE + 4x10GE version) inserted in slot number 6.

NOTE At the time of this writing, the B-SYS partitioning granularity is at the slot
level: a whole line card can belong to one, and only one, GNF. There are
enhancements on the feature roadmap that may introduce, at least, a per-PIC
(aka per-PFE) granularity in a future release.

Two important behaviors will be seen during the B-SYS configuration:
• When a line card is configured to become part of a GNF, it is automatically
reloaded at commit time. The reboot is necessary because the line card must
receive its boot software and the right configuration from the virtual Routing
Engines of the particular GNF it will belong to.
• When the line card comes online after the reboot, it will actually have joined
the GNF. Nevertheless, its ports are not renumbered. This behavior is fundamental
to ease the migration from a standalone to a Junos node slicing configuration,
as the original configuration doesn't need to be changed at all.
Let's start the configuration by using the MX960-4 B-SYS Junos CLI:
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 1 fpcs 6

chassis {
    network-slices {
        guest-network-functions {
            gnf 1 {
                fpcs 6;
            }
        }
    }                                   
}                                       
{master}[edit]
magno@MX960-4-RE0#

The configuration is pretty straightforward. Let's examine the two main
statements:
• gnf 1: This identifies the new instance's GNF ID; the value must match the
virtual-network-functions ID previously configured on the JDM. Beware that the
gnf command inline CLI help may be misleading, as shown below:
magno@MX960-4-RE0# set guest-network-functions gnf ?
Possible completions:
<slot> GNF slot number (1..10)
The (1..10) identifies the GNF number, and it must match the JDM virtual-
network-functions ID so that the data plane and the control plane components
are configured in the same partition (aka GNF).
• fpcs 6: This number identifies the slot (or slots, as the command accepts
multiple values) where the GNF's MPC is installed.
Time to commit the configuration, but before doing this, let’s activate CLI time-
stamping to observe what happens to the MPC.

NOTE Under the chassis/network-slices/guest-network-functions/gnf # stanza,
there is another command not used during this lab, namely control-plane-bandwidth-
percent. As all communications between the virtual Routing Engines and the
B-SYS are carried by the 10GE links between the X86 servers and the MX chassis
control boards, it's possible to statically police the total amount of bandwidth
available to a certain GNF. The value is expressed as a percentage of the total
10GE bandwidth.
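As an illustration, capping GNF 1 at 30% of that bandwidth would look like this (the percentage is an arbitrary example value, not a recommendation):

```
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 1 control-plane-bandwidth-percent 30
```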
{master}[edit]
magno@MX960-4-RE0# run set cli timestamp 
Feb 14 15:48:39
CLI timestamp set to: %b %d %T

{master}[edit]
magno@MX960-4-RE0# 
Feb 14 15:48:39

{master}[edit]
magno@MX960-4-RE0# commit 
Feb 14 15:49:25
re0: 
configuration check succeeds
re1: 
configuration check succeeds
commit complete
re0: 
commit complete

{master}[edit]
magno@MX960-4-RE0#

Let’s check the slot 6 line card status:


{master}
magno@MX960-4-RE0#  run show chassis fpc 6    
Feb 14 15:49:29
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap   Buffer GNF
  6  Online       Testing      0          0        17     16     16           0    0        0   1

{master}
magno@MX960-4-RE0# run show chassis fpc 6
Feb 14 15:49:30
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap   Buffer GNF
  6  Offline        ---GNF initiated Restart---                                                 1

{master}
magno@MX960-4-RE0>

You can see that the line card was rebooted after the commit, with the reason
being GNF initiated Restart. Perfect: everything behaved exactly as expected. The
line card went online again, but this time it is, logically speaking, no longer part
of the B-SYS.
To check the FPC status, let's use the EDGE-GNF CLI, exactly as we would have
done on a standalone MX Series router:
root> show chassis hardware 
bsys-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                JN122E1E0AFA      MX960
Midplane         REV 04   750-047853   ACRB9287          Enhanced MX960 Backplane
Fan Extender     REV 02   710-018051   CABM7223          Extended Cable Manager
FPM Board        REV 03   710-014974   JZ6991            Front Panel Display
PDM              Rev 03   740-013110   QCS1743501N       Power Distribution Module
PEM 0            Rev 04   740-034724   QCS171302048      PS 4.1kW; 200-240V AC in
PEM 1            Rev 07   740-027760   QCS1602N00R       PS 4.1kW; 200-240V AC in
PEM 2            Rev 10   740-027760   QCS1710N0BB       PS 4.1kW; 200-240V AC in
PEM 3            Rev 10   740-027760   QCS1710N0BJ       PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01   740-051822   9009170093        RE-S-1800x4
Routing Engine 1 REV 01   740-051822   9009176340        RE-S-1800x4
CB 0             REV 01   750-055976   CACM2281          Enhanced MX SCB 2
  Xcvr 0         REV 01   740-031980   163363A04142      SFP+-10G-SR
  Xcvr 1         REV 01   740-021308   AS90PGH           SFP+-10G-SR
CB 1             REV 02   750-055976   CADJ1802          Enhanced MX SCB 2
  Xcvr 0         REV 01   740-031980   AHJ09HD           SFP+-10G-SR
  Xcvr 1         REV 01   740-021308   09T511103665      SFP+-10G-SR
FPC 6            REV 42   750-046005   CADM2676          MPC5E 3D Q 2CGE+4XGE
  CPU            REV 11   711-045719   CADK9910          RMPC PMB
  PIC 0                   BUILTIN      BUILTIN           2X10GE SFPP OTN
    Xcvr 0       REV 01   740-031980   B11B02985         SFP+-10G-SR
  PIC 1                   BUILTIN      BUILTIN           1X100GE CFP2 OTN
  PIC 2                   BUILTIN      BUILTIN           2X10GE SFPP OTN
  PIC 3                   BUILTIN      BUILTIN           1X100GE CFP2 OTN
Fan Tray 0       REV 04   740-031521   ACAC1075          Enhanced Fan Tray
Fan Tray 1       REV 04   740-031521   ACAC0974          Enhanced Fan Tray

gnf1-re0:
--------------------------------------------------------------------------
Chassis                                GN5C5C634895      MX960-GNF
Routing Engine 0                                         RE-GNF-2100x8
Routing Engine 1                                         RE-GNF-2100x8

root>
There are some interesting things going on here, some obvious, some not:
• The FPC6 is now connected to the virtual Routing Engines of the EDGE-GNF;
exactly as on a standalone chassis, the line card receives all the needed
information from the GNF Routing Engine.
• The FPC has maintained its slot number. This is a very important characteristic,
because the Junos interface naming convention dictates that the first digit
always represents the chassis slot number. Keeping this information consistent
between the standalone and the Junos node slicing configuration makes the
migration of existing devices a lot easier, as the current config can be reused
just as it is; no renumbering actions are needed.
• Even though the MPC in slot 6 is now logically part of the EDGE-GNF, it is
still physically installed inside the MX chassis; therefore, the B-SYS keeps an
active role in the line card life cycle, as all the physically related functionalities
are still handled by the B-SYS itself.
• On the other hand, EDGE-GNF administrators must be able to fully manage
their own slice, so a new feature called command forwarding was implemented:
some Junos commands are executed on the GNF Routing Engines but are then
forwarded to the B-SYS to retrieve the relevant outputs. The show chassis
hardware command is a good example of this feature. Indeed, looking closely at
the output, it is clearly divided into two main sections, the first labeled
"bsys-re0:" and the second "gnf1-re0:". The former output was retrieved from
the B-SYS, while the latter came directly from the GNF Routing Engine.
• Note that, besides all the common components, only the MPC in slot 6 is
shown. Indeed, the output is filtered so that only the MPCs belonging to the
enquiring GNF are displayed.
• There is the concept of a GNF chassis: it has a dedicated serial number and its
own product description, which is useful for licensing purposes.
• The Routing Engines are identified as "RE-GNF": the virtual Routing Engines
have a dedicated personality in Junos, passed as a boot string during the boot
phase, so they are correctly handled by the operating system.
• The CPU speed and the number of cores are also reported in the Routing
Engine model number, mimicking the Juniper Networks model naming
convention used for the physical Routing Engines.
Now that you know how the MPC is correctly attached to its virtual control plane
instances, let's check, as usual, what happened on the B-SYS when the new
configuration was committed. Once again, the most interesting things happen on
the control board management switch. This is how it looks in terms of VLANs
and ports before and after the commit.
Before configuring the GNF:
{master}
magno@MX960-4-RE0> test chassis ethernet-switch shell-cmd “vlan show”

vlan 1 ports cpu,ge,xe,hg (0x000000000000f81ffc0fc0ff), untagged ge,xe (0x000000000000f81f5c0fc0fe) MCAST_FLOOD_UNKNOWN
vlan 4001 ports ge0-ge13,xe (0x0000000000000001540fc0fc), untagged ge0-ge13 (0x0000000000000001040fc0fc) MCAST_FLOOD_UNKNOWN

After configuring the GNF:

{master}
magno@MX960-4-RE0> test chassis ethernet-switch shell-cmd "vlan show"

vlan 1 ports cpu,ge,xe,hg (0x000000000000f81ffc0fc0ff), untagged ge,xe (0x000000000000f81f5c0fc0fe) MCAST_FLOOD_UNKNOWN
vlan 4001 ports ge0-ge13,xe (0x0000000000000001540fc0fc), untagged ge0-ge13 (0x0000000000000001040fc0fc) MCAST_FLOOD_UNKNOWN
vlan 4011 ports ge6,ge12-ge13,xe (0x000000000000000154040000), untagged ge6,ge12-ge13 (0x000000000000000104040000) MCAST_FLOOD_UNKNOWN

VLAN 4011 is now configured on the management switch! Indeed, the same
scheme used on the control plane side applies to the data plane side as well. If you
examine which ports were added to VLAN 4011, they are exactly the ones
connecting the B-SYS Routing Engines (ge12-ge13), the external servers (xe), and
the FPC belonging to this particular GNF, that is, the MPC in slot 6 (ge6).
Moreover, the only ports where traffic is tagged are the 10GE ones connected to
the external servers.
Sitting on VLAN 4011, on reboot the MPC in slot 6 will request its boot image
from the virtual master Routing Engine running on the external server, which is
the only one sharing the same broadcast domain as the line card. And because the
VLAN-ID operations only happen inside the B-SYS control board management
switch, no modifications are required on the line card ukernel/embedded OS at
all. Once more, you can appreciate how non-invasive the Junos node slicing
implementation is.
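If you want to see this with your own eyes, the tagged traffic can be observed from the external server, since the 10GE ports are the only tagged ones. A capture along these lines (illustrative; run it while the MPC reboots) should show frames carrying VLAN 4011:

```
root@jns-x86-0:~# tcpdump -c 4 -eni enp4s0f0 vlan 4011
```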
Congratulations! If you’ve been following along, the GNF is all set, great job!

Configure the Second GNF


The next step is to configure the second GNF, namely the CORE-GNF. By following
the same steps already detailed, you should be able to do it on your own, hence it is
left as an exercise to create the new GNF with the following characteristics:
• GNF name: CORE-GNF

• GNF ID: 2

• Junos 18.3R1 image (the same used for EDGE-GNF)

• GNF flavor: 4 cores, 32GB RAM

• Options: no start-up configuration required; the GNF should auto-start on commit

• The GNF should have one MPC (in this lab, an MPC7e in slot 1)
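For reference, the JDM definition of CORE-GNF would follow the same pattern used for EDGE-GNF. A sketch is shown below; the base-image path is hypothetical, so adjust it to wherever your Junos image actually lives:

```
[edit]
root@JDM-SERVER0# set virtual-network-functions CORE-GNF id 2
root@JDM-SERVER0# set virtual-network-functions CORE-GNF chassis-type mx960
root@JDM-SERVER0# set virtual-network-functions CORE-GNF resource-template 4core-32g
root@JDM-SERVER0# set virtual-network-functions CORE-GNF base-image /vm-primary/junos-install-ns-mx-x86-64-18.3R1.9.tgz
root@JDM-SERVER0# commit
```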

Manage Node Slicing on JDM and B-SYS


Perfect, our book’s lab has a new GNF named CORE-GNF, as shown here:
root@JDM-SERVER0> show virtual-network-functions all-servers
server0:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
--------------------------------------------------------------------------------
2        CORE-GNF                                          Running    up
1        EDGE-GNF                                          Running    up

server1:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
--------------------------------------------------------------------------------
2        CORE-GNF                                          Running    up
1        EDGE-GNF                                          Running    up

Now that two GNFs are running concurrently on the same servers, all the outputs
become more meaningful, so let's look at some useful commands to manage the
Junos node slicing solution on both the JDM and the B-SYS.

NOTE Although it is a good habit to use the keyword "all-servers" to retrieve
outputs from both JDMs at the same time, only local outputs are collected here to
avoid almost never-ending CLI outputs.

JDM GNF Monitoring Command

For detailed information on a virtual instance, use the show virtual-network-functions
[$NAME] detail command, which shows everything about the virtual network
function in terms of computing, networking, and storage. The $NAME is optional;
it is used in the command below to shorten the CLI output and focus on the newly
created CORE-GNF:
root@JDM-SERVER0> show virtual-network-functions CORE-GNF detail   
VNF Information
---------------------------
ID                  2
Name:               CORE-GNF
Status:             Running
Liveness:           up
IP Address:         192.168.2.2
Cores:              4
Memory:             32GB
Resource Template:  4core-32g
Qemu Process id:    18153
SMBIOS version:     v1

VNF Uptime: 21:41.62

VNF CPU Utilization and Allocation Information
--------------------------------------------------------------------------------
GNF                                      CPU-Id(s)               Usage  Qemu Pid
---------------------------------------- ----------------------- -----  --------
CORE-GNF                                 12,13,14,15             14.3%  18153   

VNF Memory Information
----------------------------------------------------------------
Name                                             Actual Resident
------------------------------------------------ ------ --------
CORE-GNF                                         32.0G  18.0G   

VNF Storage Information
---------------------------------------------------------
Directory                                   Size   Used
------------------------------------------- ------ ------
/vm-primary/CORE-GNF                        52.7G  5.6G  

VNF Interfaces Statistics
---------------------------------------------------------------------------------------------------
Interface  Rcvd Bytes  Rcvd Packets  Rcvd Error  Rcvd Drop  Trxd Bytes  Trxd Packets  Trxd Error  Trxd Drop
---------  ----------  ------------  ----------  ---------  ----------  ------------  ----------  ---------
macvtap3   95580251    240160        0           0          30735919    269699        0           0
macvtap4   7137412     37837         0           0          4885628     71277         0           0
vnet3      7354        121           0           0          3486        43            0           0
macvtap5   648         8             0           0          33162342    314805        0           0

VNF Network Information


-------------------------------------------------------------------------------
Virtual Interface Physical Interface MAC
-------------------------- ------------------ ----------------------------
macvtap3                     enp4s0f0              02:ad:ec:d0:83:0b            
macvtap4                     enp4s0f1              02:ad:ec:d0:83:0c            
vnet3                        bridge_jdm_vm         02:ad:ec:d0:83:0d            
macvtap5                     eno3                  02:ad:ec:d0:83:0e            

root@JDM-SERVER0>

All this information should be quite self-explanatory. As expected, from the
connectivity standpoint the second GNF is exactly like the first one. In this case,
the footprint is smaller compared to the EDGE-GNF, as only four cores and
32GB of RAM are dedicated to this instance. A dedicated section highlights
which cores are assigned to the VM and their total utilization.
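The CPU pinning reported above can also be cross-checked against libvirt itself. The command below (illustrative, run from the JDM host shell) lists the vCPU-to-host-CPU affinity of the domain and, given the allocation shown above, should map the four vCPUs onto host CPUs 12 through 15:

```
root@JDM-SERVER0:~# virsh vcpupin CORE-GNF
```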

B-SYS GNF Monitoring Command

There is also an interesting command on the B-SYS side to check the GNFs'
status from its perspective: show chassis network-slices gnf [#]. It shows some
interesting information about the network slices running on a B-SYS. The
number is optional, as shown by the output:
{master}
magno@MX960-4-RE0> show chassis network-slices gnf      
GNF ID           1 
GNF description  NA
GNF state        Online
FPCs assigned    6 
FPCs online      6 
BSYS             MX960-4-RE0
BSYS sw version  18.3R1.9
GNF  sw version  18.3R1.9
Chassis          mx960
BSYS master RE   0
GNF uptime       3 days, 1 hour, 25 minutes, 31 seconds
GNF Routing Engine Status:
Slot 0:
    Current state   Master
    Model           NA
    GNF host name   NA
Slot 1:
    Current state   Backup
    Model           NA
    GNF host name   NA
GNF ID           2 
GNF description  NA
GNF state        Online
FPCs assigned    1 
FPCs online      1 
BSYS             MX960-4-RE0
BSYS sw version  18.3R1.9
GNF  sw version  18.3R1.9
Chassis          mx960
BSYS master RE   0
GNF uptime       2 hours, 57 minutes, 7 seconds

Slot 0:
    Current state   Master
    Model           NA
    GNF host name   NA
Slot 1:
    Current state   Backup
    Model           NA
    GNF host name   NA

{master}
magno@MX960-4-RE0>

Even if this output seems straightforward and easy to understand, let's add some
extra clarification:
• FPCs assigned / FPCs online: These fields refer to the slots where the FPCs
assigned to the specific network slice are installed; they may be misread as the
number of FPCs assigned and online, but that's not the case. For instance,
EDGE-GNF has one MPC installed in slot 6, which is also online, and that's
why "6" appears on both lines.
• Routing Engine mastership status: All the Routing Engines involved in a Junos
node slicing setup, both the physical B-SYS ones and the pair of virtual ones in
each GNF, run their own mastership election. There are no restrictions on
which Routing Engine should be the master: it's perfectly supported to have
the master Routing Engine run on JDM Server0 for a given GNF while it runs
on JDM Server1 for another. There are no mastership dependencies for the
B-SYS, either.

Routing Engine Masterships in Junos Node Slicing


Two different kinds of Routing Engines are involved in a Junos node slicing setup:
hardware and virtual. Despite their different natures, both have the very same
look, feel, and behavior. At the end of the day, on some Routing Engines such as
the RE-S-X6/X8 (the newest MX Series Routing Engines), Junos already runs as a
VM over a Linux-embedded host OS. As already highlighted, a GNF is a separate
router made of a pair of virtual Routing Engines and one or more line cards,
hence inside each partition one Routing Engine is elected as master and the other
as backup, exactly as happens in a single-chassis installation. The main difference
is under the hood: whereas inside a physical MX chassis the Routing Engine
liveness is checked through hardware signals that monitor the state of the card,
on the virtual Routing Engines only software keepalives are used. This doesn't
change anything from a configuration and feature standpoint: the virtual Routing
Engines support graceful switchover, commit synchronization, non-stop routing,
non-stop bridging, and the configuration is exactly the same as for their hardware
counterparts.
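Because the virtual Routing Engines behave exactly like physical ones, mastership can also be toggled with the usual command, issued from the master Routing Engine of the GNF (shown here as an illustration; answer yes at the confirmation prompt to trigger the switchover):

```
{master}
root@EDGE-GNF-re0> request chassis routing-engine master switch
```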

EDGE-GNF Mastership Configuration Statements


As you may have noticed, the configuration is exactly the same as if the Routing
Engine were physical: same commands, same features, and same behavior:
set groups re0 system host-name EDGE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.176/24
set groups re0 routing-options static route 0.0.0.0/0 next-hop 172.30.181.1
set groups re0 routing-options static route 0.0.0.0/0 no-readvertise
set groups re1 system host-name EDGE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.177/24
set groups re1 routing-options static route 0.0.0.0/0 next-hop 172.30.181.1
set groups re1 routing-options static route 0.0.0.0/0 no-readvertise
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http

NOTE FXP0 IP addresses are configured according to the IP addressing scheme
shown in Table 3.1. Moreover, some services are enabled even though they are not
strictly required by the redundancy configuration; they are useful for the
execution of our lab exercises!

Let's commit the configuration and see what happens:

[edit]
root# commit and-quit 
re0: 
configuration check succeeds
re1: 
configuration check succeeds
commit complete
re0: 
commit complete
Exiting configuration mode

root@EDGE-GNF-re0> 

{master}
root@EDGE-GNF-re0>

Perfect! As expected, because no default election parameter was modified, re0
was elected master Routing Engine and re1 became the backup. Let's double-check
to be sure:
root@JDM-SERVER1> request virtual-network-functions EDGE-GNF console    
Connected to domain EDGE-GNF
Escape character is ^]

FreeBSD/amd64 (EDGE-GNF-re1) (ttyu0)

login: root
Password:
Last login: Wed Feb 13 14:47:43 on ttyu0

--- JUNOS 18.3R1.9 Kernel 64-bit  JNPR-11.0-20180816.8630ec5_buil
root@EDGE-GNF-re1:~ # 
root@EDGE-GNF-re1:~ # cli

{backup}
root@EDGE-GNF-re1>

That's great. As an exercise, you should repeat the same configuration on the
CORE-GNF and check the end result.

NOTE From now on, it is possible to reach the GNFs directly using their FXP0
IP addresses, as shown here:

EDGE-GNF:
mmagnani-mbp:.ssh mmagnani$ ssh root@172.30.181.175
The authenticity of host ‘172.30.181.175 (172.30.181.175)’ can’t be established.
ECDSA key fingerprint is SHA256:jkbl3XiXbgsgrjGH0augTOAQDeoTvCmag0rM5wQUVms.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘172.30.181.175’ (ECDSA) to the list of known hosts.
Password:
Last login: Wed Feb 20 17:44:45 2019
--- JUNOS 18.3R1.9 Kernel 64-bit  JNPR-11.0-20180816.8630ec5_buil
root@EDGE-GNF-re0:~ #

CORE-GNF:
mmagnani-mbp:.ssh mmagnani$ ssh root@172.30.181.178
The authenticity of host ‘172.30.181.178 (172.30.181.178)’ can’t be established.
ECDSA key fingerprint is SHA256:NEddDs9zKcKjaG3pRw3eyVjCjYoMAeZ0JJCTbKwEgSk.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘172.30.181.178’ (ECDSA) to the list of known hosts.
Password:
Last login: Wed Feb 20 17:43:17 2019
--- JUNOS 18.3R1.9 Kernel 64-bit  JNPR-11.0-20180816.8630ec5_buil
root@CORE-GNF-re0:~ #

Check X86 Server Resource Consumption

Before configuring our GNFs, the last set of JDM commands worth mentioning
are the ones under the "show system" stanza. They provide very useful, well-presented
information at a glance and are handy for double-checking how the X86
server resources are doing.

NOTE Providing a full troubleshooting guide is out of the scope of this Day One
book, so the purpose of this section is to show only the commands that provide a
synthetic view of the resources used by Junos node slicing. For full troubleshooting
purposes, it is recommended to use the "show system visibility" hierarchy, as it
offers a much more detailed output. And, if you need to retrieve specific
information about the X86 inventory, the "show system inventory [hardware |
software]" hierarchy is the right place to look.

Network Resource Summary


The show system network command provides physical-interface, per-GNF, and
per-JDM statistics, all in a single output. It also shows the MAC address pool used
to generate the GNF MAC addresses. The command is useful for double-checking
packet errors or drops at every connectivity component of the solution:
root@JDM-SERVER0> show system network

Physical Interfaces
----------------------------------------------------------------------------------------------------
Name      Index  MTU   Hardware-address   Rcvd Bytes   Rcvd Packets  Rcvd Error  Rcvd Drop  Trxd Bytes   Trxd Packets  Trxd Error  Trxd Drop  Flags
--------  -----  ----  -----------------  -----------  ------------  ----------  ---------  -----------  ------------  ----------  ---------  -----
eno2      3      1500  ac:1f:6b:90:50:21  59969926653  245718527     0           626593     15060741     163447        0           0          0
enp4s0f1  7      1500  ac:1f:6b:8a:42:b7  1660347940   22419111      0           15356393   41695631056  38780720      0           0          0
enp4s0f0  6      1500  ac:1f:6b:8a:42:b6  4097335868   50600711      0           15356413   6077222086   21521066      0           0          0
eno3      4      1500  ac:1f:6b:90:50:22  11442711222  111847706     0           20070      1944         24            0           0          0

Per VNF Interface Statistics


----------------------------------------------------------------------------------------------------
Interface Source MAC Address Rcvd Bytes Rcvd packets Rcvd Error Rcvd Drop
Trxd bytes Trxd Packets Trxd Error Trxd Drop
-------------------------- ------------- ----------------- ------------ ------------ ---------- ----
VNF name: EDGE-GNF
macvtap0 enp4s0f0 02:ad:ec:d0:83:07 500841262 7737667 0 0
2406286308 30745755 0 0
macvtap1 enp4s0f1 02:ad:ec:d0:83:08 41107090034 31044160 0 0
445755818 6298711 0 0
vnet1 bridge_jdm_vm 02:ad:ec:d0:83:09 61836 1202 0 0 54680
1060 0 0
macvtap2 eno3 02:ad:ec:d0:83:0a 648 8 0 0
3839069674 37474946 0 0

VNF name: CORE-GNF


macvtap3 enp4s0f0 02:ad:ec:d0:83:0b 295888515 3398730 0 0
540385888 4679331 0 0
macvtap4 enp4s0f1 02:ad:ec:d0:83:0c 127105886 675772 0 0
87064628 1270434 0 0
vnet3 bridge_jdm_vm 02:ad:ec:d0:83:0d 12562 245 0 0 8694
167 0 0
macvtap5 eno3 02:ad:ec:d0:83:0e 648 8 0 0
599768722 5609062 0 0

JDM Interface Statistics


----------------------------------------------------------------------------------------------------
Name Index MTU Hardware-address Rcvd Bytes Rcvd Packets Rcvd Error Rcvd Drop Trxd Bytes Trxd
Packets Trxd Error Trxd Drop Flags
-------- ----- ----- ----------------- ------------ ------------ ---------- --------- ------------ ---
bme1 12 1500 52:54:00:ec:ff:a1 627958792 5896401 0 0 12769583 167437
0 0 BMRU
jmgmt0 18 1500 02:ad:ec:d0:83:06 13695445959 111984659 0 15660 15011487 162704
0 0 BMRU
bme2 14 1500 52:54:00:73:c6:c2 64562 1241 0 0 70142 1371 0
0 BMRU
cb0 16 1500 02:ad:ec:d0:83:04 1681412548 22717837 0 15345447 5077358851 7306894
0 0 BMRU
cb1 17 1500 02:ad:ec:d0:83:05 1659282554 22407058 0 15346294 461429592 7060708
0 0 BMRU
cb0.4002 2 1500 02:ad:ec:d0:83:04 383996780 7372290 0 0 5077355661 7306855
0 0 ABMRU
cb1.4002 3 1500 02:ad:ec:d0:83:05 366149014 7060658 0 0 461427092 7060678
0 0 ABMRU

VNF MAC Address Pool


-----------------------------------------------------------
Start MAC Address: 02:ad:ec:d0:83:04
Range: 96

root@JDM-SERVER0>
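One practical detail worth noting: the interface MAC addresses shown above (cb0, cb1, jmgmt0, and the per-GNF macvtap interfaces) are all carved out of this pool. Since the pool is simply a contiguous block of 96 addresses starting at 02:ad:ec:d0:83:04, the nth member can be computed by treating the MAC as a 48-bit integer. A small illustrative sketch in Python (not a JDM tool, just arithmetic):

```python
def mac_pool_address(start_mac: str, offset: int) -> str:
    """Return the MAC address at `offset` within a pool beginning at `start_mac`."""
    value = int(start_mac.replace(":", ""), 16) + offset  # MAC as a 48-bit integer
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

# First and last addresses of the 96-address pool shown above
print(mac_pool_address("02:ad:ec:d0:83:04", 0))   # 02:ad:ec:d0:83:04
print(mac_pool_address("02:ad:ec:d0:83:04", 95))  # 02:ad:ec:d0:83:63
```

Offsets 0, 1, and 2 correspond to cb0, cb1, and jmgmt0 in the JDM interface statistics above.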

Computing Resource Utilization


The show system cpu command provides a condensed but relevant view of CPU core
utilization, both in terms of CPU pinning and real-time usage. It also
provides a quick view of free cores:
root@JDM-SERVER0> show system cpu

VNF CPU Utilization and Allocation Information
---------------------------------------------------------------------------------------------
VNF                                      CPU-Id(s)               Usage  Qemu Pid  State
---------------------------------------- ----------------------- ------ --------  -----------
CORE-GNF                                 12,13,14,15             13.6%  20225     Running    
EDGE-GNF                                 4,5,6,7,8,9,10,11       13.9%  21073     Running    

Free CPUs      : 16,17,18,19
Host Isolcpu(s): 2-19
Emulator Pins  : 2-3

root@JDM-SERVER0>
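The CPU-Id(s) and Isolcpu(s) fields use the standard Linux CPU-list notation: comma-separated IDs and dash-separated ranges. As a quick illustration (not part of JDM), a few lines of Python can expand such lists, for example to verify that every core pinned to a GNF falls inside the host's isolated set:

```python
def expand_cpu_list(spec: str) -> set:
    """Expand a Linux CPU list such as '12,13,14,15' or '2-19' into a set of core IDs."""
    cores = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cores.update(range(lo, hi + 1))
        else:
            cores.add(int(part))
    return cores

isolated = expand_cpu_list("2-19")                        # Host Isolcpu(s) above
assert expand_cpu_list("12,13,14,15") <= isolated         # CORE-GNF cores are isolated
assert expand_cpu_list("4,5,6,7,8,9,10,11") <= isolated   # EDGE-GNF cores too
```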

Memory Resource Utilization


The show system memory command shows a concise view of the memory consumption
on the X86 server. It provides data for the whole server, for JDM, and per
GNF:
root@JDM-SERVER0> show system memory 

Memory Usage Information
---------------------------
       Total  Used   Free
       ------ ------ ------
Host:  125.9G 35.6G  81.4G 

JDM :  0K     0K     0K    

VNF Memory Information
----------------------------------------------------------------
Name                                             Actual Resident
------------------------------------------------ ------ --------
CORE-GNF                                         32.0G  17.2G   
EDGE-GNF                                         64.0G  17.5G   

root@JDM-SERVER0>
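The sizes are printed in human-readable form. When scripting against this output, a small parser (purely illustrative, not a JDM feature) makes the figures comparable, for example to compute how much of the CORE-GNF allocation is actually resident:

```python
def to_bytes(size: str) -> int:
    """Convert a human-readable size such as '32.0G', '720M', or '0K' to bytes."""
    units = {"K": 1, "M": 2, "G": 3, "T": 4}
    if size[-1] in units:
        return int(float(size[:-1]) * 1024 ** units[size[-1]])
    return int(size)

# CORE-GNF above: 17.2G resident out of 32.0G allocated
print(round(to_bytes("17.2G") / to_bytes("32.0G"), 2))  # 0.54
```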

Storage Resource Utilization


The show system storage command displays a concise and useful summary of all the
storage resources, used and free, showing statistics per system, JDM, and GNF:
root@JDM-SERVER0> show system storage 

Host Storage Information
--------------------------------------------------------------------------------
Device                             Size   Used   Available Use  Mount Point
---------------------------------- ------ ------ --------- ---- ----------------
/dev/mapper/jatp700--3--vg-root    491G   8.2G   458G      2%   /               
/dev/sda1                          720M   158M   525M      24%  /boot           
/dev/mapper/vm--primary--vg-vm--pr 1008G  12G    946G      2%   /vm-primary     

JDM Storage Information
--------------------------------------------------
Directories                                 Used
------------------------------------------- ------
/vm-primary/                                12G   
/var/third-party/                           76M   
/var/jdm-usr/                               12K   
/juniper                                    1.1G  

VNF Storage Information
---------------------------------------------------------
Directories                                 Size   Used

------------------------------------------- ------ ------
/vm-primary/CORE-GNF                        52.7G  5.6G  
/vm-primary/EDGE-GNF                        52.7G  5.6G  

About JDM Automation


By now it is a well-known fact that JDM provides a Junos-like CLI for
end user interaction. It is important to emphasize that it also provides
programmatic interfaces and full-fledged NETCONF/YANG machinery that allow
network administrators to fully automate Junos node slicing operations.
A full explanation of JDM automation is beyond the scope of this book; nevertheless,
it's useful to give some hints to trigger your curiosity about this very
interesting automation machinery.
To retrieve the JDM XML RPC APIs, the CLI provides exactly the same features as a
standard Junos CLI, that is, inline API help available directly from the CLI. For
instance, to find the RPC that retrieves information about all running GNFs, it's
possible to use the | display xml rpc output redirection option:
root@JDM-SERVER0> show virtual-network-functions | display xml rpc 
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/18.3R1/junos">
    <rpc>
        <get-virtual-network-functions>
        </get-virtual-network-functions>
    </rpc>
    <cli>
        <banner></banner>
    </cli>
</rpc-reply>

root@JDM-SERVER0>

And to retrieve the XML output from the CLI, the | display xml option is available as well:
root@JDM-SERVER0> show virtual-network-functions | display xml 
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/18.3R1/junos">
    <vnf-information xmlns="http://xml.juniper.net/junos/18.3R1/junos-jdmd" junos:style="brief">
        <vnf-instance>
            <id>2</id>
            <name>CORE-GNF</name>
            <state>Running</state>
            <liveliness>up</liveliness>
        </vnf-instance>
        <vnf-instance>
            <id>1</id>
            <name>EDGE-GNF</name>
            <state>Running</state>
            <liveliness>up</liveliness>
        </vnf-instance>
    </vnf-information>

    <cli>
        <banner></banner>
    </cli>
</rpc-reply>

root@JDM-SERVER0>
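Because the reply is structured XML, it is straightforward to consume programmatically. The following sketch, using only the Python standard library, parses the payload shown above into a name-to-state mapping (over NETCONF you would receive the same <vnf-information> element as the RPC reply):

```python
import xml.etree.ElementTree as ET

# Reply payload as returned by <get-virtual-network-functions> (abridged)
reply = """
<vnf-information xmlns="http://xml.juniper.net/junos/18.3R1/junos-jdmd">
    <vnf-instance><id>2</id><name>CORE-GNF</name><state>Running</state><liveliness>up</liveliness></vnf-instance>
    <vnf-instance><id>1</id><name>EDGE-GNF</name><state>Running</state><liveliness>up</liveliness></vnf-instance>
</vnf-information>
"""

ns = {"jdmd": "http://xml.juniper.net/junos/18.3R1/junos-jdmd"}
root = ET.fromstring(reply)
vnfs = {
    vnf.find("jdmd:name", ns).text: vnf.find("jdmd:state", ns).text
    for vnf in root.findall("jdmd:vnf-instance", ns)
}
print(vnfs)  # {'CORE-GNF': 'Running', 'EDGE-GNF': 'Running'}
```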

Last but not least, another useful feature offered by the JDM CLI is the operational
show system schema command, which retrieves the YANG models so they can be used
for automation purposes. For instance, the YANG schema for jdm-rpc-virtual-net-
work-functions is:

root@JDM-SERVER0> show system schema module jdm-rpc-virtual-network-functions 
/*
 * Copyright (c) 2019 Juniper Networks, Inc.
 * All rights reserved.
 */
 module jdm-rpc-virtual-network-functions {
   namespace "http://yang.juniper.net/jdm/rpc/virtual-network-functions";

   prefix virtual-network-functions;

   import junos-common-types {
     prefix jt;
   }

   organization "Juniper Networks, Inc.";

   contact "yang-support@juniper.net";

   description "Junos RPC YANG module for virtual-network-functions command(s)";

   revision 2018-01-01 {
     description "Junos: 18.3R1.9";
   }

   rpc get-virtual-network-functions {
     description "Show virtual network functions information";
     input {
       uses command-forwarding;
       leaf vnf-name {
         description "VNF name";
         type string {
           length "1 .. 256";
         }
       }
-----SNIP ------

And exactly as on standard Junos, the configuration to enable NETCONF, the natural
transport companion of YANG, is available under the "system services" stanza on
JDM as well:
system {
    services {
        ssh {
            root-login allow;
        }
        netconf {
            ssh;
            rfc-compliant;
        }
    }
}

MORE? Please refer to the Junos Node Slicing Feature Guide available at
https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.pdf,
which contains all the instructions to set up YANG-based Junos node slicing
orchestration by using an external SDN controller. To learn how to exploit the
NETCONF/YANG tools that Junos OS offers, a great place to start can be found at
http://yang.juniper.net.
Chapter 4

GNF AF Interfaces

Two GNF instances are now running inside the same MX physical chassis. They
are completely separated and behave as single-chassis routers, each of them
equipped with its own routing engines, line cards, and physical interfaces.
The next step is to perform the foundation of every network, that is... to
interconnect different nodes! Of course, as the two partitions can be considered
separate devices, the most obvious way to achieve this goal is to use a physical
cross-connection between ports installed on MPCs belonging to different GNFs.
But this approach has major drawbacks:
- If the connection must offer redundancy, more than one port is needed;
- Interconnecting different partitions wastes revenue ports, increasing the economic impact of Junos node slicing;
- The topology to interconnect different GNFs running inside the same MX chassis is a direct function of the maximum density achievable and the economics. If one more connection is needed for any reason, one more port per GNF, two optics, and a new cable will be needed;
- The number of necessary connections will have to be engineered based on the total expected throughput needed by the solution, becoming an additional dimensioning factor of the solution;
- And the internal chassis crossbar, which can interconnect line cards installed in different slots, is completely wasted.

The solution to all the aforementioned drawbacks is provided by the AF
Interfaces. As the name implies, these new interfaces are a logical abstraction of
the MX chassis internal fabric. Indeed, Junos OS has no way to handle the crossbar
directly, but it can easily manage it if the fabric is modeled as a physical Junos
Ethernet interface. This was the easiest way to create a very elegant and effective
interconnection solution in a Junos node slicing installation.
The AF Interfaces are configured on the B-SYS as a point-to-point connection
between two different GNFs. From a design perspective, they are the Junos node
slicing WAN or, in other words, core-facing interfaces. From a high-level logical
view, AF Interfaces can be depicted as shown in Figure 4.1.

Figure 4.1 AF Interface Logical Schema

AF Interfaces are numbered with a single digit in Junos, hence interfaces af0 to af9
can be configured. In this Day One book lab, a single interface, namely af0, will be
configured on each GNF to interconnect the EDGE-GNF and CORE-GNF instances.

NOTE The PFEs installed on the same line card are part of the same GNF.
Therefore, they communicate through the fabric as it happens in a standalone
chassis without the need of an AF Interface.

Let's configure the AF Interfaces and explore some more details about them once
they are in action. There are two main phases to correctly set up the connectivity
between two GNFs using AF Interfaces:

- Phase 1: Create the AF Interfaces on the B-SYS so that they are available to the desired GNFs.
- Phase 2: Configure each end of the AF Interfaces on both GNFs as they are plain-vanilla Junos interfaces.

AF Interface Creation on the B-SYS


To create an AF Interface so that it shows up on the corresponding GNF, some
commands must be configured using the B-SYS Junos CLI. As always, each of
them will be explained in detail in the following pages. Let's apply the following
statements to MX960-4:
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 1 af0 description "AF0 to CORE-GNF AF0" 
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf id 2 
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf af0 
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 2 fpcs 1 
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 2 af0 description "AF0 to EDGE-GNF AF0" 
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf id 1 
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf af0 
{master}[edit]
magno@MX960-4-RE0#

The final config looks like this:


chassis {
    network-slices {
        guest-network-functions {
            gnf 1 {
                af0 {
                    description "AF0 to CORE-GNF AF0";
                    peer-gnf id 2 af0;
                }
            }
            gnf 2 {
                af0 {
                    description "AF0 to EDGE-GNF AF0";
                    peer-gnf id 1 af0;
                }
            }
        } 
    }
}
The configuration works as follows:
- The local end of an AF Interface is configured under the corresponding gnf ID; for instance, the EDGE-GNF (gnf id = 1) local AF name is af0;
- Under the local af name stanza (af0 for EDGE-GNF), the peer-gnf command identifies the remote end of the AF Interface using the remote GNF ID; for instance, the peer-gnf id 2 command means the remote end of the GNF-ID 1 AF0 interface is located on GNF ID 2; in our case, ID 1 = EDGE-GNF, and ID 2 = CORE-GNF;
- As the last parameter, the remote AF Interface name must be explicitly provided;
- Bottom line, the set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf id 2 af0 command means, in readable human language, "GNF 1 AF0 interface connects to GNF 2 AF0 interface";
- The configuration must be mirrored on the other end's GNF, so the set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf id 1 af0 command implements the AF Interface reverse direction from GNF 2 to GNF 1;
- The optional description statement allows the B-SYS administrator to add a remark string describing the AF Interface; it is purely cosmetic and doesn't affect interface creation.
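Since the two ends must always mirror each other, this part of the configuration lends itself well to simple templating. As a purely hypothetical illustration (not a Juniper tool), a few lines of Python can emit both directions of an AF link from a single declaration:

```python
def af_link_config(gnf_a: int, gnf_b: int, af: str = "af0") -> list:
    """Generate the mirrored B-SYS 'set' commands for one AF link between two GNFs."""
    base = "set chassis network-slices guest-network-functions gnf {} {} peer-gnf {}"
    return [
        base.format(gnf_a, af, "id {}".format(gnf_b)),  # local end points at remote GNF...
        base.format(gnf_a, af, af),                     # ...and at the remote AF name
        base.format(gnf_b, af, "id {}".format(gnf_a)),  # mirrored reverse direction
        base.format(gnf_b, af, af),
    ]

for line in af_link_config(1, 2):
    print(line)
```

Running it with GNF IDs 1 and 2 reproduces the four peer-gnf statements applied above.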
Let’s now activate CLI timestamping on MX960-4 and both CORE and EDGE
GNFs.
EDGE-GNF:
{master}
magno@EDGE-GNF-re0> set cli timestamp 
Feb 21 00:27:44
CLI timestamp set to: %b %d %T

{master}
magno@EDGE-GNF-re0> show interfaces terse | match af0 
Feb 21 00:28:13

{master}
magno@EDGE-GNF-re0>

CORE-GNF:
{master}
magno@CORE-GNF-re0> set cli timestamp        
Feb 21 00:27:23
CLI timestamp set to: %b %d %T
{master}
magno@CORE-GNF-re0> show interfaces terse | match af0 
Feb 21 00:28:20

{master}
magno@CORE-GNF-re0>
As expected, no AF0 interface is present on either GNF. Let's also enable CLI timestamping on the B-SYS:

MX960-4:
{master}[edit]
magno@MX960-4-RE0# run set cli timestamp 
Feb 21 00:28:28
CLI timestamp set to: %b %d %T

{master}[edit]
magno@MX960-4-RE0#

Now, let’s commit the AF interfaces configuration on MX960-4 B-SYS:


{master}[edit]
magno@MX960-4-RE0# commit 
Feb 21 00:28:59
re0: 
configuration check succeeds
re1: 
configuration check succeeds
commit complete
re0: 
commit complete

{master}[edit]
magno@MX960-4-RE0#

The configuration was committed, so let’s check the GNFs again.


EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces terse | match af0    
Feb 21 00:29:10
af0                     up    up

{master}
magno@EDGE-GNF-re0>

CORE-GNF:
{master}
magno@CORE-GNF-re0> show interfaces terse | match af0    
Feb 21 00:29:16
af0                     up    up

{master}
magno@CORE-GNF-re0>

Amazing! Now the AF0 interface appears on both GNFs and it's in up/up state, so
Phase 1 can be considered completed!

NOTE The physical AF Interface is considered operationally up if at least one
peer PFE is reachable from all local MPCs. Reachability is monitored using the
same fabric liveness detection mechanism used in a standalone chassis.

Configure AF Interfaces on EDGE and CORE GNFs


Now that the AF0 interface is available to both GNFs, it's time to perform the
configuration tasks needed to interconnect them. The concept at the foundation of AF
Interfaces is very straightforward: they are modeled as Junos Ethernet interfaces.
Therefore, their settings must reflect those of a real Ethernet port.

NOTE With Junos 18.3, AF Interfaces offer feature parity with Junos 17.4.

Before starting to fiddle with AF Interfaces, it's very important to underline that
they are designed around the core-facing use case; therefore, they do not have
complete feature parity with a physical Ethernet interface. Let's examine the major caveats:
- H-QoS is not supported;
- Only two traffic priorities, low and high, are available on AF Interfaces;
- 802.1/802.1AD classification and rewrite are not supported;
- No bridge encapsulation and, generally speaking, no Layer 2 configurations are supported on AF Interfaces;
- VLAN and flexible VLAN tagging are supported, but VLAN manipulation operations are not (neither atomic vlan-tag operations nor VLAN ID lists/ranges);
- In-service software upgrade (ISSU) is not yet supported on AF Interfaces; it will be in a future Junos release;
- Edge service terminations are not supported on AF Interfaces.

NOTE This last point deserves a more elaborate explanation, as it may be seen as a
major flaw. It's important to recall that AF Interfaces were introduced to act as
simple and fast core-facing interfaces. Implementing unnecessary service termination
functionality on an interface that is designed to forward traffic as fast as possible
would be a bad engineering decision, and would go against the fundamental principle
of the whole Junos node slicing design: simplicity.

Lab AF Interface Configuration Guidelines


It's time to actually configure our link between the EDGE and CORE GNFs. As AF
Interfaces support most of the features of a real Ethernet interface, it's useful to
decide which subset of settings should be used in the exercise. Please bear in mind that
these choices are purely arbitrary, but another goal of this lab is to show you how
many features can be used over AF Interfaces. Nothing prevents you from choosing
your configuration style according to your network's preferences.

This book's AF Interfaces will be configured with the following main features:
- IFD encapsulation will be flexible-ethernet-services;
- Flexible VLAN tagging is supported on AF Interfaces and will be enabled;
- The AF IFD MTU will be set to 9216 bytes;
- Unit 72 with VLAN-ID 72 is the core-facing IFL between the two GNFs;
- The inet, inet6, iso, and MPLS families will be activated under unit 72, even though MPLS is not used;
- ISIS will be the IGP of choice: a single Level 2 domain with point-to-point interfaces, 100Gb reference bandwidth, and wide metrics;
- Loopback interfaces are configured to demonstrate that routing is working properly.
The IP addressing scheme used for the book's lab is listed in Table 4.1.

Table 4.1 IP Addressing Scheme

GNF    IFACE    FAMILY   ADDRESS
-----  -------  -------  -------------------------
EDGE   af0.72   inet     72.0.0.1/30
EDGE   af0.72   inet6    fec0::71.0.0.1/126
EDGE   lo0.0    inet     72.255.255.1/32
EDGE   lo0.0    inet6    fec0::72.255.255.1/128
EDGE   lo0.0    iso      49.0001.7272.0255.0001.00
CORE   af0.72   inet     72.0.0.2/30
CORE   af0.72   inet6    fec0::71.0.0.2/126
CORE   lo0.0    inet     72.255.255.2/32
CORE   lo0.0    inet6    fec0::72.255.255.2/128
CORE   lo0.0    iso      49.0001.7272.0255.0002.00

Okay, now let’s configure both ends and see if everything works as expected.
EDGE-GNF configuration:
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.1/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.1/126
set interfaces af0 unit 72 family mpls
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive

CORE-GNF configuration:
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.2/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.2/126
set interfaces af0 unit 72 family mpls
set interfaces lo0 unit 0 family inet address 72.255.255.2/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0002.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.2/128
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive
set protocols layer2-control nonstop-bridging

After committing the configurations, let’s see if routing is properly set up:
{master}
magno@EDGE-GNF-re0> show isis adjacency 
Interface             System         L State        Hold (secs) SNPA
af0.72                CORE-GNF-re0   2  Up                   20

{master}
magno@EDGE-GNF-re0>

Looks promising: the ISIS adjacency is up. Let's take a quick look at the inet and
inet6 routing tables to confirm the remote loopback addresses are correctly learned
through ISIS and placed into the inet.0 and inet6.0 tables:
{master}
magno@EDGE-GNF-re0> show route protocol isis         

inet.0: 8 destinations, 9 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

72.255.255.2/32    *[IS-IS/18] 00:37:26, metric 1
                    > to 72.0.0.2 via af0.72

iso.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)

inet6.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

fec0::48ff:ff02/128*[IS-IS/18] 00:37:26, metric 1
                    > to fe80::22a:9900:48ce:a142 via af0.72

{master}
magno@EDGE-GNF-re0>
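A quick note on the inet6 route shown above: fec0::48ff:ff02/128 is exactly fec0::72.255.255.2, with the dotted-quad last 32 bits rendered in hexadecimal (72.255.255.2 = 0x48ff:ff02). Python's standard library confirms the equivalence:

```python
import ipaddress

# fec0::72.255.255.2 (IPv4-embedded notation) compresses to fec0::48ff:ff02
core_lo0 = ipaddress.IPv6Address("fec0::72.255.255.2")
print(core_lo0)  # fec0::48ff:ff02

assert core_lo0 == ipaddress.IPv6Address("fec0::48ff:ff02")
```

The same mapping explains why the ping outputs that follow show fec0::48ff:ff01 and fec0::48ff:ff02 for the two loopbacks.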

Let’s test loopback to loopback inet and inet6 connectivity:


{master}
magno@EDGE-GNF-re0> ping 72.255.255.2 source 72.255.255.1 count 5 
PING 72.255.255.2 (72.255.255.2): 56 data bytes
64 bytes from 72.255.255.2: icmp_seq=0 ttl=64 time=1.888 ms
64 bytes from 72.255.255.2: icmp_seq=1 ttl=64 time=1.772 ms
64 bytes from 72.255.255.2: icmp_seq=2 ttl=64 time=1.781 ms
64 bytes from 72.255.255.2: icmp_seq=3 ttl=64 time=1.684 ms
64 bytes from 72.255.255.2: icmp_seq=4 ttl=64 time=1.682 ms

--- 72.255.255.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.682/1.761/1.888/0.076 ms

{master}
magno@EDGE-GNF-re0> ping inet6 fec0::72.255.255.2 source fec0::72.255.255.1 count 5   
PING6(56=40+8+8 bytes) fec0::48ff:ff01 --> fec0::48ff:ff02
16 bytes from fec0::48ff:ff02, icmp_seq=0 hlim=64 time=2.495 ms
16 bytes from fec0::48ff:ff02, icmp_seq=1 hlim=64 time=12.118 ms
16 bytes from fec0::48ff:ff02, icmp_seq=2 hlim=64 time=1.842 ms
16 bytes from fec0::48ff:ff02, icmp_seq=3 hlim=64 time=1.747 ms
16 bytes from fec0::48ff:ff02, icmp_seq=4 hlim=64 time=1.748 ms

--- fec0::72.255.255.2 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 1.747/3.990/12.118/4.074 ms

{master}
magno@EDGE-GNF-re0>

Now that connectivity is achieved, let's take a closer look at the AF Interfaces.
Let's examine af0 on the EDGE-GNF side and highlight what is different from a
physical Ethernet counterpart:
{master}
magno@EDGE-GNF-re0> show interfaces af0           
Physical interface: af0, Enabled, Physical link is Up
  Interface index: 156, SNMP ifIndex: 544
  Type: Ethernet, Link-level type: Flexible-Ethernet, MTU: 9216, Speed: 480000mbps
  Device flags   : Present Running
  Interface flags: Internal: 0x4000
  Link type      : Full-Duplex
  Current address: 00:90:69:a4:14:1a, Hardware address: 00:90:69:a4:14:1a
  Last flapped   : 2019-02-21 00:28:28 UTC (14:18:53 ago)
  Input rate     : 384 bps (0 pps)
  Output rate    : 408 bps (0 pps)
  Bandwidth      : 480 Gbps 
  Peer GNF id    : 2
  Peer GNF Forwarding element(FE) view : 
  FPC slot:FE num  FE Bandwidth(Gbps) Status      Transmit Packets         Transmit Bytes
       1:0                   240         Up                      0                      0
       1:1                   240         Up                      0                      0

  Residual Transmit Statistics : 
  Packets :                    0 Bytes :                    0

  Fabric Queue Statistics :    
  FPC slot:FE num    High priority(pkts)        Low priority(pkts) 
       1:0                            0                         0
       1:1                            0                         0
  FPC slot:FE num    High priority(bytes)      Low priority(bytes) 
       1:0                              0                        0
       1:1                              0                        0
  Residual Queue Statistics : 
      High priority(pkts)       Low priority(pkts) 
                       0                        0
      High priority(bytes)      Low priority(bytes) 
                        0                        0

Logical interface af0.72 (Index 334) (SNMP ifIndex 545)


Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.72 ] Encapsulation: ENET2
Input packets : 4836
Output packets: 4841
Protocol inet, MTU: 9194
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold cnt: 0, NH drop cnt: 0
Flags: Sendbcast-pkt-to-re
Addresses, Flags: Is-Preferred Is-Primary
Destination: 72.0.0.0/30, Local: 72.0.0.1, Broadcast: 72.0.0.3
Protocol iso, MTU: 9191
Flags: Is-Primary
Protocol inet6, MTU: 9194
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold cnt: 0, NH drop cnt: 0
Flags: Is-Primary
Addresses, Flags: Is-Preferred
Destination: fe80::/64, Local: fe80::22a:9900:48ce:a13e
Addresses, Flags: Is-Preferred Is-Primary
Destination: fec0::4700:0/126, Local: fec0::4700:1
Protocol mpls, MTU: 9182, Maximum labels: 3
Flags: Is-Primary
Protocol multiservice, MTU: Unlimited

Logical interface af0.32767 (Index 337) (SNMP ifIndex 546)


Flags: Up SNMP-Traps 0x4004000 VLAN-Tag [ 0x0000.0 ] Encapsulation: ENET2
Input packets : 0
Output packets: 0
Protocol multiservice, MTU: Unlimited
Flags: Is-Primary

{master}
magno@EDGE-GNF-re0>
As you can see, most of the information retrieved by the show interfaces af0 command
is exactly the same as what can be found on a real interface. There are differences,
though, that are worth further explanation, so let's dive into them:
Type: Ethernet, Link-level type: Flexible-Ethernet, MTU: 9216, Speed: 480000mbps

As explained, the AF Interface behaves like an Ethernet interface; the flexible-
ethernet-services encapsulation is configured, as well as an MTU of 9216 bytes. So
let's focus on the most relevant piece of information: Speed: 480000mbps. The
AF Interface speed is reported to Junos (kernel and rpd) as if it were a 480Gbps
Ethernet interface!

NOTE All bandwidth figures are expressed as full-duplex values.

But where is this number coming from? Let’s dig deeper by examining some other
lines:
Bandwidth      : 480 Gbps 
  Peer GNF id    : 2
  Peer GNF Forwarding element(FE) view : 
  FPC slot:FE num  FE Bandwidth(Gbps) Status      Transmit Packets         Transmit Bytes
       1:0                   240         Up                      0                      0
       1:1                   240         Up                      0                      0

As shown by this output, the local AF Interface knows from the B-SYS configuration
that the remote GNF is the one with ID = 2, and that it is composed of a line card
in slot 1 hosting two PFEs, each of them capable of pushing up to 240Gbps of
fabric bandwidth. Indeed, GNF ID = 2 is the CORE-GNF, and the slot 1 line card is
an MPC7e, which has two EA chips capable of pushing 240Gbps each. By summing up
each PFE's bandwidth capacity, the total AF available bandwidth towards GNF 2
is 480Gbps.
Hey, let's stop for a moment. We know that EDGE-GNF is also composed of a single
line card, but it is an MPC5eQ, different from an MPC7e. Indeed, it has two
PFEs based on the previous generation of the TRIO chipset, each capable of supporting
120Gbps towards the fabric. So, from the CORE-GNF AF Interface's perspective,
AF0 should have 120 + 120 = 240 Gbps forwarding capacity. Let's check if this
understanding is correct!
From CORE-GNF, execute the show interfaces af0 CLI command:
{master}
magno@CORE-GNF-re0> show interfaces af0 
Physical interface: af0, Enabled, Physical link is Up
  Interface index: 190, SNMP ifIndex: 578
  Type: Ethernet, Link-level type: Flexible-Ethernet, MTU: 9216, Speed: 240000mbps
  Device flags   : Present Running
  Interface flags: Internal: 0x4000
  Link type      : Full-Duplex
  Current address: 00:90:69:39:cc:1a, Hardware address: 00:90:69:39:cc:1a
  Last flapped   : 2019-02-21 00:28:28 UTC (13:28:55 ago)
  Input rate     : 344 bps (0 pps)
  Output rate    : 0 bps (0 pps)
  Bandwidth      : 240 Gbps 
  Peer GNF id    : 1
  Peer GNF Forwarding element(FE) view : 
  FPC slot:FE num  FE Bandwidth(Gbps) Status      Transmit Packets         Transmit Bytes
       6:0                   120         Up                      0                      0
       6:1                   120         Up                      0                      0

  Residual Transmit Statistics : 
  Packets :                    0 Bytes :                    0

  Fabric Queue Statistics :    
  FPC slot:FE num    High priority(pkts)        Low priority(pkts) 
       6:0                            0                         0
       6:1                            0                         0
  FPC slot:FE num    High priority(bytes)      Low priority(bytes) 
       6:0                              0                        0
       6:1                              0                        0
  Residual Queue Statistics : 
      High priority(pkts)       Low priority(pkts) 
                       0                        0
      High priority(bytes)      Low priority(bytes) 
                        0                        0

  Logical interface af0.72 (Index 334) (SNMP ifIndex 579)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.72 ]  Encapsulation: ENET2
    Input packets : 3771
    Output packets: 3772
    Protocol inet, MTU: 9194
    Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold cnt: 0, NH drop cnt: 0
      Flags: Sendbcast-pkt-to-re
      Addresses, Flags: Is-Preferred Is-Primary
        Destination: 72.0.0.0/30, Local: 72.0.0.2, Broadcast: 72.0.0.3
    Protocol iso, MTU: 9191
      Flags: Is-Primary
    Protocol inet6, MTU: 9194
    Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold cnt: 0, NH drop cnt: 0
      Flags: Is-Primary
      Addresses, Flags: Is-Preferred
        Destination: fe80::/64, Local: fe80::22a:9900:48ce:a142
      Addresses, Flags: Is-Preferred Is-Primary
        Destination: fec0::4700:0/126, Local: fec0::4700:2
    Protocol mpls, MTU: 9182, Maximum labels: 3
      Flags: Is-Primary
    Protocol multiservice, MTU: Unlimited

  Logical interface af0.32767 (Index 337) (SNMP ifIndex 580)
    Flags: Up SNMP-Traps 0x4004000 VLAN-Tag [ 0x0000.0 ]  Encapsulation: ENET2
    Input packets : 0
    Output packets: 0
    Protocol multiservice, MTU: Unlimited
      Flags: Is-Primary

{master}
magno@CORE-GNF-re0>

Bingo! GNF ID 1, that is, the EDGE-GNF, has one line card installed in slot 6,
made of two PFEs, each capable of 120 Gbps towards the fabric. Bottom line: the
local AF Interface bandwidth is simply the sum of the fabric bandwidth
available on all the line cards belonging to the peer GNF!
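The rule just stated can be sketched as a tiny calculation. The per-PFE figures below are taken from this lab's outputs (120 Gbps per MPC5e PFE; 240 Gbps per forwarding element, as shown later for the CORE-GNF card); treat the dictionary as an illustrative assumption, not a complete catalog of MPC types:

```python
# Illustrative sketch: the local AF Interface bandwidth is the sum of the
# fabric capacity of every PFE on the peer GNF's line cards.
PFE_FABRIC_GBPS = {
    "MPC5e": [120, 120],  # two PFEs, 120 Gbps each towards the fabric
    "MPC7e": [240, 240],  # two forwarding elements, 240 Gbps each (assumed)
}

def af_bandwidth_gbps(peer_line_cards):
    """Sum the fabric bandwidth of all PFEs across the peer GNF's cards."""
    return sum(sum(PFE_FABRIC_GBPS[card]) for card in peer_line_cards)

# EDGE-GNF owns a single MPC5e in slot 6, so CORE-GNF sees af0 at 240 Gbps:
print(af_bandwidth_gbps(["MPC5e"]))  # -> 240
```

With two MPC5e cards on the peer GNF, the same function returns 480, matching the Figure 4.2 scenario discussed next.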
To better explain how AF Interface BW works, assume we have a setup where one
GNF is composed of two MPC5e line cards, while the other is using a single
MPC7e. The sample setup is shown in Figure 4.2.

Figure 4.2 AF Interface Bandwidth During Normal Operation

Now let’s assume that GNF 1 MPC5e in slot 1 is not available for any reason, as in
Figure 4.3.

Figure 4.3 GNF 2 AF Interface Bandwidth Availability Towards GNF 1 is Halved by MPC5e Slot 1 Failure

NOTE As you may notice, it is not unusual for the AF Interface bandwidth to be
asymmetric between two GNFs. Since it depends on the sum of the PFE fabric
capacities of the installed line cards, asymmetry is perfectly normal whenever
those cards differ between the two peer GNFs. This is no different from what
happens in a single chassis installation, because the cards and the fabric are
exactly the same from a hardware perspective; the fact is just more evident
because of the intrinsic nature of the AF Interface.

Bandwidth is also a dynamic parameter that can change during the GNF's life. If
a remote GNF is composed of two MPC5e line cards, for instance, the local AF
Interface will account for 480Gbps of bandwidth (2 MPC5e = 4 PFEs, each with
120Gbps fabric capacity) during normal operations. But what happens if one
MPC5e disappears from the GNF for any reason? Until it comes back to service,
the very same AF Interface will account for half the bandwidth, as only two
PFEs out of four are available at that moment.
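A minimal sketch of this dynamic behavior, assuming the 120 Gbps per-MPC5e-PFE figure used throughout this lab:

```python
# The AF bandwidth tracks only the peer PFEs that are currently up.
PFE_GBPS = 120  # assumed fabric capacity of one MPC5e PFE

def af_bw_gbps(pfes_up):
    return pfes_up * PFE_GBPS

normal = af_bw_gbps(4)    # two MPC5e cards, two PFEs each
degraded = af_bw_gbps(2)  # one MPC5e lost: bandwidth halves
print(normal, degraded)   # -> 480 240
```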

NOTE Beware that if routing protocols are configured to automatically derive
interface costs from bandwidth, those costs may change during extraordinary
events that make MPCs unavailable.
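A hedged example of the note above: with an OSPF-style reference-bandwidth cost (the 1 Tbps reference value here is an assumption for illustration, not a Junos default), losing a peer MPC silently changes the AF Interface metric:

```python
# Auto-derived IGP cost = reference bandwidth / interface bandwidth (min 1).
REF_BW_GBPS = 1000  # assumed reference bandwidth (1 Tbps)

def ospf_cost(bw_gbps):
    return max(1, REF_BW_GBPS // bw_gbps)

# AF at full bandwidth vs. after losing one of two peer MPC5e cards:
print(ospf_cost(480), ospf_cost(240))  # -> 2 4
```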

Before starting the lab, let’s see how AF Interfaces can also provide a suitable
transport infrastructure to terminate services.

Advanced AF Interface Use Cases for Service Termination


AF Interfaces are a logical abstraction of the underlying MX Series fabric
infrastructure exposed to the Junos OS. Because they are engineered to address
the “fast and simple” core-facing interface use case, service termination is
not supported on the AF Interfaces.
While holding the “fast and simple” AF Interface implementation principle abso-
lutely true, there are no technical constraints that prevent you from using other
MX Series advanced features to provide the actual service anchor point, and then
use the AF Interfaces as a pure transport interface to simply deliver the service
originated traffic.
Consider a modern network design cornerstone, decoupling the underlying layer
from the service layer: why can’t it be applied to our scenario? AF Interfaces can
provide the underlay transport, while advanced features, such as service headend
termination, can provide the overlay service layer. So, let’s take a little detour from
our original lab path and examine this kind of use case more closely, because it of-
fers very important and practical service termination opportunities when node
slicing comes into play.
TRIO-based MX Series routers have been supporting service headend termination
for long time now, and the infrastructure has been enriched with new features at

each new release. So, let’s configure a simple example to show how service termi-
nation can be achieved on AF Interfaces through the use of the pseudowire sub-
scriber (PS) interfaces.
In this lab setup, three services must be delivered through AF Interfaces from
EDGE-GNF to CORE-GNF. To demonstrate the flexibility of the configuration,
each of them has a different nature, ranging from a plain Layer 2 bridge
domain to more sophisticated VPLS and EVPN services.
These services will be collected on EDGE-GNF on the line card interface connected
to the IXIA Test Center and then, by using the pseudowire subscriber interface ma-
chinery, the overlay (service layer) and the underlay (transport layer) will be
stitched to deliver the traffic over the other end of the AF Interface sitting on the
CORE-GNF. The setup schematics are illustrated in Figure 4.4.

Figure 4.4 Overlay and Underlay Stitching Through AF Interfaces

Each VLAN provides a different service, namely:


 VLAN 50 – EVPN VLAN-Bundle Service

 VLAN 100 – VPLS Service

 VLAN 200 – Plain Layer 2 Bridging service

On the IXIA Test Center twenty end clients are emulated for each service.
The configuration works in principle by using the Pseudowire Subscriber Service
(PS) interfaces to collect the traffic directly from the origin service instantiation
and tunnel it through a transparent local cross-connect with the local AF Interface
to deliver it to the other end of the AF Interface located on the remote GNF.

NOTE For the sake of brevity, the bridge domain PS interface configuration is
not shown in Figure 4.5, but it is exactly like the others, only with
VLAN-ID = 200.

Figure 4.5 Pseudowire Subscriber to AF Stitching Configuration Logic

To better understand the configuration, it is very important to understand how
pseudowire subscriber interfaces are modeled. They have a base unit 0, called the
transport unit, which provides the underlay connectivity vehicle to all the other
non-zero units, named service units. The transport unit encapsulation is always
“ethernet-ccc” as it must just transport Ethernet frames transparently. The service
units on the other end support a variety of encapsulations such as VLAN-VPLS,
VLAN-CCC, and VLAN-bridge to be suitable to become an interface of all the
desired service instances.

For instance, the EVPN service needs all its interfaces to be configured with
either ethernet-bridge or vlan-bridge encapsulation, otherwise the Junos commit
will fail. The VPLS instance, on the other hand, mandates that the access
interface encapsulation be ethernet-vpls or vlan-vpls; therefore, all these
encapsulations must be available on the pseudowire subscriber service units.
Let’s examine just the EVPN configuration, as all of the services are configured
in the same way concerning the pseudowire subscriber interface, and any other
differences are just related to service-specific configuration statements.
The EVPN instance contains two interfaces: the physical access interface
connected to the IXIA Test Center (interface xe-6/0/0.50) and the pseudowire
subscriber service interface (ps2 unit 50). By configuring both in the same
EVPN routing instance, the communication between them is automatically achieved
using the EVPN machinery. As visible from the ps2 configuration snippet, this
interface has just two units: the transport and the service. They are stitched
together simply because they belong to the same underlying physical PS
interface, so no further configuration is needed.
The final missing piece is the cross-connect between the pseudowire subscriber
transport unit and the AF Interface on the EDGE-GNF. It's achieved using a
locally switched pseudowire, configured by leveraging the Junos l2circuit
functionality.
Once the local cross-connect is up and running, all the frames coming from the
xe-6/0/0.50 access interface will be forwarded through the PS service logical
interface and, in turn, to the PS transport unit, and then locally switched to
the local end of the AF Interface.

NOTE The AF Interface configuration was not shown in the previous diagram
because of lack of space, so it is added here:
{master}[edit]
magno@EDGE-GNF-re0# show interfaces af0 unit 50
encapsulation vlan-ccc;
vlan-id 50;

{master}[edit]
magno@EDGE-GNF-re0#

As you can see, there’s nothing exciting here, just a plain-vanilla VLAN-CCC en-
capsulation with vlan-id = 50, as a normal L2circuit configuration requires.
Let's examine the EVPN service configuration, starting from the EDGE-GNF access
interface xe-6/0/0 all the way to the last hop, the CORE-GNF interface
xe-1/0/0. Just one service is explained, as the same principles apply to all
of the services.

The first component to be examined is the EDGE-GNF xe-6/0/0 access interface,
which is connected to one side of the IXIA traffic generator. Its configuration
is very simple:
{master}[edit]
magno@EDGE-GNF-re0# show interfaces xe-6/0/0 
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 50 {
    encapsulation vlan-bridge;
    vlan-id 50;
}
{master}[edit]
magno@EDGE-GNF-re0#

Business as usual here, with the physical interface (IFD in Junos jargon)
configured to provide the most flexible feature set available on the MX Series
routers through the use of encapsulation flexible-ethernet-services (which
allows you to mix Layer 2 and Layer 3 services on the same IFD), and
flexible-vlan-tagging, which provides the ability to use single and dual VLAN
tags on different units belonging to the same underlying IFD. Then there is a
single unit 50 with vlan-id 50 and encapsulation vlan-bridge. Remember, EVPN
access interfaces must be configured as plain bridging interfaces.
Okay, now let’s move towards the service configuration stanza:
{master}[edit]
magno@EDGE-GNF-re0# show routing-instances EVPN-VLAN-50 
instance-type evpn;
vlan-id 50;
interface xe-6/0/0.50;
interface ps2.50;
route-distinguisher 72.255.255.1:150;
vrf-target target:65203:50;
protocols {
       evpn;
}
{master}[edit]
magno@EDGE-GNF-re0#

The EVPN-VLAN-50 instance is, surprise surprise, an EVPN-type instance and contains
two interfaces: the xe-6/0/0.50 unit just examined above, and a pseudowire
service unit, namely ps2.50. Despite the local-only nature of this EVPN context
(we are just stitching interfaces on a single node, no other EVPN PEs are
present in the setup), the route-distinguisher and the route target must be
configured, otherwise the Junos commit will fail. The vlan-id statement enables
VLAN normalization, which is not strictly needed in this setup since all the
interfaces are configured with the same vlan-id, while protocols evpn enables
the EVPN machinery. Very simple and straightforward so far, right?

Now to the tricky part: the pseudowire service interface and the locally
switched cross-connection. First of all, to create the PS interfaces, the
pseudowire-service command must be configured under the chassis stanza.
Moreover, as these interfaces are anchored to logical tunnels, the latter must
be configured using the tunnel-services statement. The resulting configuration:

{master}[edit]
magno@EDGE-GNF-re0# show chassis 
--- SNIP ---
pseudowire-service {
    device-count 10;
}
fpc 6 {
    pic 0 {
        tunnel-services {
            bandwidth 40g;
        }
    }
    pic 1 {
        tunnel-services {
            bandwidth 40g;
        }
    }
}
--- SNIP ---
{master}[edit]
magno@EDGE-GNF-re0#

With these settings, up to ten PS interfaces, from ps0 to ps9, and two LT
interfaces, namely lt-6/0/0 and lt-6/1/0, are created. As the pic numbers
suggest, one logical tunnel is created for each PFE installed on the MPC5e-Q
line card, each providing up to 40Gbps of bandwidth.
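What the chassis stanza above produces can be sketched as follows; the interface names are the ones quoted in the text for this FPC/PIC layout:

```python
# device-count 10 yields ps0..ps9; one lt- interface is created per PIC
# configured with tunnel-services (here FPC 6, PICs 0 and 1).
device_count, fpc, pics = 10, 6, [0, 1]
ps_interfaces = [f"ps{i}" for i in range(device_count)]
lt_interfaces = [f"lt-{fpc}/{pic}/0" for pic in pics]
print(ps_interfaces[0], ps_interfaces[-1], lt_interfaces)
# -> ps0 ps9 ['lt-6/0/0', 'lt-6/1/0']
```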

NOTE There is no free lunch in networking: by allocating up to 40Gbps of
bandwidth to logical tunnels, we are allowing these interfaces to consume up to
that bandwidth from the PFE. On an MPC5e-Q each PFE can forward up to 120Gbps,
hence if we push 40Gbps over the pseudowire subscriber (or logical tunnel)
interfaces, the remaining bandwidth available to all the other services on the
PFE will be 80Gbps. It's also very important to underline that the
configuration itself doesn't pre-allocate any bandwidth, hence if no traffic is
using the tunnel services, the PFE bandwidth will not decrease. Bottom line:
only the actual tunneled traffic will consume data plane resources.
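The "no free lunch" note boils down to a simple model: the tunnel-services bandwidth is a cap, not a reservation, and only actual tunneled traffic subtracts from PFE capacity. A sketch using this chapter's MPC5e-Q figures:

```python
PFE_CAPACITY_GBPS = 120  # MPC5e-Q PFE forwarding capacity
TUNNEL_CAP_GBPS = 40     # configured tunnel-services bandwidth

def remaining_pfe_gbps(tunneled_gbps):
    """Bandwidth left for other services, given actual tunneled traffic."""
    used = min(tunneled_gbps, TUNNEL_CAP_GBPS)
    return PFE_CAPACITY_GBPS - used

# Idle tunnels cost nothing; a fully used tunnel leaves 80 Gbps for the rest:
print(remaining_pfe_gbps(0), remaining_pfe_gbps(40))  # -> 120 80
```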

NOTE Up to 7,000 PS interfaces can be configured on a single MX Series chassis
(or a single GNF).

Now that the PS interfaces are available, let’s configure them. For the EVPN ser-
vice use the PS2 interface:

{master}[edit]
magno@EDGE-GNF-re0# show interfaces ps2   
anchor-point {
    lt-6/0/0;
}
flexible-vlan-tagging;
mtu 9216;
encapsulation flexible-ethernet-services;
unit 0 {
    encapsulation ethernet-ccc;
}
unit 50 {
    encapsulation vlan-bridge;
    vlan-id 50;
}

{master}[edit]
magno@EDGE-GNF-re0#

As explained above, each PS interface must be anchored to an underlying LT,
thus PS2 is using lt-6/0/0 in this case. All the usual settings such as flexible-vlan-tagging,
encapsulation flexible-ethernet-services, and mtu are configured. Then the trans-
port unit (unit 0) is set with Ethernet cross-connect transparent encapsulation, and
unit 50 is configured to be used as an EVPN access interface, that is, with bridging
encapsulation. The vlan-id used is 50. And as we have already seen, the ps2.50 unit
indeed belongs to the EVPN-VLAN-50 instance.
We know that the local switching service will involve the pseudowire subscriber
transport unit and the AF Interface, so let's check the AF configuration before
examining the l2circuit:
{master}[edit]
magno@EDGE-GNF-re0# show interfaces af0 
Mar 06 11:00:25
flexible-vlan-tagging;
mtu 9224;
encapsulation flexible-ethernet-services;
unit 50 {
    encapsulation vlan-ccc;
    vlan-id 50;
}

{master}[edit]
magno@EDGE-GNF-re0#

Again, the first thing to notice is how simple the configuration is:
business-as-usual encapsulation and tagging, and a single unit configured with
a vlan-id and the right encapsulation for an l2circuit service. As you may have
already noticed, the mtu value for this interface is slightly higher than the
one observed on the PS side, 9224 bytes versus 9216. We'll come back to this in
a moment, as it's time to examine the last piece of the configuration, the
locally switched l2circuit:

{master}[edit]
magno@EDGE-GNF-re0# show protocols l2circuit 
local-switching {
    interface af0.50 {
        end-interface {
            interface ps2.0;
        }
        ignore-encapsulation-mismatch;
    }
}

{master}[edit]
magno@EDGE-GNF-re0#

This is maybe the trickiest piece of the setup: the local-switching statement
defines the l2circuit as a self-contained cross-connect service inside the
EDGE-GNF; bottom line, no remote end points are involved. With this
configuration, the AF0.50 and the PS2.0 interfaces are stitched together in a
point-to-point connection acting as a very simple pseudowire. Nevertheless, as
usual, the devil hides in the details, so it's paramount to consider two
mandatory conditions to successfully set up a so-called “Martini” circuit:
 Encapsulation on both ends must be the same;
 MTU on both ends must match, as traffic fragmentation/reassembly is not
available on pseudowires.
In our case we have a situation to fix: indeed, as described above, the
pseudowire subscriber transport unit is set as plain Ethernet-CCC, while AF0.50
is a VLAN-CCC interface. This means the encapsulations are different, and it
explains why the ignore-encapsulation-mismatch command is used: with this knob
configured, the l2circuit will come up regardless of the encapsulation used on
the two ends of the pseudowire. So, we should be done, shouldn't we? Not yet…
indeed, Junos derives the CCC MTU using a formula that takes the physical
interface MTU and subtracts the overhead added by the encapsulation used on the
pseudowire. It happens that if vlan-ccc is used, 8 more bytes (basically the
length of two VLAN tags) are subtracted from the interface MTU. As the
pseudowire subscriber interface is configured with an MTU of 9216 bytes and
uses Ethernet-CCC encapsulation because no VLAN-ID is configured, the
calculated MTU is still 9216 bytes. On the other hand, the AF Interface uses
VLAN-CCC encapsulation, hence 8 bytes are subtracted from the IFD MTU;
therefore, an IFD MTU of 9216 + 8 = 9224 bytes is configured so that both CCC
MTUs match at 9216. As promised, the MTU mystery is now solved!
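The CCC MTU derivation just described can be written down as a small sketch (the overhead table only models the two encapsulations used in this lab):

```python
# Junos subtracts the pseudowire encapsulation overhead from the IFD MTU:
# 8 bytes for vlan-ccc, none for ethernet-ccc. Both l2circuit ends must
# end up with the same CCC MTU.
ENCAP_OVERHEAD = {"ethernet-ccc": 0, "vlan-ccc": 8}

def ccc_mtu(ifd_mtu, encap):
    return ifd_mtu - ENCAP_OVERHEAD[encap]

ps_side = ccc_mtu(9216, "ethernet-ccc")  # PS transport unit, IFD MTU 9216
af_side = ccc_mtu(9224, "vlan-ccc")      # af0 IFD MTU deliberately set to 9224
print(ps_side, af_side, ps_side == af_side)  # -> 9216 9216 True
```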
At this point we have reached the last hop on the EDGE-GNF. Once the traffic
reaches the AF0.50 unit, it is delivered as plain Ethernet frames to the remote
end of the AF Interface, which happens to sit on the CORE-GNF. As you'll see,
this time the configuration is really simple and straightforward, as the
traffic is delivered over a basic VLAN that can easily be handled in a variety
of ways. In our case, to ease the lab setup, a bridge domain delivers
plain-vanilla Ethernet transport between the AF Interface and the xe-1/0/0
interface connected to the IXIA traffic generator.

Let’s start with the CORE-GNF AF Interface configuration:


{master}[edit]
magno@CORE-GNF-re1# show interfaces af0              
flexible-vlan-tagging;
mtu 9216;
encapsulation flexible-ethernet-services;
unit 50 {
    encapsulation vlan-bridge;
    vlan-id 50;
}

{master}[edit]
magno@CORE-GNF-re1#

Not too much to explain here that you haven’t already seen before. This time the
unit 50 is an encapsulation vlan-bridge interface because it must be configured in-
side a bridge domain. The xe-1/0/0 access interface configuration is very similar to
the one just examined:
{master}[edit]
magno@CORE-GNF-re1# show interfaces xe-1/0/0 
flexible-vlan-tagging;
encapsulation flexible-ethernet-services;
unit 50 {
    encapsulation vlan-bridge;
    vlan-id 50;
}

{master}[edit]
magno@CORE-GNF-re1#

The most noticeable difference is the MTU, left at the default Layer 2 value of
1518 bytes because this is a customer-facing interface.
Both interfaces are then inserted into a bridge-domain:
{master}[edit]
magno@CORE-GNF-re1# show bridge-domains 
VLAN-50 {
    vlan-id 50;
    interface xe-1/0/0.50;
    interface af0.50;
    routing-interface irb.50;
}

{master}[edit]
magno@CORE-GNF-re1#

Pretty straightforward: a plain VLAN 50 configuration. The integrated routing
and bridging (IRB) interface is not used in the lab tests but was inserted to
demonstrate how Layer 2 and Layer 3 services may be easily deployed side by
side. The final EVPN-based service configuration is illustrated in Figures 4.6
and 4.7.

Figure 4.6 EDGE-GNF Overlay - Underlay Configuration

Figure 4.7 CORE-GNF Service Delivery Configuration



Verification
Now that all the service termination over AF Interface pieces of the Junos node
slicing puzzle are in place, let’s quickly test the services to see if they work as
expected.
Ten hosts on each side of the setup are configured for each service (10 hosts x
2 sides x 3 services = 60 hosts) and will act as end users. They will generate
three bidirectional 10,000 pps traffic streams (one for each service) running
for 60 seconds. The expected result is 3 x 10,000 x 60 seconds = 1,800,000
packets sent in each direction (a total of 3,600,000 packets) that must be
correctly received on the remote end of each stream.
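The test-plan arithmetic, spelled out:

```python
# Three 10,000 pps streams (one per service) running for 60 seconds.
streams, pps, seconds = 3, 10_000, 60
per_direction = streams * pps * seconds
print(per_direction, 2 * per_direction)  # -> 1800000 3600000
```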
The test plan looks like Figure 4.8.

Figure 4.8 IXIA Test Center Scenario

Before starting the real test, let’s see if the control plane is working by first looking
at the l2 circuit connections. There are three, one for each service, and all of them
must be in the Up state, otherwise there would be traffic blackholing:
{master}
magno@EDGE-GNF-re0> show l2circuit connections 
--- SNIP ---
Local Switch af0.100 
    Interface                 Type  St     Time last up          # Up trans
    af0.100(vc 0)             loc   Up     Mar  5 15:59:16 2019           1
      Local interface: af0.100, Status: Up, Encapsulation: VLAN
      Local interface: ps1.0, Status: Up, Encapsulation: ETHERNET
 Local Switch af0.200 
    Interface                 Type  St     Time last up          # Up trans
    af0.200(vc 0)             loc   Up     Mar  5 16:01:49 2019           1
      Local interface: af0.200, Status: Up, Encapsulation: VLAN
      Local interface: ps0.0, Status: Up, Encapsulation: VLAN
 Local Switch af0.50 
    Interface                 Type  St     Time last up          # Up trans
    af0.50(vc 0)              loc   Up     Mar  6 00:41:16 2019           1
      Local interface: af0.50, Status: Up, Encapsulation: VLAN
      Local interface: ps2.0, Status: Up, Encapsulation: ETHERNET

{master}
magno@EDGE-GNF-re0>

All three l2 circuits local connections are Up and ready to receive traffic.
Before starting the traffic, the Address Resolution Protocol (ARP) resolution
process was triggered on the IXIA traffic generator, so let's take a look to
see if all the MAC addresses are correctly learned in the different service
instances:
{master}
magno@EDGE-GNF-re0> show vpls mac-table count 

20 MAC address learned in routing instance VPLS-VLAN100 bridge domain __VPLS-VLAN100__

  MAC address count per interface within routing instance:
    Logical interface        MAC count
    ps1.100:100                     10
    xe-6/0/0.100:100                10

  MAC address count per learn VLAN within routing instance:
    Learn VLAN ID            MAC count
              100                   20

0 MAC address learned in routing instance __juniper_private1__ bridge domain ____juniper_
private1____

{master}
magno@EDGE-GNF-re0> show evpn mac-table count    

21 MAC address learned in routing instance EVPN-VLAN-50 bridge domain __EVPN-VLAN-50__

  MAC address count per interface within routing instance:
    Logical interface        MAC count
    ps2.50:50                       11
    xe-6/0/0.50:50                  10

  MAC address count per learn VLAN within routing instance:
    Learn VLAN ID            MAC count
               50                   21

{master}
magno@EDGE-GNF-re0> show bridge mac-table bridge-domain VLAN-200 count 

20 MAC address learned in routing instance default-switch bridge domain VLAN-200

  MAC address count per interface within routing instance:
    Logical interface        MAC count
    ps0.200:200                     10
    xe-6/0/0.200:200                10

  MAC address count per learn VLAN within routing instance:
    Learn VLAN ID            MAC count
              200                   20

{master}
magno@EDGE-GNF-re0>

Again, everything is fine, and all the MAC addresses are present in the
relevant MAC tables: there are 20 in each, as 10 hosts are simulated, per
service, on each access interface. But wait a second! The EVPN MAC table count
output reads 21 MAC addresses! It's easily explainable: raise your hand if you
can remember the unused IRB interface configured inside the VLAN-50 bridge
domain on the CORE-GNF! Indeed, 11 MAC addresses (the 10 hosts plus the IRB
MAC) are learned through the pseudowire subscriber interface pointing to the
remote GNF! So, now that there's confidence that everything should work, it's
time to start the traffic.
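A quick sanity check of the MAC counts observed above:

```python
# 10 emulated hosts per side per service; the EVPN table holds one extra MAC
# learned over ps2.50: the irb.50 interface on the CORE-GNF.
hosts_per_side = 10
bridge_or_vpls = 2 * hosts_per_side  # local + remote hosts
evpn = 2 * hosts_per_side + 1        # + the CORE-GNF IRB MAC
print(bridge_or_vpls, evpn)  # -> 20 21
```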
During the traffic run, we’ll examine all the involved interfaces, from the xe-6/0/0
access interface on EDGE-GNF to the CORE-GNF xe-1/0/0. As traffic is symmet-
rical, only one direction is shown (the counters should show the same values on
both input and output directions). The traffic is transmitted at 30,000 pps on all
the involved interfaces.
EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0 | match rate    
Mar 06 13:52:39
  Input rate     : 121920120 bps (30000 pps)
  Output rate    : 121916040 bps (29998 pps)

{master}
magno@EDGE-GNF-re0> show interfaces ps0 | match rate         
  Input rate     : 40640056 bps (10000 pps)
  Output rate    : 40640056 bps (10000 pps)

{master}
magno@EDGE-GNF-re0> show interfaces ps1 | match rate    
  Input rate     : 40640120 bps (10000 pps)
  Output rate    : 40640120 bps (10000 pps)

{master}
magno@EDGE-GNF-re0> show interfaces ps2 | match rate    
  Input rate     : 40638000 bps (9999 pps)
  Output rate    : 40640040 bps (10000 pps)

{master}
magno@EDGE-GNF-re0> show interfaces af0 | match rate    
  Input rate     : 121918016 bps (29999 pps)
  Output rate    : 121920048 bps (30000 pps)

{master}
magno@EDGE-GNF-re0>

You can see here that each PS interface carries 10,000 pps in both directions,
and the aggregate value of 30,000 pps is seen in input and output on both the
xe-6/0/0 and af0 interfaces. To confirm that the traffic is flowing as
expected, let's also check CORE-GNF:
{master}
magno@CORE-GNF-re1> show interfaces af0 | match rate 
Mar 06 13:53:12
  Input rate     : 121933480 bps (30000 pps)
  Output rate    : 121929312 bps (30000 pps)

{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0 | match rate 
  Input rate     : 121925768 bps (30000 pps)
  Output rate    : 121923736 bps (30000 pps)

{master}
magno@CORE-GNF-re1>

These results look pretty promising. Indeed, on both af0 and xe-1/0/0 interfaces
you can see that 30,000 packets per second are forwarded in both input and out-
put directions. Let’s wait until the traffic stops and check the aggregate counters to
confirm that all the 1,800,000 packets (in each direction) could make their end-to-
end journey. From the EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0  
Physical interface: xe-6/0/0, Enabled, Physical link is Up
  Interface index: 171, SNMP ifIndex: 538
  Link-level type: Flexible-Ethernet, MTU: 1522, MRU: 1530, LAN-
PHY mode, Speed: 10Gbps, BPDU Error: None, Loop Detect PDU Error: None, MAC-
REWRITE Error: None, Loopback: None,
  --- SNIP ---
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.50 ]  Encapsulation: VLAN-Bridge
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 1522
      Flags: Is-Primary

  Logical interface xe-6/0/0.100 (Index 359) (SNMP ifIndex 548)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.100 ]  Encapsulation: VLAN-VPLS
    Input packets : 600000
    Output packets: 600000
    Protocol vpls, MTU: 1522

  Logical interface xe-6/0/0.200 (Index 349) (SNMP ifIndex 551)
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.200 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 1522
      Flags: Is-Primary
--- SNIP ---

{master}
magno@EDGE-GNF-re0>

Each unit configured on the xe-6/0/0 access interface has sent and received
600,000 packets, which sums to the expected 1,800,000 in each direction.
And now the pseudowire subscriber (PS) interfaces:
{master}
magno@EDGE-GNF-re0> show interfaces ps0    
Physical interface: ps0, Enabled, Physical link is Up
  --- SNIP ---
  Logical interface ps0.0 (Index 338) (SNMP ifIndex 564)
    Flags: Up Point-To-Point 0x4004000 VLAN-Tag [ 0x8100.200 ]  Encapsulation: VLAN-CCC
    Input packets : 600000
    Output packets: 600000
    Protocol ccc, MTU: 9216

  Logical interface ps0.200 (Index 330) (SNMP ifIndex 593)
    Flags: Up 0x20004000 VLAN-Tag [ 0x8100.200 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 9216
---- SNIP ---

{master}
magno@EDGE-GNF-re0> show interfaces ps1    
Physical interface: ps1, Enabled, Physical link is Up
--- SNIP ---
  Logical interface ps1.0 (Index 341) (SNMP ifIndex 578)
    Flags: Up Point-To-Point 0x4004000 Encapsulation: Ethernet-CCC
    Input packets : 600000
    Output packets: 600000
    Protocol ccc, MTU: 9216

  Logical interface ps1.100 (Index 352) (SNMP ifIndex 594)
    Flags: Up 0x4000 VLAN-Tag [ 0x8100.100 ]  Encapsulation: VLAN-VPLS
    Input packets : 600000
    Output packets: 600000
    Protocol vpls, MTU: 9216
      Flags: Is-Primary

  --- SNIP ---

{master}
magno@EDGE-GNF-re0> show interfaces ps2    
Physical interface: ps2, Enabled, Physical link is Up
  --- SNIP ---
  Logical interface ps2.0 (Index 355) (SNMP ifIndex 587)
    Flags: Up Point-To-Point 0x4004000 Encapsulation: Ethernet-CCC
    Input packets : 600000
    Output packets: 600000
    Protocol ccc, MTU: 9216

  Logical interface ps2.50 (Index 331) (SNMP ifIndex 590)
    Flags: Up 0x20004000 VLAN-Tag [ 0x8100.50 ]  Encapsulation: VLAN-Bridge
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 9216

  --- SNIP ---
{master}
magno@EDGE-GNF-re0>

The PS interface counters look good, too! The transport and service units on
each pseudowire subscriber interface accounted for the magic number of 600,000
packets in each direction. So, the last step is a double-check of the last leg,
the AF Interface:

{master}
magno@EDGE-GNF-re0> show interfaces af0                 
Physical interface: af0, Enabled, Physical link is Up
  ---- SNIP ----
  FPC slot:FE num  FE Bandwidth(Gbps) Status      Transmit Packets         Transmit Bytes
       1:0                   240         Up                 901256              457838048
       1:1                   240         Up                 898744              456561952
  Residual Transmit Statistics : 
  Packets :                    0 Bytes :                    0

  Fabric Queue Statistics :    
  FPC slot:FE num    High priority(pkts)        Low priority(pkts) 
       1:0                            0                    901256
       1:1                            0                    898744
  FPC slot:FE num    High priority(bytes)      Low priority(bytes) 
       1:0                              0                457838048
       1:1                              0                456561952
  Residual Queue Statistics : 
      High priority(pkts)       Low priority(pkts) 
                       0                        0
      High priority(bytes)      Low priority(bytes) 
                        0                        0

  Logical interface af0.50 (Index 356) (SNMP ifIndex 592)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.50 ]  Encapsulation: VLAN-CCC
    Input packets : 600000
    Output packets: 600000
    Protocol ccc, MTU: 9224

  Logical interface af0.100 (Index 333) (SNMP ifIndex 549)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.100 ]  Encapsulation: VLAN-CCC
    Input packets : 600000
    Output packets: 600000
    Protocol ccc, MTU: 9224
      Flags: Is-Primary

  Logical interface af0.200 (Index 351) (SNMP ifIndex 550)
    Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.200 ]  Encapsulation: VLAN-CCC
    Input packets : 600000
    Output packets: 600000
    Protocol ccc, MTU: 9216
      Flags: User-MTU

  --- SNIP ---
{master}
magno@EDGE-GNF-re0>

Nothing strange on this interface, either. The 600,000 magic number shows up
again on all the AF units, so we're absolutely on track.
You may have spotted some interesting output from the show interfaces af0
command, in the Fabric Queue Statistics section. As explained, the AF Interface
is a logical abstraction of the underlying fabric, but it is also a regular
Junos interface, so the load balancing algorithm is applied to spray packets
among the different PFEs. It is worth noting how well the load balancing
algorithm works: the total traffic is shared with a 50.07% / 49.93% ratio
between PFE 0 and PFE 1, which is an amazing result, especially considering the
relatively low number of flows used during the test.
A quick check on the CORE-GNF interfaces confirms the good results already
observed:
{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0  
Physical interface: xe-1/0/0, Enabled, Physical link is Up
  --- SNIP ---
  Logical interface xe-1/0/0.50 (Index 346) (SNMP ifIndex 931)
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.50 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 1522

  Logical interface xe-1/0/0.100 (Index 389) (SNMP ifIndex 613)
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.100 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 1522

  Logical interface xe-1/0/0.200 (Index 390) (SNMP ifIndex 615)
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.200 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 1522

---- SNIP ----
{master}
magno@CORE-GNF-re1> show interfaces af0 
Physical interface: af0, Enabled, Physical link is Up
  ---- SNIP ----
  Peer GNF Forwarding element(FE) view : 
  FPC slot:FE num  FE Bandwidth(Gbps) Status      Transmit Packets         Transmit Bytes
       6:0                   120         Up                 900714              457561802
       6:1                   120         Up                 899288              456838304

  Residual Transmit Statistics : 
  Packets :                    0 Bytes :                    0

  Fabric Queue Statistics :    
  FPC slot:FE num    High priority(pkts)        Low priority(pkts) 
       6:0                            0                    900714
       6:1                            0                    899288
  FPC slot:FE num    High priority(bytes)      Low priority(bytes) 
       6:0                              0                457561802
       6:1                              0                456838304
  Residual Queue Statistics : 
      High priority(pkts)       Low priority(pkts) 
                       0                        0
      High priority(bytes)      Low priority(bytes) 
                        0                        0

  Logical interface af0.50 (Index 345) (SNMP ifIndex 930)
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.50 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 9216

  Logical interface af0.100 (Index 341) (SNMP ifIndex 608)
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.100 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 9216
      Flags: Is-Primary

  Logical interface af0.200 (Index 342) (SNMP ifIndex 607)
    Flags: Up SNMP-Traps 0x20004000 VLAN-Tag [ 0x8100.200 ]  Encapsulation: VLAN-Bridge
    Tenant Name: (null)
    Input packets : 600000
    Output packets: 600000
    Protocol bridge, MTU: 9216

---- SNIP ----
{master}
magno@CORE-GNF-re1>

These CORE-GNF xe-1/0/0 and af0 interfaces look good, too. All 600,000 pack-
ets were forwarded in both directions over all three configured units. Also in this
case, it's interesting to notice how well the load balancing algorithm worked on
the AF Interface, where the split ratio between the two PFEs was 50.0039% /
49.9961%! Impressive!
So far all the traffic generated by the IXIA Test Center was correctly received and
forwarded, leaving the counters on the traffic generator itself to be verified as
shown in Figure 4.9.

Figure 4.9 IXIA Test Center Frames TX / RX report

And bingo! The IXIA Test Center certifies that all the traffic was sent and received
correctly, confirming that the test was successful!
This use case is a perfect fit for scenarios where the best service delivery point is
located over the abstract fabric interface. Even though AF Interfaces are consid-
ered simple, fast, core-facing interfaces, it is still possible to leverage them as the
underlay transport for traffic originated by services terminated over the dedicated
subscriber service infrastructure. In this way, it is perfectly possible to use the
bandwidth offered by the AF Interfaces without wasting revenue ports. At the
same time, these high-speed fabric-based interfaces stay fast, reliable, and simple,
fulfilling their main purpose: providing core connections between different GNFs
in the most flexible and efficient way possible.
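For reference, the AF logical units exercised in this chapter are configured like any other Junos logical interface. A minimal sketch of one VLAN-CCC unit, consistent with the af0.100 output above, might look like the following (the physical-level tagging statements are an assumption and may vary by release; the full configurations are in the Appendix):

```
interfaces {
    af0 {
        flexible-vlan-tagging;                     /* assumption */
        encapsulation flexible-ethernet-services;  /* assumption */
        unit 100 {
            encapsulation vlan-ccc;
            vlan-id 100;
        }
    }
}
```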
Chapter 5

Lab It! EDGE and CORE Testing

After the discussion of advanced service termination with AF Interfaces in Chapter
4, it's finally time to get the lab set up with an edge and a core function leveraging
Junos node slicing.

Lab Setup
First of all, let's take a look at Figure 5.1 for a logical view of the lab so far, where
basic configurations, such as the AF Interface point-to-point connection, loopback
addresses, and ISIS routing, are already up and running on both GNFs.

Figure 5.1 Final Logical Lab Setup Schema



The two GNFs have the same features, scaling, look, and feel as a standalone MX
Series router, therefore anything that can run on a single device can run on this
setup! But for our purposes, one EDGE and one CORE application will be config-
ured on the two GNFs, and in particular:
 The EDGE-GNF will provide very basic broadband edge functionality for
64,000 subscribers.
 The CORE-GNF will act as a BGP peering router to provide Internet access to
the EDGE BNG.
Leveraging the IXIA Test Center, the lab configuration will showcase:
 A broadband edge C-VLAN access model with DHCPv4 relay to an external
DHCP server.
 The EDGE-GNF advertising a subscriber management pool (100.100.0.0/16)
to the CORE-GNF which, in turn, advertises it to all the eBGP peers.
 The DHCP server emulated on the IXIA port connected to the CORE-GNF;
this demonstrates that a control-plane intensive task, such as DHCP relay, can
be provided over AF Interfaces.
 Broadband edge services configured in a very simple and straightforward way:
CoS, RADIUS authentication, and security services are not activated on the
BNG, even though they are fully supported on the GNFs.
 The CORE-GNF configured to act as an AS 65203 BGP peering router where
100 external sessions are emulated by the IXIA Test Center; a total of 100,000
routes are advertised by the external BGP peers, while the BBE subscriber
address pool is advertised by the CORE-GNF.
 The EDGE-GNF and CORE-GNF sharing an internal BGP peering session
where the former advertises the subscriber management address pools, while
the latter sends a single default route to provide Internet reachability.
 And the iBGP peering session between the EDGE and CORE GNFs configured
over the loopback addresses, reciprocally learned through an ISIS IGP adjacency
over the AF Interface between them.
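As a reference, the underlay just described could be sketched on the EDGE-GNF side roughly as follows (the /30 subnet mask and the VLAN ID on the AF unit are assumptions for illustration; the exact configurations are in the Appendix):

```
interfaces {
    af0 {
        unit 72 {
            vlan-id 72;                 /* assumption: p2p VLAN towards CORE-GNF */
            family inet {
                address 72.0.0.1/30;    /* assumption: /30 p2p subnet */
            }
            family iso;
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 72.255.255.1/32;
            }
            family iso;
        }
    }
}
protocols {
    isis {
        interface af0.72;
        interface lo0.0 {
            passive;
        }
    }
    bgp {
        group iBGP {
            type internal;
            local-address 72.255.255.1;
            neighbor 72.255.255.2;      /* CORE-GNF loopback */
        }
    }
}
```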
Once the setup is configured, it looks like Figure 5.2, where everything is working
just as expected: subscribers are all connected to the EDGE-GNF and routes are
learned on the CORE-GNF.

Figure 5.2 Lab Setup Running BBE and BGP Peering Scenarios

NOTE Full configurations for both GNFs, BASE-SYS, and IXIA Test Center are
provided in the Appendix.

Some screenshots are provided inline within the text to document the results
achieved. Let's check the routing infrastructure first, starting with the ISIS adja-
cency between the EDGE and CORE GNFs, and verifying that the loopback
addresses are correctly learned:
{master}
magno@EDGE-GNF-re0> show isis adjacency 
Interface             System         L State        Hold (secs) SNPA
af0.72                CORE-GNF-re1   2  Up                   19

{master}
magno@EDGE-GNF-re0> show route protocol isis table inet.0 

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

72.255.255.2/32    *[IS-IS/18] 2d 15:57:52, metric 1
                    > to 72.0.0.2 via af0.72

{master}
magno@EDGE-GNF-re0> ping 72.255.255.2 source 72.255.255.1 count 1                            
PING 72.255.255.2 (72.255.255.2): 56 data bytes
64 bytes from 72.255.255.2: icmp_seq=0 ttl=64 time=1.964 ms

--- 72.255.255.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.964/1.964/1.964/0.000 ms

{master}
magno@EDGE-GNF-re0>

Loopback-to-loopback reachability is working as expected; let's check the status
of the internal BGP session between the EDGE and CORE GNFs and the routes
they exchange:
{master}
magno@EDGE-GNF-re0> show bgp summary 
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0               
                       1          1          0          0          0          0
inet6.0              
                       0          0          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/
Received/Accepted/Damped...
72.255.255.2          65203         44         45       0       3       18:51 Establ
  inet.0: 1/1/1/0
  inet6.0: 0/0/0/0

{master}
magno@EDGE-GNF-re0> show route receive-protocol bgp 72.255.255.2 table inet.0 

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 0.0.0.0/0               72.255.255.2                 100        65400 I

{master}
magno@EDGE-GNF-re0> show route advertising-protocol bgp 72.255.255.2 table inet.0 

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 100.100.0.0/16          Self                         100        I

{master}
magno@EDGE-GNF-re0> show route protocol bgp table inet.0 

inet.0: 13 destinations, 14 routes (13 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[BGP/170] 00:22:04, localpref 100, from 72.255.255.2
                      AS path: 65400 I, validation-state: unverified
                    > to 72.0.0.2 via af0.72

{master}
magno@EDGE-GNF-re0>

{master}
magno@CORE-GNF-re1> show route protocol bgp table inet.0 100.100.0.0/16 exact 

inet.0: 100217 destinations, 100218 routes (100217 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

100.100.0.0/16     *[BGP/170] 00:25:55, localpref 100, from 72.255.255.1
                      AS path: I, validation-state: unverified
                    > to 72.0.0.1 via af0.72

{master}
magno@CORE-GNF-re1>

Everything is just as expected. The iBGP session is established between the two
GNFs; the default route is advertised by the CORE-GNF and active on the EDGE-
GNF, while the broadband edge subscriber address pool is advertised in the oppo-
site direction and active in the CORE-GNF routing table. IP reachability between
the simulated Internet and the edge is thus achieved, and the AF Interface is pro-
viding the underlay connection between the two separated functions, as desired.

NOTE You may have noticed that RE0 holds the mastership on EDGE-GNF
while on the CORE it is held by RE1. This was done on purpose to further under-
line that there are no technical issues or constraints related to the Routing Engine
mastership status.
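To double-check which Routing Engine holds mastership on each GNF, the standard chassis command applies (output omitted here):

```
{master}
magno@EDGE-GNF-re0> show chassis routing-engine | match "Current state"
```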

Now let’s focus our attention on the core side, and check if all the 100 eBGP peer-
ing sessions are established and all the 100,000 expected routes are active in the
CORE-GNF routing table:
{master}
magno@CORE-GNF-re1> show bgp summary | match Establ | except 65203 | count
Count: 100 lines

{master}
magno@CORE-GNF-re1> show route summary | match BGP:   
                 BGP: 100001 routes, 100001 active

{master}
magno@CORE-GNF-re1> show route advertising-protocol bgp 99.99.99.2 extensive 

inet.0: 100216 destinations, 100217 routes (100216 active, 0 holddown, 0 hidden)
* 100.100.0.0/16 (1 entry, 1 announced)
 BGP group eBGP type External
     Nexthop: Self
     AS path: [65203] I 

{master}
magno@CORE-GNF-re1> show route advertising-protocol bgp 99.99.99.6 extensive    

inet.0: 100216 destinations, 100217 routes (100216 active, 0 holddown, 0 hidden)
* 100.100.0.0/16 (1 entry, 1 announced)
 BGP group eBGP type External
     Nexthop: Self
     AS path: [65203] I

{master}
magno@CORE-GNF-re1>

Perfect! It’s just as desired! One hundred established BGP sessions show up and all
the 100,001 BGP routes are learned and active in the CORE-GNF RIB.

NOTE Don't forget that the CORE-GNF is also receiving one iBGP route from
the EDGE-GNF, hence the total of learned BGP routes is 100,001, as the command
output doesn't discriminate between external (100,000) and internal (1) BGP
routes!

And the CORE-GNF is advertising, as expected, the subscriber address pool re-
ceived from the EDGE-GNF. Now that we are confident the routing infrastructure
is in place, let's check the subscriber management services status on the EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show subscribers summary 

Subscribers by State
   Active: 128000
   Total: 128000

Subscribers by Client Type
   DHCP: 64000
   VLAN: 64000
   Total: 128000

{master}
magno@EDGE-GNF-re0> show dhcp relay binding | match BOUND | count 
Count: 64000 lines

{master}
magno@EDGE-GNF-re0> show route protocol access-internal | match Access-internal | count 
Count: 64000 lines

{master}
magno@EDGE-GNF-re0> show route 100.100.34.33 

inet.0: 64013 destinations, 64014 routes (64013 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

100.100.34.33/32   *[Access-internal/12] 00:04:37
                      Private unicast

{master}
magno@EDGE-GNF-re0> ping 100.100.34.33 count 1 
PING 100.100.34.33 (100.100.34.33): 56 data bytes
64 bytes from 100.100.34.33: icmp_seq=0 ttl=64 time=1.769 ms

--- 100.100.34.33 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.769/1.769/1.769/0.000 ms

{master}
magno@EDGE-GNF-re0>

NOTE With the C-VLAN access model with auto-sense VLANs, one subscriber is
accounted for each dynamically created Layer 2 VLAN interface and one for the
actual DHCP subscriber, which is why 128,000 subscribers are reported for
64,000 DHCP clients.

It must be the lab's lucky day, because everything looks just fine: all the BBE sub-
scribers are connected to the EDGE-GNF, the DHCP relay bindings are all in the
"BOUND" state, all the access-internal routes are active in the RIB, and, checking
a random subscriber, BNG-to-subscriber connectivity is up and running.

So, as a final test, let’s configure some traffic streams on the IXIA Test Center and
double-check that the connectivity between the subscribers and emulated Internet
is working properly.

NOTE Only 10,000 subscribers' bidirectional communication to 10,000 external
prefixes is configured in the traffic section, to cope with some IXIA Test Center
scaling limitations, which didn't allow us to create a full mesh between 64,000
subscribers and 100,000 BGP destinations.

The IXIA Test Center is configured to send 512-byte packets at 100,000 pps for
60 seconds in both directions; let's check the final output in Figure 5.3.

Figure 5.3 60 seconds of TX / RX Frames

The tester has correctly sent and received 6,000,000 frames (100,000 pps for 60
seconds) so the end-to-end connectivity is just fine. Indeed, if we double-check on
the GNFs themselves, other confirmations are easily spotted.
CORE-GNF:
{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0 | match rate    
Mar 04 15:56:04
  Input rate     : 392011008 bps (99994 pps)
  Output rate    : 388810904 bps (99997 pps)

{master}
magno@CORE-GNF-re1> show interfaces af0 |match rate          
Mar 04 15:56:07
  Input rate     : 388817224 bps (100004 pps)
  Output rate    : 392013128 bps (100003 pps)

{master}
magno@CORE-GNF-re1>

As shown by this output, 100,000 pps are received from the simulated Internet
hosts on the IXIA and forwarded through the AF Interface. The fact that 100,000
pps are also present in the reverse direction is already a good indication that
everything is working just fine.
Let’s continue our packet tracking activity on the EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces af0 | match rate         
Mar 04 15:56:11
  Input rate     : 392000776 bps (100000 pps)
  Output rate    : 388800792 bps (100000 pps)

{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0 | match rate    
Mar 04 15:56:13
  Input rate     : 388800384 bps (100000 pps)
  Output rate    : 392002352 bps (100000 pps)

{master}
magno@EDGE-GNF-re0>

Again, what we see is exactly what we expect. The end-to-end connectivity appears
to be there, as already suspected, but best practice dictates collecting conclusive
proof. Finally, let's check the counters on the IXIA-facing interfaces of both GNFs
after the test has ended.
CORE-GNF:
{master}
magno@CORE-GNF-re1> show interfaces xe-1/0/0 media | match bytes: 
Mar 04 16:06:43
Input bytes: 3072376256, Input packets: 6004536, Output bytes: 3048377456, Output packets: 6004523

EDGE-GNF:
{master}
magno@EDGE-GNF-re0> show interfaces xe-6/0/0 media | match bytes
Mar 04 16:08:12
Input bytes: 3072000000, Input packets: 6000000, Output bytes: 3096000000, Output packets: 6000000

{master}
magno@EDGE-GNF-re0>

Even in this case, the counters confirm our observations. You may have noticed
that they are slightly different between the EDGE and CORE GNF interfaces. In-
deed, while on the former only IXIA transit traffic travels over the subscriber-fac-
ing interface, on the latter some light control-plane BGP traffic, such as keepalive
and update packets, also uses the CORE-GNF xe-1/0/0 interface. This explains the
delta between the two interface counters.
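As a rough cross-check of that explanation, a quick calculation on the counters above shows that the extra packets on the CORE side have the small average size you would expect from BGP control traffic (a sketch, not an exact accounting, since it ignores any other minor control-plane traffic):

```python
# Counters taken from the outputs above.
core_pkts_in, core_bytes_in = 6_004_536, 3_072_376_256   # CORE-GNF xe-1/0/0 input
edge_pkts_in, edge_bytes_in = 6_000_000, 3_072_000_000   # EDGE-GNF xe-6/0/0 input (pure IXIA transit)

delta_pkts = core_pkts_in - edge_pkts_in     # extra packets seen on the core side
delta_bytes = core_bytes_in - edge_bytes_in  # extra bytes seen on the core side
avg_size = delta_bytes / delta_pkts          # average size of the extra packets

print(delta_pkts, delta_bytes, round(avg_size))  # → 4536 376256 83
```

An average of roughly 83 bytes per extra packet is consistent with small BGP keepalive and update packets rather than with the 512-byte test frames.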
Well, it was a pretty long journey, but in the end all the goals of the lab were
achieved thanks to Junos node slicing. It's important to recall that, because of
unique node slicing characteristics such as resource protection, full logical parti-
tioning, and flexible interconnection options through AF Interfaces, it's perfectly
secure, feasible, and practical to deploy the very same physical node to perform
both external and internal functions without jeopardizing scaling, reliability, or
security.
And now let's turn a single chassis MX router running production services into a
Junos node slicing solution, minimizing downtime during the process!
Chapter 6

From Single Chassis to Junos Node Slicing

In Chapter 5, the Junos node slicing setup was activated on an MX Series chassis
that wasn't running any real service, hence there was no need to be particularly
careful about optimizing the conversion procedure. But in the real world, the most
frequent scenario is a production router, already providing end user services,
being converted to a Junos node slicing solution.
As already explained throughout this book, as soon as some prerequisites are ful-
filled (we will talk about them in a moment), only one step of the conversion pro-
cess impacts traffic and service: the GNF configuration commit on the B-SYS.
Indeed, as soon as the commit happens, the line cards involved in the new GNF
deployment need to reboot to attach themselves to the external virtual Routing
Engines.
In this chapter, the goal is to take a single chassis MX running some services and
convert it into a Junos node slicing solution while minimizing service and traffic
impacts. The look and feel of the procedure should be as close as possible to a
plain-vanilla Junos release upgrade. Let's start!

Initial Lab Setup


Let’s first look at the lab used to test the optimized Junos node slicing conversion
procedure. The setup will leverage the same hardware devices used in Chapter 5
but this time the starting point will be a single MX960 device already running the
BNG and BGP peering roles.

The MX960-4 chassis is equipped with:


 2 x RE-S-1800X4-32G Routing Engines

 2 x SCBe2 Switching Control Boards (Enhanced 2 Model)

 1 x MPC7e 40XGE Line Card in slot 1 – xe-1/0/0 connected to IXIA Test
Center Card 7 – Port 8
 1 x MPC5eQ 2CGE+4XGE Line Card in slot 6 – xe-6/0/0 connected to IXIA
Test Center Card 7 – Port 7
The lab schematic is depicted in Figure 6.1:

Figure 6.1 Chapter 6 Initial Lab Setup

No matter how easy node slicing technology is to deploy, manage, or understand,
it is still a very new technology, and the engineers in charge of the solution in a
production environment may want to become familiar with it before unleashing
its real potential. For this reason, there are sometimes requests to convert the
single node into a 'degenerate' single-GNF node slicing setup, adding additional
slices as a second step.
Therefore, our Junos node slicing conversion procedure will only turn a single
chassis MX into a single-GNF MX. This approach brings considerable advantages
such as:
 No configuration changes are needed – the line cards are not renumbered when
they are associated with a GNF; it's possible to simply start the new virtual
Routing Engines using the original configuration file and they will behave
exactly like the originals.
 No addressing schema changes are required – the single GNF will simply
replace the original chassis.

 The complexity of the activity is much lower than a migration to a multi-GNF
solution, and this is an important factor when maintenance windows are tight
and people work at night.
 This light Junos node slicing deployment approach allows you to test the
reliability of the solution first, as no behavioral changes are expected compared
to the single chassis deployment.
 It allows operation and maintenance engineers to get acquainted with the new
technology and to become more confident in managing, provisioning, and
troubleshooting more complex multi-GNF scenarios.
 The real goal of this light migration approach is to put the Junos node slicing
machinery into production so that, after a while, it is simple to start deploying
other GNFs by reallocating the existing line cards and/or installing new ones
into newly created partitions.
Of course, there are no technical impairments in turning a single MX Series chassis
into a multi-GNF Junos node slicing installation; it is just more complicated, be-
cause it involves changes that must be prepared before the migration and per-
formed during the scheduled maintenance window. Note that this activity can
be harder and longer, especially if the offline configuration changes do not
achieve the desired behaviors.
After the migration procedure, our lab setup will look like Figure 6.2.

Figure 6.2 Chapter 6 Lab Setup after Node Slicing Conversion



It looks pretty similar to the initial lab setup in Figure 6.1, doesn't it? Logically
speaking, nothing changes; the differences are only physical, as the control plane
of the solution now runs on the external servers instead of the internal Routing
Engines!

Single MX960-4 Chassis Running BGP Peering and BNG Services


The simulation starts with a single MX Series router, which is in production and is
running BGP peering and BNG services.
The lab leverages all the configurations already used in the previous chapters, for
both the IXIA and the MX Series routers, but this time both functionalities are
collapsed into a single device. The scaling is the same as in the previous lab exer-
cise for both the BNG and BGP peering roles:
 64,000 subscribers;

 Access model is C-VLAN with DHCPv4 relay;

 100 eBGP peering routers injecting 1,000 routes each;

 BNG Subscriber addressing pool advertised to the e-BGP peers;

 DHCP Server is running on the IXIA Test Center.

The lab testing environment is shown in Figure 6.3.

Figure 6.3 Initial Lab Setup

The setup is exactly the same as the one used in Chapter 5, the difference being
that in this case our DUT is a single chassis MX960-4. Before describing all the
prerequisites and the steps needed to migrate a single MX to a one-GNF Junos
node slicing solution, it's time to perform some sanity checks to make sure
everything is working as expected.

Let’s check the control plane premigration status:


{master}
magno@MX960-4-RE0> show bgp summary | match Esta | count 
Count: 100 lines

{master}
magno@MX960-4-RE0> show route table inet.0 protocol bgp | match BGP | count 
Count: 100000 lines

The BGP side looks perfect, and 100 peers are advertising 100,000 routes as
expected:
{master}
magno@MX960-4-RE0> show subscribers summary 

Subscribers by State
   Active: 128000
   Total: 128000

Subscribers by Client Type
   DHCP: 64000
   VLAN: 64000
   Total: 128000

{master}
magno@MX960-4-RE0> show route table inet.0 protocol access-internal | match Access- | count 
Count: 64000 lines

{master}
magno@MX960-4-RE0> show dhcp relay binding | match BOUND | count 
Count: 64000 lines

{master}
magno@MX960-4-RE0>

The BNG side looks as expected: 64,000 subscribers are connected, 64,000
access-internal routes are installed, and 64,000 DHCP relay bindings are in the
"BOUND" state. We'll check the same KPIs again after the migration to single-
GNF Junos node slicing is completed.

Single-GNF Junos Node Slicing Migration Procedure


It’s now time to perform the migration process to the target single-GNF Junos
node slicing setup while trying to minimize the traffic interruption window.
Before starting the conversion activities, the external X86 servers must be in-
stalled, connected, and configured according to what has already been explained
during the first deployment of Junos node slicing. These operational tasks can be
accomplished without touching the MX production device because they are totally
disconnected from the device itself. Even cabling can be completed without touch-
ing the router if we exclude the physical SFP+ and fiber installation operations.

NOTE In this lab exercise, we are going to leverage the same two servers already
used, hence we can assume they are already correctly configured (so no particular
outputs are provided). Their configurations are available in Chapter 5, anyway.

Once the JDM servers are properly set up, some prerequisites must be checked on
the production MX device: none of them impacts service, therefore they can be
performed during regular working hours.

WARNING The single chassis MX must run a suitable Junos version before
being configured as a Junos node slicing B-SYS. For this reason, all of the assump-
tions about performing non-service-impacting activities outside of any mainte-
nance window hold true only if the desired Junos version is already running on
the router. If it's not, a Junos upgrade must be performed according to the normal,
well-known procedures. Of course, using the In-Service Software Upgrade (ISSU)
machinery may minimize service impacts, but the mileage may vary according to
the hardware components installed and the features active on the router. There-
fore, if the device administrator wants to perform an in-service upgrade, it is
strongly advised to double-check that all of the hardware components and fea-
tures active on the router are supported by the starting Junos release and that the
upgrade path to the new operating system version is one of the ISSU upgrade
combinations officially supported by Juniper Networks.

B-SYS Preparatory Activities

First, check that the Junos version running on the device is the one chosen for the
Junos node slicing deployment (see the preceding WARNING note):
{master}
magno@MX960-4-RE0> show version brief | match Junos: 
Junos: 18.3R1.9

{master}
magno@MX960-4-RE0>
Status: Passed – 18.3 is the correct Junos version.

Then, check that the chassis network-services mode is configured as enhanced-IP:
{master}
magno@MX960-4-RE0> show chassis network-services 
Network Services Mode: Enhanced-IP

{master}
magno@MX960-4-RE0>
Status: Passed – the network-services mode is correctly configured as "Enhanced-IP".

NOTE The network-services configuration change requires the MX to be restart-
ed, but if the router is equipped with SCBe2 boards it will already have to run in
enhanced mode, otherwise these fabric modules will not boot at all.
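For completeness, if the mode were not already set, the change would be a single statement followed by the required restart; a sketch:

```
{master}[edit]
magno@MX960-4-RE0# set chassis network-services enhanced-ip
magno@MX960-4-RE0# commit
```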

Save the MX router configuration into a file and store it in an easily accessible
location because you will need it during the external Routing Engine provisioning
process; in this case, we're saving it on the jns-x86-0 server:
{master}
magno@MX960-4-RE0> show configuration | save scp://administrator@172.30.181.171:/home/administrator/
configs/MX960-4-CONFIG.txt
administrator@172.30.181.171’s password:
tempfile 100% 31KB
30.8KB/s 00:00
Wrote 1253 lines of output to ‘scp://administrator@172.30.181.171:/home/administrator/configs/
MX960-4-CONFIG.txt’

{master}
magno@MX960-4-RE0>
Status: Passed – the configuration is now stored on the jns-x86-0 server.

Check that all the physical cabling between the SCBe2 10GE ports and the exter-
nal X86 servers is working properly. The most effective way to perform this task
is to configure set chassis network-slices on the MX router and check the status of
the four ports on the servers:
{master}[edit]
magno@MX960-4-RE0# set chassis network-slices

{master}[edit]
magno@MX960-4-RE0# show chassis 
--- SNIP ---
network-slices;

{master}[edit]
magno@MX960-4-RE0# commit 
re0: 
configuration check succeeds
re1: 
configuration check succeeds
commit complete
re0: 
commit complete

{master}[edit]
magno@MX960-4-RE0#

NOTE With this command the MX router is now ready to act as a B-SYS, but as
no guest-network-function configurations are present, it keeps behaving as a
single chassis. The command does not affect service.

At this point, using the Linux ethtool command on the external servers, check the
state of the four 10GE connections.
Server JNS-X86-0:
root@jns-x86-0:/home/administrator# ethtool enp4s0f0 | grep Speed: -A1
        Speed: 10000Mb/s
        Duplex: Full
root@jns-x86-0:/home/administrator# ethtool enp4s0f1 | grep Speed: -A1
        Speed: 10000Mb/s
        Duplex: Full
root@jns-x86-0:/home/administrator#

Server JNS-X86-1:
root@jns-x86-1:/home/administrator# ethtool enp4s0f0 | grep Speed: -A1
        Speed: 10000Mb/s
        Duplex: Full
root@jns-x86-1:/home/administrator# ethtool enp4s0f1 | grep Speed: -A1
        Speed: 10000Mb/s
        Duplex: Full
root@jns-x86-1:/home/administrator#

Status: Passed. All the 10GE interfaces are connected and ready for the Junos
node slicing deployment.

NOTE Naturally, besides the physical port state, check for proper cabling
schematics; as a reminder, Server0/1 port 0 should be connected to SCB0/1 port 0,
and Server0/1 port 1 should be connected to SCB0/1 port 1, respectively.
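The per-server checks above can be wrapped in a small shell helper; this is a hypothetical convenience script (the interface names are the ones used in this lab, and `link_ok` simply greps the ethtool output):

```shell
#!/bin/sh
# Succeeds only if the given ethtool output reports a 10G full-duplex link.
link_ok() {
    printf '%s\n' "$1" | grep -q 'Speed: 10000Mb/s' &&
    printf '%s\n' "$1" | grep -q 'Duplex: Full'
}

# Interface names used in this lab; adjust to your servers.
for ifname in enp4s0f0 enp4s0f1; do
    out=$(ethtool "$ifname" 2>/dev/null || true)
    if link_ok "$out"; then
        echo "$ifname: OK (10G full duplex)"
    else
        echo "$ifname: CHECK CABLING"
    fi
done
```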

Production Single Chassis MX to Single-GNF Junos Node Slicing Procedure

All the environment sanity checks are now completed, so let's start the conversion
procedure step-by-step.

STEP 1: Configuration Staging

The first thing to do is to modify the configuration to make it suitable to run on
the soon-to-be-created virtual Routing Engines. The process is very simple: nor-
mally, only the management addresses must be changed. Logically speaking, we
are adding a new node to the network, which will replace, from a service stand-
point, the running MX standalone chassis. But this device will not be decommis-
sioned; it will keep running as the base system of our node slicing solution. For
this reason, it still needs a management address. A convenient way to handle this
is to keep the current addresses on the B-SYS and configure new ones on the
virtual Routing Engines. The same considerations apply to the host name.
In the book's lab, the new virtual Routing Engines will have the following
management IP addresses and hostnames:

- MX960-4-GNF-RE0: 172.30.181.176

- MX960-4-GNF-RE1: 172.30.181.177

- The master-only address will be 172.30.181.175

NOTE In this particular case, the external servers and the B-SYS management
networks are not on the same subnet, therefore the default gateway address was
also changed.

All the changes can easily be made by modifying the previously saved configuration
file with a text editor of your choice. The changes are shown here:
groups {
    re0 {
        system {
            host-name MX960-4-GNF-RE0;
            backup-router 172.30.181.1 destination 172.16.0.0/12;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 172.30.181.176/24;
                        address 172.30.181.175/24 {
                            master-only;
                        }
                    }
                }
            }
        }
    }
    re1 {
        system {
            host-name MX960-4-GNF-RE1;
            backup-router 172.30.177.1 destination 172.16.0.0/12;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 172.30.181.177/24;
                        address 172.30.181.175/24 {
                            master-only;
                        }
                    }
                }
            }
        }
    }
}

STEP 2: Creating the Virtual Routing Engines


Once the startup configuration is ready, it's time to create the new virtual Routing
Engines on both JDM Server0 and Server1, using the commands learned in the
previous chapters (listed below for reference):

root@JDM-SERVER0> request virtual-network-functions add-image /var/tmp/junos-install-ns-mx-x86-64-18.3R1.9.tgz MX960-4-GNF all-servers
server0:
--------------------------------------------------------------------------
Added image: /vm-primary/MX960-4-GNF/MX960-4-GNF.img

server1:
--------------------------------------------------------------------------
Added image: /vm-primary/MX960-4-GNF/MX960-4-GNF.img

root@JDM-SERVER0> edit
Entering configuration mode

[edit]
root@JDM-SERVER0# set virtual-network-functions MX960-4-GNF id 3 no-autostart chassis-type mx960 resource-template 4core-32g

[edit]
root@JDM-SERVER0#

Copy the already-tuned configuration to the GNF storage path, /vm-primary/MX960-4-GNF:
root@jns-x86-0:~# cp /home/administrator/configs/MX960-4-GNF-CONFIG.txt /vm-primary/MX960-4-GNF/

The configuration file is now accessible by the JDM, so let’s configure the new
VNF to use it as a startup config and commit the configuration:
[edit]
root@JDM-SERVER0# set virtual-network-functions MX960-4-GNF base-config /vm-primary/MX960-4-GNF/MX960-4-GNF-CONFIG.txt

[edit]
root@JDM-SERVER0# commit
server0:
configuration check succeeds
server1:
commit complete
server0:
commit complete

[edit]
root@JDM-SERVER0#

NOTE It is not necessary to manually copy the configuration files to the remote
JDM server: the configuration synchronization machinery takes care of this
task. Indeed, take a look at JDM server1:
Last login: Fri Mar  8 10:46:13 2019 from 172.29.81.183
administrator@jns-x86-1:~$ ls /vm-primary/MX960-4-GNF/
----- SNIP -----
/vm-primary/MX960-4-GNF/MX960-4-GNF-CONFIG.txt
---- SNIP ----
administrator@jns-x86-1:~$

The file is already there! Great job, now the VNFs are ready to be started. Remem-
ber, we configured the no-autostart command a while back? Take a look for
yourself:
[edit]
root@JDM-SERVER0# exit  
Exiting configuration mode

root@JDM-SERVER0> show virtual-network-functions all-servers 
server0:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
--------------------------------------------------------------------------------
3        MX960-4-GNF                                       Shutdown   down

server1:
--------------------------------------------------------------------------
ID       Name                                              State      Liveness
--------------------------------------------------------------------------------
3        MX960-4-GNF                                       Shutdown   down

root@JDM-SERVER0>

Per the configuration, the two VNFs are provisioned but shut down. To power
them on:
root@JDM-SERVER0> request virtual-network-functions MX960-4-GNF start all-servers 
server0:
--------------------------------------------------------------------------
MX960-4-GNF started

server1:
--------------------------------------------------------------------------
MX960-4-GNF started

root@JDM-SERVER0>

Now the two virtual Routing Engines are booting up. It's of course possible to use
console access to verify the boot process with the request virtual-network-functions
MX960-4-GNF console command.

For the sake of curiosity, we measured the boot time of the VNF. We activated the
CLI timestamps, connected to the virtual Routing Engine console, and then got
back to the JDM CLI as soon as the configuration file had loaded and all the
processes had started:
root@JDM-SERVER0> request virtual-network-functions MX960-4-GNF start 
Mar 09 10:42:49
MX960-4-GNF started

root@JDM-SERVER0> request virtual-network-functions MX960-4-GNF console  
Mar 09 10:43:16
Connected to domain MX960-4-GNF
Escape character is ^]
Mounting junos-platform-x86-32-20180920.185504_builder_junos_183_r1
----- SNIP ------

Mar  9 10:44:39 jlaunchd: general-authentication-
service (PID 6210) sending signal USR1: due to “proto-mastership”: 0x1
Mar  9 10:44:39 jlaunchd: Registered PID 6525(mpls-traceroute): exec_command

FreeBSD/amd64 (MX960-4-GNF-RE0) (ttyu0)

login: 

root@JDM-SERVER0> 
Mar 09 10:44:44

Wow, impressive: the VNF took less than two minutes to be fully operational. This
is a huge improvement compared to its physical counterparts, mostly thanks to the
POST stage, which is almost instantaneous on a VM.
From the prompt, you may have already noticed that the hostname
"MX960-4-GNF-RE0" we set in the startup configuration file is shown. This is a
very good indication that the new virtual RE0 has correctly used the
configuration file provided through the JDM CLI.
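The measurement is just quick arithmetic on the two CLI timestamps (the start command was issued at 10:42:49, and we were back at the JDM prompt with Junos fully up at 10:44:44):

```python
from datetime import datetime

start = datetime.strptime("10:42:49", "%H:%M:%S")  # request ... start issued
ready = datetime.strptime("10:44:44", "%H:%M:%S")  # login prompt reached on the console

elapsed = ready - start
print(elapsed)  # 0:01:55 -> the VNF boots in under two minutes
```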

WARNING It's important to note that the Junos OS running on the virtual Routing
Engines is not in full CLI-command parity with its counterpart running on the
B-SYS. For this reason, the configuration file (retrieved from the chassis
Routing Engines) used to boot the virtual Routing Engines can contain statements
that are not available in the GNF Junos, and this can cause trouble, especially
with commit operations. For instance, let's examine a case where the statement
'system ports console log-out-on-disconnect' is present in the original MX960-4
chassis configuration but doesn't exist in the GNF Junos CLI. When the user
tries to commit any configuration change, the operation fails as shown here:
{master}[edit]
magno@MX960-4-GNF-RE0# commit
re0:

{master}[edit]
magno@MX960-4-GNF-RE0#

Something strange is happening: the commit process did not run on the backup
Routing Engine, although it was supposed to. This is a very good indication that
somewhere in the configuration loaded on the GNF there are one or more
nonexistent statements. To discover which ones are preventing the configuration
from committing, it's sufficient to run the show | compare command:
{master}[edit]
magno@MX960-4-GNF-RE0# show | compare
/config/juniper.conf:93:(37) syntax error: log-out-on-disconnect
[edit system ports console]
'console log-out-on-disconnect;'
syntax error
[edit system ports]

'console'
warning: statement has no contents; ignored

{master}[edit]
magno@MX960-4-GNF-RE0#

It's clear the log-out-on-disconnect statement is not supported on the GNF Junos.
To fix the problem, the offending command must be removed before attempting to
commit the configuration. There are two ways to fix this kind of problem:

- Perform a line-by-line comparison between the B-SYS and the GNF Junos CLIs;

- Perform a quick, reiterated trial-and-error process until all the unknown
commands are removed.

The second approach is strongly advised because it is a lot quicker and more
efficient to test the configuration and remove any offending statement as you
go:

- The number of unsupported commands is limited;

- No more than three unsupported statements have ever been found in the
experience of this book's author;

- And ultimately, the GNF is not yet in production, so you have time to tune the
configuration.
Let's fix the problem by trying these two main methods:
1) Fix the configuration file and restart the virtual Routing Engines from the JDM.
2) Fix the configuration file directly inside the virtual Routing Engines and commit
the new, sanitized configuration.
The first approach consists of directly editing the file configured as 'base-config'
in the JDM GNF configuration, in our case /vm-primary/MX960-4-GNF/MX960-4-GNF-CONFIG.txt.
By logging in on both Linux servers and using the text editor of your choice,
it's possible to edit the file and remove the undesired statements. At this
point, it's sufficient to go back to the JDM CLI and type the well-known
request virtual-network-functions MX960-4-GNF restart all-servers operational
command, which first destroys and then spins up the two VMs from scratch.
Of course, if more than one command must be removed, this trial-and-error
process is a lot slower, as the GNF must be restarted every time.
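For the file-editing variant, the removal itself can even be scripted before restarting the GNF. This is only an illustrative sketch; the strip_statement helper is hypothetical and deliberately naive, as it drops the matching lines but leaves any now-empty parent blocks in place:

```python
def strip_statement(config_text, offending):
    # Drop every line that contains the offending statement
    kept = [line for line in config_text.splitlines() if offending not in line]
    return "\n".join(kept) + "\n"

with_offender = (
    "system {\n"
    "    ports {\n"
    "        console log-out-on-disconnect;\n"
    "    }\n"
    "}\n"
)
print(strip_statement(with_offender, "log-out-on-disconnect"))
```

As the commit output earlier showed, Junos only warns about an empty `ports` block ("statement has no contents; ignored"), so leaving the empty container behind is harmless.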
The second approach is faster because it takes place directly inside the GNF itself
and doesn’t require any reboot. Once the offending command is clearly identified,
as explained above, the Junos CLI can be used to remove it and then commit the
configuration, as shown here:

{master}
magno@MX960-4-GNF-RE0> show configuration system ports
console log-out-on-disconnect;

{master}
magno@MX960-4-GNF-RE0> edit
Entering configuration mode

{master}[edit]
magno@MX960-4-GNF-RE0# delete system ports <--- DELETE THE FIRST OFFENDING COMMAND

{master}[edit]
magno@MX960-4-GNF-RE0# show chassis redundancy
failover {
on-loss-of-keepalives;
on-re-to-fpc-stale;
not-on-disk-underperform;
}
graceful-switchover;

{master}[edit]
magno@MX960-4-GNF-RE0# commit and-quit
re0:

{master}[edit]
magno@MX960-4-GNF-RE0#

Wait a moment! The problem is still here: the commit process on the backup
Routing Engine didn't go through! This is easily explained: the offending
command is still present on the backup Routing Engine, so we need to remove it
there before committing (and synchronizing) the configuration on the master.
{master}
magno@MX960-4-GNF-RE0> request routing-engine login other-routing-engine

--- JUNOS 18.3R1.9 Kernel 64-bit JNPR-11.0-20180816.8630ec5_buil

{backup}
magno@MX960-4-GNF-RE1> show configuration system ports
console log-out-on-disconnect;

{backup}
magno@MX960-4-GNF-RE1> edit
Entering configuration mode

{backup}[edit]
magno@MX960-4-GNF-RE1# delete system ports

{backup}[edit]
magno@MX960-4-GNF-RE1# show chassis redundancy
failover {
on-loss-of-keepalives;
on-re-to-fpc-stale;
not-on-disk-underperform;
}
graceful-switchover;

{backup}[edit]
magno@MX960-4-GNF-RE1# commit

warning: Graceful-switchover is enabled, commit on backup is not recommended


Continue commit on backup RE? [yes,no] (no) yes

re1:
configuration check succeeds
re0:
configuration check succeeds
commit complete
re1:
commit complete

{backup}[edit]
magno@MX960-4-GNF-RE1#

Now that the configuration on the backup Routing Engine is fixed, we can commit
the config on the master as well:
{backup}[edit]
magno@MX960-4-GNF-RE1# exit
rlogin: connection closed

{master}
magno@MX960-4-GNF-RE0> edit

{master}
magno@MX960-4-GNF-RE0# commit
re0:
configuration check succeeds
re1:
configuration check succeeds
commit complete
re0:
commit complete

{master}[edit]
magno@MX960-4-GNF-RE0#

As expected, the commit process finally goes through and the problem is now fixed.
Okay, it's now time to verify that the new control plane is in good shape despite
no line cards being attached to it. Let's do some basic connectivity tests using
the management network and, if they succeed, proceed to perform some sanity
checks on hardware, mastership status, task replication, network services, and
the running configuration.
First, from one of the X86 servers, let's simply ping the three management
addresses assigned to the new virtual Routing Engines:
administrator@jns-x86-0:~$ ping 172.30.181.175 -c 1
PING 172.30.181.175 (172.30.181.175) 56(84) bytes of data.
64 bytes from 172.30.181.175: icmp_seq=1 ttl=64 time=0.382 ms

--- 172.30.181.175 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms
administrator@jns-x86-0:~$ ping 172.30.181.176 -c 1
PING 172.30.181.176 (172.30.181.176) 56(84) bytes of data.
64 bytes from 172.30.181.176: icmp_seq=1 ttl=64 time=0.652 ms

--- 172.30.181.176 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms
administrator@jns-x86-0:~$ ping 172.30.181.177 -c 1
PING 172.30.181.177 (172.30.181.177) 56(84) bytes of data.
64 bytes from 172.30.181.177: icmp_seq=1 ttl=64 time=0.592 ms

--- 172.30.181.177 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms
administrator@jns-x86-0:~$

That's very promising: both the RE0 and RE1 addresses are reachable, as well as
the master-only IP .175! Let's connect to the master Routing Engine using SSH:
administrator@jns-x86-0:~$ ssh magno@172.30.181.175
The authenticity of host ‘172.30.181.175 (172.30.181.175)’ can’t be established.
ECDSA key fingerprint is SHA256:QIW9uleS7Xm9hZLgTjiQjRw61JAl8smqJKGG+a98n1s.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘172.30.181.175’ (ECDSA) to the list of known hosts.
Password:
Last login: Sat Mar  9 10:05:00 2019 from 172.29.83.81
--- Junos 18.3R1.9 Kernel 64-bit  JNPR-11.0-20180816.8630ec5_buil
{master}
magno@MX960-4-GNF-RE0>

Perfect: the CLI prompt makes it clear the configuration was correctly applied,
and from the {master} prompt it's fair to assume the mastership election also
completed correctly.
Let's perform some sanity checks to be fully confident the system is actually
working properly:
{master}
magno@MX960-4-GNF-RE0> show chassis hardware 
Chassis                                GN5C8388BF4A      MX960-GNF
Routing Engine 0                                         RE-GNF-2100x4
Routing Engine 1                                         RE-GNF-2100x4

{master}
magno@MX960-4-GNF-RE0>

The two virtual Routing Engines are correctly onboarded. This is the expected
output: because there is no GNF configuration on the B-SYS yet, the forwarded
command output, which is filtered per GNF, is empty.
Let's check Routing Engine mastership and task replication status:
{master}
magno@MX960-4-GNF-RE0> show chassis routing-engine | match “Current state” 
    Current state                  Master
    Current state                  Backup

{master}
magno@MX960-4-GNF-RE0> show task replication 
        Stateful Replication: Enabled
        RE mode: Master

    Protocol                Synchronization Status
    BGP                     Complete              

{master}
magno@MX960-4-GNF-RE0>

These outputs are good as well. The two virtual Routing Engines have correctly ne-
gotiated their status, and the only configured routing protocol is ready to be
synchronized.
Last check, chassis network-services:

{master}
magno@MX960-4-GNF-RE0> show chassis network-services       
Network Services Mode: Enhanced-IP

magno@MX960-4-GNF-RE0>

Enhanced-IP it is! As all the checks look the way they should, we can be quite
confident our new control plane can take ownership of the services. It's now time
to attach the line cards to the new virtual Routing Engines!

STEP 3: Virtually Connecting the Line Cards to the VNFs


As already stated, none of the actions performed so far have affected the services
in any way. But it's well understood that the final step of this process, virtually
inserting the line cards into our new GNF, will produce unavoidable impacts on
all the services provided by the device under migration. The point here is to try
to minimize them. From a pure device perspective, the expected downtime is the
same as a line-card reboot, which is of course quite a disruptive event: all the
states related to that particular card are lost, and the control plane must react,
triggering many actions to achieve network restoration.
From the service perspective it's even worse: it's a lot more difficult to foresee
the exact downtime window because of all the concurring factors, any of which
can heavily impact service restoration time.
Let's take our particular scenario as an example: there are two main services
running on the soon-to-be-migrated device, subscriber management on the edge and
a BGP peering on the core side. These services are completely different in nature,
and of course they also behave totally differently during service disruptions.
The broadband edge subscribers use an auto-configuration feature based on DHCP
packet snooping, and no keepalives run between their CPEs and the BNG.
Therefore, in this particular case, service restoration time depends heavily on
the access devices' DHCP dynamics, because it's also not possible to migrate the
subscriber control plane states. Until a DHCP packet is received by the BNG, it
can't trigger the subscriber VLAN auto-config process and, in turn, all the
other actions needed to properly set up a subscriber flow inside the MX PFE.

On the BGP side, the BGP hold timer expires at some point, as the old session is
no longer available on the MX side, and then both BGP speakers keep trying to
re-establish the peering. Once the BGP session is up, additional time must be
allowed for both peers to receive all the routes, mark them as active in the
routing table, and then download them to the forwarding plane.
Even though neither service is quick to restore, the BGP-based one looks a little
less complicated, as it relies entirely on the protocol itself, while in the BNG
case the access devices and the DHCP server also play a fundamental role in
service restoration time.
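To put rough numbers on this reasoning, here is a back-of-the-envelope sketch using the timer values from this lab (BGP hold-time 90 seconds, DHCP lease 3600 seconds). The only outside assumption is the standard DHCP rule (RFC 2131) that a client first attempts renewal at T1 = 50% of its lease:

```python
bgp_hold_time = 90   # seconds, as configured in this lab
dhcp_lease = 3600    # seconds, as configured in this lab

# BGP: the remote speaker declares the session down after at most the
# hold-time, then both sides immediately start retrying the peering.
bgp_detection_bound = bgp_hold_time

# DHCP: with no keepalives, a CPE only comes back when its renewal timer
# fires; per RFC 2131 that happens at T1 = 0.5 * lease after lease grant.
dhcp_renewal_bound = 0.5 * dhcp_lease

print(f"BGP peer detects the outage within {bgp_detection_bound} s")
print(f"Last CPE attempts a DHCP renewal within {dhcp_renewal_bound:.0f} s")
```

This is why the BGP service recovers within minutes of the line cards coming back, while the subscriber service can lag by up to half a lease time.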
After this necessary digression, let's get our hands back on the CLI and start the
real work! What's left is our MX960-4 single chassis running its services, which
are still in good shape, as the following CLI output testifies:
{master}
magno@MX960-4-RE0> show route summary table inet.0   
Autonomous system number: 65203
Router ID: 72.255.255.1

inet.0: 164210 destinations, 164210 routes (164210 active, 0 holddown, 0 hidden)
              Direct:    105 routes,    105 active
               Local:    103 routes,    103 active
                 BGP: 100000 routes, 100000 active
              Static:      2 routes,      2 active
     Access-internal:  64000 routes,  64000 active

{master}
magno@MX960-4-RE0>

Now let’s configure a new GNF with ID = 3 (remember, the ID must match on both
the B-SYS and JDM configurations):
{master}
magno@MX960-4-RE0> edit 
Entering configuration mode

{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 3 fpcs 1 

{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 3 fpcs 6    

{master}[edit]
magno@MX960-4-RE0# set chassis network-slices guest-network-functions gnf 3 description "Single-GNF Setup"

{master}[edit]
magno@MX960-4-RE0# show chassis network-slices 
guest-network-functions {
    gnf 3 {
        description “Single-GNF Setup”;
        fpcs [ 1 6 ];
    }
}

{master}[edit]
magno@MX960-4-RE0#

Perfect. We are now ready to start the migration of both MPCs (installed in slots 1
and 6) to the new GNF! As soon as the commit command runs, both line cards will
reboot themselves:
{master}[edit]
magno@MX960-4-RE0# run set cli timestamp           
Mar 09 15:58:43
CLI timestamp set to: %b %d %T

{master}[edit]
magno@MX960-4-RE0# run show chassis hardware | no-more 
Mar 09 15:58:52
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                JN122E1E0AFA      MX960
--- SNIP ---
FPC 1            REV 42   750-053323   CAGF3038          MPC7E 3D 40XGE
  CPU            REV 19   750-057177   CAGF7762          SMPC PMB
  PIC 0                   BUILTIN      BUILTIN           20x10GE SFPP
    Xcvr 0       REV 01   740-030658   B10L02628         SFP+-10G-USR
  PIC 1                   BUILTIN      BUILTIN           20x10GE SFPP
FPC 6            REV 42   750-046005   CADM2676          MPC5E 3D Q 2CGE+4XGE
  CPU            REV 11   711-045719   CADK9910          RMPC PMB
  PIC 0                   BUILTIN      BUILTIN           2X10GE SFPP OTN
    Xcvr 0       REV 01   740-031980   B11B02985         SFP+-10G-SR
  PIC 1                   BUILTIN      BUILTIN           1X100GE CFP2 OTN
  PIC 2                   BUILTIN      BUILTIN           2X10GE SFPP OTN
  PIC 3                   BUILTIN      BUILTIN           1X100GE CFP2 OTN
Fan Tray 0       REV 04   740-031521   ACAC1075          Enhanced Fan Tray
Fan Tray 1       REV 04   740-031521   ACAC0974          Enhanced Fan Tray

{master}[edit]
magno@MX960-4-RE0# commit 
Mar 09 15:58:56
re0: 
configuration check succeeds
re1: 
configuration check succeeds
commit complete
re0: 
commit complete

{master}[edit]
magno@MX960-4-RE0# run show chassis hardware | no-more    
Mar 09 15:59:01
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                JN122E1E0AFA      MX960
--- SNIP ---
FPC 1            REV 42   750-053323   CAGF3038          MPC7E 3D 40XGE
  CPU            REV 19   750-057177   CAGF7762          SMPC PMB
FPC 6            REV 42   750-046005   CADM2676          MPC5E 3D Q 2CGE+4XGE
  CPU            REV 11   711-045719   CADK9910          RMPC PMB
Fan Tray 0       REV 04   740-031521   ACAC1075          Enhanced Fan Tray
Fan Tray 1       REV 04   740-031521   ACAC0974          Enhanced Fan Tray

{master}[edit]
magno@MX960-4-RE0#

The timestamps make it clearly visible: as soon as the commit command is
executed, both line cards on the MX960 chassis reboot.
In the meanwhile, on the IXIA Test Center, the physical ports status has just gone
down, hence the icons’ color turned to red:

Figure 6.4 IXIA Test Generator Detected the Interface Down State

Figure 6.5 IXIA BGP Simulated Session Down

As expected, the BGP sessions are still established as the BGP Hold-Timer (set to
90 seconds) hasn’t expired yet and the subscriber sessions are all in UP state be-
cause none of the simulated CPEs tried to renew their DHCP bindings (set at 3600
seconds).
Even though the physical lasers of the interfaces are now turned off and the IXIA
port is in down state (see Figure 6.5), the simulated entities change status based
on protocol timers, as if a Layer 2 switch sat between the tester and the MX Series.

Indeed, after some time the BGP hold timer expired and the sessions went down,
while the CPEs' DHCP leases did not, because the lease time (3600 seconds) is 40
times longer than the BGP hold time.
In the meantime, on the new virtual Routing Engines, the line card onboarding
process of course started as soon as the commit was executed on the B-SYS:
{master}
magno@MX960-4-GNF-RE0> show chassis hardware 
bsys-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                JN122E1E0AFA      MX960
Midplane         REV 04   750-047853   ACRB9287          Enhanced MX960 Backplane
Fan Extender     REV 02   710-018051   CABM7223          Extended Cable Manager
FPM Board        REV 03   710-014974   JZ6991            Front Panel Display
PDM              Rev 03   740-013110   QCS1743501N       Power Distribution Module
PEM 0            Rev 04   740-034724   QCS171302048      PS 4.1kW; 200-240V AC in
PEM 1            Rev 07   740-027760   QCS1602N00R       PS 4.1kW; 200-240V AC in
PEM 2            Rev 10   740-027760   QCS1710N0BB       PS 4.1kW; 200-240V AC in
PEM 3            Rev 10   740-027760   QCS1710N0BJ       PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01   740-051822   9009170093        RE-S-1800x4
Routing Engine 1 REV 01   740-051822   9009176340        RE-S-1800x4
CB 0             REV 01   750-055976   CACM2281          Enhanced MX SCB 2
  Xcvr 0         REV 01   740-031980   163363A04142      SFP+-10G-SR
  Xcvr 1         REV 01   740-021308   AS90PGH           SFP+-10G-SR
CB 1             REV 02   750-055976   CADJ1802          Enhanced MX SCB 2
  Xcvr 0         REV 01   740-031980   AHJ09HD           SFP+-10G-SR
  Xcvr 1         REV 01   740-021308   09T511103665      SFP+-10G-SR
FPC 1            REV 42   750-053323   CAGF3038          MPC7E 3D 40XGE
  CPU            REV 19   750-057177   CAGF7762          SMPC PMB
FPC 6            REV 42   750-046005   CADM2676          MPC5E 3D Q 2CGE+4XGE
  CPU            REV 11   711-045719   CADK9910          RMPC PMB
Fan Tray 0       REV 04   740-031521   ACAC1075          Enhanced Fan Tray
Fan Tray 1       REV 04   740-031521   ACAC0974          Enhanced Fan Tray

gnf3-re0:
--------------------------------------------------------------------------
Chassis                                GN5C8388BF4A      MX960-GNF
Routing Engine 0                                         RE-GNF-2100x4
Routing Engine 1                                         RE-GNF-2100x4

{master}
magno@MX960-4-GNF-RE0> show chassis fpc 
bsys-re0:
--------------------------------------------------------------------------
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap     Buffer GNF
  1  Present           46                                                                        3
  6  Present           43                                                                        3

{master}
magno@MX960-4-GNF-RE0>

The first thing to notice here is that, this time, the show chassis hardware output
has a "bsys-re#" section. This means that a configuration for this GNF ID now
exists on the base system, and therefore the output returned by its chassisd
daemon is filtered and displayed in the GNF CLI.
The other important thing to notice is that the line cards are present but still
booting. Indeed, no xe- interfaces appear in the show interfaces terse
output:
{master}
magno@MX960-4-GNF-RE0> set cli timestamp 
Mar 09 16:00:03
CLI timestamp set to: %b %d %T

{master}
magno@MX960-4-GNF-RE0> show interfaces terse | match xe-    
Mar 09 16:00:05

{master}
magno@MX960-4-GNF-RE0>

NOTE All the devices are NTP-synchronized, so the timestamps can be considered
accurate and directly comparable.

The commit operation took place on the B-SYS at 15:58:56. Let's check when the
xe- interfaces appear in the new Routing Engine CLI and compare the timestamps:
{master}
magno@MX960-4-GNF-RE0> show interfaces terse | match xe-6/0/0 | refresh 2
---(refreshed at 2019-03-09 16:00:49 CET)---
---(refreshed at 2019-03-09 16:00:51 CET)---
--- SNIP ---
---(refreshed at 2019-03-09 16:03:47 CET)---
xe-6/0/0                up    up
xe-6/0/0.32767          up    up   multiservice
^C[abort]

---(refreshed at 2019-03-09 16:03:49 CET)---
{master}
magno@MX960-4-GNF-RE0>

Great! At around 16:03:47 the xe- interface shows up, and this is the worst case:
the slot 1 line card, whose logs are not shown, booted a little faster, at 16:03:18.
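Comparing these timestamps with the commit on the B-SYS (15:58:56) gives the interface downtime for each card:

```python
from datetime import datetime

commit  = datetime.strptime("15:58:56", "%H:%M:%S")  # commit on the B-SYS, MPCs reboot
fpc1_up = datetime.strptime("16:03:18", "%H:%M:%S")  # slot 1 interfaces back up
fpc6_up = datetime.strptime("16:03:47", "%H:%M:%S")  # slot 6 interfaces back up (worst case)

print(fpc1_up - commit)  # 0:04:22
print(fpc6_up - commit)  # 0:04:51
```

So the pure line-card outage is under five minutes, comparable to an ordinary MPC reboot.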
As expected, because BGP keeps trying to connect from both sides (the IXIA is
also configured as an initiator), it converged very quickly:
{master}
magno@MX960-4-GNF-RE0> show route summary table inet.0 
Mar 09 16:04:27
Autonomous system number: 65203
Router ID: 72.255.255.1

inet.0: 100209 destinations, 100210 routes (100209 active, 0 holddown, 0 hidden)
              Direct:    105 routes,    104 active
               Local:    103 routes,    103 active
                 BGP: 100000 routes, 100000 active
              Static:      2 routes,      2 active

{master}
magno@MX960-4-GNF-RE0> 

This output provides two useful bits of information. First, the BGP routes are
already learned and marked as active in inet.0, which is great news. On the
other hand, unfortunately, and as expected, it shows no 'access-internal' routes,
a clear symptom that the subscriber states have not been rebuilt yet. The reason
is pretty straightforward: without any kind of keepalive machinery, the control
plane of the IXIA-simulated end-user CPEs is not (yet) aware that a catastrophic
event happened. The only way for a CPE to reconnect is to wait for its DHCP
lease timer to expire, which in turn triggers a DHCP REQUEST to renew its lease
with the DHCP server. The BNG can then use this DHCP packet to trigger all the
auto-configuration machinery, which recreates the control plane states and the
data plane flows to forward subscriber traffic.

NOTE You could also use a PPP-based access model (which can leverage the Link
Control Protocol (LCP) to periodically check the client/server connection) to
demonstrate a faster service recovery scenario, but the DHCP example is more
relevant, as one of the goals of this exercise is to highlight the difference, in
both meaning and recovery time, between a simple one-GNF Junos node slicing
migration and an end-to-end service restoration.

The current situation is easily shown by comparing the BNG and IXIA view of the
same service:
{master}
magno@MX960-4-GNF-RE0> show subscribers summary 
Mar 09 16:05:45

Subscribers by State
   Total: 0

Subscribers by Client Type
   Total: 0

{master}
magno@MX960-4-GNF-RE0>

On the BNG, no subscriber states at all… while on the IXIA side, everything looks
like it is working perfectly, as shown in Figure 6.6.

Figure 6.6 IXIA DHCP Subscribers Status - 64k Up

In the real world, the only way to fix this situation is to wait for the subscribers'
CPEs to renew their DHCP leases. Indeed, it often happens that some time before
the maintenance window, operators change the DHCP server lease time to a much
smaller value to force the installed base to shorten their lease times, and thus
minimize the service interruption during the maintenance window. They then set
the lease time back to the default value once the activity is complete.
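The effect of this lease-shortening trick is easy to quantify. A sketch, again assuming the standard T1 = 50% renewal rule from RFC 2131 (the 300-second value is just an illustrative shortened lease, not from this lab):

```python
def latest_renewal(lease_seconds):
    # Worst case: a CPE whose lease was granted just before the cutover
    # will not attempt renewal until T1 = 50% of the lease has elapsed.
    return lease_seconds / 2

for lease in (3600, 300):
    print(f"lease {lease:>4} s -> last subscriber retries within {latest_renewal(lease):.0f} s")
```

Shortening the lease ahead of the maintenance window shrinks the worst-case subscriber recovery from half an hour to a couple of minutes.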
In our case we are much luckier, as we control the simulated CPEs, so we can
trigger a DHCP binding renewal on all of them. See Figure 6.7. Let's do that,
and then check what happens:

Figure 6.7 IXIA Ongoing Action: Renew DHCP Client 1

Once the IXIA starts sending the DHCP renews, the subscriber states are rebuilt
as expected:
{master}
magno@MX960-4-GNF-RE0> show subscribers summary | refresh 5 
Mar 09 16:07:10
---(refreshed at 2019-03-09 16:07:10 CET)---

Subscribers by State
   Total: 0

--- SNIP ----
---(refreshed at 2019-03-09 16:07:25 CET)---

Subscribers by State
   Init: 32
   Total: 32
161 Production Single Chassis MX to Single-GNF Junos Node Slicing Procedure

Subscribers by Client Type
   VLAN: 32
   Total: 32
---(refreshed at 2019-03-09 16:07:30 CET)---

Subscribers by State
   Init: 78
   Configured: 50
   Active: 3928
   Total: 4056

Subscribers by Client Type
   DHCP: 2016
   VLAN: 2040
   Total: 4056
---- SNIP ----

---(refreshed at 2019-03-09 16:10:10 CET)---

Subscribers by State
   Init: 114
   Configured: 11
   Active: 127859
   Total: 127984

Subscribers by Client Type
   DHCP: 63984
   VLAN: 64000
   Total: 127984
---(refreshed at 2019-03-09 16:10:15 CET)---

Subscribers by State
   Active: 128000
   Total: 128000

Subscribers by Client Type
   DHCP: 64000
   VLAN: 64000
   Total: 128000
---(refreshed at 2019-03-09 16:10:20 CET)---

Subscribers by State
   Active: 128000
   Total: 128000

Subscribers by Client Type
   DHCP: 64000
   VLAN: 64000
   Total: 128000
---(*more 100%)---[abort]
                                        
{master}
magno@MX960-4-GNF-RE0> show subscribers summary    
Mar 09 16:10:42

Subscribers by State
   Active: 128000
   Total: 128000

Subscribers by Client Type
   DHCP: 64000
   VLAN: 64000
   Total: 128000

{master}
magno@MX960-4-GNF-RE0> show route summary table inet.0 
Mar 09 16:11:02
Autonomous system number: 65203
Router ID: 72.255.255.1

inet.0: 164209 destinations, 164210 routes (164209 active, 0 holddown, 0 hidden)
              Direct:    105 routes,    104 active
               Local:    103 routes,    103 active
                 BGP: 100000 routes, 100000 active
              Static:      2 routes,      2 active
     Access-internal:  64000 routes,  64000 active

{master}
magno@MX960-4-GNF-RE0>

Perfect, that’s much better. It now looks like the end-to-end services are completely
restored. In the end, the disruptive GNF configuration commit was performed at
15:58:56 and the end-to-end service restoration was achieved at 16:10:42, which
means our service disruption lasted about 11 minutes and 46 seconds. That’s not
too bad at all if we consider the nature of the services involved!
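As a quick sanity check of that figure, the disruption window can be computed
directly from the two timestamps in the lab logs (both fall on the same day):

```python
from datetime import datetime

# Timestamps taken from the lab session (same day assumed).
commit = datetime.strptime("15:58:56", "%H:%M:%S")    # disruptive GNF commit
restored = datetime.strptime("16:10:42", "%H:%M:%S")  # last subscriber active

outage = restored - commit
print(outage)  # 0:11:46
```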
It’s now possible to remove all the configurations related to the protocol from the
MX chassis as they are not used anymore. Indeed, the line cards are now attached
to the new routing engines, which means no data plane resources are available to
the internal routing engines, which makes the old configuration completely
ineffective.

Summary
This has been an extraordinary Junos node slicing day. Ease of setup and
flexibility are the undeniable advantages this new technology brings to
networking. Thank you for labbing Junos node slicing, and remember that more
information can be found in the Junos Node Slicing Feature Guide, which is
constantly updated as new features are released. Look for it in the Juniper
TechLibrary:
https://www.juniper.net/documentation/en_US/junos/information-products/pathway-pages/junos-node-slicing/junos-node-slicing.pdf.
Appendix

Node Slicing Lab Configurations

Here are the final configurations used in this book. Each configuration is
identified by the name of the element it belongs to and by the use cases it
covers. To save space, the BGP protocol and interface configurations used to
establish the 100 peering sessions are shown explicitly just once.
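Those 100 peering interfaces follow a single pattern: consecutive units and
VLAN IDs, with /30 point-to-point addresses stepping by 4 (99.99.99.1/30,
99.99.99.5/30, and so on, rolling into 99.99.100.0/24 after unit 363). A
throwaway script like the following can regenerate the repetitive `set` lines;
it is purely an illustrative helper, not part of the lab tooling, with the
interface name and numbering taken from the CORE-GNF-BGP listing:

```python
import ipaddress

def gen_peer_units(ifname="xe-1/0/0", first_unit=300,
                   first_net="99.99.99.0/30", count=100):
    """Emit the repetitive per-VLAN /30 'set' lines used for the BGP peers."""
    base = ipaddress.ip_network(first_net).network_address
    lines = []
    for i in range(count):
        unit = first_unit + i          # unit number == VLAN ID in the listing
        local = base + 4 * i + 1       # our side of the i-th /30 subnet
        lines.append(f"set interfaces {ifname} unit {unit} vlan-id {unit}")
        lines.append(f"set interfaces {ifname} unit {unit} "
                     f"family inet address {local}/30")
    return lines

print("\n".join(gen_peer_units(count=2)))
```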

MX960-4-BSYS-SET
The MX960 B-SYS configuration for Chapters 3 and 4 cases:
set version 18.3R1.9
set groups re0 system host-name MX960-4-RE0
set groups re0 system backup-router 172.30.177.1
set groups re0 system backup-router destination 172.30.176.0/20
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.178.71/24
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set groups re1 system host-name MX960-4-RE1
set groups re1 system backup-router 172.30.177.1
set groups re1 system backup-router destination 172.30.176.0/20
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.178.72/24
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system commit persist-groups-inheritance
set system login user magno uid 2001
set system login user magno class super-user
set system login user magno authentication encrypted-password “-- SNIP --”
set system login user magno authentication ssh-rsa “ssh-rsa -- SNIP --”
set system login user remote uid 2000
set system login user remote class super-user
set system root-authentication encrypted-password “-- SNIP --”
set system domain-name poc-nl.jnpr.net
set system backup-router 172.30.177.1
set system backup-router destination 172.30.176.0/20
set system time-zone Europe/Amsterdam
set system authentication-order password
set system authentication-order radius
set system name-server 172.30.207.10
set system name-server 172.30.207.13
set system radius-server 172.30.176.9 secret “$9$DMHPTz36CtOqmBEclLXik.mfT6/t1Eyn/”
set system radius-server 172.30.176.9 retry 3
set system radius-server 172.30.177.4 secret “$9$CgY9p1EcylvWx0B7VwgUDtuOBIEleWNVYre”
set system radius-server 172.30.177.4 retry 3
set system services ftp
set system services ssh root-login allow
set system services ssh client-alive-interval 120
set system services telnet
set system services xnm-clear-text
set system services netconf ssh
set system services rest http
set system services rest enable-explorer
set system services web-management http
set system syslog user * any emergency
set system syslog host 172.30.189.13 any notice
set system syslog host 172.30.189.13 authorization info
set system syslog host 172.30.189.13 interactive-commands info
set system syslog host 172.30.189.14 any notice
set system syslog host 172.30.189.14 authorization info
set system syslog host 172.30.189.14 interactive-commands info
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file messages match-strings “!*0x44b*”
set system compress-configuration-files
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis fpc 0 pic 0 pic-mode 100G
set chassis fpc 0 pic 1 pic-mode 100G
set chassis network-services enhanced-ip
set chassis network-slices guest-network-functions gnf 1 fpcs 6
set chassis network-slices guest-network-functions gnf 1 af0 description “AF0 to CORE-GNF AF0”
set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf id 2
set chassis network-slices guest-network-functions gnf 1 af0 peer-gnf af0
set chassis network-slices guest-network-functions gnf 2 fpcs 1
set chassis network-slices guest-network-functions gnf 2 af0 description “AF0 to EDGE-GNF AF0”
set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf id 1
set chassis network-slices guest-network-functions gnf 2 af0 peer-gnf af0
set interfaces lo0 unit 0 family inet address 192.177.0.196/32 preferred
set interfaces lo0 unit 0 family inet address 127.0.0.1/32
set interfaces lo0 unit 0 family iso address 49.0177.0000.0000.0196.00
set snmp location “AMS, EPOC location=3.09”
set snmp contact “emea-poc@juniper.net”
set snmp community public authorization read-only
set snmp community public clients 172.30.0.0/16
set snmp community public clients 0.0.0.0/0 restrict
set snmp community private authorization read-write
set snmp community private clients 172.30.0.0/16
set snmp community private clients 0.0.0.0/0 restrict
set snmp trap-options source-address 172.30.177.196
set routing-options nonstop-routing
set routing-options static route 172.16.0.0/12 next-hop 172.30.177.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options router-id 192.177.0.196
set routing-options autonomous-system 100
set protocols layer2-control nonstop-bridging

EDGE-GNF-AF INTERFACE-ADV
For the advanced AF Interface use cases:

set version 18.3R1.9
set groups re0 system host-name EDGE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.176/24
set groups re1 system host-name EDGE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.177/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system configuration-database max-db-size 629145600
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa “ssh-rsa -- SNIP --”
set system root-authentication encrypted-password “ -- SNIP -- “
set system time-zone Europe/Amsterdam
set system use-imported-time-zones
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system services subscriber-management enable
deactivate system services subscriber-management
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis aggregated-devices maximum-links 64
set chassis pseudowire-service device-count 10
set chassis redundancy-group interface-type redundant-logical-tunnel device-count 1
set chassis fpc 6 pic 0 tunnel-services bandwidth 40g
set chassis fpc 6 pic 1 tunnel-services bandwidth 40g
set chassis network-services enhanced-ip
set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces xe-6/0/0 unit 50 encapsulation vlan-bridge
set interfaces xe-6/0/0 unit 50 vlan-id 50
set interfaces xe-6/0/0 unit 100 encapsulation vlan-vpls
set interfaces xe-6/0/0 unit 100 vlan-id 100
set interfaces xe-6/0/0 unit 200 encapsulation vlan-bridge
set interfaces xe-6/0/0 unit 200 vlan-id 200
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9224
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 50 encapsulation vlan-ccc
set interfaces af0 unit 50 vlan-id 50
set interfaces af0 unit 100 encapsulation vlan-ccc
set interfaces af0 unit 100 vlan-id 100
set interfaces af0 unit 200 encapsulation vlan-ccc
set interfaces af0 unit 200 vlan-id 200
set interfaces af0 unit 200 family ccc mtu 9216
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set interfaces ps0 anchor-point lt-6/0/0
set interfaces ps0 flexible-vlan-tagging
set interfaces ps0 mtu 9216
set interfaces ps0 encapsulation flexible-ethernet-services
set interfaces ps0 unit 0 encapsulation vlan-ccc
set interfaces ps0 unit 0 vlan-id 200
set interfaces ps0 unit 200 encapsulation vlan-bridge
set interfaces ps0 unit 200 vlan-id 200
set interfaces ps1 anchor-point lt-6/1/0
set interfaces ps1 flexible-vlan-tagging
set interfaces ps1 mtu 9216
set interfaces ps1 encapsulation flexible-ethernet-services
set interfaces ps1 unit 0 encapsulation ethernet-ccc
set interfaces ps1 unit 100 encapsulation vlan-vpls
set interfaces ps1 unit 100 vlan-id 100
set interfaces ps2 anchor-point lt-6/0/0
set interfaces ps2 flexible-vlan-tagging
set interfaces ps2 mtu 9216
set interfaces ps2 encapsulation flexible-ethernet-services
set interfaces ps2 unit 0 encapsulation ethernet-ccc
set interfaces ps2 unit 50 encapsulation vlan-bridge
set interfaces ps2 unit 50 vlan-id 50
set routing-options nonstop-routing
set routing-options static route 100.100.0.0/16 discard
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options autonomous-system 65203
set routing-options forwarding-table chained-composite-next-hop ingress l2ckt
set routing-options forwarding-table chained-composite-next-hop ingress fec129-vpws
set routing-options forwarding-table chained-composite-next-hop ingress no-evpn
set routing-options forwarding-table chained-composite-next-hop ingress labeled-bgp inet6
set routing-options forwarding-table chained-composite-next-hop ingress l3vpn
set protocols mpls interface af0.72
set protocols l2circuit local-switching interface af0.200 end-interface interface ps0.0
set protocols l2circuit local-switching interface af0.100 end-interface interface ps1.0
set protocols l2circuit local-switching interface af0.100 ignore-encapsulation-mismatch
set protocols l2circuit local-switching interface af0.50 end-interface interface ps2.0
set protocols l2circuit local-switching interface af0.50 ignore-encapsulation-mismatch
set protocols layer2-control nonstop-bridging
set routing-instances EVPN-VLAN-50 instance-type evpn
set routing-instances EVPN-VLAN-50 vlan-id 50
set routing-instances EVPN-VLAN-50 interface xe-6/0/0.50
set routing-instances EVPN-VLAN-50 interface ps2.50
set routing-instances EVPN-VLAN-50 route-distinguisher 72.255.255.1:150
set routing-instances EVPN-VLAN-50 vrf-target target:65203:50
set routing-instances EVPN-VLAN-50 protocols evpn
set routing-instances VPLS-VLAN100 instance-type vpls
set routing-instances VPLS-VLAN100 vlan-id 100
set routing-instances VPLS-VLAN100 interface xe-6/0/0.100
set routing-instances VPLS-VLAN100 interface ps1.100
set routing-instances VPLS-VLAN100 protocols vpls vpls-id 100
set bridge-domains VLAN-200 vlan-id 200 
set bridge-domains VLAN-200 interface ps0.200
set bridge-domains VLAN-200 interface xe-6/0/0.200

CORE-GNF-AF INTERFACE-ADV
The CORE-GNF configuration for the advanced AF Interface use cases:
set version 18.3R1.9
set groups re0 system host-name CORE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.179/24
set groups re1 system host-name CORE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.180/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa “ssh-rsa -- SNIP -- “
set system root-authentication encrypted-password “ -- SNIP -- “
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis redundancy-group interface-type redundant-logical-tunnel device-count 1
deactivate chassis redundancy-group interface-type redundant-logical-tunnel
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 50 encapsulation vlan-bridge
set interfaces xe-1/0/0 unit 50 vlan-id 50
set interfaces xe-1/0/0 unit 100 encapsulation vlan-bridge
set interfaces xe-1/0/0 unit 100 vlan-id 100
set interfaces xe-1/0/0 unit 200 encapsulation vlan-bridge
set interfaces xe-1/0/0 unit 200 vlan-id 200
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 50 encapsulation vlan-bridge
set interfaces af0 unit 50 vlan-id 50
set interfaces af0 unit 100 encapsulation vlan-bridge
set interfaces af0 unit 100 vlan-id 100
set interfaces af0 unit 200 encapsulation vlan-bridge
set interfaces af0 unit 200 vlan-id 200
set interfaces irb unit 50 family inet address 50.50.50.99/24
set interfaces lo0 unit 0 family inet address 72.255.255.2/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0002.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.2/128
set routing-options nonstop-routing
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 no-readvertise
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 192.168.0.0/16 no-readvertise
set routing-options router-id 72.255.255.2
set routing-options autonomous-system 65203
set protocols layer2-control nonstop-bridging
set bridge-domains VLAN-100 vlan-id 100
set bridge-domains VLAN-100 interface af0.100
set bridge-domains VLAN-100 interface xe-1/0/0.100
set bridge-domains VLAN-200 vlan-id 200
set bridge-domains VLAN-200 interface xe-1/0/0.200
set bridge-domains VLAN-200 interface af0.200
set bridge-domains VLAN-50 vlan-id 50
set bridge-domains VLAN-50 interface xe-1/0/0.50
set bridge-domains VLAN-50 interface af0.50
set bridge-domains VLAN-50 routing-interface irb.50

EDGE-GNF-BBE
The GNF configuration for Chapter 4 cases:
set version 18.3R1.9
set groups re0 system host-name EDGE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.176/24
set groups re1 system host-name EDGE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.177/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system configuration-database max-db-size 629145600
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa “ssh-rsa -- SNIP -- “
set system root-authentication encrypted-password “  -- SNIP -- “
set system time-zone Europe/Amsterdam
set system use-imported-time-zones
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system services subscriber-management enable
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit “$junos-interface-unit” no-traps
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit “$junos-interface-unit” proxy-arp
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit “$junos-interface-unit” vlan-
tags outer “$junos-stacked-vlan-id”
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit “$junos-interface-unit” vlan-
tags inner “$junos-vlan-id”
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit “$junos-interface-unit” demux-
options underlying-interface “$junos-underlying-interface”
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit “$junos-interface-
unit” family inet unnumbered-address lo0.0
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit “$junos-interface-
unit” family inet unnumbered-address preferred-source-address 100.100.255.254
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit “$junos-interface-unit” demux-
options underlying-interface “$junos-underlying-interface”
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit “$junos-interface-
unit” family inet unnumbered-address lo0.0
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis aggregated-devices maximum-links 64
set chassis network-services enhanced-ip
set access-profile NOAUTH
set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-
VLAN accept dhcp-v4
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN ranges 1000-
2000,any
set interfaces xe-6/0/0 auto-configure remove-when-no-subscribers
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9216
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.1/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.1/126
set interfaces af0 unit 72 family mpls  
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set forwarding-options dhcp-relay server-group DHCPv4 99.99.10.10
set forwarding-options dhcp-relay group DHCPv4-ACTIVE active-server-group DHCPv4
set forwarding-options dhcp-relay group DHCPv4-ACTIVE interface xe-6/0/0.0
set forwarding-options dhcp-relay no-snoop
set accounting-options periodic-refresh disable
set routing-options nonstop-routing
set routing-options static route 100.100.0.0/16 discard
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options autonomous-system 65203
set protocols bgp group iBGP type internal
set protocols bgp group iBGP local-address 72.255.255.1
set protocols bgp group iBGP family inet unicast
set protocols bgp group iBGP family inet6 unicast
set protocols bgp group iBGP export BBE-POOL
set protocols bgp group iBGP neighbor 72.255.255.2
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive
set protocols layer2-control nonstop-bridging
set policy-options policy-statement BBE-POOL term OK from protocol static
set policy-options policy-statement BBE-POOL term OK from route-filter 100.100.0.0/16 exact
set policy-options policy-statement BBE-POOL term OK then accept
set access profile NOAUTH authentication-order none

CORE-GNF-BGP
The GNF configuration for Chapter 4 cases:
set version 18.3R1.9
set groups re0 system host-name CORE-GNF-re0
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.179/24
set groups re1 system host-name CORE-GNF-re1
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.178/24 master-only
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.180/24
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system login user magno uid 2000
set system login user magno class super-user
set system login user magno authentication ssh-rsa “ssh-rsa -- SNIP -- “
set system root-authentication encrypted-password “ -- SNIP -- “
set system services ftp
set system services ssh root-login allow
set system services netconf ssh
set system services rest http
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set interfaces xe-1/0/0 description “Link to IXIA LC 7 Port 8”
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 10 vlan-id 10
set interfaces xe-1/0/0 unit 10 family inet address 99.99.10.1/24
set interfaces xe-1/0/0 unit 300 vlan-id 300
set interfaces xe-1/0/0 unit 300 family inet address 99.99.99.1/30
set interfaces xe-1/0/0 unit 300 family inet6 address 2002::99/126
set interfaces xe-1/0/0 unit 301 vlan-id 301
set interfaces xe-1/0/0 unit 301 family inet address 99.99.99.5/30
set interfaces xe-1/0/0 unit 302 vlan-id 302
set interfaces xe-1/0/0 unit 302 family inet address 99.99.99.9/30
set interfaces xe-1/0/0 unit 303 vlan-id 303
set interfaces xe-1/0/0 unit 303 family inet address 99.99.99.13/30
set interfaces xe-1/0/0 unit 304 vlan-id 304
set interfaces xe-1/0/0 unit 304 family inet address 99.99.99.17/30
set interfaces xe-1/0/0 unit 305 vlan-id 305
set interfaces xe-1/0/0 unit 305 family inet address 99.99.99.21/30
set interfaces xe-1/0/0 unit 306 vlan-id 306
set interfaces xe-1/0/0 unit 306 family inet address 99.99.99.25/30
set interfaces xe-1/0/0 unit 307 vlan-id 307
set interfaces xe-1/0/0 unit 307 family inet address 99.99.99.29/30
set interfaces xe-1/0/0 unit 308 vlan-id 308
set interfaces xe-1/0/0 unit 308 family inet address 99.99.99.33/30
set interfaces xe-1/0/0 unit 309 vlan-id 309
set interfaces xe-1/0/0 unit 309 family inet address 99.99.99.37/30
set interfaces xe-1/0/0 unit 310 vlan-id 310
set interfaces xe-1/0/0 unit 310 family inet address 99.99.99.41/30
set interfaces xe-1/0/0 unit 311 vlan-id 311
set interfaces xe-1/0/0 unit 311 family inet address 99.99.99.45/30
set interfaces xe-1/0/0 unit 312 vlan-id 312
set interfaces xe-1/0/0 unit 312 family inet address 99.99.99.49/30
set interfaces xe-1/0/0 unit 313 vlan-id 313
set interfaces xe-1/0/0 unit 313 family inet address 99.99.99.53/30
set interfaces xe-1/0/0 unit 314 vlan-id 314
set interfaces xe-1/0/0 unit 314 family inet address 99.99.99.57/30
set interfaces xe-1/0/0 unit 315 vlan-id 315
set interfaces xe-1/0/0 unit 315 family inet address 99.99.99.61/30
set interfaces xe-1/0/0 unit 316 vlan-id 316
set interfaces xe-1/0/0 unit 316 family inet address 99.99.99.65/30
set interfaces xe-1/0/0 unit 317 vlan-id 317
set interfaces xe-1/0/0 unit 317 family inet address 99.99.99.69/30
set interfaces xe-1/0/0 unit 318 vlan-id 318
set interfaces xe-1/0/0 unit 318 family inet address 99.99.99.73/30
set interfaces xe-1/0/0 unit 319 vlan-id 319
set interfaces xe-1/0/0 unit 319 family inet address 99.99.99.77/30
set interfaces xe-1/0/0 unit 320 vlan-id 320
set interfaces xe-1/0/0 unit 320 family inet address 99.99.99.81/30
set interfaces xe-1/0/0 unit 321 vlan-id 321
set interfaces xe-1/0/0 unit 321 family inet address 99.99.99.85/30
set interfaces xe-1/0/0 unit 322 vlan-id 322
set interfaces xe-1/0/0 unit 322 family inet address 99.99.99.89/30
set interfaces xe-1/0/0 unit 323 vlan-id 323
set interfaces xe-1/0/0 unit 323 family inet address 99.99.99.93/30
set interfaces xe-1/0/0 unit 324 vlan-id 324
set interfaces xe-1/0/0 unit 324 family inet address 99.99.99.97/30
set interfaces xe-1/0/0 unit 325 vlan-id 325
set interfaces xe-1/0/0 unit 325 family inet address 99.99.99.101/30
set interfaces xe-1/0/0 unit 326 vlan-id 326
set interfaces xe-1/0/0 unit 326 family inet address 99.99.99.105/30
set interfaces xe-1/0/0 unit 327 vlan-id 327
set interfaces xe-1/0/0 unit 327 family inet address 99.99.99.109/30
set interfaces xe-1/0/0 unit 328 vlan-id 328
set interfaces xe-1/0/0 unit 328 family inet address 99.99.99.113/30
set interfaces xe-1/0/0 unit 329 vlan-id 329
set interfaces xe-1/0/0 unit 329 family inet address 99.99.99.117/30
set interfaces xe-1/0/0 unit 330 vlan-id 330
set interfaces xe-1/0/0 unit 330 family inet address 99.99.99.121/30
set interfaces xe-1/0/0 unit 331 vlan-id 331
set interfaces xe-1/0/0 unit 331 family inet address 99.99.99.125/30
set interfaces xe-1/0/0 unit 332 vlan-id 332
set interfaces xe-1/0/0 unit 332 family inet address 99.99.99.129/30
set interfaces xe-1/0/0 unit 333 vlan-id 333
set interfaces xe-1/0/0 unit 333 family inet address 99.99.99.133/30
set interfaces xe-1/0/0 unit 334 vlan-id 334
set interfaces xe-1/0/0 unit 334 family inet address 99.99.99.137/30
set interfaces xe-1/0/0 unit 335 vlan-id 335
set interfaces xe-1/0/0 unit 335 family inet address 99.99.99.141/30
set interfaces xe-1/0/0 unit 336 vlan-id 336
set interfaces xe-1/0/0 unit 336 family inet address 99.99.99.145/30
set interfaces xe-1/0/0 unit 337 vlan-id 337
set interfaces xe-1/0/0 unit 337 family inet address 99.99.99.149/30
set interfaces xe-1/0/0 unit 338 vlan-id 338
set interfaces xe-1/0/0 unit 338 family inet address 99.99.99.153/30
set interfaces xe-1/0/0 unit 339 vlan-id 339
set interfaces xe-1/0/0 unit 339 family inet address 99.99.99.157/30
set interfaces xe-1/0/0 unit 340 vlan-id 340
set interfaces xe-1/0/0 unit 340 family inet address 99.99.99.161/30
set interfaces xe-1/0/0 unit 341 vlan-id 341
set interfaces xe-1/0/0 unit 341 family inet address 99.99.99.165/30
set interfaces xe-1/0/0 unit 342 vlan-id 342
set interfaces xe-1/0/0 unit 342 family inet address 99.99.99.169/30
set interfaces xe-1/0/0 unit 343 vlan-id 343
set interfaces xe-1/0/0 unit 343 family inet address 99.99.99.173/30
set interfaces xe-1/0/0 unit 344 vlan-id 344
set interfaces xe-1/0/0 unit 344 family inet address 99.99.99.177/30
set interfaces xe-1/0/0 unit 345 vlan-id 345
set interfaces xe-1/0/0 unit 345 family inet address 99.99.99.181/30
set interfaces xe-1/0/0 unit 346 vlan-id 346
set interfaces xe-1/0/0 unit 346 family inet address 99.99.99.185/30
set interfaces xe-1/0/0 unit 347 vlan-id 347
set interfaces xe-1/0/0 unit 347 family inet address 99.99.99.189/30
set interfaces xe-1/0/0 unit 348 vlan-id 348
set interfaces xe-1/0/0 unit 348 family inet address 99.99.99.193/30
set interfaces xe-1/0/0 unit 349 vlan-id 349
set interfaces xe-1/0/0 unit 349 family inet address 99.99.99.197/30
set interfaces xe-1/0/0 unit 350 vlan-id 350
set interfaces xe-1/0/0 unit 350 family inet address 99.99.99.201/30
set interfaces xe-1/0/0 unit 351 vlan-id 351
set interfaces xe-1/0/0 unit 351 family inet address 99.99.99.205/30
set interfaces xe-1/0/0 unit 352 vlan-id 352
set interfaces xe-1/0/0 unit 352 family inet address 99.99.99.209/30
set interfaces xe-1/0/0 unit 353 vlan-id 353
set interfaces xe-1/0/0 unit 353 family inet address 99.99.99.213/30
set interfaces xe-1/0/0 unit 354 vlan-id 354
set interfaces xe-1/0/0 unit 354 family inet address 99.99.99.217/30
set interfaces xe-1/0/0 unit 355 vlan-id 355
set interfaces xe-1/0/0 unit 355 family inet address 99.99.99.221/30
set interfaces xe-1/0/0 unit 356 vlan-id 356
set interfaces xe-1/0/0 unit 356 family inet address 99.99.99.225/30
set interfaces xe-1/0/0 unit 357 vlan-id 357
set interfaces xe-1/0/0 unit 357 family inet address 99.99.99.229/30
set interfaces xe-1/0/0 unit 358 vlan-id 358
set interfaces xe-1/0/0 unit 358 family inet address 99.99.99.233/30
set interfaces xe-1/0/0 unit 359 vlan-id 359
set interfaces xe-1/0/0 unit 359 family inet address 99.99.99.237/30
set interfaces xe-1/0/0 unit 360 vlan-id 360
set interfaces xe-1/0/0 unit 360 family inet address 99.99.99.241/30
set interfaces xe-1/0/0 unit 361 vlan-id 361
set interfaces xe-1/0/0 unit 361 family inet address 99.99.99.245/30
set interfaces xe-1/0/0 unit 362 vlan-id 362
set interfaces xe-1/0/0 unit 362 family inet address 99.99.99.249/30
set interfaces xe-1/0/0 unit 363 vlan-id 363
set interfaces xe-1/0/0 unit 363 family inet address 99.99.99.253/30
set interfaces xe-1/0/0 unit 364 vlan-id 364
set interfaces xe-1/0/0 unit 364 family inet address 99.99.100.1/30
set interfaces xe-1/0/0 unit 365 vlan-id 365
set interfaces xe-1/0/0 unit 365 family inet address 99.99.100.5/30
set interfaces xe-1/0/0 unit 366 vlan-id 366
set interfaces xe-1/0/0 unit 366 family inet address 99.99.100.9/30
set interfaces xe-1/0/0 unit 367 vlan-id 367
set interfaces xe-1/0/0 unit 367 family inet address 99.99.100.13/30
set interfaces xe-1/0/0 unit 368 vlan-id 368
set interfaces xe-1/0/0 unit 368 family inet address 99.99.100.17/30
set interfaces xe-1/0/0 unit 369 vlan-id 369
set interfaces xe-1/0/0 unit 369 family inet address 99.99.100.21/30
set interfaces xe-1/0/0 unit 370 vlan-id 370
set interfaces xe-1/0/0 unit 370 family inet address 99.99.100.25/30
set interfaces xe-1/0/0 unit 371 vlan-id 371
set interfaces xe-1/0/0 unit 371 family inet address 99.99.100.29/30
set interfaces xe-1/0/0 unit 372 vlan-id 372
set interfaces xe-1/0/0 unit 372 family inet address 99.99.100.33/30
set interfaces xe-1/0/0 unit 373 vlan-id 373
set interfaces xe-1/0/0 unit 373 family inet address 99.99.100.37/30
set interfaces xe-1/0/0 unit 374 vlan-id 374
set interfaces xe-1/0/0 unit 374 family inet address 99.99.100.41/30
set interfaces xe-1/0/0 unit 375 vlan-id 375
set interfaces xe-1/0/0 unit 375 family inet address 99.99.100.45/30
set interfaces xe-1/0/0 unit 376 vlan-id 376
set interfaces xe-1/0/0 unit 376 family inet address 99.99.100.49/30
set interfaces xe-1/0/0 unit 377 vlan-id 377
set interfaces xe-1/0/0 unit 377 family inet address 99.99.100.53/30
set interfaces xe-1/0/0 unit 378 vlan-id 378
set interfaces xe-1/0/0 unit 378 family inet address 99.99.100.57/30
set interfaces xe-1/0/0 unit 379 vlan-id 379
set interfaces xe-1/0/0 unit 379 family inet address 99.99.100.61/30
set interfaces xe-1/0/0 unit 380 vlan-id 380
set interfaces xe-1/0/0 unit 380 family inet address 99.99.100.65/30
set interfaces xe-1/0/0 unit 381 vlan-id 381
set interfaces xe-1/0/0 unit 381 family inet address 99.99.100.69/30
set interfaces xe-1/0/0 unit 382 vlan-id 382
set interfaces xe-1/0/0 unit 382 family inet address 99.99.100.73/30
set interfaces xe-1/0/0 unit 383 vlan-id 383
set interfaces xe-1/0/0 unit 383 family inet address 99.99.100.77/30
set interfaces xe-1/0/0 unit 384 vlan-id 384
set interfaces xe-1/0/0 unit 384 family inet address 99.99.100.81/30
set interfaces xe-1/0/0 unit 385 vlan-id 385
set interfaces xe-1/0/0 unit 385 family inet address 99.99.100.85/30
set interfaces xe-1/0/0 unit 386 vlan-id 386
set interfaces xe-1/0/0 unit 386 family inet address 99.99.100.89/30
set interfaces xe-1/0/0 unit 387 vlan-id 387
set interfaces xe-1/0/0 unit 387 family inet address 99.99.100.93/30
set interfaces xe-1/0/0 unit 388 vlan-id 388
set interfaces xe-1/0/0 unit 388 family inet address 99.99.100.97/30
set interfaces xe-1/0/0 unit 389 vlan-id 389
set interfaces xe-1/0/0 unit 389 family inet address 99.99.100.101/30
set interfaces xe-1/0/0 unit 390 vlan-id 390
set interfaces xe-1/0/0 unit 390 family inet address 99.99.100.105/30
set interfaces xe-1/0/0 unit 391 vlan-id 391
set interfaces xe-1/0/0 unit 391 family inet address 99.99.100.109/30
set interfaces xe-1/0/0 unit 392 vlan-id 392
set interfaces xe-1/0/0 unit 392 family inet address 99.99.100.113/30
set interfaces xe-1/0/0 unit 393 vlan-id 393
set interfaces xe-1/0/0 unit 393 family inet address 99.99.100.117/30
set interfaces xe-1/0/0 unit 394 vlan-id 394
set interfaces xe-1/0/0 unit 394 family inet address 99.99.100.121/30
set interfaces xe-1/0/0 unit 395 vlan-id 395
set interfaces xe-1/0/0 unit 395 family inet address 99.99.100.125/30
set interfaces xe-1/0/0 unit 396 vlan-id 396
set interfaces xe-1/0/0 unit 396 family inet address 99.99.100.129/30
set interfaces xe-1/0/0 unit 397 vlan-id 397
set interfaces xe-1/0/0 unit 397 family inet address 99.99.100.133/30
set interfaces xe-1/0/0 unit 398 vlan-id 398
set interfaces xe-1/0/0 unit 398 family inet address 99.99.100.137/30
set interfaces xe-1/0/0 unit 399 vlan-id 399
set interfaces xe-1/0/0 unit 399 family inet address 99.99.100.141/30
set interfaces af0 flexible-vlan-tagging
set interfaces af0 mtu 9224
set interfaces af0 encapsulation flexible-ethernet-services
set interfaces af0 unit 72 vlan-id 72
set interfaces af0 unit 72 family inet address 72.0.0.2/30
set interfaces af0 unit 72 family iso
set interfaces af0 unit 72 family inet6 address fec0::71.0.0.2/126
set interfaces af0 unit 72 family mpls
set interfaces lo0 unit 0 family inet address 72.255.255.2/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0002.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.2/128
set routing-options nonstop-routing
set routing-options static route 10.0.0.0/8 next-hop 172.30.181.1
set routing-options static route 10.0.0.0/8 no-readvertise
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 192.168.0.0/16 next-hop 172.30.181.1
set routing-options static route 192.168.0.0/16 no-readvertise
set routing-options aggregate route 0.0.0.0/0 policy AGGR
set routing-options aggregate route 0.0.0.0/0 as-path origin igp
set routing-options router-id 72.255.255.2
set routing-options autonomous-system 65203
set protocols bgp precision-timers
set protocols bgp group eBGP family inet unicast
set protocols bgp group eBGP family inet6 unicast
set protocols bgp group eBGP export ADV-MINE
set protocols bgp group eBGP neighbor 99.99.99.2 peer-as 65400
set protocols bgp group eBGP neighbor 99.99.99.6 peer-as 65401
set protocols bgp group eBGP neighbor 99.99.99.10 peer-as 65402
set protocols bgp group eBGP neighbor 99.99.99.14 peer-as 65403
set protocols bgp group eBGP neighbor 99.99.99.18 peer-as 65404
set protocols bgp group eBGP neighbor 99.99.99.22 peer-as 65405
set protocols bgp group eBGP neighbor 99.99.99.26 peer-as 65406
set protocols bgp group eBGP neighbor 99.99.99.30 peer-as 65407
set protocols bgp group eBGP neighbor 99.99.99.34 peer-as 65408
set protocols bgp group eBGP neighbor 99.99.99.38 peer-as 65409
set protocols bgp group eBGP neighbor 99.99.99.42 peer-as 65410
set protocols bgp group eBGP neighbor 99.99.99.46 peer-as 65411
set protocols bgp group eBGP neighbor 99.99.99.50 peer-as 65412
set protocols bgp group eBGP neighbor 99.99.99.54 peer-as 65413
set protocols bgp group eBGP neighbor 99.99.99.58 peer-as 65414
set protocols bgp group eBGP neighbor 99.99.99.62 peer-as 65415
set protocols bgp group eBGP neighbor 99.99.99.66 peer-as 65416
set protocols bgp group eBGP neighbor 99.99.99.70 peer-as 65417
set protocols bgp group eBGP neighbor 99.99.99.74 peer-as 65418
set protocols bgp group eBGP neighbor 99.99.99.78 peer-as 65419
set protocols bgp group eBGP neighbor 99.99.99.82 peer-as 65420
set protocols bgp group eBGP neighbor 99.99.99.86 peer-as 65421
set protocols bgp group eBGP neighbor 99.99.99.90 peer-as 65422
set protocols bgp group eBGP neighbor 99.99.99.94 peer-as 65423
set protocols bgp group eBGP neighbor 99.99.99.98 peer-as 65424
set protocols bgp group eBGP neighbor 99.99.99.102 peer-as 65425
set protocols bgp group eBGP neighbor 99.99.99.106 peer-as 65426
set protocols bgp group eBGP neighbor 99.99.99.110 peer-as 65427
set protocols bgp group eBGP neighbor 99.99.99.114 peer-as 65428
set protocols bgp group eBGP neighbor 99.99.99.118 peer-as 65429
set protocols bgp group eBGP neighbor 99.99.99.122 peer-as 65430
set protocols bgp group eBGP neighbor 99.99.99.126 peer-as 65431
set protocols bgp group eBGP neighbor 99.99.99.130 peer-as 65432
set protocols bgp group eBGP neighbor 99.99.99.134 peer-as 65433
set protocols bgp group eBGP neighbor 99.99.99.138 peer-as 65434
set protocols bgp group eBGP neighbor 99.99.99.142 peer-as 65435
set protocols bgp group eBGP neighbor 99.99.99.146 peer-as 65436
set protocols bgp group eBGP neighbor 99.99.99.150 peer-as 65437
set protocols bgp group eBGP neighbor 99.99.99.154 peer-as 65438
set protocols bgp group eBGP neighbor 99.99.99.158 peer-as 65439
set protocols bgp group eBGP neighbor 99.99.99.162 peer-as 65440
set protocols bgp group eBGP neighbor 99.99.99.166 peer-as 65441
set protocols bgp group eBGP neighbor 99.99.99.170 peer-as 65442
set protocols bgp group eBGP neighbor 99.99.99.174 peer-as 65443
set protocols bgp group eBGP neighbor 99.99.99.178 peer-as 65444
set protocols bgp group eBGP neighbor 99.99.99.182 peer-as 65445
set protocols bgp group eBGP neighbor 99.99.99.186 peer-as 65446
set protocols bgp group eBGP neighbor 99.99.99.190 peer-as 65447
set protocols bgp group eBGP neighbor 99.99.99.194 peer-as 65448
set protocols bgp group eBGP neighbor 99.99.99.198 peer-as 65449
set protocols bgp group eBGP neighbor 99.99.99.202 peer-as 65450
set protocols bgp group eBGP neighbor 99.99.99.206 peer-as 65451
set protocols bgp group eBGP neighbor 99.99.99.210 peer-as 65452
set protocols bgp group eBGP neighbor 99.99.99.214 peer-as 65453
set protocols bgp group eBGP neighbor 99.99.99.218 peer-as 65454
set protocols bgp group eBGP neighbor 99.99.99.222 peer-as 65455
set protocols bgp group eBGP neighbor 99.99.99.226 peer-as 65456
set protocols bgp group eBGP neighbor 99.99.99.230 peer-as 65457
set protocols bgp group eBGP neighbor 99.99.99.234 peer-as 65458
set protocols bgp group eBGP neighbor 99.99.99.238 peer-as 65459
set protocols bgp group eBGP neighbor 99.99.99.242 peer-as 65460
set protocols bgp group eBGP neighbor 99.99.99.246 peer-as 65461
set protocols bgp group eBGP neighbor 99.99.99.250 peer-as 65462
set protocols bgp group eBGP neighbor 99.99.99.254 peer-as 65463
set protocols bgp group eBGP neighbor 99.99.100.2 peer-as 65464
set protocols bgp group eBGP neighbor 99.99.100.6 peer-as 65465
set protocols bgp group eBGP neighbor 99.99.100.10 peer-as 65466
set protocols bgp group eBGP neighbor 99.99.100.14 peer-as 65467
set protocols bgp group eBGP neighbor 99.99.100.18 peer-as 65468
set protocols bgp group eBGP neighbor 99.99.100.22 peer-as 65469
set protocols bgp group eBGP neighbor 99.99.100.26 peer-as 65470
set protocols bgp group eBGP neighbor 99.99.100.30 peer-as 65471
set protocols bgp group eBGP neighbor 99.99.100.34 peer-as 65472
set protocols bgp group eBGP neighbor 99.99.100.38 peer-as 65473
set protocols bgp group eBGP neighbor 99.99.100.42 peer-as 65474
set protocols bgp group eBGP neighbor 99.99.100.46 peer-as 65475
set protocols bgp group eBGP neighbor 99.99.100.50 peer-as 65476
set protocols bgp group eBGP neighbor 99.99.100.54 peer-as 65477
set protocols bgp group eBGP neighbor 99.99.100.58 peer-as 65478
set protocols bgp group eBGP neighbor 99.99.100.62 peer-as 65479
set protocols bgp group eBGP neighbor 99.99.100.66 peer-as 65480
set protocols bgp group eBGP neighbor 99.99.100.70 peer-as 65481
set protocols bgp group eBGP neighbor 99.99.100.74 peer-as 65482
set protocols bgp group eBGP neighbor 99.99.100.78 peer-as 65483
set protocols bgp group eBGP neighbor 99.99.100.82 peer-as 65484
set protocols bgp group eBGP neighbor 99.99.100.86 peer-as 65485
set protocols bgp group eBGP neighbor 99.99.100.90 peer-as 65486
set protocols bgp group eBGP neighbor 99.99.100.94 peer-as 65487
set protocols bgp group eBGP neighbor 99.99.100.98 peer-as 65488
set protocols bgp group eBGP neighbor 99.99.100.102 peer-as 65489
set protocols bgp group eBGP neighbor 99.99.100.106 peer-as 65490
set protocols bgp group eBGP neighbor 99.99.100.110 peer-as 65491
set protocols bgp group eBGP neighbor 99.99.100.114 peer-as 65492
set protocols bgp group eBGP neighbor 99.99.100.118 peer-as 65493
set protocols bgp group eBGP neighbor 99.99.100.122 peer-as 65494
set protocols bgp group eBGP neighbor 99.99.100.126 peer-as 65495
set protocols bgp group eBGP neighbor 99.99.100.130 peer-as 65496
set protocols bgp group eBGP neighbor 99.99.100.134 peer-as 65497
set protocols bgp group eBGP neighbor 99.99.100.138 peer-as 65498
set protocols bgp group eBGP neighbor 99.99.100.142 peer-as 65499
set protocols bgp group iBGP type internal
set protocols bgp group iBGP local-address 72.255.255.2
set protocols bgp group iBGP import iBGP-TAG
set protocols bgp group iBGP family inet unicast
set protocols bgp group iBGP family inet6 unicast
set protocols bgp group iBGP export EXPORT-DEFAULT
set protocols bgp group iBGP neighbor 72.255.255.1
set protocols isis reference-bandwidth 100g
set protocols isis level 1 disable
set protocols isis level 2 wide-metrics-only
set protocols isis interface xe-1/0/0.10 passive
deactivate protocols isis interface xe-1/0/0.10
set protocols isis interface af0.72 point-to-point
set protocols isis interface lo0.0 passive
set protocols layer2-control nonstop-bridging
set policy-options policy-statement ADV-MINE term OK from protocol bgp
set policy-options policy-statement ADV-MINE term OK from community INTERNAL
set policy-options policy-statement ADV-MINE term OK then community delete ALL
set policy-options policy-statement ADV-MINE term OK then accept
set policy-options policy-statement ADV-MINE term KO then reject
set policy-options policy-statement AGGR term OK from protocol bgp
set policy-options policy-statement AGGR term OK then accept
set policy-options policy-statement AGGR term KO then reject
set policy-options policy-statement EXPORT-DEFAULT term OK from route-filter 0.0.0.0/0 exact
set policy-options policy-statement EXPORT-DEFAULT term OK then accept
set policy-options policy-statement EXPORT-DEFAULT term KO then reject
set policy-options policy-statement iBGP-TAG term OK from protocol bgp
set policy-options policy-statement iBGP-TAG term OK then community add INTERNAL
set policy-options policy-statement iBGP-TAG term OK then accept
set policy-options community ALL members .*:.*
set policy-options community INTERNAL members 65203:100
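
The peering stanzas above follow a fixed arithmetic pattern: VLAN 300+k carries the k-th /30 carved out of 99.99.99.0, the GNF takes the first host address, the tester the second, and the peer AS counts up from 65400. When rebuilding a lab like this, the whole block is easier to generate than to type. The sketch below is a hypothetical helper (not part of the lab itself, and all parameter names are our own) that emits the same `set` lines:

```python
import ipaddress

def gen_peering(base="99.99.99.0", first_vlan=300, count=100, base_as=65400):
    """Emit the repetitive subinterface and eBGP neighbor 'set' lines.

    VLAN first_vlan+k gets the k-th /30 starting at `base`: the router
    side takes the first host address, the peer the second, and the peer
    AS increments from base_as -- mirroring the lab pattern above.
    """
    lines = []
    start = int(ipaddress.IPv4Address(base))
    for k in range(count):
        net = ipaddress.ip_network((start + 4 * k, 30))  # k-th /30 block
        router, peer = list(net.hosts())                 # .1-style / .2-style hosts
        vlan = first_vlan + k
        lines.append(f"set interfaces xe-1/0/0 unit {vlan} vlan-id {vlan}")
        lines.append(f"set interfaces xe-1/0/0 unit {vlan} "
                     f"family inet address {router}/30")
        lines.append(f"set protocols bgp group eBGP neighbor {peer} "
                     f"peer-as {base_as + k}")
    return lines

cfg = gen_peering()
```

With the defaults above, unit 337 lands on 99.99.99.149/30 with neighbor 99.99.99.150 peer-as 65437, and the last block rolls over into 99.99.100.0, matching the listing.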

CONFIG-960-4-STANDALONE
The starting configuration for Chapter 5:
set version 18.3R1.9
set groups re0 system host-name MX960-4-RE0
set groups re0 system backup-router 172.30.177.1
set groups re0 system backup-router destination 172.30.176.0/20
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.178.71/24
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set groups re1 system host-name MX960-4-RE1
set groups re1 system backup-router 172.30.177.1
set groups re1 system backup-router destination 172.30.176.0/20
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.178.72/24
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.177.196/24 master-only
set groups isis-mpls interfaces <*-*> unit <*> family iso
set groups isis-mpls interfaces <*-*> unit <*> family mpls
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system commit persist-groups-inheritance
set system configuration-database max-db-size 629145600
set system login user magno uid 2001
set system login user magno class super-user
set system login user magno authentication encrypted-password " -- SNIP -- "
set system login user magno authentication ssh-rsa "ssh-rsa -- SNIP -- "
set system login user remote uid 2000
set system login user remote class super-user
set system root-authentication encrypted-password " -- SNIP -- "
set system domain-name poc-nl.jnpr.net
set system backup-router 172.30.177.1
set system backup-router destination 172.30.176.0/20
set system time-zone Europe/Amsterdam
set system authentication-order password
set system authentication-order radius
set system name-server 172.30.207.10
set system name-server 172.30.207.13
set system radius-server 172.30.176.9 secret "$9$DMHPTz36CtOqmBEclLXik.mfT6/t1Eyn/"
set system radius-server 172.30.176.9 retry 3
set system radius-server 172.30.177.4 secret "$9$CgY9p1EcylvWx0B7VwgUDtuOBIEleWNVYre"
set system radius-server 172.30.177.4 retry 3
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services ssh max-sessions-per-connection 32
set system services ssh client-alive-interval 120
set system services telnet
set system services xnm-clear-text
set system services netconf ssh
set system services netconf yang-modules device-specific
set system services netconf yang-modules emit-extensions
set system services rest http
set system services rest enable-explorer
set system services web-management http
set system services subscriber-management enable
set system syslog user * any emergency
set system syslog host 172.30.189.13 any notice
set system syslog host 172.30.189.13 authorization info
set system syslog host 172.30.189.13 interactive-commands info
set system syslog host 172.30.189.14 any notice
set system syslog host 172.30.189.14 authorization info
set system syslog host 172.30.189.14 interactive-commands info
set system syslog file messages any notice
set system syslog file messages authorization info
set system compress-configuration-files
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" no-traps
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" proxy-arp
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags outer "$junos-stacked-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags inner "$junos-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address preferred-source-address 100.100.255.254
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis fabric redundancy-mode increased-bandwidth
set chassis fpc 0 pic 0 pic-mode 100G
set chassis fpc 0 pic 1 pic-mode 100G
set chassis network-services enhanced-ip
set chassis network-slices guest-network-functions gnf 3 description "Single-GNF Setup"
set chassis network-slices guest-network-functions gnf 3 fpcs 1
set chassis network-slices guest-network-functions gnf 3 fpcs 6
set interfaces xe-1/0/0 description "Link to IXIA LC 7 Port 8"
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 10 vlan-id 10
set interfaces xe-1/0/0 unit 10 family inet address 99.99.10.1/24
set interfaces xe-1/0/0 unit 300 vlan-id 300

--- SNIP --- // See previous for BGP peering interface configuration //

set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN accept dhcp-v4
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN ranges 1000-2000,any
set interfaces xe-6/0/0 auto-configure remove-when-no-subscribers
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set snmp location "AMS, EPOC location=3.09"
set snmp contact "emea-poc@juniper.net"
set snmp community public authorization read-only
set snmp community public clients 172.30.0.0/16
set snmp community public clients 0.0.0.0/0 restrict
set snmp community private authorization read-write
set snmp community private clients 172.30.0.0/16
set snmp community private clients 0.0.0.0/0 restrict
set snmp trap-options source-address 172.30.177.196
set snmp trap-group space targets 172.30.176.140
set forwarding-options dhcp-relay server-group DHCPv4 99.99.10.10
set forwarding-options dhcp-relay group DHCPv4-ACTIVE active-server-group DHCPv4
set forwarding-options dhcp-relay group DHCPv4-ACTIVE interface xe-6/0/0.0
set forwarding-options dhcp-relay no-snoop
set accounting-options periodic-refresh disable
set routing-options nonstop-routing
set routing-options static route 172.16.0.0/12 next-hop 172.30.177.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 100.100.0.0/16 discard
set routing-options router-id 72.255.255.1
set routing-options autonomous-system 65203
set protocols bgp precision-timers
set protocols bgp group eBGP family inet unicast
set protocols bgp group eBGP family inet6 unicast
set protocols bgp group eBGP export BBE-POOL
set protocols bgp group eBGP neighbor 99.99.99.2 peer-as 65400

--- SNIP --- // See previous for eBGP peering configurations //

set protocols layer2-control nonstop-bridging
set policy-options policy-statement BBE-POOL term OK from protocol static
set policy-options policy-statement BBE-POOL term OK from route-filter 100.100.0.0/16 exact
set policy-options policy-statement BBE-POOL term OK then accept
set policy-options community ALL members .*:.*
set policy-options community INTERNAL members 65203:100
set access profile NOAUTH authentication-order none

CONFIG-960-4-GNF-FINAL
The configuration once the MX960 was turned into a single GNF (Chapter 5):
set version 18.3R1.9
set groups re0 system host-name MX960-4-GNF-RE0
set groups re0 system backup-router 172.30.181.1
set groups re0 system backup-router destination 172.16.0.0/12
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.176/24
set groups re0 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set groups re1 system host-name MX960-4-GNF-RE1
set groups re1 system backup-router 172.30.177.1
set groups re1 system backup-router destination 172.16.0.0/12
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.177/24
set groups re1 interfaces fxp0 unit 0 family inet address 172.30.181.175/24 master-only
set apply-groups re0
set apply-groups re1
set system commit fast-synchronize
set system commit synchronize
set system commit persist-groups-inheritance
set system configuration-database max-db-size 629145600
set system login user magno uid 2001
set system login user magno class super-user
set system login user magno authentication encrypted-password "$6$ENdnoKrZ$qLSWO5899HXEDuRYFYL0alWe1U0dmSXW0mWkMWTxOsrNQmL940pvLgUVoaCBU7.FQxzLxBiI3y271FS4cAAVr0"
set system login user magno authentication ssh-rsa "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDI21uVGR6oQGBUjU5yB+MkyDDOBbegLzGleGAlQnuNVe7RrhJ7XsEl/XG75xy/ifv/R8ck21/6iQvaJC1pydOJFXqfY/YvD8L1jbGRtkEv5F0HnaOhiOeYer4C2aqgu0I38YSdQftigFm0Gx8R0qTXZYvmkykgEHvCkzDvFUd6NHC2sITMFysZdsah9US/Av6uokPMfG1z+cdoE2SdfKHfb2W6LJzl9EmhQPVE7nWySmKVnCMizG8YmNjw2RmCScVbmUzLz8/DmoT2EL1qT0fsP9teyK0+6oKRHQGMPFA76/J1RfmPsugswbAI04fdpyQCZ2WFaA26Bn5lgxgxXm/N mmagnani@mmagnani-mbp15"
set system login user remote uid 2000
set system login user remote class super-user
set system root-authentication encrypted-password "$6$WFIYvOs8$RbSrJggMYcgpEMDjHe0FHHTvmMElAWMUhkFvlaw.BH1SIvhC5nfAZDSbDBKQlOwW.nORYh3VHU8TIExC9t3JC/"
set system domain-name poc-nl.jnpr.net
set system backup-router 172.30.177.1
set system backup-router destination 172.30.176.0/20
set system time-zone Europe/Amsterdam
set system authentication-order password
set system authentication-order radius
set system name-server 172.30.207.10
set system name-server 172.30.207.13
set system radius-server 172.30.176.9 secret "$9$DMHPTz36CtOqmBEclLXik.mfT6/t1Eyn/"
set system radius-server 172.30.176.9 retry 3
set system radius-server 172.30.177.4 secret "$9$CgY9p1EcylvWx0B7VwgUDtuOBIEleWNVYre"
set system radius-server 172.30.177.4 retry 3
set system dynamic-profile-options versioning
set system services ftp
set system services ssh root-login allow
set system services ssh max-sessions-per-connection 32
set system services ssh client-alive-interval 120
set system services telnet
set system services xnm-clear-text
set system services netconf ssh
set system services rest http
set system services rest enable-explorer
set system services web-management http
set system services subscriber-management enable
set system syslog user * any emergency
set system syslog host 172.30.189.13 any notice
set system syslog host 172.30.189.13 authorization info
set system syslog host 172.30.189.13 interactive-commands info
set system syslog host 172.30.189.14 any notice
set system syslog host 172.30.189.14 authorization info
set system syslog host 172.30.189.14 interactive-commands info
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file messages match-strings "!*0x44b*"
set system syslog file default-log-messages any info
set system syslog file default-log-messages match "(requested 'commit' operation)|(requested 'commit synchronize' operation)|(copying configuration to juniper.save)|(commit complete)|ifAdminStatus|(FRU power)|(FRU removal)|(FRU insertion)|(link UP)|transitioned|Transferred|transfer-file|(license add)|(license delete)|(package -X update)|(package -X delete)|(FRU Online)|(FRU Offline)|(plugged in)|(unplugged)|CFMD_CCM_DEFECT| LFMD_3AH | RPD_MPLS_PATH_BFD|(Master Unchanged, Members Changed)|(Master Changed, Members Changed)|(Master Detected, Members Changed)|(vc add)|(vc delete)|(Master detected)|(Master changed)|(Backup detected)|(Backup changed)|(interface vcp-)|BR_INFRA_DEVICE"
set system syslog file default-log-messages structured-data
set system compress-configuration-files
set system ntp boot-server 172.30.207.10
set system ntp server 172.30.207.10
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" no-traps
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" proxy-arp
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags outer "$junos-stacked-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" vlan-tags inner "$junos-vlan-id"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set dynamic-profiles DP-AUTO-VLAN interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address preferred-source-address 100.100.255.254
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" demux-options underlying-interface "$junos-underlying-interface"
set dynamic-profiles DP-IP-DEMUX interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.0
set chassis redundancy failover on-loss-of-keepalives
set chassis redundancy failover not-on-disk-underperform
set chassis redundancy graceful-switchover
set chassis fpc 0 pic 0 pic-mode 100G
set chassis fpc 0 pic 1 pic-mode 100G
set chassis network-services enhanced-ip
set chassis network-slices
set interfaces xe-1/0/0 description "Link to IXIA LC 7 Port 8"
set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 encapsulation flexible-ethernet-services
set interfaces xe-1/0/0 unit 10 vlan-id 10
set interfaces xe-1/0/0 unit 10 family inet address 99.99.10.1/24
set interfaces xe-1/0/0 unit 300 vlan-id 300

--- SNIP --- // See previous for BGP peering interface configuration //

set interfaces xe-6/0/0 flexible-vlan-tagging
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN accept dhcp-v4
set interfaces xe-6/0/0 auto-configure stacked-vlan-ranges dynamic-profile DP-AUTO-VLAN ranges 1000-2000,any
set interfaces xe-6/0/0 auto-configure remove-when-no-subscribers
set interfaces xe-6/0/0 encapsulation flexible-ethernet-services
set interfaces lo0 unit 0 family inet address 72.255.255.1/32
set interfaces lo0 unit 0 family inet address 100.100.255.254/32
set interfaces lo0 unit 0 family iso address 49.0001.7272.0255.0001.00
set interfaces lo0 unit 0 family inet6 address fec0::72.255.255.1/128
set snmp location "AMS, EPOC location=3.09"
set snmp contact "emea-poc@juniper.net"
set snmp community public authorization read-only
set snmp community public clients 172.30.0.0/16
set snmp community public clients 0.0.0.0/0 restrict
set snmp community private authorization read-write
set snmp community private clients 172.30.0.0/16
set snmp community private clients 0.0.0.0/0 restrict
set snmp trap-options source-address 172.30.177.196
set snmp trap-group space targets 172.30.176.140
set forwarding-options dhcp-relay server-group DHCPv4 99.99.10.10
set forwarding-options dhcp-relay group DHCPv4-ACTIVE active-server-group DHCPv4
set forwarding-options dhcp-relay group DHCPv4-ACTIVE interface xe-6/0/0.0
set forwarding-options dhcp-relay no-snoop
set accounting-options periodic-refresh disable
set routing-options nonstop-routing
set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1
set routing-options static route 172.16.0.0/12 no-readvertise
set routing-options static route 100.100.0.0/16 discard
set routing-options router-id 72.255.255.1
set routing-options autonomous-system 65203
set protocols bgp precision-timers
set protocols bgp group eBGP family inet unicast
set protocols bgp group eBGP family inet6 unicast
set protocols bgp group eBGP export BBE-POOL
set protocols bgp group eBGP neighbor 99.99.99.2 peer-as 65400

--- SNIP --- // See previous for eBGP peering configurations //

set protocols layer2-control nonstop-bridging
set policy-options policy-statement BBE-POOL term OK from protocol static
set policy-options policy-statement BBE-POOL term OK from route-filter 100.100.0.0/16 exact
set policy-options policy-statement BBE-POOL term OK then accept
set policy-options community ALL members .*:.*
set policy-options community INTERNAL members 65203:100
set access profile NOAUTH authentication-order none
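
Comparing this listing with CONFIG-960-4-STANDALONE shows that only a handful of statements change when the MX960 becomes a single GNF (management addressing, the backup-router, the removed `network-slices` GNF definition). During a migration it is worth confirming that mechanically by diffing the two flat `set` listings; the sketch below is a generic helper written for this book's examples, not a Juniper tool, and the excerpt lists are abbreviated by hand.

```python
def diff_set_configs(old, new):
    """Return (removed, added) 'set' lines between two flat Junos configs."""
    old_set, new_set = set(old), set(new)
    return sorted(old_set - new_set), sorted(new_set - old_set)

# Tiny excerpt of the two listings above, for illustration only:
standalone = [
    "set groups re0 system backup-router 172.30.177.1",
    "set routing-options static route 172.16.0.0/12 next-hop 172.30.177.1",
    "set routing-options autonomous-system 65203",
]
gnf_final = [
    "set groups re0 system backup-router 172.30.181.1",
    "set routing-options static route 172.16.0.0/12 next-hop 172.30.181.1",
    "set routing-options autonomous-system 65203",
]
removed, added = diff_set_configs(standalone, gnf_final)
```

Here `removed` holds the two 172.30.177.1 statements, `added` their 172.30.181.1 replacements, and the unchanged AS statement drops out of both lists.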