
Thursday, 31 July 2014

Explain UDP operations

  • Starvation
  • Latency
When TCP traffic slows down (slow start and congestion avoidance) to deal with dropped traffic, UDP traffic does not slow down, resulting in queues being filled by UDP packets and starving TCP of bandwidth.
From: Enterprise QoS Solution Reference Network Design Guide
When TCP flows are combined with UDP flows within a single service-provider class and the class experiences congestion, TCP flows continually lower their transmission rates, potentially giving up their bandwidth to UDP flows that are oblivious to drops. This effect is called TCP starvation/UDP dominance.
TCP starvation/UDP dominance likely occurs if (TCP-based) Mission-Critical Data is assigned to the same service-provider class as (UDP-based) Streaming-Video and the class experiences sustained congestion. Even if WRED is enabled on the service-provider class, the same behavior would be observed because WRED (for the most part) manages congestion only on TCP-based flows.
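To make the effect concrete, here is a small toy simulation (my own sketch, not a real TCP model): the TCP sender halves its rate whenever the shared class is oversubscribed, while the UDP sender keeps sending at a constant rate, so TCP's share of the offered load quickly collapses.

```python
# Toy model of TCP starvation / UDP dominance in a shared, congested class.
# Illustrative sketch only; real TCP congestion control (slow start,
# congestion avoidance, fast recovery) is far more involved.

LINK_CAPACITY = 100       # packets per tick the class can actually send
UDP_RATE      = 80        # UDP is oblivious to drops and never slows down
tcp_rate      = 80        # TCP starts out wanting the same share

for tick in range(8):
    offered = tcp_rate + UDP_RATE
    if offered > LINK_CAPACITY:
        # Congestion: drops occur, TCP backs off (multiplicative decrease),
        # UDP keeps sending at the same rate.
        tcp_rate = max(1, tcp_rate // 2)
    else:
        # No congestion: TCP probes for more bandwidth (additive increase).
        tcp_rate += 5
    tcp_share = tcp_rate / (tcp_rate + UDP_RATE) * 100
    print(f"tick {tick}: TCP rate={tcp_rate:3d}  UDP rate={UDP_RATE}  "
          f"TCP share of offered load={tcp_share:4.1f}%")
```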

Explain TCP operations

1.1.e [ii] MSS
The maximum segment size (MSS) is a parameter of the TCP protocol that specifies the largest amount of data, in octets, that a computer or communications device can receive in a single TCP segment. It does not count the TCP header or the IP header.[1] The IP datagram containing a TCP segment may be self-contained within a single packet, or it may be reconstructed from several fragmented pieces; either way, the MSS limit applies to the total amount of data contained in the final, reconstructed TCP segment.
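As a quick worked example (assuming IPv4 with no IP or TCP options, so a 20-byte header each), the MSS a host advertises is simply the interface MTU minus those two headers:

```python
# Hedged sketch: deriving the TCP MSS from an interface MTU.
# Assumes IPv4 with no IP or TCP options (20-byte headers each).

IP_HEADER_LEN  = 20   # bytes, no options
TCP_HEADER_LEN = 20   # bytes, no options

def mss_from_mtu(mtu: int) -> int:
    """MSS counts TCP payload only, so strip both headers from the MTU."""
    return mtu - IP_HEADER_LEN - TCP_HEADER_LEN

print(mss_from_mtu(1500))   # classic Ethernet  -> 1460
print(mss_from_mtu(1492))   # PPPoE             -> 1452
print(mss_from_mtu(9000))   # jumbo frames      -> 8960
```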

Wednesday, 30 July 2014

ICMP Redirect

I will be working on the following topology:
[Figure: ICMP redirect lab topology (icmp-redirect-img01)]
Host1 is set with a single gateway of 192.168.1.254 (Router1). When Host1 tries to reach Host4 by sending a ping to 192.168.3.4, Host1 sees that the destination IP is not on the same LAN, so it sends the packet to its default gateway. Router1 knows how to reach the 192.168.3.0 network through Router2, so Router1 forwards the packet to Router2, and it does so using the same interface on which the packet was received! This triggers Router1 to send an ICMP Redirect message to Host1 saying: "Next time you need to send a packet to the 192.168.3.0 network, just use Router2 directly, since it's on the same segment as I am!"
Notice this is different from standard routing, where Router1 would normally forward the traffic to another router out of a different interface, not the same interface on which the packet was received!

Let’s see this in action …
The routing tables on Host1 and Router1 look like this:
[Figure: Host1 and Router1 routing tables (icmp-redirects-img-02)]
You can see that Router1 routes packets for network 192.168.3.0 through 192.168.1.253 (Router2); also, Host1 routes any packet through its default gateway which is set to 192.168.1.254 (Router1). Let’s send a ping to 192.168.3.1 now …
[Figure: ping output and redirect in action (icmp-redirect-img03)]
Router1 receives the packet since it is the default gateway for Host1; Router1 then forwards the packet to Router2 as per its own routing table; in doing so, it uses the same interface on which the packet was received, so Router1 also sends a redirect message to the sender. As a result, you can see that Host1 automatically adds a route to its routing table pointing to 192.168.1.253, despite its default gateway being 192.168.1.254!
Next time Host1 sends a packet to host 192.168.3.1, it will use the gateway of 192.168.1.253.
Notice also that the route added to Host1's routing table is a host route, not a network route. So clearly, this method has a disadvantage: a separate host route is added for each destination host, even if they are all part of the same network!

You can also disable this behaviour by using the interface command "no ip redirects".
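For illustration, here is a rough sketch of the decision the router makes (hypothetical helper and parameter names; real IOS also checks that redirects are enabled on the interface, that the packet is not source-routed, and so on): a redirect is sent when the packet would leave on the same interface it arrived on and the source host and the better next hop sit on that interface's subnet.

```python
# Simplified sketch of when a router sends an ICMP redirect.
# Hypothetical helper; the actual IOS logic applies additional checks.
from ipaddress import ip_address, ip_network

def should_send_redirect(ingress_if, egress_if, src_ip, ingress_subnet, next_hop):
    """True if the packet goes back out the interface it came in on and the
    source and the better next hop share that interface's subnet."""
    same_interface = ingress_if == egress_if
    src_on_subnet = ip_address(src_ip) in ip_network(ingress_subnet)
    nh_on_subnet = ip_address(next_hop) in ip_network(ingress_subnet)
    return same_interface and src_on_subnet and nh_on_subnet

# Host1 (192.168.1.1) pinging 192.168.3.1: packet arrives and leaves on the same LAN,
# and the better gateway 192.168.1.253 (Router2) is on that LAN -> redirect is sent.
print(should_send_redirect("Gi0/0", "Gi0/0",
                           "192.168.1.1", "192.168.1.0/24", "192.168.1.253"))  # True
```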


ICMP - Destination Unreachable Message

Introduction
This ICMP message is quite interesting, because it doesn't actually contain one message but six! This means that ICMP Destination unreachable further breaks down into 6 different messages.
We will be looking at them all and analysing a few of them to help you get the idea.






To make sure you don't get confused, keep one thing in mind: the ICMP Destination unreachable is a generic ICMP message; the different code values or messages which are part of it are there to clarify which type of "Destination unreachable" message was received. The variants go something like this:

The ICMP - Destination net unreachable message is one which a user would usually get from the gateway when it doesn't know how to get to a particular network.
The ICMP - Destination host unreachable message is one which a user would usually get from the remote gateway when the destination host is unreachable.
If, in the destination host, the IP module cannot deliver the packet because the indicated protocol module or process port is not active, the destination host may send an ICMP destination protocol / port unreachable message to the source host.
In another case, when a packet received must be fragmented to be forwarded by a gateway but the "Don't Fragment" flag (DF) is on, the gateway must discard the packet and send an ICMP destination fragmentation needed and DF set unreachable message to the source host.
These ICMP messages are most useful when trying to troubleshoot a network. You can check to see if all routers and gateways are configured properly and have their routing tables updated and synchronised.
Let's look at the packet structure of an ICMP destination unreachable packet:
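The original packet diagram is not reproduced here, but as a sketch based on RFC 792: the message is ICMP type 3, the code field selects one of the six variants, and the payload carries the IP header plus the first 8 bytes of the datagram that triggered the error.

```python
# Sketch of an ICMP Destination Unreachable (type 3) message per RFC 792.
import struct

UNREACHABLE_CODES = {
    0: "net unreachable",
    1: "host unreachable",
    2: "protocol unreachable",
    3: "port unreachable",
    4: "fragmentation needed and DF set",
    5: "source route failed",
}

def build_dest_unreachable(code: int, original_datagram: bytes) -> bytes:
    """Type=3, code, checksum (left 0 in this sketch), 4 unused bytes, then the
    offending IP header plus the first 8 bytes of its payload."""
    header = struct.pack("!BBHI", 3, code, 0, 0)    # 8-byte ICMP header
    return header + original_datagram[:28]          # 20-byte IP header + 8 bytes

msg = build_dest_unreachable(3, b"\x45" + b"\x00" * 27)
print(UNREACHABLE_CODES[3], len(msg), "bytes")      # port unreachable 36 bytes
```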


Saturday, 26 July 2014

CEF Cisco express forwarding





RIB and FIB
Routers and multilayer switches (MLS) were once centralized, cache-based systems combining the control and data planes. The control plane is comprised of the technologies that create and maintain the routing table. The data plane is comprised of the technologies that move data from ingress to egress.
This architecture has since split into the RIB and FIB (Routing Information Base and Forwarding Information Base). The RIB operates in software, and the FIB takes the RIB's best routes and places them in its own data construct, which resides in faster hardware resources. Cisco's implementation of this architecture is known as CEF (Cisco Express Forwarding).
Process Switching, Fast Switching and the evolution to CEF
Process Switching requires the Router/MLS to process every packet to make a forwarding decision.
Fast Switching evolved from Process Switching, whereby the initial packet’s forwarding decision is still derived from the Route Processor, but that destination is then held in cache for subsequent forwarding precluding the processor’s involvement.
With CEF, Cisco took fast switching a step further by introducing the FIB and Adjacency tables into the equation.
The FIB is a mirror image of the IP routing table. Changes to the routing table and next-hop IPs are reflected in the FIB. Fast-switching route cache maintenance is thereby eliminated.
The adjacency table is populated with Layer 2 next-hop addresses for all FIB entries, hence "adjacency". When an adjacency is established, for example through ARP, a link-layer header for that adjacency is stored in the adjacency table.
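To tie the pieces together, here is a minimal sketch of the RIB/FIB/adjacency relationship (hypothetical data layout, not the actual CEF structures): the FIB mirrors the RIB's best routes, and each FIB entry points at an adjacency that already holds the precomputed Layer 2 rewrite.

```python
# Hedged sketch of the RIB -> FIB -> adjacency relationship.
# Hypothetical data layout, not the actual CEF structures.

rib = {                       # control plane: best routes from routing protocols
    "192.168.3.0/24": "192.168.1.253",
    "0.0.0.0/0":      "192.168.1.254",
}

arp_table = {                 # L2 resolution for directly connected next hops
    "192.168.1.253": "aaaa.bbbb.0253",
    "192.168.1.254": "aaaa.bbbb.0254",
}

# The FIB mirrors the RIB's best routes; each entry points at an adjacency
# that already holds the precomputed L2 rewrite (next-hop MAC here).
adjacency_table = {nh: {"mac": mac, "interface": "Gi0/0"} for nh, mac in arp_table.items()}
fib = {prefix: adjacency_table[next_hop] for prefix, next_hop in rib.items()}

print(fib["192.168.3.0/24"])  # forwarding needs no route-cache build or ARP at switch time
```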
For a thorough overview click the link below.

ccie v5, cef…

1.1.b Identify Cisco express forwarding concepts
1.1.b (i) RIB, FIB, LFIB, Adjacency table
1.1.b (ii) Load balancing Hash
1.1.b (iii) Polarization concept and avoidance
Here is a description of how the hashing algorithm works:
When there are only two paths, the switch/router performs an exclusive-OR (XOR) operation on the low-order bits of the SIP and DIP (one bit when one of two links needs to be selected, two bits for 3-4 links, and so on). The XOR operation on the same SIP and DIP always results in the packet using the same link.
The packet then passes on to the distribution layer, where the same hashing algorithm is used with the same hash input and picks a single link for all flows, which leaves the other link underutilized. This is called CEF polarization: use of the same hash algorithm and the same hash input results in the use of a single Equal-Cost Multi-Path (ECMP) link for ALL flows.
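A hedged illustration of the idea (simplified; the real CEF hash differs and, in "universal" mode, mixes a router-specific ID into the input): with a plain XOR of the low-order SIP/DIP bits, the distribution switch simply repeats the choice the access switch already made for each flow, which is exactly the polarization problem.

```python
# Simplified sketch of the CEF load-sharing hash and why identical hashes
# polarize traffic. The real algorithm differs; this only mimics the idea.
from ipaddress import ip_address

def pick_link(src: str, dst: str, n_links: int, router_seed: int = 0) -> int:
    s = int(ip_address(src))
    d = int(ip_address(dst))
    bits = max(1, (n_links - 1).bit_length())   # 1 bit for 2 links, 2 bits for 3-4, ...
    mask = (1 << bits) - 1
    return ((s ^ d ^ router_seed) & mask) % n_links

flows = [("10.1.1.1", "10.2.2.2"), ("10.1.1.1", "10.2.2.3"),
         ("10.1.1.2", "10.2.2.2"), ("10.1.1.2", "10.2.2.3")]

# First-hop switch splits the flows over its two uplinks:
first_hop = {f: pick_link(*f, 2) for f in flows}
print(first_hop)

# The distribution switch runs the same hash on the same inputs, so every flow
# that arrived on uplink 0 hashes to link 0 again: one of its links carries
# everything, the other sits idle (CEF polarization).
arrived_on_0 = [f for f, link in first_hop.items() if link == 0]
print({f: pick_link(*f, 2) for f in arrived_on_0})

# Conceptually, universal mode mixes a per-router unique ID into the hash, so a
# downstream device no longer repeats the upstream device's choice for each flow.
print({f: pick_link(*f, 2, router_seed=0xA5A5A5A5) for f in arrived_on_0})
```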
[Figure: FIB and adjacency tables (fib_adj)]

How CEF load balancing works

CEF is an advanced Layer 3 switching technology inside a router. Usually a router uses a route cache to speed up packet forwarding. The route cache is filled on demand when the first packet for a specific destination needs to be forwarded. If the destination is on a remote network reachable via a next-hop router, the entry in the route cache consists of the destination network. If parallel paths exist, this does not provide load balancing, as only one path would be used. Therefore the entry in the route cache instead relates to a specific destination address, or host. If multiple hosts on the destination network are receiving traffic, a route cache entry is made for each individual host, balancing the hosts over the available paths. This provides per-destination load balancing.

The problem that arises is that a backbone router carrying traffic for several thousands of destination hosts needs a corresponding number of cache entries. This consumes memory and makes cache maintenance a demanding task. In addition, the decision about which path to use is made at the time the route cache is filled, based on the utilization of the individual links at that point in time. However, the amount of traffic on individual connections can change over time, possibly leading to a situation where some links carry mostly idle connections while others are congested.

CEF takes a different approach: it calculates all the information necessary for the forwarding task in advance and decouples the forwarding information from the next-hop adjacency, which allows for effective load balancing.
The two main components of CEF operation are:
  • the Forwarding Information Base (FIB)
  • the Adjacency Tables

Forwarding Information Base

CEF uses a Forwarding Information Base (FIB) to make IP destination prefix-based switching decisions. The FIB is conceptually similar to a routing table or information base. It maintains a mirror image of the forwarding information contained in the IP routing table. When routing or topology changes occur in the network, the IP routing table is updated, and those changes are reflected in the FIB. The FIB maintains next-hop address information based on the information in the IP routing table. Because there is a one-to-one correlation between FIB entries and routing table entries, the FIB contains all known routes and eliminates the need for route cache maintenance that is associated with earlier switching paths such as fast switching and optimum switching.

Adjacency Tables

Network nodes in the network are said to be adjacent if they can reach each other with a single hop across a link layer. In addition to the FIB, CEF uses adjacency tables to prepend Layer 2 addressing information. The adjacency table maintains Layer 2 next-hop addresses for all FIB entries.
The adjacency table is populated as adjacencies are discovered. Each time an adjacency entry is created (such as through the ARP protocol), a link-layer header for that adjacent node is precomputed and stored in the adjacency table. Once a route is determined, it points to a next hop and a corresponding adjacency entry, which is subsequently used for encapsulation during CEF switching of packets.

A route might have several paths to a destination prefix, such as when a router is configured for simultaneous load balancing and redundancy. For each resolved path, a pointer is added for the adjacency corresponding to the next-hop interface for that path. This mechanism is used for load balancing across several paths. For per-destination load balancing, a hash is computed from the source and destination IP addresses. This hash points to exactly one of the adjacency entries in the adjacency table, so that the same path is used for all packets with this source/destination address pair. If per-packet load balancing is used, the packets are distributed round-robin over the available paths. In either case, the information in the FIB and adjacency tables provides all the necessary forwarding information, just like for non-load-balancing operation. The additional task for load balancing is to select one of the multiple adjacency entries for each forwarded packet.
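As a rough sketch of the two modes just described (hypothetical function names, not the actual IOS internals): per-destination sharing hashes the source/destination pair to pick one adjacency, so a given conversation stays on one path, while per-packet sharing simply rotates round-robin over the adjacencies.

```python
# Illustrative sketch of CEF per-destination vs per-packet load sharing.
# Hypothetical structures; not the actual IOS data structures.
from itertools import cycle

adjacencies = ["Gi0/1 -> 192.168.12.2", "Gi0/2 -> 192.168.13.2"]  # two resolved paths

def per_destination(src: str, dst: str) -> str:
    """Hash of the (src, dst) pair -> always the same path for that pair."""
    return adjacencies[hash((src, dst)) % len(adjacencies)]

rr = cycle(adjacencies)
def per_packet() -> str:
    """Round-robin over the available paths, regardless of the flow."""
    return next(rr)

# Same flow, three packets:
print([per_destination("10.0.0.1", "10.9.9.9") for _ in range(3)])  # one path, three times
print([per_packet() for _ in range(3)])                              # alternates paths
```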

Thursday, 24 July 2014

Cisco IOS, IOS-XE and IOS-XR

Currently there are three types of software on Cisco routers: the classic IOS, IOS XE, and IOS XR.


Cisco IOS:

Classic IOS has been on the market for a long time; I started working with IOS version 10 on Cisco 2500 routers. You will find this IOS on entry-level routers like the ISR or on enterprise switches like the 6500 or 3750. This IOS is a monolithic OS. That means all the features are in one file, and if one function on the system fails, most likely the whole system fails. It also means that if you want to upgrade the operating system, you need to reboot the system.

  • IOS is monolithic, tightly coupled to the hardware, and does not provide any kind of isolation between "processes", either from a CPU or a memory point of view.
  • Virtual memory is shared by all IOS processes: nothing prevents buffer overflows.
  • The scheduler is non-preemptive: if SNMP decides it should keep the CPU busy, it can, and other processes (BGP…) will be prevented from running.
  • You cannot upgrade IOS (or parts of it) without disruption unless you are running expensive dual-supervisor hardware.

Cisco IOS-XE:
  • IOS XE retains the exact same look and feel of IOS, while providing enhanced future-proofing and improved functionality.
  • In IOS XE, IOS 15.0 runs as a single daemon within a modern Linux operating system.
  • Additional system functions now run as additional, separate processes in the host OS environment.
  • The actual IOS XE software comes in seven individual sub-packages (files) which are combined into a complete consolidated package (file).















SubPackage: Purpose
RPBase: Provides the operating system software for the Route Processor.
RPControl: Controls the control plane processes that interface between the IOS process and the rest of the platform.
RPAccess: Exports processing of restricted components, such as Secure Socket Layer (SSL), Secure Shell (SSH), and other security features.
RPIOS: Provides the Cisco IOS kernel, which is where IOS features are stored and run. Each consolidated package has a different RPIOS.
ESPBase: Provides the ESP operating system and control processes, and the ESP software.
SIPBase: Controls the SIP operating system and control processes.
SIPSPA: Provides the SPA driver and Field Programmable Device (FPD) images.

  •  Cisco IOS XE Software Subpackage Functions:
  1. RPBase: Provides the operating system software for the route processor
  2. RPControl : Controls the control plane processes that interface between Cisco IOS Software and the rest of the platform
  3. RPAccess : Provides software required for router access
  4. RPIOS:  Provides the Cisco IOS Software kernel, which is where Cisco IOS Software features are stored and run; each consolidated package has a different RPIOS subpackage
  5. ESPBase : Provides the ESP (Embedded Service Processor) operating system and control processes and the ESP software
  6. SIPSPA : Provides the shared port adaptor (SPA) driver and associated field-programmable device (FPD) images
  7. SIPBase: Controls the SPA interface processor (SIP) carrier card operating system and control processes
  • Normally, the router boots from the single consolidated package which automatically loads each of the seven sub-packages into memory.
  • However, you can extract individual sub-packages yourself and specify which sub-packages you want loaded (maybe 5 instead of all 7).
  • When individual sub-packages are loaded “content from the RP is copied into memory on an as-needed basis only” which conserves memory.
  • The router can run at highest peak traffic load when configured to run using individual sub-packages.
  • IOS XE Software architecture













Individual Processes

Process: Chassis Manager
Purpose: Responsible for all chassis management functions, including management of the HA state, environmental monitoring, and FRU state control.
Affected FRUs: RP (one instance per RP), SIP (one instance per SIP), ESP (one instance per ESP)
SubPackage Mapping: RPControl, SIPBase, ESPBase

Process: Host Manager
Purpose: Provides an interface between the IOS process and many of the information-gathering functions of the underlying platform kernel and operating system.
Affected FRUs: RP (one instance per RP), SIP (one instance per SIP), ESP (one instance per ESP)
SubPackage Mapping: RPControl, SIPBase, ESPBase

Process: Logger
Purpose: Provides IOS-facing logging services to processes running on each FRU.
Affected FRUs: RP (one instance per RP), SIP (one instance per SIP), ESP (one instance per ESP)
SubPackage Mapping: RPControl, SIPBase, ESPBase

Process: Interface Manager
Purpose: Provides an interface between the IOS process and the per-SPA interface processes on the SIP.
Affected FRUs: RP (one instance per RP), SIP (one instance per SIP)
SubPackage Mapping: RPControl, SIPBase

Process: IOS
Purpose: The IOS process implements all forwarding and routing features for the router.
Affected FRUs: RP (one per software redundancy instance per RP; maximum of two instances per RP)
SubPackage Mapping: RPIOS

Process: Forwarding Manager
Purpose: Manages the downloading of configuration to each of the ESPs and the communication of forwarding plane information, such as statistics, to the IOS process.
Affected FRUs: RP (one per software redundancy instance per RP; maximum of two instances per RP), ESP (one per ESP)
SubPackage Mapping: RPControl, ESPBase

Process: Pluggable Services
Purpose: The integration point between platform policy application, such as authentication, and the IOS process.
Affected FRUs: RP (one per software redundancy instance per RP; maximum of two instances per RP)
SubPackage Mapping: RPControl

Process: Shell Manager
Purpose: Provides all user interface features and handling related to features in the non-IOS image of the consolidated package, which are also the features available in diagnostic mode when the IOS process fails.
Affected FRUs: RP (one instance per RP)
SubPackage Mapping: RPControl

Process: SPA driver process
Purpose: Provides an isolated process driver for a specific SPA.
Affected FRUs: SPA (one instance per SPA per SIP)
SubPackage Mapping: SIPSPA

Process: CPP driver process
Purpose: Manages the CPP hardware forwarding engine on the ESP.
Affected FRUs: ESP (one instance per ESP)
SubPackage Mapping: ESPBase

Process: CPP HA process
Purpose: Manages HA state for the CPP hardware forwarding engine.
Affected FRUs: ESP (one instance per ESP)
SubPackage Mapping: ESPBase

Process: CPP SP process
Purpose: Performs high-latency tasks for the CPP-facing functionality in the ESP instance of the Forwarding Manager process.
Affected FRUs: ESP (one instance per ESP)
SubPackage Mapping: ESPBase



Cisco IOS-XR:


IOS XR is a carrier-class IOS; the goal is to provide a more stable solution with process mirroring and advanced features. The interface is really different from the classic IOS. For example, when you make a change to the configuration, you need to validate the changes with a "commit". This is pretty good, because you can make multiple changes and then activate all of them with one command. It also allows you to decide when the changes will be activated. And finally, you have the option to roll back the changes. On the configuration side, instead of having the configuration grouped by interface, it is grouped by process, so you have the whole OSPF config or the whole PIM config together, instead of having part of the configuration on the interface and part of it at the process level.
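As a toy analogy of that two-stage model (my own sketch, nothing to do with the actual XR code base): changes accumulate in a candidate configuration, take effect only on commit, and each commit is kept so it can be rolled back.

```python
# Toy model of the IOS XR two-stage configuration workflow:
# edit a candidate config, commit to make it live, roll back if needed.

class CandidateConfig:
    def __init__(self):
        self.running = {}        # active configuration
        self.candidate = {}      # uncommitted changes
        self.history = []        # committed snapshots, for rollback

    def set(self, key, value):
        self.candidate[key] = value          # nothing is active yet

    def commit(self):
        self.history.append(dict(self.running))
        self.running.update(self.candidate)  # all changes activate at once
        self.candidate.clear()

    def rollback(self, steps=1):
        for _ in range(steps):
            if self.history:
                self.running = self.history.pop()

cfg = CandidateConfig()
cfg.set("router ospf 1", "router-id 1.1.1.1")
cfg.set("router pim", "interface Gi0/0/0/0")
print(cfg.running)     # {}  -> nothing applied before commit
cfg.commit()
print(cfg.running)     # both changes activated together
cfg.rollback()
print(cfg.running)     # {}  -> back to the previous running config
```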