Routing algorithms operate on a network over which data must be transferred from source to destination. To model and simulate BGP, a model of the network topology it operates on is also required. This network model comprises the ASes and the links between them.

Network Modeling

ASes are connected by TCP/IP links, where data is carried over various types of wired media (copper, optical fiber), switches, routers, etc. Bandwidth limits, signal propagation, and contention at switches and routers result in communication delays and dropped packets. Although these network characteristics influence the operation of BGP, the precise timing of events in milliseconds is not of importance to us; what matters is the occurrence of the events themselves.

The biggest simplification we opt for is not modeling the intra-AS network topology explicitly. In the model, each AS is therefore a simple, atomic entity, without taking its internal complexity and geographical span into account. There are two reasons for this:

  1. There are about 27,000 ASes in the current Internet, which is already a large number of entities. Modeling their internal topologies as well would require us to simulate hundreds of thousands of entities.
  2. The topology of each autonomous network is proprietary information of the entity controlling it. Obtaining and/or inferring such information is a complicated task, and if we wanted to model today's Internet we would not be able to obtain credible data.

We opt for modeling the BGP network at a high level: without taking link capacity and delay into account, ignoring packets such as keep-alive messages, and treating each AS as an atomic entity. Thanks to these simplifications, we should be able to apply our model to very large networks (like today's Internet) and run simulations on them.
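The high-level network model can be sketched as a plain graph of atomic ASes. The following is a minimal illustrative sketch, not the simulator's actual implementation; all class and method names are assumptions.

```python
# Hypothetical sketch of the AS-level network model: each AS is an
# atomic node, and inter-AS links are edges. No link capacity, delay,
# or intra-AS structure is represented.

class ASNode:
    """An autonomous system modeled as a single atomic entity."""
    def __init__(self, asn):
        self.asn = asn          # AS number
        self.neighbors = set()  # AS numbers of directly connected ASes

class Topology:
    """AS-level topology: just nodes and undirected links."""
    def __init__(self):
        self.nodes = {}

    def add_as(self, asn):
        return self.nodes.setdefault(asn, ASNode(asn))

    def add_link(self, asn_a, asn_b):
        a, b = self.add_as(asn_a), self.add_as(asn_b)
        a.neighbors.add(asn_b)
        b.neighbors.add(asn_a)

topo = Topology()
topo.add_link(64512, 64513)
topo.add_link(64513, 64514)
```

Because each AS is a single node, a topology of tens of thousands of ASes stays tractable in memory, which is exactly the point of the simplification.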


In order to model and simulate the BGP protocol, we have to decide which parts of it are relevant and which parts have little impact on its global behavior. As in the previous section, the main criterion for deciding whether something is important is whether it is relevant for answering our questions about BGP's dynamic behavior and the impact of network size on that behavior.

BGP's main principle is fairly simple: propagate routing information between peers to maintain connectivity and reachability while trying to keep forwarding paths short. In practice, however, BGP is much more complicated than that. It has many properties that can be set to change its behavior. One distinctive characteristic of the protocol is that it is policy-driven: not every path is propagated to every neighbor, and the shortest path is not always the one selected for propagation.
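Policy-driven selection can be illustrated with the common Gao-Rexford-style abstraction, in which routes learned from customers are preferred over routes from peers, and those over routes from providers, with AS-path length only breaking ties. This is a hedged sketch of one common modeling convention, not BGP's full decision process; all names are illustrative.

```python
# Illustrative policy-driven route selection: business relationship
# first, AS-path length second. A route via a customer beats a shorter
# route via a provider.

PREF = {"customer": 0, "peer": 1, "provider": 2}  # lower is better

def best_route(routes):
    """routes: list of (relationship, as_path) tuples for one prefix."""
    return min(routes, key=lambda r: (PREF[r[0]], len(r[1])))

routes = [
    ("provider", [64512, 64500]),         # short path via a provider
    ("customer", [64513, 64501, 64500]),  # longer path via a customer
]
winner = best_route(routes)
```

Here `winner` is the customer route despite its longer AS path, which is exactly the kind of behavior a pure shortest-path model would miss.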

According to the standard, a BGP router is a finite state machine (FSM) whose state changes when an event occurs. From our point of view, the full FSM is not needed to model the BGP protocol successfully. Our main interest is in sending updates and the route-propagation part of BGP, not in router or connection maintenance. Our modeled BGP router must of course still keep state, but it does not have to resemble the complete BGP finite state machine.
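A router reduced in this way only needs a routing table and a reaction to incoming updates. The sketch below is a deliberately minimal assumption of what such a simplified speaker could look like (here with plain shortest-AS-path selection and loop detection), not the actual simulator code.

```python
# Minimal sketch of a simplified BGP speaker: no FSM, no session
# maintenance, just a RIB and an update handler.

class SimpleBGPRouter:
    def __init__(self, asn):
        self.asn = asn
        self.rib = {}  # prefix -> AS path of the best known route

    def receive_update(self, prefix, as_path):
        """Adopt the route if it is new or shorter. Return True if the
        RIB changed, i.e. the route should be propagated to peers."""
        if self.asn in as_path:  # loop detection: reject own ASN in path
            return False
        current = self.rib.get(prefix)
        if current is None or len(as_path) < len(current):
            self.rib[prefix] = as_path
            return True
        return False

r = SimpleBGPRouter(64512)
r.receive_update("10.0.0.0/8", [64513, 64514])
r.receive_update("10.0.0.0/8", [64515])  # shorter path replaces it
```

The boolean return value is the hook for propagation: only updates that change the RIB need to be forwarded, which keeps the simulated message volume down.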


The design of the simulator is driven by the requirement that the system be scalable. If we want to successfully run instances of the model (as a simulator) at a large scale, careful choices have to be made with regard to the implementation.

In order to achieve high scalability, solutions that introduce bottlenecks have to be avoided, and the simulator has to be able to use the available resources efficiently.
We do not impose any particular performance numbers, but every design and implementation decision must be made with efficiency in mind, with respect to computational and communication complexity as well as memory usage.

Relaxed Accuracy

Given the highly abstracted model, results will be less accurate. This is clearly a trade-off that has to be carefully balanced. We are interested neither in the BGP state of particular nodes nor in the exact contents of the messages exchanged between peers. In our study, we are interested in statistical data and/or the impact of significant policy changes.
It is impossible to cleanly separate the significant and less significant parts of the protocol beforehand, and we also do not know a priori what kinds of experiments we might want to conduct. The simulation software must therefore be extensible, so that components needed for further studies can be included. This leads to a design with loosely coupled components, each of which can easily be changed, adjusted, exchanged, or removed.
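One way to realize such loose coupling is to give every component a small common interface and have the core dispatch events to whichever components are plugged in. The interface below is an illustrative assumption about how this design could look, not the simulator's actual architecture.

```python
# Hedged sketch of a loosely coupled simulator core: components share
# one interface and can be added, swapped, or removed without touching
# the dispatch loop.

from abc import ABC, abstractmethod

class Component(ABC):
    @abstractmethod
    def handle_event(self, event):
        """Process one simulation event."""

class Statistics(Component):
    """Example pluggable component: counts update events."""
    def __init__(self):
        self.updates = 0

    def handle_event(self, event):
        if event == "update":
            self.updates += 1

class Simulator:
    def __init__(self, components):
        self.components = list(components)  # extend or replace freely

    def dispatch(self, event):
        for c in self.components:
            c.handle_event(event)

stats = Statistics()
sim = Simulator([stats])
sim.dispatch("update")
sim.dispatch("update")
```

A later study that needs, say, per-policy measurements would then only add a new `Component` subclass rather than modify the core.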

Wed Sep 25 2013

© Stichting NLnet Labs

Science Park 400, 1098 XH Amsterdam, The Netherlands, subsidised by NLnet and SIDN.