
ByteBlower Ports are actually logical ports that need to be created on a physical location before they can be used.
In the Configuration View, you can link, or "dock" as we call it, each ByteBlower Port to a physical location on a ByteBlower Server.

On the left side of this view, all available ByteBlower Servers are displayed.
After refreshing a server, you can see the available interfaces: Trunking interfaces, which are connected to a switch, and Non-Trunking interfaces.
ByteBlower Ports can only be docked on Non-Trunking interfaces, or on the ports of a switch behind a Trunking interface.

On the right side of the Configuration View, you get an overview of all ByteBlower Ports in the current project.

By default, the "Hide docked ByteBlower Ports" option is enabled. This means that only ByteBlower Ports that still need docking will be visible here.

Docking can be done in several ways:
 * Using the Docking buttons: use the left arrow to dock, and the right arrow to undock.
    Select a physical location on a ByteBlower Server.
    Select the ByteBlower Port that you want to dock on it.
    Then click on the "<-" arrow to dock the ByteBlower Port on the selected location.
    When the "Hide docked ByteBlower Ports" option is enabled, the ByteBlower Port will disappear from the list at the right side.
    The ByteBlower Port will appear at the left side, as a sub-item under the selected location.
    Also, in the Port View, you will see that the "Docked" status becomes "OK".
    The ByteBlower Port is now ready to be used in a test Scenario.
 * Drag and drop
    Drag the ByteBlower Port that you want to dock to a physical location, and drop it there.
    When the "Hide docked ByteBlower Ports" option is enabled, the ByteBlower Port will disappear from the list at the right side.
    The ByteBlower Port will appear at the left side, as a sub-item under the selected location.
    Also, in the Port View, you will see that the "Docked" status becomes "OK".
    The ByteBlower Port is now ready to be used in a test Scenario.

Tips 'n Tricks
 * ByteBlower Ports are only "alive" on the specified Physical Ports while a test Scenario is running.
 * ByteBlower Ports cannot be docked directly on a ByteBlower Server or on a Trunking Interface.
 * ByteBlower Ports can only be docked on a Non-Trunking Interface or on the Physical Ports of a switch behind a Trunking Interface.
 * When you dock ByteBlower Ports directly on a ByteBlower Server, they will be docked on the physical locations found in the sub-tree.

This is demonstrated here:

 * Multiple ByteBlower Ports can be docked in one action!
    Use the <Shift> and <Ctrl> keys to select multiple ByteBlower Ports.
    You can also use <Ctrl-A> to select all ByteBlower Ports.
    Then select a Physical location at the left side, and click on the "Dock" button.
 * Multiple ByteBlower Ports can also be drag-dropped together!
 * Typically, you don't want all ByteBlower Ports to be docked on the same Physical location. Instead, most of the time, you want each ByteBlower Port to be docked on the next available Physical Port.
    This can be done by selecting multiple objects, both at the right and at the left side.

This is demonstrated here:

Hint: Enable or disable Quick select

The quick select button (green, see figure) in the menu bar is an aid to quickly find the network interface a ByteBlower Port has been docked to. This is very useful when you want to undock a ByteBlower Port, or dock a new one close to an existing docked port.

The gif below shows its behavior when enabled.

This option can be very useful when working with multiple ports, but for single ports you might want to disable it. We have a couple of hints below, but of course do experiment and see what works best for you.

Quickly Undock all ports

  • Enable the quick select
  • Select all ports
  • Press the right arrow

Quickly Dock to a nearby interface

  • Enable the quick select
  • Select the port you'd like to use as a template
    • The associated ByteBlower interface is now focused. The left part of the Server View jumps to the docked interface.
  • Select the interface you wish to dock the ByteBlower Port to and use the arrow pointing left.

Cherry-picking the interface

A disadvantage of the quick select is that it can change your selection by accident. This can be especially annoying when you wish to redock a single port; in this case you might want to disable the quick select.

If you have a big project to run but one of the devices under test isn't connected or will fail to initialize (DHCP, ARP, etc.), then we have an option to let the test continue. With this option you are not forced to either fix the device or edit your project.

You can find this option, "Ignore initialization errors", via Window > Preferences, under the Scenario tab.

Once this option is activated, ByteBlower will disable each flow that fails to initialize. Your Scenario will then run with all the flows that are fully functional. The report will contain logs explaining which flows failed during the initialization phase.

The ByteBlower Wireless Endpoint is software that turns your mobile devices into ByteBlower Ports.
With minimal effort, you can integrate iPhones, iPads, Windows phones, Android devices, but also Linux machines, MacBooks, Windows PCs, ... into your network test scenarios.


The Wireless Endpoint runs on your smartphone, tablet or laptop; it's the software that generates the testing traffic. You need to install the Wireless Endpoint on all devices you want to test. The Wireless Endpoint software is available for free on all major app stores. You can find the download links collected on the website below.

Tests are organized from the ByteBlower chassis. For the Wireless Endpoint to become active, it needs to reach the management connection of the ByteBlower. When possible, we suggest connecting the second management interface to the Access Point. This is the LAN-only option explained in the link below:

On the ByteBlower chassis, it's the Meeting Point software that interacts with the Wireless Endpoint. It can be installed on any ByteBlower server when you have a valid license for it.

Basic Workflow

Setting up a Wireless Endpoint is identical on all supported devices.

Start the Wireless Endpoint

Enter the IP address of your Meeting Point. This is required the first time only. Next time, the app will fill in the previously used address by default.

Push the button to go to the next page to connect with the Meeting Point.
The Wireless Endpoint connects with the Meeting Point, and reaches the "Registered" state.


On top of the screen, the Wireless Endpoint displays its current state. There are five possible states:

  • Ready - waiting until you enter the IP address of the Meeting Point.
  • Contacting - establishing a connection with the Meeting Point.
  • Registered - initial handshake with the Meeting Point succeeded. From now on, the Wireless Endpoint can be controlled using your GUI. The Wireless Endpoint starts sending heartbeat messages to the Meeting Point, to signal that it is still alive.
  • Armed - when you run a test scenario using the GUI, the entire set of instructions for this device is transmitted at the beginning. When all instructions are received, the device goes into the Armed state. Now, the Wireless Endpoint becomes quiet, and will no longer send heartbeat messages, so that the test traffic is not disturbed.
  • Running - when the start time has come, the Wireless Endpoint will begin sending/receiving network traffic.

At the bottom of the screen, there is an Abort button. When you push this button, the Wireless Endpoint will go back from the Registered to the Ready state. When in the Armed or Running state, the Abort button is disabled.


From now on, you can sit back and control the entire test process using your GUI.
All you need to do is add the Meeting Point in the Server View.

All connected Wireless Endpoints become visible.

Now you can dock ByteBlower Ports on your Wireless Endpoints to integrate them into your test scenarios.

Running Test Scenarios

When you start a test scenario using Wireless Endpoints, the Meeting Point will automatically initialize all Wireless Endpoints.
All Wireless Endpoints go into the "Armed" State.

While the actual test is running, there is complete radio silence between the Meeting Point and the Wireless Endpoints.
This way, the network test traffic itself is not influenced by any unwanted signals.
When you look at the Wireless Endpoint while a test is running, you can see the Current Speed.

After the test finishes, the Meeting Point gathers all results from the Wireless Endpoints.
Then the report is generated.

Have a great time using our Wireless Endpoints!

This article is a short guide on how to create an RFC-2544 test with the ByteBlower GUI. This test estimates the throughput of your network. The main advantage is that you have precise control over a number of parameters. The drawback is that RFC-2544 test runs tend to take quite some time.

The text begins with a short introduction and ends with a couple of pointers on next steps.

Introduction: Creating a first RFC-2544 scenario

RFC-2544 tests are created with the RFC-2544 wizard. As you'll see further on, this wizard will create a couple of ports, a flow and a scenario for you to run. This is very similar to any of the other traffic flows (FrameBlasting, TCP).

As the screenshot below shows, the RFC-2544 wizard is found in the menu bar, under the Wizard menu at the top of the ByteBlower GUI.

The first 3 screens show an introduction and give you the chance to configure the source and destination of the test. These steps are the same as in the other wizards. The result of the steps in the screenshots below is two new ByteBlower ports:

  • RFC_2544_SOURCE_1

Both are regular ByteBlower ports. In the config below, they receive their IPv4 address through DHCP. As will be shown further on, you can still change the ports afterwards.

Only in the final window will you find RFC-2544 specific parameters. You'll need to configure the following 3 items:

  • A list of frame sizes
  • The duration of a single iteration/step
  • and the acceptable loss level.

As we've said in the intro, the main drawback of RFC-2544 tests is their duration. Only the first two parameters have an impact on the duration. On each change, the total duration shown at the bottom of the window is updated. This total duration is an estimate; in most cases the test will finish a lot sooner.
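As a back-of-the-envelope sketch, the worst-case duration grows with the number of frame sizes and the step duration. Note that the per-size iteration count below is a hypothetical assumption for illustration, not a value documented for the ByteBlower wizard:

```python
# Hedged sketch: worst-case RFC-2544 run time.
# max_iterations_per_size is an assumed search depth, not a ByteBlower constant.
def estimate_duration_s(frame_sizes, step_seconds, max_iterations_per_size=10):
    """Upper bound on total run time: every frame size runs every iteration."""
    return len(frame_sizes) * max_iterations_per_size * step_seconds

# The seven classic RFC-2544 frame sizes with 60-second steps:
sizes = [64, 128, 256, 512, 1024, 1280, 1518]
hours = estimate_duration_s(sizes, 60) / 3600
print(f"worst case: {hours:.1f} hours")
```

The actual run usually converges well before the worst case, which is why the GUI's total is only an estimate.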

The end result of this Wizard is visible in these places:

  • Port view: there are 2 new ports.
  • Flow view: an extra flow that uses the newly created ports. The details of the template are shown in the info panel at the left.
  • The Scenario view has an extra scenario using the new flow.

All three windows are shown in the screenshot below.

Running an RFC-2544 test

The results

Hints and tricks

Testing IPv6

The RFC-2544 flow can handle IPv6 ByteBlower ports. Since the wizard generates IPv4 ports by default, you'll need to reconfigure your project a bit. You've got two options:

  1. Drag the newly generated ports from the IPv4 panel to the IPv6 one.
  2. Create new IPv6 ports and use these as source and destination.

Editing frame sizes and loss level

To change the frame sizes and loss level, go to the Flow view and right-click on the Flow Template. In the popup menu, select "Jump to Edit..."

Here you can change the frame sizes and the acceptable frame loss of the RFC-2544 flow.


An internet stream isn't always a perfect flow of bits at a constant rate. So for your tests, you may want to create such a bursty flow. This article describes how you can simulate a UDP flow with bursts using our GUI.


To create a multiburst UDP flow, we start with the basic components: a source port and a destination port. We provide them with correct MAC and IP settings and dock the ports on our server.

Next, we create the frame we want to transmit. We give this frame our desired size and, if needed, the wanted UDP source/destination port numbers. In our example we will use a frame of 128 bytes.

It is in the FrameBlasting template that we define the speed and the burstiness of our flow.

Create a new template and add the frame (our frame of 128 bytes created in the previous step). With the edit button in the middle pane, we can configure the speed and add a timing modifier. It is this timing modifier that creates the burst profile.

Under the Speed tab, we configure the speed of the flow. Let's configure this to a UDP flow of 2 Mbit/s. The Timing Modifier tab allows us to add a timing modifier to the flow to create a burst profile. Here we can configure 2 parameters.

Interburst Gap: the time between 2 bursts. During this period, no frames will be transmitted.

Frames per Burst: how many frames need to be transmitted during a burst.

In the Flow view, we connect the source, destination and flow template together to create our flow. We can now use this Flow in our scenario, and specify the desired duration for the flow. You can also define the number of frames you want to transmit; the GUI will then calculate the duration.
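To see how these parameters interact, here is a small sketch of the arithmetic behind the profile (the GUI does this bookkeeping for you; the frames-per-burst value below is just an example):

```python
# Back-of-the-envelope sketch (not the ByteBlower API): how the burst
# parameters relate to the average rate configured under the Speed tab.
FRAME_SIZE_BYTES = 128      # the frame created above
RATE_BPS = 2_000_000        # 2 Mbit/s average UDP rate
FRAMES_PER_BURST = 100      # example value for the Timing Modifier

frames_per_second = RATE_BPS / (FRAME_SIZE_BYTES * 8)
burst_period_s = FRAMES_PER_BURST / frames_per_second

print(f"{frames_per_second:.0f} frames/s on average")
print(f"one burst of {FRAMES_PER_BURST} frames every {burst_period_s * 1000:.1f} ms")
```

The configured interburst gap is at most this burst period: the period minus the time needed to send the burst itself.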

Now we are ready to run the project. Below you see the result report showing the bursts.

Example project

Attached to this KB article, you will find the project created during this article. Download and modify it to fit your needs.

The ByteBlower App for the Wireless Endpoint is used for reliable network testing. One can use it for any type of network, but most of its features are designed for wireless networks. Especially for such networks, there is a trade-off between loading the network and keeping the App easy to use. As we will show in this article, cancelling a scenario is one such situation where both can't be met simultaneously. We'll first discuss how to stop a scenario on a Wireless Endpoint. Next, we'll detail the reasoning behind our design.

A scenario run is shown in the images below. The right one is taken from the App itself. The 'Running' state, shown top right, indicates that the device is currently performing a test. The same state is visible from the GUI: a red play button is added to the Wireless Endpoint after a refresh.


The cancel button in the ByteBlower GUI stops a scenario early. Pressing the button shown in the image halts the traffic on the ByteBlower server and frees up the GUI for the next scenario. Unfortunately, new scenarios with any Wireless Endpoint from the cancelled test will still wait for the remaining duration of the configured scenario.



On long tests, waiting for the end of the test might not be an option. In this case, you will need to restart the App manually. The MeetingPoint is robust against such changes and immediately picks up a fresh start. In the GUI, it might be necessary to refresh the app.

The reason for this manual work was already hinted at in the introduction. The functionality on the Wireless Endpoints involves a trade-off between user-friendliness and correctly testing the network. To complicate matters further, because of NAT, the Wireless Endpoint is often not directly reachable by the MeetingPoint. Heavily loaded networks with poor reception make this even more difficult.

To avoid changing the behaviour of the network, the management communication of the App remains quiet during test runs.
This is different from the management communication with the ByteBlower server. Commands between them are sent out over a different network, and the ByteBlower server is provisioned for this extra traffic. During a test, the GUI remains in contact with the server.

Scenarios on the server can thus easily be cancelled. Since there is no communication between the GUI and the Wireless Endpoints, it is not possible for the App to stop early. Of course, a cancellation feature is still being worked on, and future versions of the App will add improvements.

The TCP congestion avoidance algorithm affects how fast the throughput is able to recover after packet loss. There's no ideal solution, and over time several approaches have been described.

The ByteBlower GUI implements a couple of the most popular ones; you can choose from the following list: “None”, “New Reno”, “New Reno with Cubic”, “SACK” and “SACK with Cubic”.

Regardless of the option, the ByteBlower TCP implementation will always perform the basic congestion avoidance measures like exponential backoff and slow start. (See Wikipedia’s article on TCP congestion control for more information.)

The summary below explains each available option:

  • When selecting “None”, no fast recovery algorithm will be used. ByteBlower does, however, still implement fast retransmit.

  • “New Reno” is a loss recovery algorithm that improves recovery speed when multiple packets have been dropped.

  • “SACK” (short for “selective ACK”) is an even faster recovery algorithm where the receiver can tell the sender which segments are missing. This algorithm can only be used if both sides support it. This is negotiated during connection establishment using the “SACK Permitted” option in the TCP header. If one of the sides does not support SACK, ByteBlower will use New Reno instead.

  • “Cubic” improves the TCP recovery speed on high-latency networks while still providing good performance on low-latency networks. It can be combined with SACK and New Reno (by selecting “SACK with Cubic” or “New Reno with Cubic”).

  • “SACK with Cubic” is the option that should provide the best performance in most situations.

This is a background article on throughput calculation on layered packet-based networks. For information about throughput values in ByteBlower GUI reports, see here.

Throughput in packet-based networks

Calculating throughput is very straightforward. We simply have to calculate how much of something is handled in a second. In the case of networking, this boils down to how much data is transmitted or received in a second:

throughput = data / time [s]

To calculate the throughput of something, we have to define that something. In other words, we should define what traffic is part of the data variable in the throughput calculation. This depends on the task at hand:

  • All outgoing data on an interface.
  • All incoming data on an interface.
  • All incoming data on an interface destined for that interface.
  • All incoming or outgoing data within a data flow.
  • All incoming data originating from a specific source.
  • ...

Furthermore, within the context of packet-based networks, the relevant data may be counted or measured in two different units.

  • number of packets
  • number of bits

The resulting throughput will thus be measured in packets per second (pps) or bits per second (bps). These two measurements are obviously closely related.

throughput_packets [pps] = data [frames] / time [s]

throughput_bits [bps]    = data [bits] / time [s]
                         = throughput_packets [pps] * packet_size [bits/packet]

The throughput in packets is typically called the packet rate or (depending on the protocol in question) frame rate, segment rate, ...
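The two formulas above translate directly into code:

```python
# Direct translation of the two throughput formulas above.
def throughput_pps(packets, seconds):
    """Packet rate: packets per second."""
    return packets / seconds

def throughput_bps(packets, seconds, packet_size_bits):
    """Bit rate: the packet rate times the packet size in bits."""
    return throughput_pps(packets, seconds) * packet_size_bits

# 10,000 frames of 1000 bytes received in 10 seconds:
print(throughput_pps(10_000, 10))            # 1000.0 pps
print(throughput_bps(10_000, 10, 1000 * 8))  # 8000000.0 bps (8 Mbps)
```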

After deciding what data is counted and in what unit (packets or bits) it is measured, there is only one thing left to decide: what does it mean to transmit or receive data. In layered networks such as TCP/IP, this differs from layer to layer.

Throughput in layered packet-based networks

In layered networks, the networking functionality is divided in multiple layers. For example, the TCP/IP stack consists of the following layers:

  • Layer 5 or application layer: offers communication logic to an application and defines network messages (syntax and semantics)
  • Layer 4 or transport layer: reliable or unreliable end-to-end connectivity
  • Layer 3 or network layer: identifying network hosts and sending data in their direction
  • Layer 2 or data link layer: passing traffic over a link between two layer 3 hops
  • Layer 1 or physical layer: transmitting bits over the physical link

Network protocols implement the functionality of such a layer. Examples are:

  • Layer 5: HTTP, FTP, Telnet, SMTP, DHCP, ...
  • Layer 4: TCP (connection-oriented) and UDP (connectionless, best effort)
  • Layer 3: IPv4 and IPv6
  • Layer 2: Ethernet, WLAN, ATM, PPP, DOCSIS, ...
  • Layer 1: Ethernet physical layers (e.g. 1000BASE-T), Wi-Fi physical layers (e.g. 802.11n), SONET/SDH, DSL, ...

Each layer provides a service to the layer above. For example, layer 3 offers unreliable delivery over a network to the end-to-end layer 4 transport protocol. On the other hand, each protocol uses a combination of internal logic and services of the layer below to operate. For example, layer 3 delivers messages over a network by determining the direction of the destination (layer 3 logic) and sending the packet to the next hop in that direction (through a layer 2 service). This is clarified in the figure below:

<layer picture>

Due to the different scope and functionality of network layers, the definition of transmitting (TX) and receiving (RX) changes as well. For example, from a layer 5 point of view, transmission of a network message may be reliable end-to-end delivery of that message, while layer 2 sees transmission as taking data across a single network link.

To define a measure of throughput, we still need to define what TX and RX mean. In the context of a layered network stack, it comes down to this rule:

The layer N throughput is the amount of data flowing across the interface between layer N and layer N-1 below it in one second: down the stack at the TX side and up the stack at the RX side.

This means that we can calculate both the throughput in packets per second and the throughput in bits per second for each of the layers. This is put into practice for the TCP/IP stack in the next section.

TCP/IP network stack

The figure below shows a typical TCP/IP network stack when used over Ethernet and a typical network packet that comes through that stack. It consists of the following protocols:

  • Physical layer: Ethernet II
  • Data link layer: Ethernet II
  • Network layer: IPv4
  • Transport layer: TCP
  • Application layer: HTTP


When a layer uses a service of the layer below, this service takes up time and resources. In a stacked setup, bottlenecks throughout the stack may limit the throughput. Possible limiting factors are:

  • physical bandwidth of the TX or RX network link (layer 1)
  • retransmission due to collisions on the link (layer 2)
  • fragmentation and reassembly of packets (layer 3)
  • retransmission due to lost packets (layer 4)
  • limited buffers (layer 4)

Furthermore, much of this functionality has an impact on the throughput in itself. This will be exemplified in the following subsections.

Layer 2 throughput: Ethernet

The Ethernet protocol spans both the data link layer (layer 2) and the physical layer (layer 1), and there is no clear interface between those two parts. Because there is no single interface, there is no single definition of throughput, as the definition depends on that interface (see above).

Frames per second (framerate)

To calculate the throughput in Ethernet frames per second, we simply have to count the number of frames in the relevant period of time.

Bits per second

On the other hand, the throughput in bits per second depends on whether we consider none, some or all of the following functionality as part of the layer 2 protocol (data link transfer) or the layer 1 protocol (physical transmission):

  • interleaving consecutive Ethernet frames with pause bytes (interframe gap or IFG, 12 bytes)
  • prepending Ethernet frames with a preamble for physical synchronization (preamble, 7 bytes)
  • announcing the start of a frame with a specific sequence of bits (frame start delimiter or FSD, 1 byte)
  • adding a CRC checksum field to detect transmission errors (frame sequence check or FSC, 4 bytes)

Functionality that is considered part of layer 2 is located above the inter-layer interface used in the throughput definition. Therefore, the corresponding bytes in the network packet are also part of the layer 2 header, and thus part of the data moving across the interface to and from layer 1. This is clarified in picture X.

By placing more and more functions at layer 2, we get the following configurations:

Configuration                                        Layer 2 frame size
MAC header                                           size of layer 2 payload + 14 bytes
MAC header + FSC                                     size of layer 2 payload + 18 bytes (14+4)
MAC header + FSC + preamble + FSD                    size of layer 2 payload + 26 bytes (14+4+7+1)
MAC header + FSC + preamble + FSD + interframe gap   size of layer 2 payload + 38 bytes (14+4+7+1+12)

The effect on the throughput becomes clear when looking at an example. Consider a layer 2 payload of 1000 bytes and a frame rate (i.e. throughput in pps) of 1000 frames per second. The throughput in bits per second is then calculated as follows:

Configuration                                        Formula                                          Result
MAC header                                           1014 bytes/frame * 8 bits/byte * 1000 frames/s   8112 kbps
MAC header + FSC                                     1018 bytes/frame * 8 bits/byte * 1000 frames/s   8144 kbps
MAC header + FSC + preamble + FSD                    1026 bytes/frame * 8 bits/byte * 1000 frames/s   8208 kbps
MAC header + FSC + preamble + FSD + interframe gap   1038 bytes/frame * 8 bits/byte * 1000 frames/s   8304 kbps

Notice that the effect of the configuration is limited: the results are only 2.3 percent apart. However, this effect becomes much larger when packets are smaller. Consider the same example with the minimal layer 2 payload size for Ethernet, which is 46 bytes:

Configuration                                        Formula                                        Result
MAC header                                           60 bytes/frame * 8 bits/byte * 1000 frames/s   480 kbps
MAC header + FSC                                     64 bytes/frame * 8 bits/byte * 1000 frames/s   512 kbps
MAC header + FSC + preamble + FSD                    72 bytes/frame * 8 bits/byte * 1000 frames/s   576 kbps
MAC header + FSC + preamble + FSD + interframe gap   84 bytes/frame * 8 bits/byte * 1000 frames/s   672 kbps

In this case the difference in throughput between the two extremes has risen to 40 percent!
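Both tables can be reproduced with a few lines of code, using the overhead byte counts listed earlier in this section:

```python
# Layer 2 throughput for each overhead configuration, at 1000 frames/s.
# Overhead bytes per frame, cumulative as in the tables above.
OVERHEAD_BYTES = {
    "MAC header": 14,
    "+ FSC": 18,
    "+ preamble + FSD": 26,
    "+ interframe gap": 38,
}

def l2_throughput_bps(payload_bytes, overhead_bytes, frames_per_second=1000):
    return (payload_bytes + overhead_bytes) * 8 * frames_per_second

for payload in (1000, 46):  # the large-frame and minimal-payload examples
    for name, overhead in OVERHEAD_BYTES.items():
        kbps = l2_throughput_bps(payload, overhead) // 1000
        print(f"{payload:>4}-byte payload, {name:<18}: {kbps} kbps")
```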

Note: minimum size and padding

The Ethernet specification determines that the minimal size of a layer 2 Ethernet frame (including the MAC header but excluding the rest) is 60 bytes. This means that the minimal payload size is 46 bytes. When the layer 2 payload (typically an IP datagram) is less than 46 bytes, Ethernet will fill up the remaining bytes with padding to create a 60-byte frame.

This padding is therefore part of the layer 2 payload, but not of the layer 3 packet.
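Put as a small helper (MAC header only; FSC, preamble and interframe gap excluded):

```python
# Layer 2 frame size with Ethernet's minimum-payload padding applied.
MAC_HEADER = 14
MIN_PAYLOAD = 46  # payloads below this size are padded

def ethernet_frame_size(l3_packet_bytes):
    """Frame size in bytes: MAC header plus the (possibly padded) payload."""
    return MAC_HEADER + max(l3_packet_bytes, MIN_PAYLOAD)

print(ethernet_frame_size(1000))  # 1014: no padding needed
print(ethernet_frame_size(20))    # 60: padded up to the 60-byte minimum
```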

Layer 3 throughput: IP

Packets per second (packet rate)

When IP runs on top of Ethernet, calculating the layer 3 throughput in packets per second is typically quite straightforward. An IP datagram corresponds exactly with the payload of a single Ethernet frame.

If the IP protocol receives a network segment (TCP) or datagram (UDP) from layer 4 that is too large for the interface to layer 2, it will either:

  • Fragment it into multiple IP datagrams. This is the case for IPv4 routers and IPv4/IPv6 end-hosts.
  • Drop it. This is the case for IPv6 routers and IPv4 datagrams with the don't fragment flag set.

In any case, the number of layer 3 IP datagrams is the same as the number of layer 2 Ethernet frames.

Bits per second

To calculate the throughput in bits per second, simply strip the Ethernet frame from all layer 2 information. This includes all Ethernet II header information and the padding. Use the resulting IP datagram size to calculate the throughput in bits per second.

Note: other protocols

The calculation for Ethernet is straightforward because a single IP datagram always matches a single Ethernet frame. Notice that for other layer 2 protocols, things may not be that easy.

For example, running IP over ATM causes a single IP datagram to be chopped up to fit the fixed 48-byte payload size of ATM cells. At the receiving side of the link, ATM reassembles the IP datagram before throwing it over the inter-layer interface towards layer 3. This may influence the throughput in a number of ways:

  • The number of packets is different at layer 2 and layer 3. The number of ATM cells per second will be much higher than the number of IP datagrams per second. The difference grows with the size of the IP datagram.
  • The layer 2 overhead also depends on the size of the IP datagram. For each 48-byte layer 2 payload, 5 bytes of overhead are introduced at layer 2. This means that the data throughput at layer 3 will be significantly lower than at layer 2, and even more so for long datagrams.
  • If chopping up the IP datagram into cells or reassembling those cells into IP datagrams is a performance bottleneck, the number of layer 3 datagrams that can be processed by layer 2 in a second may limit the throughput.
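The ATM example can be made concrete (AAL5 trailer and last-cell padding are ignored here for simplicity):

```python
import math

# ATM segmentation of an IP datagram: 48 payload bytes + 5 header bytes
# per 53-byte cell. AAL5 trailer and cell padding are ignored for simplicity.
ATM_PAYLOAD = 48
ATM_CELL = 53

def atm_cells(ip_datagram_bytes):
    """Number of cells needed to carry one IP datagram."""
    return math.ceil(ip_datagram_bytes / ATM_PAYLOAD)

def atm_bytes_on_wire(ip_datagram_bytes):
    """Layer 2 bytes sent for one IP datagram."""
    return atm_cells(ip_datagram_bytes) * ATM_CELL

for size in (100, 1500):
    print(f"{size}-byte datagram -> {atm_cells(size)} cells, "
          f"{atm_bytes_on_wire(size)} bytes at layer 2")
```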

Layer 4 throughput: TCP or UDP

The throughput definition is just as valid on layer 4: the amount of relevant data per second that is passed up or down the interface to layer 3 (IP). The packets passed down to or up from layer 3 are typically called segments in the case of TCP and segments or datagrams in the case of UDP. The layer 4 protocol may decide on their size.

The fragmentation and reassembly of both IPv4 and IPv6 (described above) is transparent to the layer 4 protocol running at the end-hosts. From the layer 4 point of view, the TCP segment or UDP datagram passed down at the sending host is received unchanged at the destination host.

However, the fragmentation at layer 3 may have impact on the layer 4 throughput:

  • If IP fragmentation or reassembly is a performance bottleneck, the speed at which TCP or UDP can send packets may be slowed down.
  • If no IP fragmentation is possible (due to the don't fragment flag or because IPv6 routers drop datagrams that are too large), nothing may get through!

Therefore, layer 4 will typically avoid layer 3 fragmentation by providing packets of the correct size. This may be done through end-to-end path Maximum Transmission Unit (MTU) discovery.
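For TCP, this sizing is expressed as the Maximum Segment Size (MSS), which leaves room for the IP and TCP headers within the path MTU. The header sizes below assume no IP options, TCP options or extension headers:

```python
# TCP segment sizing that avoids IP fragmentation: the MSS is the path MTU
# minus the IP and TCP headers (assuming no options/extension headers).
def tcp_mss(path_mtu, ip_header=20, tcp_header=20):
    return path_mtu - ip_header - tcp_header

print(tcp_mss(1500))                # 1460 for IPv4 over Ethernet
print(tcp_mss(1500, ip_header=40))  # 1440 for IPv6 over Ethernet
```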

Segments or datagrams per second

For the connectionless UDP, defining the number of packets per second is trivial. The packet throughput is only limited by the performance of the underlying layers.

For the connection-oriented TCP, all transferred segments, such as ACK messages and possible retransmissions, are included. The segment throughput may be influenced by:

  • The performance of the underlying layers.
  • Congestion of the network (TCP will stop sending when the congestion window is full).
  • Processing speed at the receiver (TCP will stop sending when the receiver window is full).
  • The congestion avoidance algorithm (e.g. may influence send rate and the number of retransmissions).

Due to this complexity, layer 4 throughput is not very interesting. If we want to know the end-user throughput, it is better to calculate the layer 5 throughput, which takes an end-user application point of view.

Bits per second

Once the number of TCP segments or UDP datagrams per second is defined, the throughput in bits per second can be easily calculated.

Layer 5 throughput: HTTP, FTP, ...

From the standpoint of an application protocol, such as HTTP, all functionality of the network stack is abstracted away.

  • Dividing in segments (layer 4)
  • Retransmissions (layer 4)
  • Flow control and congestion control policy (layer 4)
  • Packet-based networking (layer 3)
  • Possible fragmentation and reassembly (layer 3)
  • Possible data collisions on data links along the path (layer 2)
  • Data link bandwidths along the path (layer 1)

The interface between layer 4 and layer 5 is typically called a socket. The socket interface completely hides the packet-based network. Instead, it acts as a buffered input or output stream:

  1. An application may only write data to a socket when there is room in the layer 4 (send) buffer. When no data can be pushed to layer 4, the application protocol may decide to:
    • wait for buffer space, e.g. to send the rest of a file (TCP)
    • drop the data, e.g. some samples in a voice call (UDP)
  2. A TCP socket may transparently buffer small messages into a single segment before transmitting. However, the application may force TCP to immediately send the buffered content. A UDP socket handles application messages immediately.
  3. An application is in control of pulling data from the socket. It decides on the frequency of checking a socket and the amount of data that is pulled at once. This means that the layer 4 (receive) buffer may also get filled up when traffic comes in faster than the application can or will handle. When it is full, the layer 4 protocol may:
    • drop traffic (UDP)
    • inform the sender to stop sending (TCP)
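Point 2 above is visible in the standard socket API: the TCP_NODELAY option (which disables Nagle's algorithm) tells TCP to transmit buffered content immediately instead of coalescing small messages. A minimal Python sketch:

```python
import socket

# Create a TCP socket; by default, small writes may be coalesced into a
# single segment before transmission (Nagle's algorithm)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask TCP to send buffered content immediately instead of waiting
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```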

The rule of thumb for layer 5 thus boils down to the amount of data moving through the socket, which may depend on many factors:

  • Bandwidth of the data links on the network path.
  • Loss on the network and possible retransmissions (either end-to-end or on a single link).
  • Fragmentation and reassembly.
  • Congestion and QoS on the network.
  • How fast the transmitting application generates network traffic.
  • How fast the receiving application reads incoming traffic. This may be limited by either performance or application logic.

Since the notion of packets is gone at layer 5, only the throughput in bits per second is defined.
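To make this concrete, here is an illustrative layer 5 goodput estimate for a TCP transfer over Ethernet, assuming a 100 Mbit/s link, 1500-byte IP packets with 40 bytes of IP+TCP headers, and 38 bytes of per-frame Ethernet overhead (header, FCS, preamble, inter-frame gap):

```python
line_rate = 100e6      # assumed link speed, bits per second
payload = 1460         # TCP payload per 1500-byte IP packet (20 B IP + 20 B TCP headers)
on_wire = 1500 + 38    # bytes actually occupying the wire per packet

# Layer 5 goodput: the fraction of the line rate carrying application data
goodput = line_rate * payload / on_wire  # roughly 94.9 Mbit/s
```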

When you start running a test Scenario, the ByteBlower GUI/CLT tries to detect colliding Frames.
In this article, I'll explain what colliding Frames are, how they can corrupt your test results, and of course, how you can prevent or resolve these.

What are Colliding Frames?

Each ByteBlower Port that is used as the destination of a Flow contains filters to count the arriving frames belonging to that specific Flow.
When your test involves multiple Flows, the filters are automatically optimized to make each one of them unique.
This way, we can be certain that only the Frames belonging to one specific Flow will be filtered out.
But when the frames used in different Flows are identical or very similar, it is sometimes impossible to create unique filters.

At that point, it is no longer possible to determine which received Frame belongs to which Flow. 

As a result, Frames belonging to multiple Flows will be counted by multiple destinations!
This typically results in negative loss: you appear to receive more packets than you have sent.

During the Scenario initialization, colliding Frames are detected, and a warning is generated.
Typically, a message will pop up to draw your attention, unless you switched this off in the Preferences.
In any case, the warning will be added at the bottom of the reports.

Colliding Frames Warning

Possible solutions

The solution to make the Colliding Frames warning go away is to make all involved Frames unique.
There are several options to do this:

  • By enabling the Unique Frame Modifier on Layer 4 of the Frames. This option makes all Frames uniquely identifiable at all destinations.
  • By changing the length of one of the Frames
  • By changing one byte in the payload of one of the Frames
  • For UDP frames without NAT discovery, one can pick different UDP ports.
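As an illustration of the last option (the BPF-style filter syntax here is hypothetical shorthand, not the actual ByteBlower filter implementation): two flows that differ only in their UDP destination port can each be matched by a unique filter.

```python
# Two otherwise identical flows, made unique by their UDP destination port
flow_a = {"dst_ip": "10.0.0.2", "udp_dst": 5001}
flow_b = {"dst_ip": "10.0.0.2", "udp_dst": 5002}

def filter_for(flow):
    # Illustrative BPF-style filter that counts only this flow's frames
    # at the destination Port
    return f"ip dst {flow['dst_ip']} and udp dst port {flow['udp_dst']}"
```

Had both flows used port 5001, the two filters would be identical and every received frame would match both, producing colliding Frames.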

All traffic in ByteBlower is transmitted and received through ByteBlower Ports. Conceptually, these Ports can be seen as small, virtual endpoints in your network. Like any other endpoint, they need various network settings to communicate with the outside world. Physically, they're docked (or attached) to a specific ByteBlower interface. This article explains the configuration options of an IPv6 ByteBlower Port. We start from the screenshot below.


The name given to a ByteBlower Port. This name is used solely within the ByteBlower GUI; it's never sent out over the network. To avoid confusion, all ports in the same project should have different names.

MAC Address

Like any other network device, a ByteBlower Port requires a MAC address. By default, this address is used for all traffic in and out of the Port. It's often a good idea for this address to be unique, but this is not required (e.g. dual-stack IPv4 and IPv6). For IPv6 ByteBlower Ports, the MAC address is used to generate the link-local address.
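The standard way to derive a link-local address from a MAC address is the modified EUI-64 procedure of RFC 4291: flip the universal/local bit of the first byte and insert ff:fe in the middle. A sketch of that general IPv6 rule (the function name is ours, not ByteBlower code):

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    """Derive an fe80::/64 link-local address from a MAC (modified EUI-64)."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip the universal/local bit
    eui64 = b[:3] + bytearray([0xFF, 0xFE]) + b[3:]  # insert ff:fe in the middle
    addr = bytearray(16)
    addr[0], addr[1] = 0xFE, 0x80                  # fe80::/64 link-local prefix
    addr[8:16] = eui64
    return str(ipaddress.IPv6Address(bytes(addr)))
```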


The method used to obtain a global IPv6 address for the ByteBlower Port. This can either be:

  • Fixed, or user-supplied: The ByteBlower Port will use the values of the IPv6 Address, Router Address and Prefix Length columns. Even in this mode, the ByteBlower Port will still send out Neighbor Solicitations and join the required multicast groups.

  • Stateless Auto Configuration: The ByteBlower Port obtains its global address from the IPv6 router in the network. This method doesn't really have an analogue in IPv4. It's described in RFC 2462.

  • DHCPv6: A DHCPv6 server offers an address to the ByteBlower Port. From a high-level perspective this is very similar to DHCP in IPv4. In the ByteBlower GUI you can reuse the same DHCP settings object between IPv4 and IPv6.

The IPv6 Address, Router Address and Prefix Length columns are only available for Fixed configurations. In the other configurations, these values are obtained from the network.

VLAN Stack

A ByteBlower Port can be configured with a VLAN tag or even a VLAN stack (QinQ). These tags are known as Layer 2.5 configurations. They are used to create virtual networks within a physical network. The ByteBlower GUI will automatically add these tags to all outgoing traffic. Similarly, all traffic arriving at a ByteBlower Port needs the same Layer 2.5 configuration.


This column configures the maximum size of the frames transmitted from a ByteBlower Port. TCP traffic from this port will respect this size; similarly, one can't transmit FrameBlasting frames larger than this size. This limit doesn't apply to received frames.

The MTU is expressed in bytes.


As mentioned in the introduction, a ByteBlower Port emulates an endpoint in the network. For it to be useful, it needs to be physically docked to a ByteBlower interface. All incoming and outgoing traffic of this ByteBlower Port goes through this interface.

This article explains the available options for TCP flows.


The Name of the TCP flow template. This needs to be unique among the other TCP and FrameBlasting Flow templates.


This column configures how long, or how much data, the TCP flow should transmit. There are two options here:

  • Payload based: The flow will transmit the configured payload and close the connection. This amount is the total data size. The overhead at the TCP layer and below is not omitted.

  • Time based: This method is similar to FrameBlasting templates: the flow will transmit for a fixed duration. How long it should send is configurable in the Scenario View.

Rate Limit

This applies an optional rate limit to the TCP flow. Similar to e.g. video streaming or downloads, this rate limit is applied to the amount of data supplied to the TCP flow.

Unscaled Initial Receive Window

The TCP protocol uses this parameter to initialize the Receive Window. The maximum unscaled Receive Window is 65535 bytes. The TCP window is the amount of outstanding data (unacknowledged by the recipient) that can remain in the network. After sending that amount of data, the sender stops and waits for the receiver to acknowledge some of it. As such, this value, together with its scaling factor, is probably the single most important setting in tuning broadband internet connections.
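To see why this matters: TCP throughput is bounded by the window size divided by the round-trip time. With the maximum unscaled window and an assumed 20 ms RTT:

```python
window = 65535      # bytes, maximum unscaled receive window
rtt = 0.020         # seconds, assumed round-trip time

# Upper bound on throughput: one full window per round trip
max_throughput = window * 8 / rtt  # about 26.2 Mbit/s, regardless of link speed
```

This is why window scaling (below) is essential on fast links with non-trivial latency.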

Window Scaling

Enable or disable window scaling here. The TCP window scale option increases the TCP receive window size above its maximum unscaled value of 65535 bytes.

Receiver's Window Scale Value

Here you can choose which window scale value should be used. The following options are possible:
  • 0 (multiply with 1)
  • 1 (multiply with 2)
  • 2 (multiply with 4)
  • 3 (multiply with 8)
  • 4 (multiply with 16)
  • 5 (multiply with 32)
  • 6 (multiply with 64)
  • 7 (multiply with 128)
  • 8 (multiply with 256)
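The effective receive window is the unscaled window shifted left by the scale value (i.e. multiplied by 2 to the power of the scale), as defined by the TCP window scale option:

```python
def effective_window(unscaled: int, scale: int) -> int:
    # Window scaling: multiply the advertised window by 2**scale
    return unscaled << scale

# e.g. scale value 8 turns a 65535-byte window into roughly 16 MB
```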

Slow start threshold

This is a low-level parameter of the TCP algorithm. Throughput grows exponentially until this value is reached; afterwards the speed increases linearly, and thus slowly. Increasing this parameter results in steeper, faster throughput gains, but often also more instability. Conversely, a low value results in a TCP flow that only slowly increases its throughput.
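A simplified model of this behaviour (real TCP stacks are more intricate; this sketch counts the congestion window in segments, doubling per round trip below the threshold and adding one segment per round trip above it):

```python
def cwnd_after(round_trips: int, ssthresh: int) -> int:
    """Congestion window (in segments) after a number of round trips."""
    cwnd = 1
    for _ in range(round_trips):
        if cwnd < ssthresh:
            cwnd *= 2      # slow start: exponential growth
        else:
            cwnd += 1      # congestion avoidance: linear growth
    return cwnd
```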

HTTP Request Method

Choose which method should be used to transfer the payload. The following options are possible:

  • AUTO: The GUI will choose the appropriate method (GET or PUT). When the source ByteBlower Port of a TCP Flow is NATted, an HTTP PUT command will be used; otherwise, an HTTP GET command will be used.

  • PUT: The source ByteBlower Port will initiate the data transfer by executing an HTTP PUT command. The data will be "uploaded" from the source to the destination.

  • GET: The destination ByteBlower Port will initiate the data transfer by executing an HTTP GET command. The data will be "downloaded" from the source to the destination.
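The AUTO rule above can be summarised as follows (a sketch of the decision, not the actual GUI code):

```python
def http_method(source_is_natted: bool) -> str:
    # A NATted source cannot accept an incoming connection, so it must
    # initiate the transfer itself with a PUT; otherwise the destination
    # initiates with a GET.
    return "PUT" if source_is_natted else "GET"
```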

Congestion Algorithm

The algorithm used by the TCP flow to deal with congestion.

Client and Server Ports

The TCP ports used by the flow template. When left on automatic, ByteBlower will pick an available port to set up the flow. In general this is the preferred option.
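"Automatic" here follows the usual socket convention: binding to port 0 lets the operating system pick a free port, as in this Python sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))      # port 0 = let the OS choose a free port
port = s.getsockname()[1]     # the port that was actually assigned
s.close()
```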

The ByteBlower GUI supports several report file types.
  • PDF
  • HTML
  • CSV
  • XLS
In the Preferences you can select which reports you want to generate after a test, or you can even disable them if you aren't interested in the results.
This animated GIF shows you how you can change this.


Port Groups were added to the ByteBlower GUI in version 2.8. They were added with DOCSIS 3.1 in mind, but you'll notice that they're also very useful with WiFi and other network configurations. This is a short article that won't cover every possible use case. The goal is to explain how you create a Port Group, and why you would want one. Next, we'll make a simple scenario and review its results.

The goal of a port group

When creating a test scenario for a high-bandwidth CMTS or access point, you'll often notice that you're repeating yourself. A single CPE/LAN interface isn't sufficient to fully load the modem or access point. An example is shown in the figure below. To solve this, one would add several LAN interfaces and create several flows to them. The end result is often much duplication and elaborate bookkeeping to find out how much bandwidth is transmitted. With Port Groups this becomes much easier: you can immediately create network traffic to a group of ByteBlower Ports.

How to use them

Creating a new Port Group starts in the Port View. You'll need to configure the individual ByteBlower Ports and dock each to the right interface. This hasn't changed. In the figure below we've created 4 LAN ports and docked them from trunk-1-20 up to trunk-1-23. A WAN port is docked to a 10 Gbit/s non-trunking interface. In our scenario, we'll send traffic from this WAN port to all LAN ports. To create the Port Group, we select the LAN ports and pick the Group action in the right-click menu.

The result is shown below. For ease of use we've renamed the group to LAN. If you expand the group, you'll find its members. The Group itself has no options, but you are still able to change the configuration of each member.

From here on, creating your test scenario is very familiar. The next step is creating a Frame and adding it to a FrameBlasting template. As shown in the first figure, we'll configure the speed of this template to 3 Gbit/s and use it for our Flow. As you'll see below, this Flow will transmit from the WAN port to the LAN group.
  • The WAN port will send out at a rate of 3 Gbit/s.
  • In total 3 Gbit/s is received by the LAN group. 
Behind the scenes, the transmitted data is evenly divided across all members of the group.
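With the numbers from this example: a 3 Gbit/s flow to a group of 4 LAN ports gives each member 750 Mbit/s.

```python
flow_rate = 3e9       # bits per second sent by the WAN port
members = 4           # LAN ports in the group

# Traffic is divided evenly across the group members
per_member = flow_rate / members  # 750 Mbit/s each
```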

This flow is used in the scenario of the attached report. A couple of points stand out.
  • In the IPv4 ByteBlower Ports section, you'll find the group and its members again. There's a first table with the sole ungrouped ByteBlower Port. Below that, you'll find the listing of the LAN port group.

  • All following sections show the combined results.
