The ByteBlower CLT

Introduction

This article is a brief introduction to the ByteBlower CLT, a tool strongly tied to the ByteBlower GUI. With the Command-Line Tool (CLT) you can run your *.bbp project files from the Windows command prompt (CMD) or a terminal (MacOS and Linux). This is a fast way to start using your ByteBlower for automated testing.

The installer for the ByteBlower CLT is found in the setup pages. The CLT is a stand-alone application, installed next to the ByteBlower GUI. The installer adds the CLT to the local path, so it can be started immediately from the command line. To run the ByteBlower CLT, the GUI doesn't need to be installed.
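Once installed, you can check that the CLT is indeed on the path by printing its help text (the -h argument is detailed further down):

[Linux/MacOS]
$ ByteBlower-CLT -h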

We start this text with an example of the ByteBlower CLT: a test run that stores its reports in a specific folder. Next, the article briefly lists the command-line arguments of the CLT. This article comes with several attachments; these are found at the bottom of the text.

Some familiarity with the ByteBlower GUI is assumed.

Example

The project file was created earlier in the ByteBlower GUI. These project files are stand-alone; they can be created anywhere. The file used here is also attached to this article: it can be found at the bottom of the text.

In brief, the test project contains a single scenario called 'latency_under_load'. For this example, one might imagine running this scenario (and others!) as part of a standard modem test.
The scenario has several actions, but our main interest is running the scenario from a script. This scenario is ready to run: all ByteBlower ports are docked at the correct location. If you just downloaded the example, you still need to perform this docking step (open the project in the GUI, dock the ports, and don't forget to save the project).

By default, the CLT stores the results of a test run together with those of the ByteBlower GUI. As an added advantage, this allows you to access your CLT test runs from the GUI. For the purpose of our example, we also wish to store all test reports immediately at a convenient location. To this end, we've created the folder latency_reports/ locally.
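Creating that folder from the terminal, for example:

[Linux/MacOS]
$ mkdir latency_reports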

We're ready to start our test run. The ByteBlower GUI is closed (remember, it shares the same database file) and we use the command below. The arguments are explained further down, in the section 'Command line arguments'. While the test run is ongoing, the CLT will continuously output text. As requested above, after the test run we will find the generated reports in the folder latency_reports/.

[Linux/MacOS]
$ ByteBlower-CLT -project <path_to>/clt_demo.bbp \
    -scenario 'latency_under_load' -output latency_reports/

[Windows cmd]
C:\Users\wouter.d>ByteBlower-CLT ^
-project <path_to>\clt_demo.bbp -scenario "latency_under_load" -output latency_reports\

[Python 3.6 (and higher)]

import os
import tempfile

# Store the raw test data in a throw-away directory (-store) and keep
# only the generated report in the local 'report' folder (-output).
with tempfile.TemporaryDirectory() as tmpdirname:
    os.system(f"ByteBlower-CLT -project clt_demo.bbp "
              f"-scenario latency_under_load "
              f"-output report -store {tmpdirname}")


NOTE:
Projects created with the GUI are saved by default at the following location:

  • WINDOWS : C:\Users\<username>\byteblower\workspace_v2\Projects\
  • MacOS : /Users/<username>/byteblower/workspace_v2/Projects/
  • Linux : /home/<username>/byteblower/workspace_v2/Projects/
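As an illustration, a project saved in that default workspace can be run by passing its full path (Linux shown here; <username> is a placeholder):

[Linux/MacOS]
$ ByteBlower-CLT -project /home/<username>/byteblower/workspace_v2/Projects/clt_demo.bbp \
    -scenario 'latency_under_load' -output latency_reports/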

This concludes our example. As mentioned above, the example project is attached to this article, together with a zip-file of the generated reports. The next step from here is to include this scenario in a larger test run: different scenarios in several project files can be started one after the other, as sketched below.
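A minimal sketch of such a sequential run, in the style of the Python example above. Only clt_demo.bbp and latency_under_load come from this article; the second project file and scenario name are hypothetical.

[Python 3.6 (and higher)]

import subprocess
import sys

# Run several scenarios one after the other; stop at the first failure.
runs = [
    ("clt_demo.bbp", "latency_under_load"),
    ("other_project.bbp", "throughput"),  # hypothetical names
]

for project, scenario in runs:
    result = subprocess.run(["ByteBlower-CLT",
                             "-project", project,
                             "-scenario", scenario,
                             "-output", "latency_reports/"])
    if result.returncode != 0:
        sys.exit(result.returncode)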

Further steps include processing the generated reports in JSON or CSV format (see 📄 Post-processing of results). An example of such a report is included in the zip at the bottom of this article.
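As a starting point, the sketch below merely locates the JSON reports (assuming they end up directly in latency_reports/) and prints their top-level structure; the actual report layout is covered in the post-processing article mentioned above.

[Python 3.6 (and higher)]

import glob
import json

# Load every JSON report found in the output folder. The internal
# structure of a report is not documented here, so we only print the
# file name and the top-level keys.
for path in sorted(glob.glob("latency_reports/*.json")):
    with open(path) as report_file:
        report = json.load(report_file)
    if isinstance(report, dict):
        print(path, "->", sorted(report.keys()))
    else:
        print(path, "->", type(report).__name__)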

Of course, the ByteBlower CLT is limited to the capabilities of the GUI. For functionality beyond that, scripting is possible with the ByteBlower API.

Command line arguments

To conclude, a bit more detail on the ByteBlower CLT itself. The output below lists the available arguments; the same text is printed to the console (on systems with a native shell) when the CLT is started with -h.

$ ByteBlower-CLT -h
> usage: ByteBlower-CLT [-project] <project-file> (-scenario <scenario>|-batch <batch>)
> Runs the specified scenario or batch of the specified project and generates a report
>   -batch <batch-name> name of the batch to execute
>   -h,--help show this help
>   -help show this help
>   -output <output-dir-path> path to the output directory; defaults to archive dir
>   -project <project-file-path> path to project file to open
>   -regenerate <report-formats> Generates the report of the last testrun.
>           This argument is very useful for tests stopped by CTRL-C.
>           By default, all report formats are generated (html pdf csv xls xlsx json docx).
>           You can also supply a selection of these formats as a list (e.g. 'html pdf csv').
>           This argument makes the CLT ignore arguments -project, -scenario, -batch, -title
>   -scenario <scenario-name> name of the scenario to execute
>   -store <test-dir-path> path to the directory where to store the raw test data; defaults to test dir
>   -title <run-title> run title
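For instance, regenerating only the HTML and PDF reports of the last test run (the format list is taken straight from the help text above):

[Linux/MacOS]
$ ByteBlower-CLT -regenerate 'html pdf'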

Exit codes

After the CLT finishes, it returns a numeric value. This value can be used to verify that the test scenario finished successfully.
In Windows PowerShell, the return value of the last command is available in the $LastExitCode variable; in a Linux or macOS shell, it is found in $?.
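For example, an illustrative PowerShell check, reusing the project and scenario from the example above; a successful run is expected to print 0:

[Windows PowerShell]
PS C:\> ByteBlower-CLT -project clt_demo.bbp -scenario latency_under_load
PS C:\> echo $LastExitCode
0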


Here is a list of possible return codes:

  • 0 (EX_OK)
    • Normal program exit.
  • 64 (EX_USAGE)
    • The command was used incorrectly. Most likely bad user arguments.
      The user will be provided with a hint for correction.
  • 65 (EX_DATAERR)
    • The supplied project file is not readable.
      It might be corrupted, but most likely the project was created by a newer GUI version.
      Updating the CLT fixes this last issue.
  • 66 (EX_NOINPUT)
    • The requested input did not exist.
      This code will be used for the following situations:
      the project file itself does not exist, or the scenario or batch does not exist in the supplied project file.
  • 70 (EX_SOFTWARE)
    • The CLT encountered an internal problem.
      This should not happen; a bug report should be filed.
  • 75 (EX_TEMPFAIL)
    • Temporary failure, indicating something that is not really an error.
      For example, ARP failed.
      The test setup should be verified, and the scenario run should be reattempted.
Attachments
  • latency_reports.zip
  • clt_demo.bbp