Introduction to the ByteBlower CLT
Posted by Pieter Vandercammen, Last modified by Wouter Debie on 01 August 2018 04:33 PM

This article is a brief introduction to the ByteBlower Command-Line Tool (CLT). The CLT is closely tied to the ByteBlower GUI: it lets you run your *.bbp project files from the Windows Command Prompt or from a terminal on macOS and Linux. This is a fast way to start using your ByteBlower for automated testing.

The installer for the ByteBlower CLT is found together with the GUI installers, linked from the download area. This installer will add the CLT to the local path.
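Since the installer adds the CLT to the local path, a quick sanity check from a terminal confirms the installation. This is a sketch using the standard POSIX `command -v` lookup; the exact messages are our own:

```shell
#!/bin/sh
# Check that the installer added ByteBlower-CLT to the PATH.
if command -v ByteBlower-CLT >/dev/null 2>&1; then
    echo "ByteBlower-CLT found on PATH"
else
    echo "ByteBlower-CLT not found; re-run the installer"
fi
```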

We start with an example of using the ByteBlower CLT: we do a test run and store our reports in a specific folder. Next, the article briefly lists the command-line arguments of the CLT. A number of attachments accompany this article; they are found at the bottom of the text.

Some familiarity with the ByteBlower GUI is assumed.

Example

We'll use a project file created in the ByteBlower GUI. The file is attached to this article and is found at the bottom of the text.

In summary, the project contains a single scenario called 'latency_under_load'. For this example, one might imagine running this scenario (and others!) as part of a standard modem test. The scenario has several actions, but our main interest is running it from a script. The scenario is ready to run: all ByteBlower ports are docked at the correct location. If you just downloaded the example, you will still need to perform this docking step yourself (open the project in the GUI, dock the ports, and don't forget to save the project).

By default the CLT stores the results of a test run together with those of the ByteBlower GUI. As an added advantage, this allows you to access your CLT test runs from the GUI. For the purpose of our example, we also wish to store all test reports immediately at a convenient location. To this end we've created the folder latency_reports/ locally.

We're ready to start our test run. The ByteBlower GUI is closed (remember, the CLT and the GUI share the same database) and we use the command below. The arguments are explained further down. While the test run is ongoing, the CLT continuously prints progress output. As configured above, after the test run we will find the generated reports in the requested folder.

[Linux/MacOS]
$ ByteBlower-CLT -project <path_to>/clt_demo.bbp -scenario 'latency_under_load' -output latency_reports/

[Windows cmd]
C:\User\wouter.d\>ByteBlower-CLT -project <path_to>\clt_demo.bbp -scenario "latency_under_load" -output latency_reports\



NOTE:
Projects created with the GUI are saved by default at the following locations:
WINDOWS : C:\Users\<username>\byteblower\workspace_v2\Projects\
MacOS : /Users/<username>/byteblower/workspace_v2/Projects/
Linux : /home/<username>/byteblower/workspace_v2/Projects/


This concludes our example. As mentioned above, the example project is attached to this article. In addition, you'll also find a zip file with the generated reports. A next step from here is to include this scenario in a larger test run: different scenarios in several project files can be started one after the other.
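Starting several scenarios one after the other can be sketched as a simple shell loop. Only 'latency_under_load' exists in the attached project; the other scenario names below are hypothetical placeholders, and the `echo` makes this a dry run (remove it to actually start the tests):

```shell
#!/bin/sh
# Dry-run sketch: queue several scenario runs, one after the other.
# Each scenario gets its own report folder.
PROJECT="clt_demo.bbp"
for SCENARIO in latency_under_load scenario_b scenario_c; do
    OUTDIR="reports_${SCENARIO}"
    mkdir -p "$OUTDIR"
    # Remove the 'echo' to actually run the CLT.
    echo ByteBlower-CLT -project "$PROJECT" -scenario "$SCENARIO" -output "$OUTDIR/"
done
```

Because the CLT exits after each run, the loop naturally serializes the test runs; the same pattern works across multiple project files by also varying `PROJECT`.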

Further steps include processing the generated reports in CSV format. An example of such a file is found in the zip at the bottom of this article.
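Processing a CSV report can be done with standard tools such as awk. The column name 'Latency' and the sample data below are made up for illustration; check the header row of your own CSV report for the actual column names:

```shell
#!/bin/sh
# Sketch: average a numeric column of a CSV report.
# The file contents below are a hypothetical stand-in for a real report.
cat > sample_report.csv <<'EOF'
Frame,Latency
1,1.2
2,1.6
3,1.4
EOF

# Skip the header row (NR > 1), sum column 2, and print the average.
awk -F, 'NR > 1 { sum += $2; n++ } END { printf "average latency: %.2f\n", sum / n }' sample_report.csv
```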

 

Of course, the ByteBlower CLT is limited to the capabilities of the GUI. Even more scriptability is possible with the ByteBlower API.

Command line arguments

To conclude, we look at the ByteBlower CLT in some more detail. The output below shows the available arguments. This list is also printed to the console on systems with a native shell.

$ ByteBlower-CLT -h
> usage: ByteBlower-CLT [-project] <project-file> (-scenario <scenario>|-batch <batch>)
> Runs the specified scenario or batch of the specified project and generates a report
>   -batch <batch-name> name of the batch to execute
>   -h,--help show this help
>   -help show this help
>   -output <output-dir-path> path to the output directory; defaults to archive dir
>   -project <project-file-path> path to project file to open
>   -scenario <scenario-name> name of the scenario to execute
>   -store <test-dir-path> path to the directory where to store the raw test data; defaults to test dir
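As a sketch of the remaining arguments, the command below runs a whole batch instead of a single scenario and keeps the raw test data in a separate folder. The batch name 'nightly_modem_tests' is hypothetical; the `echo` makes this a dry run (remove it to actually start the batch):

```shell
#!/bin/sh
# Dry-run sketch: run a batch and split reports from raw test data.
# 'nightly_modem_tests' is a placeholder batch name.
echo ByteBlower-CLT -project clt_demo.bbp -batch nightly_modem_tests \
    -output nightly_reports/ -store nightly_raw/
```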

Attachments 
 
 clt_demo.bbp (10.07 KB)
 latency_reports.zip (198.88 KB)