Introduction to the ByteBlower CLT
Posted by Pieter Vandercammen, Last modified by Mathieu Strubbe on 15 September 2022 04:47 PM

This article is a brief introduction to the ByteBlower CLT, a tool strongly tied to the ByteBlower GUI. With the Command-Line Tool (CLT) you can run your *.bbp project files from the Windows Command Prompt or from a terminal (macOS and Linux). This is a fast way to start using your ByteBlower for automated testing.

The installer for the ByteBlower CLT is found on the setup pages. The CLT is a stand-alone application, installed next to the ByteBlower GUI. The installer adds the CLT to the local path, so it can be started immediately from the command line. The GUI does not need to be installed to run the ByteBlower CLT.

We start this text with an example of the ByteBlower CLT: we will do a test run and store our reports in a specific folder. Next, the article briefly lists the command-line arguments of the CLT. This article comes with a number of attachments; these are found at the bottom of the text.

Some familiarity with the ByteBlower GUI is assumed.

Example

The project file was created earlier in the ByteBlower GUI. These project files are stand-alone; they can be created anywhere. The file used here is also attached to this article: it can be found at the bottom of the text.

In brief, the test project contains a single scenario called 'latency_under_load'. For this example, one might imagine running this scenario (and others!) as part of a standard modem test.
The scenario has several actions, but our main interest is running it from a script. This scenario is ready to run: all ByteBlower ports are docked at the correct location. If you just downloaded the example, you will still need to perform this docking step (open the project in the GUI, dock the ports, and don't forget to save the project).

By default the CLT stores the results of a test run together with those of the ByteBlower GUI. As an added advantage, this allows you to access your CLT test runs from the GUI. For the purpose of our example, we also wish to store all test reports immediately at a convenient location. To this end we've created the folder latency_reports/ locally.

We're ready to start our test run. The ByteBlower GUI is closed (remember, the GUI and the CLT share the same results database) and we use the command below; the arguments are explained further down. While the test run is ongoing, the CLT continuously prints progress text. Thanks to the -output argument, after the test run we will find the generated reports in the requested folder.

[Linux/MacOS]
$ ByteBlower-CLT -project <path_to>/clt_demo.bbp -scenario 'latency_under_load' -output latency_reports/

[Windows cmd]
C:\Users\wouter.d> ByteBlower-CLT -project <path_to>\clt_demo.bbp -scenario "latency_under_load" -output latency_reports\

[Python 3.6 (and higher)]

import subprocess
import tempfile

# Store the raw test data in a temporary directory; the reports end up in report/.
with tempfile.TemporaryDirectory() as tmpdirname:
    subprocess.run(["ByteBlower-CLT", "-project", "clt_demo.bbp",
                    "-scenario", "latency_under_load",
                    "-output", "report", "-store", tmpdirname],
                   check=True)  # raises if the CLT exits with a non-zero code


NOTE:
Projects created with the GUI are saved by default at the following locations:

  • WINDOWS : C:\Users\<username>\byteblower\workspace_v2\Projects\
  • MacOS : /Users/<username>/byteblower/workspace_v2/Projects/
  • Linux : /home/<username>/byteblower/workspace_v2/Projects/
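If you script against this default workspace, the path can be built the same way on every platform. A minimal Python sketch, assuming the standard workspace location and the demo project name from this article:

from pathlib import Path

# Default GUI project folder; resolves correctly on Windows, macOS and Linux.
projects = Path.home() / "byteblower" / "workspace_v2" / "Projects"
print(projects / "clt_demo.bbp")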

This concludes our example. As mentioned above, the example project is attached to this article. In addition, you'll also find a zip file with the generated reports. A next step from here is to include this scenario in a larger test run: different scenarios in several project files can be started one after the other, as sketched below.
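A minimal Python sketch of such a chained run; the second project file and its scenario name are hypothetical:

import subprocess

# Run several (project, scenario) pairs back to back; each run produces its own report.
runs = [
    ("clt_demo.bbp", "latency_under_load"),
    ("modem_tests.bbp", "throughput_downstream"),  # hypothetical second project
]
for project, scenario in runs:
    subprocess.run(["ByteBlower-CLT", "-project", project,
                    "-scenario", scenario,
                    "-output", "latency_reports/"], check=True)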

Further steps include processing the generated reports in JSON or CSV format; an example of such a report is included in the zip at the bottom of this article.
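As a starting point for such post-processing, the sketch below loads the newest JSON report from our output folder and lists its top-level sections (the folder name matches our example; the report layout itself is not described here):

import json
from pathlib import Path

# Pick the most recently generated JSON report from the output folder.
latest = max(Path("latency_reports").glob("*.json"), key=lambda p: p.stat().st_mtime)
with latest.open() as f:
    report = json.load(f)
print("Loaded", latest.name)
print(list(report))  # top-level sections of the report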

 

Of course, the ByteBlower CLT is limited to the capabilities of the GUI. Even more scripting is possible with the ByteBlower API.

Command line arguments

To conclude, a bit more detail on the ByteBlower CLT. The output below shows the available arguments. This list is also printed to the console when the CLT is started with the -h argument.

$ ByteBlower-CLT -h
> usage: ByteBlower-CLT [-project] <project-file> (-scenario <scenario>|-batch <batch>)
> Runs the specified scenario or batch of the specified project and generates a report
>   -batch <batch-name> name of the batch to execute
>   -h,--help show this help
>   -help show this help
>   -output <output-dir-path> path to the output directory; defaults to archive dir
>   -project <project-file-path> path to project file to open
>   -regenerate <report-formats> Generates the report of the last testrun.
>           This argument is very useful for tests stopped by CTRL-C.
>           By default all report formats are generated (html pdf csv xls xlsx json docx).
>           You can also supply a selection of these formats as a list (e.g. 'html pdf csv').
>           This argument makes the CLT ignore arguments -project, -scenario, -batch, -title
>   -scenario <scenario-name> name of the scenario to execute
>   -store <test-dir-path> path to the directory where to store the raw test data; defaults to test dir
>   -title <run-title> run title
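For example, after a run that was interrupted with CTRL-C, the reports can still be produced from the stored raw data. A minimal sketch using the -regenerate argument, following the list syntax shown in the help text above and assuming only HTML and JSON output is wanted:

import subprocess

# Regenerate only the HTML and JSON reports of the last test run.
subprocess.run(["ByteBlower-CLT", "-regenerate", "html json"], check=True)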

Exit Codes

After the CLT finishes, it returns a numeric exit code. This value can be used to verify that the test scenario finished successfully.
In Windows PowerShell, the return value is stored in the $LastExitCode variable (cmd.exe exposes it as %ERRORLEVEL%; Linux and macOS shells use $?).
Example (a PowerShell sketch, reusing the scenario from this article and assuming the run succeeds):
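[Windows PowerShell]
PS C:\> ByteBlower-CLT -project clt_demo.bbp -scenario latency_under_load
PS C:\> echo $LastExitCode
0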
Here is a list of possible return codes:

  • 0 (EX_OK)
    • Normal program exit.
  • 64 (EX_USAGE)
    • The command was used incorrectly. Most likely bad user arguments.
      The user will be provided with a hint for correction.
  • 65 (EX_DATAERR)
    • The supplied project file is not readable.
      It might be corrupted, but most likely the project was created by a newer GUI version.
      Updating the CLT will fix the latter issue.
  • 66 (EX_NOINPUT)
    • The requested input does not exist.
      This code is used in the following situations:
      the project file itself does not exist, or the scenario or batch does not exist in the supplied project file.
  • 70 (EX_SOFTWARE)
    • The CLT encountered an internal problem.
      This should not happen; please file a bug report.
  • 75 (EX_TEMPFAIL)
    • Temporary failure, indicating something that is not really an error.
      For example, ARP failed.
      Verify the test setup and reattempt the scenario run (see the retry sketch below this list).

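When the CLT is called from a script, these exit codes can drive control flow. A minimal Python sketch that retries our example scenario a few times on EX_TEMPFAIL (the retry count is arbitrary):

import subprocess

EX_OK = 0
EX_TEMPFAIL = 75

# Retry the scenario on temporary failures such as a failed ARP.
for attempt in range(3):
    code = subprocess.run(["ByteBlower-CLT", "-project", "clt_demo.bbp",
                           "-scenario", "latency_under_load"]).returncode
    if code != EX_TEMPFAIL:
        break
print("Success" if code == EX_OK else f"CLT exited with code {code}")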


Attachments 
 
 clt_demo.bbp (10.07 KB)
 latency_reports.zip (198.88 KB)