umami.tests.integration package#

Submodules#

umami.tests.integration.test_examples module#

Integration tests for the scripts located in the examples directory.

umami.tests.integration.test_examples.test_example_plots(command, expected_result)#

Check the plotting of the example plots.

Parameters:
  • command (str) – File containing the test to be run.

  • expected_result (int) – Expected test result.
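
A minimal sketch of how such a parametrised test could be driven with pytest; the script path and expected exit code below are illustrative assumptions, not the actual test data:

    import subprocess

    import pytest


    @pytest.mark.parametrize(
        "command, expected_result",
        [
            # Hypothetical example script; the real test is parametrised
            # with the scripts in the examples directory.
            ("examples/plotting/plot_rocs.py", 0),
        ],
    )
    def test_example_plots(command, expected_result):
        """Run the example script and compare its exit code."""
        completed = subprocess.run(["python", command], check=False)
        assert completed.returncode == expected_result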

umami.tests.integration.test_input_vars_plot module#

Integration tests for variable plotting.

class umami.tests.integration.test_input_vars_plot.TestInputVarsPlotting(methodName='runTest')#

Bases: TestCase

Integration tests for variable plotting.

setUp()#

Download test files for input var plots.

test_plot_input_vars()#

Integration test of plot_input_vars.py script.
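
As a rough sketch of the unittest structure these integration test classes share, assuming only the standard library; the class name, directory prefix and assertion are illustrative stand-ins for the real download and plotting logic:

    import tempfile
    import unittest


    class InputVarsPlottingSketch(unittest.TestCase):
        """Illustrative stand-in for TestInputVarsPlotting."""

        def setUp(self):
            # The real setUp downloads the test files needed for the input
            # variable plots; here we only create a scratch directory.
            self.test_dir = tempfile.mkdtemp(prefix="umami_input_vars_")

        def test_plot_input_vars(self):
            # The real test runs the plot_input_vars.py script end to end
            # and asserts that every plotting step succeeded.
            self.assertTrue(self.test_dir)


    if __name__ == "__main__":
        unittest.main()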

umami.tests.integration.test_input_vars_plot.get_configuration() → object#

Load yaml file with settings for integration test of the input vars plotting.

Returns:

Loaded configuration file.

Return type:

object

Raises:

YAMLError – If a needed key is not in the file.
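
A plausible outline of such a loader, assuming the settings live in a YAML file; the file name and required keys below are assumptions for illustration:

    import yaml


    def get_configuration(config_file="fixtures/plot_input_vars.yaml"):
        """Load the YAML settings for the integration test."""
        with open(config_file, "r") as conf:
            conf_setup = yaml.safe_load(conf)

        # Raise a YAMLError if a key the test relies on is missing.
        for key in ("data_url", "test_file"):
            if key not in conf_setup:
                raise yaml.YAMLError(f"Missing key in yaml file: {key}")

        return conf_setup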

umami.tests.integration.test_input_vars_plot.run_plot_input_vars(config: str) → bool#

Call plot_input_vars.py.

Parameters:

config (str) – Path to config file.

Returns:

True if tests pass, False if tests fail.

Return type:

bool
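
A hedged usage sketch; the config path below is illustrative and would normally point at the configuration prepared in setUp():

    from umami.tests.integration.test_input_vars_plot import run_plot_input_vars

    # Illustrative path; the real test assembles it from the downloaded files.
    config_path = "plot_input_vars.yaml"

    # True means every plotting step of plot_input_vars.py succeeded.
    assert run_plot_input_vars(config=config_path)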

umami.tests.integration.test_plotting_umami module#

This script integration tests the plotting of the training results of the different models.

class umami.tests.integration.test_plotting_umami.TestPlottingUmami(methodName='runTest')#

Bases: TestCase

Integration test class for the plotting of the training results.

This class creates a test folder and downloads all important files.

setUp()#

Download test files for running the dips training.

test_plotting_umami_dips()#

Testing the plotting of the DIPS trainings.

test_plotting_umami_dl1r()#

Testing the plotting of the DL1r trainings.

test_plotting_umami_umami()#

Testing the plotting of the Umami trainings.

umami.tests.integration.test_plotting_umami.get_configuration()#

Load yaml file with settings for the integration test of the plotting.

Returns:

Loaded configuration file.

Return type:

object

Raises:

YAMLError – If a needed key is not in the file.

umami.tests.integration.test_plotting_umami.run_plotting(config, tagger)#

Call plotting_umami.py and try to plot the results of the previous tests. Returns True if the plotting succeeded, False if one step did not succeed.

Parameters:
  • config (dict) – Dict with the needed configurations for the plotting.

  • tagger (str) – Name of the tagger which is to be plotted.

Raises:

AssertionError – If the plotting step fails.

Returns:

isSuccess – Plotting succeeded or not.

Return type:

bool
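
A usage sketch assuming the configuration dict comes from get_configuration() above; the tagger names mirror the test methods of TestPlottingUmami:

    from umami.tests.integration.test_plotting_umami import (
        get_configuration,
        run_plotting,
    )

    config = get_configuration()

    # One call per trained model; "dips", "dl1r" and "umami" correspond to
    # the test methods documented above.
    for tagger in ("dips", "dl1r", "umami"):
        assert run_plotting(config=config, tagger=tagger)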

umami.tests.integration.test_preprocessing module#

This script integration tests the preprocessing methods.

class umami.tests.integration.test_preprocessing.TestPreprocessing(methodName='runTest')#

Bases: TestCase

Test class for the preprocessing.

This class sets up the needed configs for testing the preprocessing of Umami.

setUp()#

Download test files for running the preprocessing and prepare the preprocessing config file.

test_preprocessing_additional_jet_labels()#

Integration test of preprocessing.py script using GN1 variables and allowing for additional jet labels.

test_preprocessing_dips_count()#

Integration test of preprocessing.py script using DIPS variables.

test_preprocessing_dips_four_classes_pdf()#

Integration test of preprocessing.py script using DIPS variables and four classes.

test_preprocessing_dips_hits_count()#

Integration test of preprocessing.py script using DIPS with hits variables.

test_preprocessing_dips_pdf()#

Integration test of preprocessing.py script using DIPS variables.

test_preprocessing_dips_weighting()#

Integration test of preprocessing.py script using DIPS variables.

test_preprocessing_dl1r_count()#

Integration test of preprocessing.py script using DL1r variables.

test_preprocessing_dl1r_pdf()#

Integration test of preprocessing.py script using DL1r variables.

test_preprocessing_dl1r_weighting()#

Integration test of preprocessing.py script using DL1r variables.

test_preprocessing_umami_count()#

Integration test of preprocessing.py script using Umami variables.

test_preprocessing_umami_importance_no_replace()#

Integration test of preprocessing.py script using DL1r variables.

test_preprocessing_umami_pdf()#

Integration test of preprocessing.py script using Umami variables.

test_preprocessing_umami_weighting()#

Integration test of preprocessing.py script using Umami variables.

umami.tests.integration.test_preprocessing.get_configuration()#

Load yaml file with settings for integration test of preprocessing.

Returns:

Loaded configuration file.

Return type:

object

Raises:

YAMLError – If a needed key is not in the file.

umami.tests.integration.test_preprocessing.run_preprocessing(config: dict, tagger: str, method: str, string_id: str, test_dir: str, flavours_to_process: list | None = None, sample_type_list: list | None = None, sample_usecase_list: list | None = None) → bool#

Call all steps of the preprocessing for a certain configuration and variable dict input. Returns True if all steps succeeded.

Parameters:
  • config (dict) – Dict with the needed configurations for the preprocessing.

  • tagger (str) – Name of the tagger for which the preprocessing should be done. The difference between taggers is whether tracks are saved or not.

  • method (str) – Define which sampling method is used.

  • string_id (str) – Unique identifier to further specify which preprocessing was done.

  • test_dir (str) – Path to the directory where all the test files are downloaded.

  • flavours_to_process (list, optional) – List with the flavours that are to be processed. By default None.

  • sample_type_list (list, optional) – List with the sample types to prepare. If None, this defaults to ['ttbar', 'zpext']. By default None.

  • sample_usecase_list (list, optional) – List with the sample use cases to prepare. If None, this defaults to ['training', 'validation'] for DL1r and ['training'] for all other taggers. By default None.

Raises:
  • AssertionError – If the prepare step fails.

  • AssertionError – If the resampling step fails.

  • AssertionError – If the scaling step fails.

  • AssertionError – If the apply scaling step fails.

  • AssertionError – If the write step fails.

  • AssertionError – If the to-records step fails.

  • KeyError – If the resampling method is not supported by the test.

Returns:

isSuccess – Preprocessing succeeded or not.

Return type:

bool
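
A usage sketch assuming the settings come from get_configuration() above; the sampling method mirrors the test method names (count, pdf, weighting), while string_id and test_dir are illustrative values:

    from umami.tests.integration.test_preprocessing import (
        get_configuration,
        run_preprocessing,
    )

    config = get_configuration()

    # Run the full preprocessing chain for DIPS with the "count" sampling
    # method; string_id and test_dir are illustrative, and the sample lists
    # mirror the documented defaults.
    assert run_preprocessing(
        config=config,
        tagger="dips",
        method="count",
        string_id="base",
        test_dir="/tmp/umami_preprocessing_test/",
        flavours_to_process=None,
        sample_type_list=["ttbar", "zpext"],
        sample_usecase_list=["training"],
    )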

umami.tests.integration.test_preprocessing_upp module#

This script integration tests the preprocessing methods.

class umami.tests.integration.test_preprocessing_upp.TestPreprocessingUPP(methodName='runTest')#

Bases: TestCase

Test class for the preprocessing with UPP.

This class sets up the needed configs for testing the preprocessing of UPP.

setUp()#

Download test files for running the preprocessing and prepare the preprocessing config file.

test_preprocessing_dips_upp_countup()#

Integration test of preprocessing.py with UPP preprocessing.

test_preprocessing_dips_upp_pdf()#

Integration test of preprocessing.py with UPP preprocessing.

test_preprocessing_upp_flags()#

Integration test of preprocessing.py with UPP flags.

umami.tests.integration.test_preprocessing_upp.get_configuration()#

Load yaml file with settings for integration test of preprocessing.

Returns:

Loaded configuration file.

Return type:

object

umami.tests.integration.test_preprocessing_upp.run_preprocessing(config: dict, tagger: str, method: str, string_id: str, test_dir: str) → bool#

Call all steps of the preprocessing for a certain configuration and variable dict input. Returns True if all steps succeeded.

Parameters:
  • config (dict) – Dict with the needed configurations for the preprocessing.

  • tagger (str) – Name of the tagger for which the preprocessing should be done. The difference between taggers is whether tracks are saved or not.

  • method (str) – Define which sampling method is used.

  • string_id (str) – Unique identifier to further specify which preprocessing was done.

  • test_dir (str) – Path to the directory where all the test files are downloaded.

Returns:

isSuccess – Preprocessing succeeded or not.

Return type:

bool

umami.tests.integration.test_train module#

This script integration tests the training of the different models.

class umami.tests.integration.test_train.TestTraining(methodName='runTest')#

Bases: TestCase

Integration test class for the training.

This class creates a test folder and downloads all important files.

setUp()#

Download test files for running the dips training.

test_evaluate_tagger_in_files()#

Integration test of the evaluation using only the taggers available in the files.

test_train_cads()#

Integration test of train.py for CADS script.

test_train_cond_att_umami()#

Integration test of train.py for UMAMI Cond Att script.

test_train_dips_four_classes()#

Integration test of train.py for DIPS script with four classes.

test_train_dips_no_attention()#

Integration test of train.py for DIPS script.

test_train_dl1r()#

Integration test of train.py for DL1r script.

test_train_tfrecords_cads()#

Integration test of train.py for CADS script with TFRecords.

test_train_tfrecords_cond_att_umami()#

Integration test of train.py for UMAMI Cond Att script with TFRecords.

test_train_tfrecords_dips()#

Integration test of train.py for DIPS script with TFRecords.

test_train_tfrecords_dl1r()#

Integration test of train.py for DL1r script with TFRecords.

test_train_tfrecords_umami()#

Integration test of train.py for UMAMI script with TFRecords.

test_train_umami()#

Integration test of train.py for UMAMI script.

umami.tests.integration.test_train.get_configuration()#

Load yaml file with settings for the integration tests of the training.

Returns:

Loaded configuration file.

Return type:

object

Raises:

YAMLError – If a needed key is not in the file.

umami.tests.integration.test_train.prepare_config(tagger: str, test_dir: str, var_file_from: str | None = None, preprocess_files_from: str | None = None, four_classes_case: bool = False, use_tf_records: bool = False) → dict#

Prepare the train config for the given tagger and save it.

Parameters:
  • tagger (str) – Name of the tagger for which the config is to be prepared.

  • test_dir (str) – Path to the test directory where the config file is to be saved.

  • var_file_from (str, optional) – Name of the tagger from which the variable file is used. Possible options are dips, umami, dl1r. If None is given, the value of tagger is used. By default None.

  • preprocess_files_from (str) – Name of the tagger whose preprocessing files should be used. If not given, the preprocessing files of the given tagger are used.

  • four_classes_case (bool) – Decide if the test should be run with four classes (light, c-, b- and tau).

  • use_tf_records (bool) – Decide if the TFRecords files are used for training or not.

Returns:

config – Path to the created config that is to be used for the test.

Return type:

str
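
A usage sketch of preparing a training config; test_dir is an illustrative path and the keyword values follow the options documented above:

    from umami.tests.integration.test_train import prepare_config

    # Prepare a DIPS training config that reuses the DIPS preprocessing
    # output and trains from TFRecords; test_dir is an illustrative path.
    config_path = prepare_config(
        tagger="dips",
        test_dir="/tmp/umami_train_test/",
        var_file_from="dips",
        preprocess_files_from="dips",
        four_classes_case=False,
        use_tf_records=True,
    )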

umami.tests.integration.test_train.run_training(config: dict, tagger: str) → bool#

Call train.py for the given tagger. Returns True if training succeeded, False if one step did not succeed.

Parameters:
  • config (dict) – Dict with the needed configurations for training.

  • tagger (str) – Name of the tagger that is to be trained.

Raises:
  • AssertionError – If train.py fails for the given tagger.

  • AssertionError – If plotting_epoch_performance.py fails for the given tagger.

  • AssertionError – If evaluate_model.py fails for the given tagger.

Returns:

isSuccess – Training succeeded or not.

Return type:

bool
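
A usage sketch assuming the settings dict comes from get_configuration() above; the tagger name is one of those covered by the test methods:

    from umami.tests.integration.test_train import get_configuration, run_training

    config = get_configuration()

    # Runs train.py, plotting_epoch_performance.py and evaluate_model.py
    # for the chosen tagger and returns True if all steps succeeded.
    assert run_training(config=config, tagger="dips")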

Module contents#