umami package#
Subpackages#
- umami.configuration package
- umami.data_tools package
- umami.evaluation_tools package
- umami.helper_tools package
- umami.input_vars_tools package
- umami.metrics package
- umami.models package
- umami.plotting_tools package
- umami.preprocessing_tools package
- Subpackages
- umami.preprocessing_tools.resampling package
- Submodules
- umami.preprocessing_tools.resampling.count_sampling module
- umami.preprocessing_tools.resampling.importance_sampling_no_replace module
- umami.preprocessing_tools.resampling.pdf_sampling module
- umami.preprocessing_tools.resampling.resampling_base module
- umami.preprocessing_tools.resampling.weighting module
- Module contents
- Submodules
- umami.preprocessing_tools.configuration module
GeneralSettings
GeneralSettings.apply_atlas_style
GeneralSettings.as_dict()
GeneralSettings.atlas_first_tag
GeneralSettings.atlas_second_tag
GeneralSettings.compression
GeneralSettings.concat_jet_tracks
GeneralSettings.convert_to_tfrecord
GeneralSettings.dict_file
GeneralSettings.legend_sample_category
GeneralSettings.outfile_name
GeneralSettings.outfile_name_validation
GeneralSettings.plot_name
GeneralSettings.plot_options_as_dict()
GeneralSettings.plot_type
GeneralSettings.precision
GeneralSettings.use_atlas_tag
GeneralSettings.var_file
Preparation
PreprocessConfiguration
Sample
Sampling
SamplingOptions
SamplingOptions.as_dict()
SamplingOptions.bool_attach_sample_weights
SamplingOptions.custom_n_jets_initial
SamplingOptions.fractions
SamplingOptions.intermediate_index_file
SamplingOptions.intermediate_index_file_validation
SamplingOptions.max_upsampling_ratio
SamplingOptions.n_jets
SamplingOptions.n_jets_scaling
SamplingOptions.n_jets_to_plot
SamplingOptions.n_jets_validation
SamplingOptions.samples_training
SamplingOptions.samples_validation
SamplingOptions.sampling_fraction
SamplingOptions.sampling_variables
SamplingOptions.save_track_labels
SamplingOptions.save_tracks
SamplingOptions.target_distribution
SamplingOptions.tracks_names
SamplingOptions.weighting_target_flavour
check_key()
- umami.preprocessing_tools.merging module
- umami.preprocessing_tools.preparation module
- umami.preprocessing_tools.scaling module
- umami.preprocessing_tools.ttbar_merge module
- umami.preprocessing_tools.utils module
- umami.preprocessing_tools.writing_train_file module
- Module contents
- umami.tests package
- Subpackages
- umami.tests.integration package
- Submodules
- umami.tests.integration.test_examples module
- umami.tests.integration.test_input_vars_plot module
- umami.tests.integration.test_plotting_umami module
- umami.tests.integration.test_preprocessing module
- umami.tests.integration.test_preprocessing_upp module
- umami.tests.integration.test_train module
- Module contents
- umami.tests.unit package
- Module contents
- umami.tf_tools package
- Submodules
- umami.tf_tools.convert_to_record module
- umami.tf_tools.generators module
- umami.tf_tools.layers module
- umami.tf_tools.load_tfrecord module
- umami.tf_tools.models module
- umami.tf_tools.tddgenerators module
TDDCadsGenerator
TDDDipsGenerator
TDDDl1Generator
TDDGenerator
TDDGenerator.calculate_weights()
TDDGenerator.get_n_dim()
TDDGenerator.get_n_jet_features()
TDDGenerator.get_n_jets()
TDDGenerator.get_n_trk_features()
TDDGenerator.get_n_trks()
TDDGenerator.get_normalisation_arrays()
TDDGenerator.get_track_vars()
TDDGenerator.load_in_memory()
TDDGenerator.scale_input()
TDDGenerator.scale_tracks()
TDDUmamiConditionGenerator
TDDUmamiGenerator
filter_dictionary()
get_generator()
- umami.tf_tools.tools module
- Module contents
- umami.tools package
- umami.train_tools package
- Submodules
- umami.train_tools.configuration module
EvaluationSettingsConfig
EvaluationSettingsConfig.add_eval_variables
EvaluationSettingsConfig.calculate_saliency
EvaluationSettingsConfig.eff_max
EvaluationSettingsConfig.eff_min
EvaluationSettingsConfig.eval_batch_size
EvaluationSettingsConfig.evaluate_traind_model
EvaluationSettingsConfig.extra_classes_to_evaluate
EvaluationSettingsConfig.figsize
EvaluationSettingsConfig.frac_max
EvaluationSettingsConfig.frac_min
EvaluationSettingsConfig.frac_step
EvaluationSettingsConfig.frac_values
EvaluationSettingsConfig.frac_values_comp
EvaluationSettingsConfig.n_jets
EvaluationSettingsConfig.results_filename_extension
EvaluationSettingsConfig.saliency_effs
EvaluationSettingsConfig.saliency_ntrks
EvaluationSettingsConfig.shapley
EvaluationSettingsConfig.tagger
EvaluationSettingsConfig.working_point
EvaluationSettingsConfig.x_axis_granularity
NNStructureConfig
NNStructureConfig.activations
NNStructureConfig.attention_condition
NNStructureConfig.attention_sizes
NNStructureConfig.batch_normalisation
NNStructureConfig.batch_size
NNStructureConfig.check_class_labels()
NNStructureConfig.check_n_conditions()
NNStructureConfig.check_options()
NNStructureConfig.class_labels
NNStructureConfig.dense_condition
NNStructureConfig.dense_sizes
NNStructureConfig.dips_dense_units
NNStructureConfig.dips_loss_weight
NNStructureConfig.dips_ppm_condition
NNStructureConfig.dips_ppm_units
NNStructureConfig.dl1_units
NNStructureConfig.dropout_rate
NNStructureConfig.dropout_rate_f
NNStructureConfig.dropout_rate_phi
NNStructureConfig.epochs
NNStructureConfig.evaluate_trained_model
NNStructureConfig.intermediate_units
NNStructureConfig.learning_rate
NNStructureConfig.load_optimiser
NNStructureConfig.lrr
NNStructureConfig.lrr_cooldown
NNStructureConfig.lrr_factor
NNStructureConfig.lrr_min_learning_rate
NNStructureConfig.lrr_mode
NNStructureConfig.lrr_monitor
NNStructureConfig.lrr_patience
NNStructureConfig.lrr_verbose
NNStructureConfig.main_class
NNStructureConfig.n_conditions
NNStructureConfig.n_jets_train
NNStructureConfig.nfiles_tfrecord
NNStructureConfig.pooling
NNStructureConfig.ppm_condition
NNStructureConfig.ppm_sizes
NNStructureConfig.repeat_end
NNStructureConfig.tagger
NNStructureConfig.use_sample_weights
TrainConfiguration
TrainConfigurationObject
TrainConfigurationObject.continue_training
TrainConfigurationObject.evaluate_trained_model
TrainConfigurationObject.exclude
TrainConfigurationObject.model_file
TrainConfigurationObject.model_name
TrainConfigurationObject.preprocess_config
TrainConfigurationObject.test_files
TrainConfigurationObject.tracks_name
TrainConfigurationObject.train_data_structure
TrainConfigurationObject.train_file
TrainConfigurationObject.validation_files
ValidationSettingsConfig
ValidationSettingsConfig.atlas_first_tag
ValidationSettingsConfig.atlas_second_tag
ValidationSettingsConfig.figsize
ValidationSettingsConfig.n_jets
ValidationSettingsConfig.plot_args
ValidationSettingsConfig.plot_datatype
ValidationSettingsConfig.tagger_label
ValidationSettingsConfig.taggers_from_file
ValidationSettingsConfig.trained_taggers
ValidationSettingsConfig.use_atlas_tag
ValidationSettingsConfig.val_batch_size
ValidationSettingsConfig.working_point
- umami.train_tools.nn_tools module
CallbackBase
MyCallback
MyCallbackUmami
calc_validation_metrics()
create_metadata_folder()
evaluate_model()
evaluate_model_umami()
get_dropout_rates()
get_epoch_from_string()
get_jet_feature_indices()
get_jet_feature_position()
get_metrics_file_name()
get_model_path()
get_parameters_from_validation_dict_name()
get_test_file()
get_test_sample()
get_test_sample_trks()
get_unique_identifiers()
load_validation_data()
setup_output_directory()
- Module contents
Submodules#
umami.evaluate_model module#
Execution script for the evaluation of trained models.
- umami.evaluate_model.evaluate_model(args: object, train_config: object, test_file: str, data_set_name: str, tagger: str)#
Evaluate either only the taggers already present in the files, or additionally the UMAMI tagger.
- Parameters:
args (object) – Loaded argparser.
train_config (object) – Loaded train config.
test_file (str) – Path to the files which are to be tested. Wildcards are supported.
data_set_name (str) – Dataset name for the results files. The results will be saved in dicts. The key will be this dataset name.
tagger (str) – Name of the tagger that is to be evaluated. Can be either umami or umami_cond_att, depending on which architecture is used.
- Raises:
ValueError – If no epoch is given when evaluating UMAMI.
ValueError – If the given tagger argument in train config is not a list.
ValueError – If Shapley is called but the tagger is not DL1.
- umami.evaluate_model.get_parser()#
Argument parser for the evaluation script.
- Returns:
args
- Return type:
parse_args
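The `get_parser()` helpers in these execution scripts follow the standard argparse pattern of building a parser and returning the parsed arguments. A minimal sketch of that pattern is shown below; the flag names (`--config_file`, `--epoch`) are illustrative assumptions, not umami's actual command-line options:

```python
import argparse


def get_parser(argv=None):
    """Hypothetical sketch of an evaluation-script argument parser.

    The flag names below are illustrative assumptions, not umami's
    actual command-line options.
    """
    parser = argparse.ArgumentParser(description="Evaluate a trained tagger.")
    parser.add_argument(
        "-c", "--config_file", required=True,
        help="Path to the train config yaml file.",
    )
    parser.add_argument(
        "-e", "--epoch", type=int, default=None,
        help="Epoch of the trained model to evaluate.",
    )
    # Passing argv explicitly makes the helper testable; with argv=None,
    # argparse falls back to sys.argv as usual.
    return parser.parse_args(argv)
```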
umami.plot_input_variables module#
This script plots the given input variables of the given files, as well as a comparison between them.
- umami.plot_input_variables.get_parser()#
Argument parser for the input variable plotting script.
- Returns:
args
- Return type:
parse_args
- umami.plot_input_variables.plot_jets_variables(plot_config, plot_type)#
Plot jet variables.
- Parameters:
plot_config (object) – plot configuration
plot_type (str) – Plot type, e.g. pdf or png
- umami.plot_input_variables.plot_trks_variables(plot_config, plot_type)#
Plot track variables.
- Parameters:
plot_config (object) – plot configuration
plot_type (str) – Plot type, e.g. pdf or png
umami.plotting_epoch_performance module#
Execution script for epoch performance plotting.
- umami.plotting_epoch_performance.get_parser()#
Argument parser for the epoch performance plotting script.
- Returns:
args
- Return type:
parse_args
- umami.plotting_epoch_performance.main(args, train_config)#
Execute the plotting of the epoch performance plots.
- Parameters:
args (parser.parse_args) – command line argument parser options
train_config (object) – configuration file used for training
- Raises:
ValueError – If the given tagger is not supported.
umami.plotting_umami module#
This script plots the ROC curves (and ratios to other models), the confusion matrix and the output scores (pb, pc, pu). A configuration file has to be provided; see umami/examples/plotting_umami_config*.yaml for examples. The script works on the output of the evaluate_model.py script, which has to be specified in the config file as ‘evaluation_file’.
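The shape such a plotting config might take is sketched below as a Python dict. Apart from the ‘evaluation_file’ key, which the text above names explicitly, every key and value here is an assumption about the schema, not umami's real config format:

```python
# Illustrative sketch of a plotting config. Only "evaluation_file" is
# taken from the documentation; all other keys/values are hypothetical.
plot_config = {
    "Eval_parameters": {                       # hypothetical section name
        "evaluation_file": "results/results-val.h5",  # hypothetical path
    },
    "my_roc_plot": {                           # hypothetical plot entry
        "type": "ROC",
    },
    "my_scores_plot": {                        # hypothetical plot entry
        "type": "scores",
    },
}
```

In practice this structure would live in a YAML file such as the plotting_umami_config*.yaml examples mentioned above.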
- umami.plotting_umami.get_parser()#
Argument parser for the umami plotting script.
- Returns:
args
- Return type:
parse_args
- umami.plotting_umami.plot_confusion_matrix(plot_name: str, plot_config: dict, eval_params: dict, eval_file_dir: str) → None#
Plot confusion matrix.
- Parameters:
plot_name (str) – Full path of the plot.
plot_config (dict) – Dict with the plot configs.
eval_params (dict) – Dict with the evaluation parameters.
eval_file_dir (str) – Path to the results directory of the model.
- umami.plotting_umami.plot_frac_contour(plot_name: str, plot_config: dict, eval_params: dict, eval_file_dir: str, print_model: bool) → None#
Plot the fraction contour plot.
- Parameters:
plot_name (str) – Full path + name of the plot
plot_config (dict) – Loaded plotting config as dict.
eval_params (dict) – Evaluation parameters from the plotting config.
eval_file_dir (str) – File which is to be used for plotting.
print_model (bool) – Print logger messages while plotting.
- umami.plotting_umami.plot_probability(plot_name: str, plot_config: dict, eval_params: dict, eval_file_dir: str, print_model: bool) → None#
Plot probability comparison.
- Parameters:
plot_name (str) – Full path of the plot.
plot_config (dict) – Dict with the plot configs.
eval_params (dict) – Dict with the evaluation parameters.
eval_file_dir (str) – Path to the results directory of the model.
print_model (bool) – Print the models which are plotted while plotting.
- umami.plotting_umami.plot_roc(plot_name: str, plot_config: dict, eval_params: dict, eval_file_dir: str, print_model: bool) → None#
Plot ROCs.
- Parameters:
plot_name (str) – Full path of the plot.
plot_config (dict) – Dict with the plot configs.
eval_params (dict) – Dict with the evaluation parameters.
eval_file_dir (str) – Path to the results directory of the model.
print_model (bool) – Print the models which are plotted while plotting.
- Raises:
AttributeError – If the needed n_jets per class used to calculate the rejections is not in the rej_per_epoch results file.
- umami.plotting_umami.plot_saliency(plot_name: str, plot_config: dict, eval_params: dict, eval_file_dir: str) → None#
Plot saliency maps.
- Parameters:
plot_name (str) – Full path of the plot.
plot_config (dict) – Dict with the plot configs.
eval_params (dict) – Dict with the evaluation parameters.
eval_file_dir (str) – Path to the results directory of the model.
- umami.plotting_umami.plot_score(plot_name: str, plot_config: dict, eval_params: dict, eval_file_dir: str, print_model: bool) → None#
Plot score comparison.
- Parameters:
plot_name (str) – Full path of the plot.
plot_config (dict) – Dict with the plot configs.
eval_params (dict) – Dict with the evaluation parameters.
eval_file_dir (str) – Path to the results directory of the model.
print_model (bool) – Print the models which are plotted while plotting.
- umami.plotting_umami.plot_var_vs_eff(plot_name: str, plot_config: dict, eval_params: dict, eval_file_dir: str, print_model: bool) → None#
Plot pT vs efficiency.
- Parameters:
plot_name (str) – Full path of the plot.
plot_config (dict) – Dict with the plot configs.
eval_params (dict) – Dict with the evaluation parameters.
eval_file_dir (str) – Path to the results directory of the model.
print_model (bool) – Print the models which are plotted while plotting.
- umami.plotting_umami.set_up_plots(plot_config_dict: dict, plot_dir: str, eval_file_path: str, file_format: str, print_model: bool) → None#
Set up the plot settings.
- Parameters:
plot_config_dict (dict) – Dict with the plot settings.
plot_dir (str) – Path to the output directory of the plots.
eval_file_path (str) – Path to the directory where the result files are saved.
file_format (str) – String of the file format.
print_model (bool) – Print logger messages while plotting.
- Raises:
NameError – If the given plot type is not supported.
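The dispatch behaviour documented for set_up_plots(), including the NameError for an unsupported plot type, can be sketched as a small lookup over the plot functions listed in this module. The callables below are stand-ins, not umami's real implementations:

```python
# Sketch of a plot-type dispatcher that raises NameError for unsupported
# types, mirroring the behaviour documented for set_up_plots().
def make_plot(plot_type: str, **kwargs):
    # Stand-in callables; in umami these would be plot_roc, plot_score,
    # plot_confusion_matrix, plot_saliency, etc.
    dispatch = {
        "ROC": lambda **kw: "roc",
        "scores": lambda **kw: "scores",
        "confusion_matrix": lambda **kw: "confusion_matrix",
        "saliency": lambda **kw: "saliency",
    }
    try:
        plotter = dispatch[plot_type]
    except KeyError as err:
        # Unsupported plot types are reported as NameError, as documented.
        raise NameError(f"Plot type {plot_type!r} is not supported.") from err
    return plotter(**kwargs)
```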
umami.preprocessing module#
Execution script to run preprocessing steps.
- umami.preprocessing.get_parser()#
Argument parser for the preprocessing script.
- Returns:
args
- Return type:
parse_args
umami.sample_merging module#
Execution script to run merging of ttbar samples.
- umami.sample_merging.get_parser()#
Argument parser for the sample merging script.
- Returns:
args
- Return type:
parse_args
umami.train module#
Training script to perform various tagger trainings.
- umami.train.check_train_file_format(input_file: str)#
Check the format of the given training file.
- Parameters:
input_file (str) – Path to input h5 file to check
- Raises:
KeyError – If the specified key is not present in the input file
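The documented KeyError behaviour can be sketched with a plain mapping standing in for the opened h5 file. The required key name "jets" is an illustrative assumption, not necessarily umami's actual dataset key:

```python
def check_train_file_format(file_contents):
    """Sketch: raise KeyError if the expected dataset key is missing.

    `file_contents` stands in for an opened h5 file; the key name
    "jets" is an illustrative assumption, not umami's actual key.
    """
    required_key = "jets"
    if required_key not in file_contents:
        raise KeyError(
            f"Key '{required_key}' not found in the given input file."
        )
```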
- umami.train.get_parser()#
Argument parser for the train executable.
- Returns:
args
- Return type:
parse_args
Module contents#
Umami framework used in ATLAS FTAG for dataset preparation and tagger training.