Plot Results¶
A convenience script to extract useful information out of the results created by the runners.
This script can produce any or all of the following outputs:
- Get best run: Returns the best hyperparameter setting for each optimizer in each test problem.
- Plot learning rate sensitivity: Creates a plot for each test problem showing the relative performance of each optimizer against the learning rate to get a sense of how difficult the tuning process was.
- Plot performance: Creates a plot for the small and large benchmark set, plotting (if available) all four performance metrics (losses and accuracies for both the test and the train data set) for each optimizer.
- Plot table: Creates the overall performance table for the small and large benchmark set, including metrics for the performance, speed and tuneability of each optimizer on each test problem.
If the path to the baseline folder is given, this script will also plot the performances of SGD, Momentum and Adam.
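Conceptually, "get best run" amounts to ranking each optimizer's hyperparameter settings by final performance. The sketch below illustrates that idea on a hypothetical results layout (one JSON file per setting, each containing a `test_losses` list); it is a simplified stand-in, not DeepOBS's actual folder structure or file format.

```python
import json
import os
import tempfile

def get_best_run(optimizer_dir):
    """Return the settings file with the lowest final test loss.

    Assumes one JSON file per hyperparameter setting, each holding a
    "test_losses" list -- a hypothetical stand-in for the real layout.
    """
    best_file, best_loss = None, float("inf")
    for name in os.listdir(optimizer_dir):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(optimizer_dir, name)) as f:
            final_loss = json.load(f)["test_losses"][-1]
        if final_loss < best_loss:
            best_file, best_loss = name, final_loss
    return best_file, best_loss

# Demo with synthetic results for two learning-rate settings.
tmp = tempfile.mkdtemp()
for fname, losses in [("lr_0.1.json", [2.0, 0.9]), ("lr_0.01.json", [2.0, 1.4])]:
    with open(os.path.join(tmp, fname), "w") as f:
        json.dump({"test_losses": losses}, f)

print(get_best_run(tmp))  # ('lr_0.1.json', 0.9)
```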
Usage:
Plotting tool for DeepOBS.
usage: deepobs_plot_results.py [-h] [--get_best_run] [--plot_lr_sensitivity]
[--plot_performance] [--plot_table] [--full]
[--baseline_path BASELINE_PATH]
path
Positional Arguments¶
path | Path to the results folder
Named Arguments¶
--get_best_run | Return best hyperparameter setting per optimizer and test problem. Default: False
--plot_lr_sensitivity | Plot 'sensitivity' plot for the learning rates. Default: False
--plot_performance | Plot performance plot compared to the baselines. Default: False
--plot_table | Plot overall performance table including speed and hyperparameters. Default: False
--full | Run a full analysis and plot all figures. Default: False
--baseline_path | Path to baseline folder. Default: "baselines_deepobs"