CIME.tests package

Submodules

CIME.tests.base module

class CIME.tests.base.BaseTestCase(methodName='runTest')[source]

Bases: TestCase

FAST_ONLY = None
GLOBAL_TIMEOUT = None
MACHINE = None
NO_BATCH = None
NO_CMAKE = None
NO_FORTRAN_RUN = None
NO_TEARDOWN = None
SCRIPT_DIR = '/home/runner/work/cime/cime/scripts'
TEST_COMPILER = None
TEST_MPILIB = None
TEST_ROOT = None
TOOLS_DIR = '/home/runner/work/cime/cime/CIME/Tools'
assert_dashboard_has_build(build_name, expected_count=1)[source]
assert_test_status(test_name, test_status_obj, test_phase, expected_stat)[source]
get_casedir(case_fragment, all_cases)[source]
kill_python_subprocesses(sig=Signals.SIGKILL, expected_num_killed=None)[source]
kill_subprocesses(name=None, sig=Signals.SIGKILL, expected_num_killed=None)[source]
run_cmd_assert_result(cmd, from_dir=None, expected_stat=0, env=None, verbose=False, shell=True)[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

setup_proxy()[source]
tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

verify_perms(root_dir)[source]
CIME.tests.base.typed_os_environ(key, default_value, expected_type=None)[source]
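The signature of typed_os_environ suggests a helper that reads an environment variable, falls back to a default when the variable is unset, and coerces the raw string to a requested type. A minimal self-contained sketch of that behavior (hypothetical; the conversion rules and bool handling here are assumptions, not the actual CIME implementation):

```python
import os


def typed_os_environ(key, default_value, expected_type=None):
    """Sketch: read ``key`` from os.environ, returning ``default_value``
    when unset, otherwise converting the raw string to ``expected_type``
    (or the type of ``default_value``)."""
    raw = os.environ.get(key)
    if raw is None:
        return default_value
    target = expected_type if expected_type is not None else type(default_value)
    if target is bool:
        # Environment variables are strings, so interpret common truthy spellings
        return raw.lower() in ("1", "true", "yes")
    if target is type(None):
        # No type information available; return the raw string
        return raw
    return target(raw)


os.environ["CIME_DEMO_TIMEOUT"] = "300"
print(typed_os_environ("CIME_DEMO_TIMEOUT", None, expected_type=int))  # 300
print(typed_os_environ("CIME_DEMO_MISSING", 5))                        # 5
```

This mirrors how test configuration such as GLOBAL_TIMEOUT or NO_BATCH above could be seeded from the environment with sensible defaults.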

CIME.tests.case_fake module

This module contains a fake implementation of the Case class that can be used when testing the test infrastructure itself.

class CIME.tests.case_fake.CaseFake(case_root, create_case_root=True)[source]

Bases: object

case_setup(clean=False, test_mode=False, reset=False)[source]
copy(newcasename, newcaseroot)[source]

Create and return a copy of self, but with CASE and CASEBASEID set to newcasename, CASEROOT set to newcaseroot, and RUNDIR set appropriately.

Args:

newcasename (str): new value for CASE
newcaseroot (str): new value for CASEROOT

create_clone(newcase, keepexe=False, mach_dir=None, project=None, cime_output_root=None, exeroot=None, rundir=None)[source]

Create a clone of the current case. Also creates the CASEROOT directory for the clone case (given by newcase).

Args:

newcase (str): full path to the new case. This directory should not already exist; it will be created
keepexe (bool, optional): Ignored
mach_dir (str, optional): Ignored
project (str, optional): Ignored
cime_output_root (str, optional): New CIME_OUTPUT_ROOT for the clone
exeroot (str, optional): New EXEROOT for the clone
rundir (str, optional): New RUNDIR for the clone

Returns the clone case object

flush()[source]
get_value(item)[source]

Get the value of the given item

Returns None if item isn’t set for this case

Args:

item (str): variable of interest

load_env(reset=False)[source]
make_rundir()[source]

Make directory given by RUNDIR

set_exeroot()[source]

Assumes CASEROOT is already set; sets an appropriate EXEROOT (nested inside CASEROOT)

set_initial_test_values()[source]
set_rundir()[source]

Assumes CASEROOT is already set; sets an appropriate RUNDIR (nested inside CASEROOT)

set_value(item, value)[source]

Set the value of the given item to the given value

Args:

item (str): variable of interest
value (any type): new value for item
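The CaseFake class described above amounts to a key/value store standing in for a real, XML-backed Case. A self-contained sketch of the same pattern (a hypothetical MiniCaseFake for illustration; the "bld" and "run" subdirectory names are assumptions, not the actual CIME class):

```python
import os


class MiniCaseFake:
    """Sketch of the CaseFake pattern: an in-memory dict replaces the
    XML machinery of a real Case."""

    def __init__(self, case_root):
        casename = os.path.basename(case_root)
        self._vals = {"CASEROOT": case_root, "CASE": casename,
                      "CASEBASEID": casename}
        self.set_exeroot()
        self.set_rundir()

    def get_value(self, item):
        # Mirrors CaseFake.get_value: None if the item isn't set for this case
        return self._vals.get(item)

    def set_value(self, item, value):
        self._vals[item] = value

    def set_exeroot(self):
        # Assumes CASEROOT is set; nests EXEROOT inside it
        self.set_value("EXEROOT", os.path.join(self.get_value("CASEROOT"), "bld"))

    def set_rundir(self):
        # Assumes CASEROOT is set; nests RUNDIR inside it
        self.set_value("RUNDIR", os.path.join(self.get_value("CASEROOT"), "run"))


case = MiniCaseFake("/tmp/mycase")
print(case.get_value("CASE"))     # mycase
print(case.get_value("MISSING"))  # None
```

Because no real XML files or machines configuration are involved, a fake case like this can be constructed and cloned thousands of times in fast unit tests.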

CIME.tests.custom_assertions_test_status module

This module contains a class that extends unittest.TestCase, adding custom assertions that can be used when testing TestStatus.

class CIME.tests.custom_assertions_test_status.CustomAssertionsTestStatus(methodName='runTest')[source]

Bases: TestCase

assert_core_phases(output, test_name, fails)[source]

Asserts that ‘output’ contains a line for each of the core test phases for the given test_name. All results should be PASS except those given by the fails list, which should be FAIL.

assert_num_expected_unexpected_fails(output, num_expected, num_unexpected)[source]

Asserts that the number of occurrences of expected and unexpected fails in ‘output’ matches the given numbers

assert_phase_absent(output, phase, test_name)[source]

Asserts that ‘output’ does not contain a status line for the given phase and test_name

assert_status_of_phase(output, status, phase, test_name, xfail=None)[source]

Asserts that ‘output’ contains a line showing the given status for the given phase for the given test_name.

‘xfail’ should have one of the following values:

  • None (the default): assertion passes regardless of whether there is an EXPECTED/UNEXPECTED string

  • ‘no’: The line should end with the phase, with no additional text after that

  • ‘expected’: After the phase, the line should contain ‘(EXPECTED FAILURE)’

  • ‘unexpected’: After the phase, the line should contain ‘(UNEXPECTED’
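The xfail logic above can be sketched as a small regex check. This assumes a status-line layout of ‘STATUS test_name PHASE [extra]’; the helper below is hypothetical, not the actual CustomAssertionsTestStatus code:

```python
import re


def status_line_matches(line, status, phase, test_name, xfail=None):
    """Sketch of the xfail matching rules for a single status line."""
    pattern = r"^{} +{} +{}".format(
        re.escape(status), re.escape(test_name), re.escape(phase))
    if xfail == "no":
        pattern += r"\s*$"                      # nothing may follow the phase
    elif xfail == "expected":
        pattern += r".*\(EXPECTED FAILURE\)"    # expected-failure comment required
    elif xfail == "unexpected":
        pattern += r".*\(UNEXPECTED"            # unexpected-pass comment required
    # xfail=None: match regardless of any trailing EXPECTED/UNEXPECTED text
    return re.search(pattern, line, flags=re.MULTILINE) is not None


line = "FAIL ERS.f09_g17.B1850.cheyenne_intel RUN (EXPECTED FAILURE)"
print(status_line_matches(line, "FAIL", "RUN",
                          "ERS.f09_g17.B1850.cheyenne_intel",
                          xfail="expected"))  # True
```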

CIME.tests.scripts_regression_tests module

Script containing CIME python regression test suite. This suite should be run to confirm overall CIME correctness.

CIME.tests.scripts_regression_tests.cleanup(test_root)[source]
CIME.tests.scripts_regression_tests.configure_tests(timeout, no_fortran_run, fast, no_batch, no_cmake, no_teardown, machine, compiler, mpilib, test_root, **kwargs)[source]
CIME.tests.scripts_regression_tests.setup_arguments(parser)[source]
CIME.tests.scripts_regression_tests.write_provenance_info(machine, test_compiler, test_mpilib, test_root)[source]

CIME.tests.test_sys_bless_tests_results module

class CIME.tests.test_sys_bless_tests_results.TestBlessTestResults(methodName='runTest')[source]

Bases: BaseTestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_bless_test_results()[source]
test_rebless_namelist()[source]

CIME.tests.test_sys_build_system module

class CIME.tests.test_sys_build_system.TestBuildSystem(methodName='runTest')[source]

Bases: BaseTestCase

test_clean_rebuild()[source]

CIME.tests.test_sys_cime_case module

class CIME.tests.test_sys_cime_case.TestCimeCase(methodName='runTest')[source]

Bases: BaseTestCase

test_case_clean()[source]
test_case_submit_interface()[source]
test_cime_case()[source]
test_cime_case_allow_failed_prereq()[source]
test_cime_case_build_threaded_1()[source]
test_cime_case_build_threaded_2()[source]
test_cime_case_force_pecount()[source]
test_cime_case_mpi_serial()[source]
test_cime_case_prereq()[source]
test_cime_case_resubmit_immediate()[source]
test_cime_case_st_archive_resubmit()[source]
test_cime_case_test_custom_project()[source]
test_cime_case_test_walltime_mgmt_1()[source]
test_cime_case_test_walltime_mgmt_2()[source]
test_cime_case_test_walltime_mgmt_3()[source]
test_cime_case_test_walltime_mgmt_4()[source]
test_cime_case_test_walltime_mgmt_5()[source]
test_cime_case_test_walltime_mgmt_6()[source]
test_cime_case_test_walltime_mgmt_7()[source]
test_cime_case_test_walltime_mgmt_8()[source]
test_cime_case_xmlchange_append()[source]
test_configure()[source]
test_create_test_longname()[source]
test_env_loading()[source]
test_self_build_cprnc()[source]
test_xml_caching()[source]

CIME.tests.test_sys_cime_performance module

class CIME.tests.test_sys_cime_performance.TestCimePerformance(methodName='runTest')[source]

Bases: BaseTestCase

test_cime_case_ctrl_performance()[source]

CIME.tests.test_sys_create_newcase module

class CIME.tests.test_sys_create_newcase.TestCreateNewcase(methodName='runTest')[source]

Bases: BaseTestCase

classmethod setUpClass()[source]

Hook method for setting up class fixture before running tests in the class.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

classmethod tearDownClass()[source]

Hook method for deconstructing the class fixture after running all tests in the class.

test_a_createnewcase()[source]
test_aa_no_flush_on_instantiate()[source]
test_b_user_mods()[source]
test_c_create_clone_keepexe()[source]
test_d_create_clone_new_user()[source]
test_dd_create_clone_not_writable()[source]
test_e_xmlquery()[source]
test_f_createnewcase_with_user_compset()[source]
test_g_createnewcase_with_user_compset_and_env_mach_pes()[source]
test_h_primary_component()[source]
test_j_createnewcase_user_compset_vs_alias()[source]

Create a compset using the alias and another compset using the full compset name and make sure they are the same by comparing the namelist files in CaseDocs. Ignore the modelio files and clean the directory names out first.

test_k_append_config()[source]
test_ka_createnewcase_extra_machines_dir()[source]
test_m_createnewcase_alternate_drivers()[source]
test_n_createnewcase_bad_compset()[source]

CIME.tests.test_sys_full_system module

class CIME.tests.test_sys_full_system.TestFullSystem(methodName='runTest')[source]

Bases: BaseTestCase

test_full_system()[source]

CIME.tests.test_sys_grid_generation module

class CIME.tests.test_sys_grid_generation.TestGridGeneration(methodName='runTest')[source]

Bases: BaseTestCase

classmethod setUpClass()[source]

Hook method for setting up class fixture before running tests in the class.

classmethod tearDownClass()[source]

Hook method for deconstructing the class fixture after running all tests in the class.

test_gen_domain()[source]

CIME.tests.test_sys_jenkins_generic_job module

class CIME.tests.test_sys_jenkins_generic_job.TestJenkinsGenericJob(methodName='runTest')[source]

Bases: BaseTestCase

assert_num_leftovers(suite)[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

simple_test(expect_works, extra_args, build_name=None)[source]
tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_jenkins_generic_job()[source]
test_jenkins_generic_job_kill()[source]
test_jenkins_generic_job_realistic_dash()[source]
test_jenkins_generic_job_save_timing()[source]
threaded_test(expect_works, extra_args, build_name=None)[source]

CIME.tests.test_sys_manage_and_query module

class CIME.tests.test_sys_manage_and_query.TestManageAndQuery(methodName='runTest')[source]

Bases: BaseTestCase

Tests various scripts to manage and query XML files

setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_query_testlists_count_runs()[source]

Make sure that query_testlists runs successfully with the --count argument

test_query_testlists_define_testtypes_runs()[source]

Make sure that query_testlists runs successfully with the --define-testtypes argument

test_query_testlists_list_runs()[source]

Make sure that query_testlists runs successfully with the --list argument

test_query_testlists_runs()[source]

Make sure that query_testlists runs successfully

This simply makes sure that query_testlists doesn’t generate any errors when it runs. This helps ensure that changes in other utilities don’t break query_testlists.

CIME.tests.test_sys_query_config module

class CIME.tests.test_sys_query_config.TestQueryConfig(methodName='runTest')[source]

Bases: BaseTestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_query_components()[source]
test_query_compsets()[source]
test_query_grids()[source]
test_query_machines()[source]

CIME.tests.test_sys_run_restart module

class CIME.tests.test_sys_run_restart.TestRunRestart(methodName='runTest')[source]

Bases: BaseTestCase

test_run_restart()[source]
test_run_restart_too_many_fails()[source]

CIME.tests.test_sys_save_timings module

class CIME.tests.test_sys_save_timings.TestSaveTimings(methodName='runTest')[source]

Bases: BaseTestCase

simple_test(manual_timing=False)[source]
test_save_timings()[source]
test_save_timings_manual()[source]
test_success_recording()[source]

CIME.tests.test_sys_single_submit module

class CIME.tests.test_sys_single_submit.TestSingleSubmit(methodName='runTest')[source]

Bases: BaseTestCase

test_single_submit()[source]

CIME.tests.test_sys_test_scheduler module

class CIME.tests.test_sys_test_scheduler.TestTestScheduler(methodName='runTest')[source]

Bases: BaseTestCase

test_a_phases()[source]
test_b_full()[source]
test_c_use_existing()[source]
test_chksum(strftime)[source]
test_d_retry()[source]
test_e_test_inferred_compiler()[source]
test_force_rebuild()[source]

CIME.tests.test_sys_unittest module

class CIME.tests.test_sys_unittest.TestUnitTest(methodName='runTest')[source]

Bases: BaseTestCase

classmethod setUpClass()[source]

Hook method for setting up class fixture before running tests in the class.

classmethod tearDownClass()[source]

Hook method for deconstructing the class fixture after running all tests in the class.

test_a_unit_test()[source]
test_b_cime_f90_unit_tests()[source]

CIME.tests.test_sys_user_concurrent_mods module

class CIME.tests.test_sys_user_concurrent_mods.TestUserConcurrentMods(methodName='runTest')[source]

Bases: BaseTestCase

test_user_concurrent_mods()[source]

CIME.tests.test_sys_wait_for_tests module

class CIME.tests.test_sys_wait_for_tests.TestWaitForTests(methodName='runTest')[source]

Bases: BaseTestCase

live_test_impl(testdir, expected_results, last_phase, last_status)[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

simple_test(testdir, expected_results, extra_args='', build_name=None)[source]
tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_wait_for_test_all_pass()[source]
test_wait_for_test_cdash_kill()[source]
test_wait_for_test_cdash_pass()[source]
test_wait_for_test_no_wait()[source]
test_wait_for_test_test_status_integration_pass()[source]
test_wait_for_test_test_status_integration_submit_fail()[source]
test_wait_for_test_timeout()[source]
test_wait_for_test_wait_for_missing_run_phase()[source]
test_wait_for_test_wait_for_pend()[source]
test_wait_for_test_wait_kill()[source]
test_wait_for_test_with_fail()[source]
threaded_test(testdir, expected_results, extra_args='', build_name=None)[source]

CIME.tests.test_unit_aprun module

class CIME.tests.test_unit_aprun.TestUnitAprun(methodName='runTest')[source]

Bases: TestCase

test_aprun()[source]
test_aprun_extra_args()[source]

CIME.tests.test_unit_baselines_performance module

class CIME.tests.test_unit_baselines_performance.TestUnitBaselinesPerformance(methodName='runTest')[source]

Bases: TestCase

test__perf_get_memory(get_latest_cpl_logs, get_cpl_mem_usage)[source]
test__perf_get_memory_override(get_latest_cpl_logs, get_cpl_mem_usage)[source]
test__perf_get_throughput(get_latest_cpl_logs, get_cpl_throughput)[source]
test_get_cpl_mem_usage(isfile)[source]
test_get_cpl_mem_usage_gz()[source]
test_get_cpl_throughput()[source]
test_get_cpl_throughput_no_file()[source]
test_get_latest_cpl_logs()[source]
test_get_latest_cpl_logs_found_multiple()[source]
test_get_latest_cpl_logs_found_single()[source]
test_perf_compare_memory_baseline(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
test_perf_compare_memory_baseline_above_threshold(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
test_perf_compare_memory_baseline_no_baseline(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
test_perf_compare_memory_baseline_no_baseline_file(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
test_perf_compare_memory_baseline_no_tolerance(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
test_perf_compare_memory_baseline_not_enough_samples(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
test_perf_compare_throughput_baseline(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
test_perf_compare_throughput_baseline_above_threshold(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
test_perf_compare_throughput_baseline_no_baseline(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
test_perf_compare_throughput_baseline_no_baseline_file(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
test_perf_compare_throughput_baseline_no_tolerance(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
test_perf_get_memory()[source]
test_perf_get_memory_default(_perf_get_memory)[source]
test_perf_get_throughput()[source]
test_perf_get_throughput_default(_perf_get_throughput)[source]
test_perf_write_baseline(perf_get_throughput, perf_get_memory, write_baseline_file)[source]
test_read_baseline_file_content()[source]
test_read_baseline_file_multi_line()[source]
test_write_baseline_file()[source]
test_write_baseline_runtimeerror(perf_get_throughput, perf_get_memory, write_baseline_file)[source]
test_write_baseline_skip(perf_get_throughput, perf_get_memory, write_baseline_file)[source]
CIME.tests.test_unit_baselines_performance.create_mock_case(tempdir, get_latest_cpl_logs=None)[source]

CIME.tests.test_unit_bless_test_results module

class CIME.tests.test_unit_bless_test_results.TestUnitBlessTestResults(methodName='runTest')[source]

Bases: TestCase

test_baseline_name_none(get_test_status_files, TestStatus, Case, bless_namelists)[source]
test_baseline_root_none(get_test_status_files, TestStatus, Case)[source]
test_bless_all(get_test_status_files, TestStatus, Case)[source]
test_bless_hist_only(get_test_status_files, TestStatus, Case, bless_history)[source]
test_bless_history(compare_baseline)[source]
test_bless_history_fail(compare_baseline, generate_baseline)[source]
test_bless_history_force(compare_baseline, generate_baseline)[source]
test_bless_memory(perf_compare_memory_baseline)[source]
test_bless_memory_file_not_found_error(perf_compare_memory_baseline, perf_write_baseline)[source]
test_bless_memory_force(perf_compare_memory_baseline, perf_write_baseline)[source]
test_bless_memory_force_error(perf_compare_memory_baseline, perf_write_baseline)[source]
test_bless_memory_general_error(perf_compare_memory_baseline, perf_write_baseline)[source]
test_bless_memory_only(get_test_status_files, TestStatus, Case, _bless_memory, _bless_throughput)[source]
test_bless_memory_report_only(perf_compare_memory_baseline)[source]
test_bless_namelists_fail(run_cmd, get_scripts_root)[source]
test_bless_namelists_force(run_cmd, get_scripts_root)[source]
test_bless_namelists_new_test_id(run_cmd, get_scripts_root)[source]
test_bless_namelists_new_test_root(run_cmd, get_scripts_root)[source]
test_bless_namelists_only(get_test_status_files, TestStatus, Case, bless_namelists)[source]
test_bless_namelists_pes_file(run_cmd, get_scripts_root)[source]
test_bless_namelists_report_only()[source]
test_bless_perf(get_test_status_files, TestStatus, Case, _bless_memory, _bless_throughput)[source]
test_bless_tests_no_match(get_test_status_files, TestStatus, Case)[source]
test_bless_tests_results_fail(get_test_status_files, TestStatus, Case, bless_namelists, bless_history, _bless_throughput, _bless_memory)[source]
test_bless_tests_results_homme(get_test_status_files, TestStatus, Case, bless_namelists, bless_history, _bless_throughput, _bless_memory)[source]
test_bless_throughput(perf_compare_throughput_baseline)[source]
test_bless_throughput_file_not_found_error(perf_compare_throughput_baseline, perf_write_baseline)[source]
test_bless_throughput_force(perf_compare_throughput_baseline, perf_write_baseline)[source]
test_bless_throughput_force_error(perf_compare_throughput_baseline, perf_write_baseline)[source]
test_bless_throughput_general_error(perf_compare_throughput_baseline)[source]
test_bless_throughput_only(get_test_status_files, TestStatus, Case, _bless_memory, _bless_throughput)[source]
test_bless_throughput_report_only(perf_compare_throughput_baseline)[source]
test_exclude(get_test_status_files, TestStatus, Case)[source]
test_is_bless_needed()[source]
test_is_bless_needed_baseline_fail()[source]
test_is_bless_needed_no_run_phase()[source]
test_is_bless_needed_no_skip_fail()[source]
test_is_bless_needed_overall_fail()[source]
test_is_bless_needed_run_phase_fail()[source]
test_multiple_files(get_test_status_files, TestStatus, Case)[source]
test_no_skip_pass(get_test_status_files, TestStatus, Case, bless_namelists, bless_history, _bless_throughput, _bless_memory)[source]
test_specific(get_test_status_files, TestStatus, Case)[source]

CIME.tests.test_unit_case module

class CIME.tests.test_unit_case.TestCase(methodName='runTest')[source]

Bases: TestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_copy(getuser, getfqdn, configure, create_caseroot, apply_user_mods, set_lookup_value, lock_file, strftime, read_xml)[source]
test_create(get_user, getfqdn, configure, create_caseroot, apply_user_mods, set_lookup_value, lock_file, strftime, read_xml)[source]
test_fix_sys_argv_quotes(read_xml)[source]
test_fix_sys_argv_quotes_incomplete(read_xml)[source]
test_fix_sys_argv_quotes_kv(read_xml)[source]
test_fix_sys_argv_quotes_val(read_xml)[source]
test_fix_sys_argv_quotes_val_quoted(read_xml)[source]
test_new_hash(getuser, getfqdn, strftime, read_xml)[source]
class CIME.tests.test_unit_case.TestCaseSubmit(methodName='runTest')[source]

Bases: TestCase

test__submit(lock_file, unlock_file, basename)[source]
test_check_case()[source]
test_check_case_test()[source]
test_submit(read_xml, get_value, init, _submit)[source]
class CIME.tests.test_unit_case.TestCase_RecordCmd(methodName='runTest')[source]

Bases: TestCase

assert_calls_match(calls, expected)[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_cmd_arg(get_value, flush, init)[source]
test_error(strftime, get_value, flush, init)[source]
test_init(strftime, get_value, flush, init)[source]
test_sub_relative(strftime, get_value, flush, init)[source]
CIME.tests.test_unit_case.make_valid_case(path)[source]

Make the given path look like a valid case to avoid errors

CIME.tests.test_unit_case_fake module

This module contains unit tests of CaseFake

class CIME.tests.test_unit_case_fake.TestCaseFake(methodName='runTest')[source]

Bases: TestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_create_clone()[source]

CIME.tests.test_unit_case_setup module

class CIME.tests.test_unit_case_setup.TestCaseSetup(methodName='runTest')[source]

Bases: TestCase

test_create_macros(_create_macros_cmake)[source]
test_create_macros_cmake(copy_depends_files)[source]
test_create_macros_copy_extra()[source]
test_create_macros_copy_user()[source]
CIME.tests.test_unit_case_setup.chdir(path)[source]
CIME.tests.test_unit_case_setup.create_machines_dir()[source]

Creates temp machines directory with fake content
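The chdir helper here presumably follows the standard temporary-working-directory context-manager pattern (restoring the original directory even on error); this is a sketch of that common idiom, not necessarily the exact CIME implementation:

```python
import contextlib
import os
import tempfile


@contextlib.contextmanager
def chdir(path):
    """Temporarily switch the working directory, always restoring it."""
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        # Runs even if the body raised, so tests never leak a cwd change
        os.chdir(prev)


with tempfile.TemporaryDirectory() as tmp:
    before = os.getcwd()
    with chdir(tmp):
        # Inside the block the cwd is the temp directory
        assert os.path.samefile(os.getcwd(), tmp)
    assert os.getcwd() == before  # restored afterwards
```

Wrapping directory changes this way keeps tests that build fake machines directories from affecting each other's working directory.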

CIME.tests.test_unit_compare_test_results module

This module contains unit tests for compare_test_results

class CIME.tests.test_unit_compare_test_results.TestCaseFake(methodName='runTest')[source]

Bases: TestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_baseline()[source]
test_failed_early()[source]
test_hist_only()[source]
test_namelists_only()[source]

CIME.tests.test_unit_compare_two module

This module contains unit tests of the core logic in SystemTestsCompareTwo.

class CIME.tests.test_unit_compare_two.Call(method, arguments)

Bases: tuple

arguments

Alias for field number 1

method

Alias for field number 0

class CIME.tests.test_unit_compare_two.SystemTestsCompareTwoFake(case1, run_one_suffix='base', run_two_suffix='test', separate_builds=False, multisubmit=False, case2setup_raises_exception=False, run_one_should_pass=True, run_two_should_pass=True, compare_should_pass=True)[source]

Bases: SystemTestsCompareTwo

run_indv(suffix='base', st_archive=False, submit_resubmits=None, keep_init_generated_files=False)[source]

This fake implementation appends to the log and raises an exception if it’s supposed to

Note that the Call object appended to the log has the current CASE name in addition to the method arguments. (This is mainly to ensure that the proper suffix is used for the proper case, but this extra check can be removed if it’s a maintenance problem.)

class CIME.tests.test_unit_compare_two.TestSystemTestsCompareTwo(methodName='runTest')[source]

Bases: TestCase

get_caseroots(casename='mytest')[source]

Returns a tuple (case1root, case2root)

get_compare_phase_name(mytest)[source]

Returns a string giving the compare phase name for this test

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_compare_fails()[source]
test_compare_passes()[source]
test_internal_calls_multisubmit_failed_state()[source]
test_resetup_case_single_exe()[source]
test_run1_fails()[source]
test_run2_fails()[source]
test_run_phase_internal_calls()[source]
test_run_phase_internal_calls_multisubmit_phase1()[source]
test_run_phase_internal_calls_multisubmit_phase2()[source]
test_run_phase_passes()[source]
test_setup()[source]
test_setup_case2_exists()[source]
test_setup_error()[source]
test_setup_separate_builds_sharedlibroot()[source]

CIME.tests.test_unit_config module

class CIME.tests.test_unit_config.TestConfig(methodName='runTest')[source]

Bases: TestCase

test_class()[source]
test_class_external()[source]
test_load()[source]
test_overwrite()[source]

CIME.tests.test_unit_cs_status module

class CIME.tests.test_unit_cs_status.TestCsStatus(methodName='runTest')[source]

Bases: CustomAssertionsTestStatus

create_test_dir(test_dir)[source]

Creates the given test directory under testroot.

Returns the full path to the created test directory.

static create_test_status_core_passes(test_dir_path, test_name)[source]

Creates a TestStatus file in the given path, with PASS status for all core phases

setUp()[source]

Hook method for setting up the test fixture before exercising it.

set_last_core_phase_to_fail(test_dir_path, test_name)[source]

Sets the last core phase to FAIL

Returns the name of this phase

static set_phase_to_status(test_dir_path, test_name, phase, status)[source]

Sets the given phase to the given status for this test

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_count_fails()[source]

Test the count of fails with three tests

For first phase of interest: First test FAILs, second PASSes, third FAILs; count should be 2, and this phase should not appear individually for each test.

For second phase of interest: First test PASSes, second PASSes, third FAILs; count should be 1, and this phase should not appear individually for each test.

test_expected_fails()[source]

With the expected_fails_file flag, expected failures should be flagged as such

test_fails_only()[source]

With fails_only flag, only fails and pends should appear in the output

test_force_rebuild()[source]
test_single_test()[source]

cs_status for a single test should include some minimal expected output

test_two_tests()[source]

cs_status for two tests (one with a FAIL) should include some minimal expected output

CIME.tests.test_unit_custom_assertions_test_status module

This module contains unit tests of CustomAssertionsTestStatus

class CIME.tests.test_unit_custom_assertions_test_status.TestCustomAssertions(methodName='runTest')[source]

Bases: CustomAssertionsTestStatus

static output_line(status, test_name, phase, extra='')[source]
test_assertCorePhases_missingPhase_fails()[source]

assert_core_phases fails if there is a missing phase

test_assertCorePhases_passes()[source]

assert_core_phases passes when it should

test_assertCorePhases_wrongName_fails()[source]

assert_core_phases fails if the test name is wrong

test_assertCorePhases_wrongStatus_fails()[source]

assert_core_phases fails if a phase has the wrong status

test_assertPhaseAbsent_fails()[source]

assert_phase_absent should fail when the phase is present for the given test_name

test_assertPhaseAbsent_passes()[source]

assert_phase_absent should pass when the phase is absent for the given test_name

test_assertStatusOfPhase_withExtra_passes()[source]

Make sure assert_status_of_phase passes when there is some extra text at the end of the line

test_assertStatusOfPhase_xfailExpected_fails()[source]

assert_status_of_phase should fail when xfail=’expected’ but the line does NOT contain the EXPECTED comment

test_assertStatusOfPhase_xfailExpected_passes()[source]

assert_status_of_phase should pass when xfail=’expected’ and the line contains the EXPECTED comment

test_assertStatusOfPhase_xfailNo_fails()[source]

assert_status_of_phase should fail when xfail=’no’ but the line contains the EXPECTED comment

test_assertStatusOfPhase_xfailNo_passes()[source]

assert_status_of_phase should pass when xfail=’no’ and there is no EXPECTED/UNEXPECTED on the line

test_assertStatusOfPhase_xfailUnexpected_fails()[source]

assert_status_of_phase should fail when xfail=’unexpected’ but the line does NOT contain the UNEXPECTED comment

test_assertStatusOfPhase_xfailUnexpected_passes()[source]

assert_status_of_phase should pass when xfail=’unexpected’ and the line contains the UNEXPECTED comment

CIME.tests.test_unit_doctest module

class CIME.tests.test_unit_doctest.TestDocs(methodName='runTest')[source]

Bases: BaseTestCase

test_lib_docs()[source]

CIME.tests.test_unit_expected_fails_file module

class CIME.tests.test_unit_expected_fails_file.TestExpectedFailsFile(methodName='runTest')[source]

Bases: TestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_basic()[source]

Basic test of the parsing of an expected fails file

test_invalid_file()[source]

Given an invalid file, an exception should be raised in schema validation

test_same_test_appears_twice()[source]

If the same test appears twice, its information should be appended.

This is not the typical, expected layout of the file, but it should be handled correctly in case the file is written this way.

CIME.tests.test_unit_grids module

This module tests some functionality of CIME.XML.grids

class CIME.tests.test_unit_grids.TestComponentGrids(methodName='runTest')[source]

Bases: TestCase

Tests the _ComponentGrids helper class defined in CIME.XML.grids

test_check_num_elements_right_ndomains()[source]

With the right number of domains for a component, check_num_elements should pass

test_check_num_elements_right_nmaps()[source]

With the right number of maps between two components, check_num_elements should pass

test_check_num_elements_wrong_ndomains()[source]

With the wrong number of domains for a component, check_num_elements should fail

test_check_num_elements_wrong_nmaps()[source]

With the wrong number of maps between two components, check_num_elements should fail

class CIME.tests.test_unit_grids.TestGrids(methodName='runTest')[source]

Bases: TestCase

Tests some functionality of CIME.XML.grids

Note that much of the functionality of CIME.XML.grids is NOT covered here

assert_grid_info_f09_g17(grid_info)[source]

Asserts that expected grid info is present and correct when using _MODEL_GRID_F09_G17

assert_grid_info_f09_g17_3glc(grid_info)[source]

Asserts that all domain info is present & correct for _MODEL_GRID_F09_G17_3GLC

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_get_grid_info_3glc()[source]

Test of get_grid_info with 3 glc grids

test_get_grid_info_basic()[source]

Basic test of get_grid_info

test_get_grid_info_extra_gridmaps()[source]

Test of get_grid_info with some extra gridmaps

test_get_grid_info_extra_required_gridmaps()[source]

Test of get_grid_info with some extra required gridmaps

class CIME.tests.test_unit_grids.TestGridsFunctions(methodName='runTest')[source]

Bases: TestCase

Tests helper functions defined in CIME.XML.grids

These tests are in a separate class to avoid the unnecessary setUp and tearDown function of the main test class.

test_add_grid_info_existing()[source]

Test of _add_grid_info when the given key already exists

test_add_grid_info_existing_with_value_for_multiple()[source]

Test of _add_grid_info when the given key already exists and value_for_multiple is provided

test_add_grid_info_initial()[source]

Test of _add_grid_info for the initial add of a given key

test_strip_grid_from_name_badname()[source]

_strip_grid_from_name should raise an exception for a name not ending with _grid

test_strip_grid_from_name_basic()[source]

Basic test of _strip_grid_from_name

CIME.tests.test_unit_hist_utils module

class CIME.tests.test_unit_hist_utils.TestHistUtils(methodName='runTest')[source]

Bases: TestCase

test_copy_histfiles(safe_copy)[source]
test_copy_histfiles_exclude(safe_copy)[source]

CIME.tests.test_unit_nmlgen module

class CIME.tests.test_unit_nmlgen.TestNamelistGenerator(methodName='runTest')[source]

Bases: TestCase

test_init_defaults()[source]

CIME.tests.test_unit_paramgen module

This module tests some functionality of CIME.ParamGen.paramgen’s ParamGen class

class CIME.tests.test_unit_paramgen.DummyCase[source]

Bases: object

A dummy Case class that mimics CIME class objects’ get_value method.

get_value(varname)[source]
class CIME.tests.test_unit_paramgen.TestParamGen(methodName='runTest')[source]

Bases: TestCase

Tests some basic functionality of the CIME.ParamGen.paramgen’s ParamGen class

test_expandable_vars()[source]

Tests the reduce method of ParamGen with expandable vars in guards.

test_formula_expansion()[source]

Tests the formula expansion feature of ParamGen.

test_init_data()[source]

Tests the ParamGen initializer with and without initial data.

test_match()[source]

Tests the default behavior of returning the last match and the optional behavior of returning the first match.
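The last-match-vs-first-match semantics can be illustrated with a small standalone sketch (pick_match and its candidate list are hypothetical illustrations, not ParamGen's actual API):

```python
# Hypothetical helper (not ParamGen's API) illustrating "last match" vs
# "first match" selection among guarded values.
def pick_match(candidates, match="last"):
    """candidates: list of (guard, value) pairs; guard is a bool."""
    matches = [value for guard, value in candidates if guard]
    if not matches:
        return None
    return matches[0] if match == "first" else matches[-1]

candidates = [(True, "a"), (False, "b"), (True, "c")]
print(pick_match(candidates))           # default: last match -> c
print(pick_match(candidates, "first"))  # first match -> a
```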

test_nested_reduce()[source]

Tests the reduce method of ParamGen on data with nested guards.

test_outer_guards()[source]

Tests the reduce method on data with outer guards enclosing parameter definitions.

test_reduce()[source]

Tests the reduce method of ParamGen on data with explicit guards (True or False).

test_undefined_var()[source]

Tests the reduce method of ParamGen on nested guards where an undefined expandable var is specified below a guard that evaluates to False. The undefined var should not lead to an error since the enclosing guard evaluates to false.
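A minimal sketch of this pruning behavior (the guard syntax and the reduce_guards/expand helpers are illustrative assumptions, not ParamGen's actual implementation): values under a False guard are never expanded, so an undefined variable inside them causes no error.

```python
import re

def expand(value, env):
    """Substitute ${var} references; raise KeyError for undefined vars."""
    def repl(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(name)
        return str(env[name])
    return re.sub(r"\$\{(\w+)\}", repl, value)

def reduce_guards(tree, env):
    """Resolve {guard_expr: value} dicts against env. Values under False
    guards are skipped before expansion, so undefined vars there are harmless."""
    result = None
    for guard, val in tree.items():
        if eval(guard, {}, dict(env)):  # guards are simple expressions
            result = reduce_guards(val, env) if isinstance(val, dict) else expand(val, env)
    return result

env = {"x": 1}
tree = {
    "x == 2": "${undefined_var}",  # pruned: guard is False, never expanded
    "x == 1": "value is ${x}",
}
print(reduce_guards(tree, env))  # -> value is 1
```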

class CIME.tests.test_unit_paramgen.TestParamGenXmlConstructor(methodName='runTest')[source]

Bases: TestCase

A unit test class for testing ParamGen’s xml constructor.

test_default_var()[source]

Test that the default value is assigned when all guards evaluate to False

test_duplicate_entry_error()[source]

Test to make sure duplicate ids raise the correct error when the “no_duplicates” flag is True.

test_mixed_guard()[source]

Tests multiple key=value guards mixed with explicit (flexible) guards.

test_mixed_guard_first()[source]

Tests multiple key=value guards mixed with explicit (flexible) guards with match=first option.

test_no_match()[source]

Tests an xml entry with no match, i.e., no guards evaluating to True.

test_single_key_val_guard()[source]

Test xml entry values with single key=value guards

class CIME.tests.test_unit_paramgen.TestParamGenYamlConstructor(methodName='runTest')[source]

Bases: TestCase

A unit test class for testing ParamGen’s yaml constructor.

test_input_data_list()[source]

Test mom.input_data_list file generation via a subset of the original input_data_list.yaml

test_mom_input()[source]

Test MOM_input file generation via a subset of the original MOM_input.yaml

CIME.tests.test_unit_system_tests module

class CIME.tests.test_unit_system_tests.TestUnitSystemTests(methodName='runTest')[source]

Bases: TestCase

test_check_for_memleak(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
test_check_for_memleak_found(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
test_check_for_memleak_not_enough_samples(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
test_check_for_memleak_runtime_error(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
test_compare_memory(append_testlog, perf_compare_memory_baseline)[source]
test_compare_memory_erorr_diff(append_testlog, perf_compare_memory_baseline)[source]
test_compare_memory_erorr_fail(append_testlog, perf_compare_memory_baseline)[source]
test_compare_throughput(append_testlog, perf_compare_throughput_baseline)[source]
test_compare_throughput_error_diff(append_testlog, perf_compare_throughput_baseline)[source]
test_compare_throughput_fail(append_testlog, perf_compare_throughput_baseline)[source]
test_dry_run()[source]
test_generate_baseline()[source]
test_kwargs()[source]
CIME.tests.test_unit_system_tests.create_mock_case(tempdir, idx=None, cpllog_data=None)[source]

CIME.tests.test_unit_test_status module

class CIME.tests.test_unit_test_status.TestTestStatus(methodName='runTest')[source]

Bases: CustomAssertionsTestStatus

setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_current_is()[source]
test_get_latest_phase()[source]
test_psdump_corePhasesPass()[source]
test_psdump_oneCorePhaseFails()[source]
test_psdump_oneCorePhaseFailsAbsentFromXFails()[source]

One phase fails. There is an expected fails list, but that phase is not in it.

test_psdump_oneCorePhaseFailsInXFails()[source]

One phase fails. That phase is in the expected fails list.

test_psdump_oneCorePhasePassesInXFails()[source]

One phase passes despite being in the expected fails list.

test_psdump_skipPasses()[source]

With the skip_passes argument, only non-passes should appear

test_psdump_unexpectedPass_shouldBePresent()[source]

Even with the skip_passes argument, an unexpected PASS should be present

CIME.tests.test_unit_user_mod_support module

class CIME.tests.test_unit_user_mod_support.TestUserModSupport(methodName='runTest')[source]

Bases: TestCase

assertResults(expected_user_nl_cpl, expected_shell_commands_result, expected_sourcemod, msg='')[source]

Asserts that the contents of the files in self._caseroot match expectations

If msg is provided, it is printed for some failing assertions

createUserMod(name, include_dirs=None)[source]

Create a user_mods directory with the given name.

This directory is created within self._user_mods_parent_dir

For name=’foo’, it will contain:

  • A user_nl_cpl file with contents: foo

  • A shell_commands file with contents: echo foo >> /PATH/TO/CASEROOT/shell_commands_result

  • A file in _SOURCEMODS named myfile.F90 with contents: foo

If include_dirs is given, it should be a list of strings, giving names of other user_mods directories to include. e.g., if include_dirs is [‘foo1’, ‘foo2’], then this will create a file ‘include_user_mods’ that contains paths to the ‘foo1’ and ‘foo2’ user_mods directories, one per line.
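As a rough sketch, a helper like the one described above might lay out files as follows (create_user_mod and the literal "SourceMods" directory name are stand-ins for the real helper and the _SOURCEMODS constant):

```python
import os

def create_user_mod(parent, name, include_dirs=None):
    """Create a fake user_mods directory laid out as described above."""
    path = os.path.join(parent, name)
    os.makedirs(os.path.join(path, "SourceMods"))  # stand-in for _SOURCEMODS
    with open(os.path.join(path, "user_nl_cpl"), "w") as f:
        f.write(name + "\n")
    with open(os.path.join(path, "shell_commands"), "w") as f:
        f.write("echo {} >> shell_commands_result\n".format(name))
    with open(os.path.join(path, "SourceMods", "myfile.F90"), "w") as f:
        f.write(name + "\n")
    if include_dirs:
        # one included user_mods path per line
        with open(os.path.join(path, "include_user_mods"), "w") as f:
            for inc in include_dirs:
                f.write(os.path.join(parent, inc) + "\n")
    return path
```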

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_basic()[source]
test_duplicate_includes()[source]

Test multiple includes, where both include the same base mod.

The base mod should only be included once.

test_include()[source]

If there is an included mod, the main one should appear after the included one so that it takes precedence.

test_keepexe()[source]
test_two_applications()[source]

If apply_user_mods is called twice, the second should appear after the first so that it takes precedence.

CIME.tests.test_unit_user_nl_utils module

class CIME.tests.test_unit_user_nl_utils.TestUserNLCopier(methodName='runTest')[source]

Bases: TestCase

assertFileContentsEqual(expected, filepath, msg=None)[source]

Asserts that the contents of the file given by ‘filepath’ are equal to the string given by ‘expected’. ‘msg’ gives an optional message to be printed if the assertion fails.
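A minimal sketch of such an assertion helper, assuming a unittest.TestCase context (details of the real helper may differ):

```python
import unittest

class FileAssertions(unittest.TestCase):
    def assertFileContentsEqual(self, expected, filepath, msg=None):
        """Assert the entire contents of filepath equal the expected string."""
        with open(filepath) as f:
            contents = f.read()
        self.assertEqual(expected, contents, msg=msg)
```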

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_append()[source]
test_append_list()[source]
test_append_multiple_files()[source]
test_append_without_files_raises_exception()[source]
write_user_nl_file(component, contents, suffix='')[source]

Write contents to a user_nl file in the case directory. Returns the basename (i.e., not the full path) of the file that is created.

For a component foo, with the default suffix of ‘’, the file name will be user_nl_foo

If the suffix is ‘_0001’, the file name will be user_nl_foo_0001
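The naming convention described above reduces to simple concatenation (user_nl_basename is a hypothetical name used for illustration):

```python
def user_nl_basename(component, suffix=""):
    """Basename of a user_nl file: user_nl_<component><suffix>."""
    return "user_nl_" + component + suffix

print(user_nl_basename("foo"))           # user_nl_foo
print(user_nl_basename("foo", "_0001"))  # user_nl_foo_0001
```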

CIME.tests.test_unit_utils module

class CIME.tests.test_unit_utils.MockTime[source]

Bases: object

class CIME.tests.test_unit_utils.TestFileContainsPythonFunction(methodName='runTest')[source]

Bases: TestCase

Tests of file_contains_python_function

create_test_file(contents)[source]

Creates a test file with the given contents, and returns the path to that file

setUp()[source]

Hook method for setting up the test fixture before exercising it.

tearDown()[source]

Hook method for deconstructing the test fixture after testing it.

test_contains_correct_def_and_others()[source]

Test file_contains_python_function with a correct def mixed with other defs

test_does_not_contain_correct_def()[source]

Test file_contains_python_function without the correct def

class CIME.tests.test_unit_utils.TestIndentStr(methodName='runTest')[source]

Bases: TestCase

Test the indent_string function.

test_indent_string_multiline()[source]

Test the indent_string function with a multi-line string

test_indent_string_singleline()[source]

Test the indent_string function with a single-line string

class CIME.tests.test_unit_utils.TestLineDefinesPythonFunction(methodName='runTest')[source]

Bases: TestCase

Tests of _line_defines_python_function

test_def_barfoo()[source]

Test of a def of a different function

test_def_foo()[source]

Test of a def of the function of interest

test_def_foo_indented()[source]

Test of a def of the function of interest, but indented

test_def_foo_no_parens()[source]

Test of a def of the function of interest, but without parentheses

test_def_foo_space()[source]

Test of a def of the function of interest, with an extra space before the parentheses

test_def_foobar()[source]

Test of a def of a different function

test_import_barfoo()[source]

Test of an import of a different function

test_import_foo()[source]

Test of an import of the function of interest

test_import_foo_indented()[source]

Test of an import of the function of interest, but indented

test_import_foo_space()[source]

Test of an import of the function of interest, with trailing spaces

test_import_foo_then_others()[source]

Test of an import of the function of interest, along with others

test_import_foobar()[source]

Test of an import of a different function

test_import_others_then_foo()[source]

Test of an import of the function of interest, after others
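The cases above can be approximated by a small regex sketch (this is an illustrative guess at the behavior, not the real _line_defines_python_function from CIME.utils): a line "defines" funcname if it is a top-level def of that name or an import that brings the name in.

```python
import re

def line_defines_python_function(line, funcname):
    """Sketch: True for a top-level 'def funcname(' or an import of funcname.
    The real CIME helper may differ in details."""
    return bool(
        re.match(r"^def\s+{}\s*\(".format(funcname), line)
        or re.match(r"^from\s+\S+\s+import\s+.*\b{}\b".format(funcname), line)
    )

print(line_defines_python_function("def foo():", "foo"))                # True
print(line_defines_python_function("    def foo():", "foo"))            # False (indented)
print(line_defines_python_function("from bar import baz, foo", "foo"))  # True
print(line_defines_python_function("from bar import foobar", "foo"))    # False
```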

class CIME.tests.test_unit_utils.TestUtils(methodName='runTest')[source]

Bases: TestCase

assertMatchAllLines(tempdir, test_lines)[source]
setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_copy_globs(safe_copy, glob)[source]
test_import_and_run_sub_or_cmd()[source]
test_import_and_run_sub_or_cmd_cime_py(importmodule)[source]
test_import_and_run_sub_or_cmd_import(importmodule)[source]
test_import_and_run_sub_or_cmd_run(func, isfile)[source]
test_import_from_file()[source]
test_run_and_log_case_status()[source]
test_run_and_log_case_status_case_submit_error_on_batch()[source]
test_run_and_log_case_status_case_submit_no_batch()[source]
test_run_and_log_case_status_case_submit_on_batch()[source]
test_run_and_log_case_status_custom_msg()[source]
test_run_and_log_case_status_custom_msg_error_on_batch()[source]
test_run_and_log_case_status_error()[source]
CIME.tests.test_unit_utils.match_all_lines(data, lines)[source]

CIME.tests.test_unit_xml_archive_base module

class CIME.tests.test_unit_xml_archive_base.TestXMLArchiveBase(methodName='runTest')[source]

Bases: TestCase

test_exclude_testing()[source]
test_extension_included()[source]
test_match_files()[source]
test_suffix()[source]

CIME.tests.test_unit_xml_env_batch module

class CIME.tests.test_unit_xml_env_batch.TestXMLEnvBatch(methodName='runTest')[source]

Bases: TestCase

test_get_job_deps()[source]
test_get_queue_specs(get)[source]
test_get_submit_args()[source]
test_get_submit_args_job_queue()[source]
test_set_job_defaults(get_default_queue, select_best_queue, get_queue_specs, text)[source]
test_set_job_defaults_honor_walltimemax(get_default_queue, select_best_queue, get_queue_specs, text)[source]
test_set_job_defaults_honor_walltimemin(get_default_queue, select_best_queue, get_queue_specs, text)[source]
test_set_job_defaults_user_walltime(get_default_queue, select_best_queue, get_queue_specs, text)[source]
test_set_job_defaults_walltimedef(get_default_queue, select_best_queue, get_queue_specs, text)[source]
test_set_job_defaults_walltimemax_none(get_default_queue, select_best_queue, get_queue_specs, text)[source]
test_set_job_defaults_walltimemin_none(get_default_queue, select_best_queue, get_queue_specs, text)[source]
test_submit_jobs(_submit_single_job)[source]
test_submit_jobs_dependency(_submit_single_job, get_batch_script_for_job, isfile)[source]
test_submit_jobs_single(_submit_single_job, get_batch_script_for_job, isfile)[source]

CIME.tests.test_unit_xml_env_mach_specific module

class CIME.tests.test_unit_xml_env_mach_specific.TestXMLEnvMachSpecific(methodName='runTest')[source]

Bases: TestCase

test_aprun_get_args()[source]
test_cmd_path(text, get_optional_child)[source]
test_find_best_mpirun_match()[source]
test_get_aprun_mode_default()[source]
test_get_aprun_mode_not_valid()[source]
test_get_aprun_mode_user_defined()[source]
test_get_mpirun()[source]
test_init_path(text, get_optional_child)[source]

CIME.tests.test_unit_xml_machines module

class CIME.tests.test_unit_xml_machines.TestUnitXMLMachines(methodName='runTest')[source]

Bases: TestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_has_batch_system()[source]
test_is_valid_MPIlib()[source]
test_is_valid_compiler()[source]

CIME.tests.test_unit_xml_namelist_definition module

class CIME.tests.test_unit_xml_namelist_definition.TestXMLNamelistDefinition(methodName='runTest')[source]

Bases: TestCase

test_set_nodes()[source]

CIME.tests.test_unit_xml_tests module

class CIME.tests.test_unit_xml_tests.TestXMLTests(methodName='runTest')[source]

Bases: TestCase

setUp()[source]

Hook method for setting up the test fixture before exercising it.

test_support_single_exe(_setup_cases_if_not_yet_done)[source]
test_support_single_exe_error(_setup_cases_if_not_yet_done)[source]

CIME.tests.utils module

class CIME.tests.utils.CMakeTester(parent, cmake_string)[source]

Bases: object

Helper class for checking CMake output.

Public methods: __init__, query_var, assert_variable_equals, assert_variable_matches

assert_variable_equals(var_name, value, env=None, var=None)[source]

Assert that a variable in the CMakeLists has a given value.

Arguments:

  • var_name - Name of the variable to check.

  • value - The string that the variable's value should be equal to.

  • env - Optional. Dict of environment variables to set when calling cmake.

  • var - Optional. Dict of CMake variables to set when calling cmake.

assert_variable_matches(var_name, regex, env=None, var=None)[source]

Assert that a variable in the CMakeLists matches a regex.

Arguments:

  • var_name - Name of the variable to check.

  • regex - The regex to match.

  • env - Optional. Dict of environment variables to set when calling cmake.

  • var - Optional. Dict of CMake variables to set when calling cmake.

query_var(var_name, env, var)[source]

Request the value of a variable in Macros.cmake, as a string.

Arguments:

  • var_name - Name of the variable to query.

  • env - A dict containing extra environment variables to set when calling cmake.

  • var - A dict containing extra CMake variables to set when calling cmake.

class CIME.tests.utils.MakefileTester(parent, make_string)[source]

Bases: object

Helper class for checking Makefile output.

Public methods: __init__, query_var, assert_variable_equals, assert_variable_matches

assert_variable_equals(var_name, value, env=None, var=None)[source]

Assert that a variable in the Makefile has a given value.

Arguments:

  • var_name - Name of the variable to check.

  • value - The string that the variable's value should be equal to.

  • env - Optional. Dict of environment variables to set when calling make.

  • var - Optional. Dict of make variables to set when calling make.

assert_variable_matches(var_name, regex, env=None, var=None)[source]

Assert that a variable in the Makefile matches a regex.

Arguments:

  • var_name - Name of the variable to check.

  • regex - The regex to match.

  • env - Optional. Dict of environment variables to set when calling make.

  • var - Optional. Dict of make variables to set when calling make.

query_var(var_name, env, var)[source]

Request the value of a variable in the Makefile, as a string.

Arguments:

  • var_name - Name of the variable to query.

  • env - A dict containing extra environment variables to set when calling make.

  • var - A dict containing extra make variables to set when calling make. (The distinction between env and var actually matters only for CMake, though.)

class CIME.tests.utils.MockMachines(name, os_)[source]

Bases: object

A mock version of the Machines object to simplify testing.

get_default_MPIlib(attributes=None)[source]
get_default_compiler()[source]
get_machine_name()[source]

Return the name we were given.

get_value(var_name)[source]

Allow the operating system to be queried.

is_valid_MPIlib(_)[source]

Assume all MPILIB settings are valid.

is_valid_compiler(_)[source]

Assume all compilers are valid.

class CIME.tests.utils.Mocker(ret=None, cmd=None, return_value=None, side_effect=None)[source]

Bases: object

assert_called()[source]
assert_called_with(i=None, args=None, kwargs=None)[source]
property calls
property method_calls
patch(module, method=None, ret=None, is_property=False, update_value_only=False)[source]
property ret
revert_mocks()[source]
class CIME.tests.utils.TemporaryDirectory[source]

Bases: object

CIME.tests.utils.make_fake_teststatus(path, testname, status, phase)[source]
CIME.tests.utils.parse_test_status(line)[source]

Module contents