Common XML Files

Common XML files in CIMEROOT/config.

CIMEROOT/config/config_headers.xml

Headers for the CASEROOT env_*.xml files created by create_newcase.

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="config_definition.xsl" ?>

<files>
  <file name="env_batch.xml">
    <header>
      These variables may be changed anytime during a run; they
      control arguments to the batch submit command.
    </header>
  </file>

  <file name="env_workflow.xml">
    <header>
      These variables may be changed anytime during a run; they
      control jobs that will be submitted and their dependencies.
    </header>
  </file>

  <file name="env_case.xml">
    <header>
    These variables CANNOT BE CHANGED once a case has been created.
    Invoke create_newcase again if a different grid or component
    combination is required.
    </header>
  </file>

  <file name ="env_run.xml">
    <header>
    These variables MAY BE CHANGED ANYTIME during a run.
    Additional machine-specific variables that can be changed
    during a run are contained in the env_mach_specific file.
    Note1: users SHOULD NOT modify BUILD_COMPLETE in env_build.xml;
    this is done automatically by the scripts.
    </header>
  </file>

  <file name ="env_mach_specific.xml">
    <header>
    These variables control the machine-dependent environment, including
    the paths to compilers and libraries external to cime such as netcdf.
    Environment variables for use in the running job should also be set here.
    </header>
  </file>

  <file name="env_build.xml">
    <header>
    These variables SHOULD NOT be changed once the model has been built.
    Currently, these variables are not cached.
    Note1: users SHOULD NOT modify BUILD_COMPLETE below;
    this is done automatically by the scripts.
    </header>
  </file>

  <file name="env_mach_pes.xml">
    <header>
    These variables CANNOT be modified once case_setup has been
    invoked without first invoking case_setup -reset

    NTASKS: the total number of MPI tasks; a negative value indicates nodes rather than tasks.
    NTHRDS: the number of OpenMP threads per MPI task.
    ROOTPE: the global MPI rank of the component root task; a negative value indicates nodes rather than tasks.
    PSTRID: the stride of MPI tasks across the global set of pes (for now set to 1)
    NINST : the number of component instances (will be spread evenly across NTASKS)

    For example, with NTASKS = 8, NTHRDS = 2, ROOTPE = 32, and NINST = 2,
    the MPI tasks would be placed starting on global pe 32 and each pe would be threaded 2 ways.
    These tasks will be divided among both instances (4 tasks each).

    Note: PEs that support threading never have an MPI task associated
    with them for performance reasons.  As a result, NTASKS and ROOTPE
    are relatively independent of NTHRDS and they determine
    the layout of mpi processors between components.  NTHRDS is used
    to determine how those mpi tasks should be placed across the machine.

    The following values should not be set by the user since they'll be
    overwritten by scripts: TOTALPES, NTASKS_PER_INST
    </header>
  </file>

  <file name="env_archive.xml">
    <header>
      These are the variables specific to the short term archiver.
      See  ./case.st_archive --help for details on running the short term archiver script.
      To validate the env_archive.xml file using xmllint, run
      xmllint -schema $SRCROOT/cime/config/xml_schemas/env_archive.xsd env_archive.xml
      from the case root.
      The patterns below are Python regular expressions.
      The file names created from these patterns will add an optional digit
      to them and will enclose them in a pair of '.'.
      Some useful Python metacharacters are:
         [] = any single character inside the brackets
         \d = a digit = [0123456789] = [0-9]
          ? = 0 or 1 of the previous character
          * = 0 or more of the previous character (greedy!)
          + = 1 or more of the previous character (greedy!)
         \. = a period
          . = any non-newline character
      Use them carefully.  They're often confused with shell-type
      wild card characters.
    </header>
  </file>

  <file name="env_test.xml">
    <header>These are the variables specific to a test case.</header>
  </file>

</files>
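
The variables described in these headers live in the corresponding env_*.xml
files in CASEROOT and are normally read and modified with the xmlquery and
xmlchange scripts rather than by editing the XML by hand. A minimal sketch,
run from CASEROOT; the _ATM component suffix and the values shown are
illustrative, and case.setup --reset is assumed to be available for
re-applying a changed PE layout:

  # query run-time variables from env_run.xml
  ./xmlquery STOP_OPTION,STOP_N

  # change run-time variables (allowed anytime during a run)
  ./xmlchange STOP_OPTION=ndays,STOP_N=5

  # change the PE layout in env_mach_pes.xml, following the worked example
  # in the env_mach_pes.xml header above (hypothetical values)
  ./xmlchange NTASKS_ATM=8,NTHRDS_ATM=2,ROOTPE_ATM=32,NINST_ATM=2
  ./case.setup --reset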

CIMEROOT/config/config_tests.xml

Descriptions and XML settings for the CIME regression tests.
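
The test type names described in this file form the first component of a CIME
test name passed to create_test. A brief sketch, run from CIMEROOT; the grid
and compset aliases (f19_g16, X) and the test id are illustrative and must
exist in the model's configuration:

  # run a single smoke test (SMS) with the default test length
  ./scripts/create_test SMS.f19_g16.X

  # run an exact-restart test (ERS), tagging the output with a test id
  ./scripts/create_test ERS.f19_g16.X --test-id mytest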

<?xml version="1.0"?>

<!--

The following are the test functionality categories:
   1) smoke tests
   2) basic reproducibility tests
   3) restart tests
   4) threading/pe-count modification tests
   5) sequencing (layout) modification tests
   6) multi-instance tests
   7) archiving (short-term and long-term) tests
   8) performance tests
   9) spinup tests
  10) data assimilation tests
  11) other component-specific tests

NOTES:
- specifying the attribute INFRASTRUCTURE_TEST="TRUE" declares the given
  test type to be an infrastructure-only test that is only meant to be
  run by scripts_regression_tests.
- unless otherwise noted, everything is run in one executable directory
- suffix: denotes the component history file suffixes that are added as part of the test
- IOP test is done along with regular tests - not as a separate test
- IOP test is only currently valid for SMS, ERS and PET

======================================================================
    Smoke Tests
======================================================================

SMS    smoke startup test (default length)
       do a 5 day initial test (suffix: base)
       if $IOP_ON is set then suffix is base_iop
       success for non-iop is just a successful coupler run

SBN    smoke build-namelist test (just run preview_namelist and check_input_data)

======================================================================
    Basic reproducibility Tests
======================================================================

REP    reproducibility: do two identical runs give the same results?

======================================================================
    Restart Tests
======================================================================

ERS    exact restart from startup (default 6 days + 5 days)
       do an 11 day initial test - write a restart at day 6     (suffix: base)
       if $IOP_ON is set then suffix is base_iop
       do a  5  day restart test starting from restart at day 6 (suffix: rest)
       if $IOP_ON is set then suffix is rest_iop
       compare component history files ".base" and ".rest" at day 11

ERS2    exact restart from startup (default 6 days + 5 days)
       do an 11 day initial test without making restarts     (suffix: base)
       if $IOP_ON is set then suffix is base_iop
       do an 11 day restart test stopping at day 6 with a restart, then resuming from restart at day 6 (suffix: rest)
       if $IOP_ON is set then suffix is rest_iop
       compare component history files ".base" and ".rest" at day 11

ERP    pes counts hybrid (OpenMP/MPI) restart bfb test from startup, default 6 days + 5 days (previously PER)
       initial pes set up out of the box
       do an 11 day initial test - write a restart at day 6     (suffix base)
       half the number of tasks and threads for each component
       do a  5  day restart test starting from restart at day 6 (suffix rest)
       this is just like an ERS test but the pe counts/threading counts are modified on restart

ERI    hybrid/branch/exact restart test (by default STOP_N is 22 days)
       (1) ref1case
           do an initial for ${STOP_N}/6 writing restarts at ${STOP_N}/6
           ref1 case is a clone of the main case (by default this will be 4 days)
           short term archiving is on
       (2) ref2case
           do a hybrid for ${STOP_N}-${STOP_N}/6 running with ref1 restarts from ${STOP_N}/6
           and writing restarts at ( ${STOP_N} - ${STOP_N}/6 )/2 +1
           (by default will run for 18 days and write a restart after 10 days)
           ref2 case is a clone of the main case
           short term archiving is on
       (3) case
           do a branch run starting from restart written in ref2 case
           and run for ???  days
       (4) case do a restart run from the branch case

======================================================================
    Threading/PE-Counts/Pe-Sequencing Tests
======================================================================

PET    modified threading openmp bfb test (seq tests)
       do an initial run where all components are threaded by default (suffix: base)
       do another initial run with nthrds=1 for all components        (suffix: single_thread)
       compare base and single_thread

PEM    modified pe counts mpi bfb test (seq tests)
       do an initial run with default pe layout                               (suffix: base)
       do another initial run with modified pes (NTASKS_XXX => NTASKS_XXX/2)  (suffix: modpes)
       compare base and modpes

PMT    modified-task and modified-thread count bfb test (previously OEM)
       do an initial run                              (suffix: base)
       do a second run with half tasks, twice threads (suffix: modpes)
       (***note that PTM_script and PEM_script are the same - but PEM_build.csh and PTM_build.csh are different***)
       *** REMOVING  this test in the pythonized version ***

PEA    single pe bfb test
       do an initial run on 1 pe with mpi     (suffix: base)
       do the same run on 1 pe with mpiserial (suffix: mpiserial)

======================================================================
    Sequencing (layout) Tests (smoke)
======================================================================

SEQ    different sequencing bfb test
       do an initial run test with out-of-box PE-layout (suffix: base)
       do a second run where all root pes are at pe-0   (suffix: seq)
       compare base and seq

======================================================================
    Multi-Instance Tests (smoke)
======================================================================

NCK    multi-instance validation vs single instance - sequential PE for instances (default length)
       do an initial run test with NINST 1 (suffix: base)
       do an initial run test with NINST 2 (suffix: multiinst for both _0001 and _0002)
       compare base and _0001 and _0002

NCR    multi-instance validation vs single instance - concurrent PE for instances  (default length)
       do an initial run test with NINST 1 (suffix: base)
       do an initial run test with NINST 2 (suffix: multiinst for both _0001 and _0002)
       compare base and _0001 and _0002

NOC    multi-instance validation for single instance ocean (default length)
       do an initial run test with NINST 2 (other than ocn), with mod to instance 2 (suffix: inst1_base, inst2_mod)
       do an initial run test with NINST 2 (other than ocn), with mod to instance 1 (suffix: inst1_mod, inst2_base)
       compare inst1_base with inst2_base
       compare inst1_mod  with inst2_mod

======================================================================
    Performance Tests
======================================================================

PFS    system performance test
ICP    cice performance test
OCP    pop performance test

======================================================================
    SPINUP tests
======================================================================

SPO    smoke spinup-ocean test

======================================================================
    Archiving Tests
======================================================================

LAR    long term archive test

======================================================================
    Data Assimilation Tests
======================================================================

DAE    data assimilation test: non answer changing

PRE    pause-resume test: by default a BFB test of pause-resume cycling

======================================================================
    Other component-specific tests should be located with components
======================================================================


======================================================================
    Infrastructural tests for CIME. These are used by scripts_regression_tests.
    Users won't generally run these.
======================================================================


TESTBUILDFAIL     Insta-fail build step. Used to confirm that failed
                  builds are caught and reported correctly.

TESTBUILDFAILEXC  Insta-fail build step by failing to init. Used to test
                  correct behavior when exceptions are generated.

TESTRUNFAIL       Insta-fail run step. Used to confirm that model run
                  failures are caught and reported correctly.

TESTRUNFAILEXC    Insta-fail run step via exception. Used to test correct
                  behavior when exceptions are generated.

TESTRUNPASS       Insta-pass run step. Used to test that runs that work
                  are reported correctly.

TESTMEMLEAKFAIL   Insta-fail memleak step. Used to test that memleaks are
                  detected and reported correctly.

TESTMEMLEAKPASS   Insta-pass memleak step. Used to test that non-memleaks are
                  reported correctly.

TESTRUNDIFF       Produces a canned hist file. Env var TESTRUNDIFF_ALTERNATE can
                  be used to cause a DIFF. Used to check that baseline diffs are
                  detected and reported correctly.

TESTTESTDIFF      Simulates internal test diff (non baseline). Used to check that
                  internal comparison failures are detected and reported correctly.

TESTRUNSLOWPASS   After 5 minutes of sleep, pass run step. Used to test timeouts
                  and kills.

NODEFAIL          Tests restart upon detected node failure. Generates fake failures,
                  the number of which is controlled by NODEFAIL_NUM_FAILS.

-->

<config_test>

  <test NAME="DAE">
    <DESC>data assimilation test, default two 2-day DA cycles, no data modification</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>4</STOP_N>
    <HIST_OPTION>ndays</HIST_OPTION>
    <HIST_N>1</HIST_N>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="ERI">
    <DESC>hybrid/branch/exact restart test, default 3+19/10+9/5+4 days</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>22</STOP_N>
    <DOUT_S>FALSE</DOUT_S>
    <RUN_TYPE>startup</RUN_TYPE>
    <RUN_REFCASE>case.std</RUN_REFCASE>
    <RUN_REFDATE>0001-01-01</RUN_REFDATE>
    <RUN_REFTOD>00000</RUN_REFTOD>
    <REST_OPTION>ndays</REST_OPTION>
    <REST_N>$STOP_N</REST_N>
    <GET_REFCASE>FALSE</GET_REFCASE>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <HIST_OPTION>never</HIST_OPTION>
    <HIST_N>-999</HIST_N>
  </test>

  <test NAME="ERP">
    <DESC>pes counts hybrid (OpenMP/MPI) restart bfb test from startup, default 6 days + 5 days</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <BFBFLAG>TRUE</BFBFLAG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <DOUT_S>FALSE</DOUT_S>
    <FORCE_BUILD_SMP>TRUE</FORCE_BUILD_SMP>
    <REST_N>$STOP_N / 2 + 1 </REST_N>
    <REST_OPTION>$STOP_OPTION</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
  </test>

  <test NAME="ERS">
    <DESC>exact restart from startup, default 6 days + 5 days</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <REST_N>$STOP_N / 2 + 1</REST_N>
    <REST_OPTION>$STOP_OPTION</REST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="ERS2">
    <DESC>exact restart from startup, default 6 days + 5 days</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <REST_N>$STOP_N</REST_N>
    <REST_OPTION>$STOP_OPTION</REST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="IRT">
    <DESC>exact restart from startup, default 4 days + 7 days with restart from interim file</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <REST_N>$STOP_N / 2 - 1</REST_N>
    <REST_OPTION>$STOP_OPTION</REST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <DOUT_S_SAVE_INTERIM_RESTART_FILES>TRUE</DOUT_S_SAVE_INTERIM_RESTART_FILES>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="ERIO">
    <DESC>exact restart from startup with different PIO methods, default 6 days + 5 days</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="ERR">
    <DESC>exact restart from startup with resubmit, default 4 days + 3 days</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>7</STOP_N>
    <REST_N>$STOP_N / 2 + 1</REST_N>
    <REST_OPTION>$STOP_OPTION</REST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <RESUBMIT>1</RESUBMIT>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="ERRI">
    <DESC>exact restart from startup with resubmit, default 4 days + 3 days. Tests incomplete logs option for st_archive</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>7</STOP_N>
    <REST_N>$STOP_N / 2 + 1</REST_N>
    <REST_OPTION>$STOP_OPTION</REST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <DOUT_S>TRUE</DOUT_S>
    <RESUBMIT>1</RESUBMIT>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="ERT">
    <DESC>exact restart from startup, default 2 month + 1 month (ERS with info dbug = 1)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>nmonths</STOP_OPTION>
    <STOP_N>3</STOP_N>
    <AVGHIST_OPTION>nmonths</AVGHIST_OPTION>
    <AVGHIST_N>1</AVGHIST_N>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="HOMME">
    <DESC>Run homme tests. Only works with the ACME version of the atmosphere component.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="LDSTA">
    <DESC>Tests the short term archiver's last date functionality.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <HIST_OPTION>ndays</HIST_OPTION>
    <REST_OPTION>ndays</REST_OPTION>
    <STOP_N>10</STOP_N>
    <HIST_N>1</HIST_N>
    <REST_N>1</REST_N>
    <RUN_STARTDATE>0001-12-29</RUN_STARTDATE>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="PRE">
    <DESC>pause/resume test, default 5 hours, five pause/resume cycles, no data modification</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>nhours</STOP_OPTION>
    <STOP_N>5</STOP_N>
    <HIST_OPTION>nhours</HIST_OPTION>
    <HIST_N>1</HIST_N>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTBUILDFAIL" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Insta-fail build step.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTBUILDFAILEXC" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Insta-fail build step by failing to init.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTRUNFAIL" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Insta-fail run step.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTRUNFAILEXC" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Insta-fail run step via exception.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTRUNPASS" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Insta-pass run step.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTMEMLEAKFAIL" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Insta-fail memleak step.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTMEMLEAKPASS" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Insta-pass memleak step.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTRUNDIFF" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Produces a canned hist file.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTTESTDIFF" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Simulates internal test diff (non baseline)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="TESTRUNSLOWPASS" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. After 5 minutes of sleep, pass run step.</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>11</STOP_N>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="NODEFAIL" INFRASTRUCTURE_TEST="TRUE">
    <DESC>For testing infra only. Tests restart upon detected node failure</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>nsteps</STOP_OPTION>
    <OCN_NCPL>$ATM_NCPL</OCN_NCPL>
    <STOP_N>11</STOP_N>
    <REST_N>$STOP_N / 2 + 1</REST_N>
    <REST_OPTION>$STOP_OPTION</REST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <CHECK_TIMING>FALSE</CHECK_TIMING>
    <NODE_FAIL_REGEX>JGF FAKE NODE FAIL</NODE_FAIL_REGEX>
    <FORCE_SPARE_NODES>3</FORCE_SPARE_NODES>
  </test>

  <test NAME="ICP">
    <DESC>cice performance test</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>0</STOP_N>
    <REST_OPTION>none</REST_OPTION>
    <COMP_RUN_BARRIERS>TRUE</COMP_RUN_BARRIERS>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="PEA">
    <DESC>single pe bfb test (default length)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <REST_OPTION>never</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
  </test>

  <test NAME="PEM">
    <DESC>pes counts mpi bfb test (seq tests; default length)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <BFBFLAG>TRUE</BFBFLAG>
    <REST_OPTION>never</REST_OPTION>
    <DOUT_S>FALSE</DOUT_S>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="PET">
    <DESC>openmp bfb test (seq tests; default length)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <BFBFLAG>TRUE</BFBFLAG>
    <FORCE_BUILD_SMP>TRUE</FORCE_BUILD_SMP>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <REST_OPTION>none</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <DOUT_S>FALSE</DOUT_S>
    <RESUBMIT>1</RESUBMIT>
  </test>

  <test NAME="PFS">
    <DESC>performance test setup</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>20</STOP_N>
    <REST_OPTION>none</REST_OPTION>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
  </test>

  <test NAME="MCC">
    <DESC>multi-driver validation vs single-instance (default length)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <REST_OPTION>none</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
    <MULTI_DRIVER>TRUE</MULTI_DRIVER>
  </test>

  <test NAME="NCK">
    <DESC>multi-instance validation vs single instance (default length)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <REST_OPTION>none</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
  </test>

  <test NAME="NCR">
    <!-- Note that this is untested and may not be working currently -->
    <DESC>multi-instance validation sequential vs concurrent (default length)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <REST_OPTION>none</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
  </test>

  <test NAME="OCP">
    <DESC>pop performance test</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>10</STOP_N>
    <REST_OPTION>none</REST_OPTION>
    <COMP_RUN_BARRIERS>TRUE</COMP_RUN_BARRIERS>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="REP">
    <DESC>reproducibility test: do two runs give the same answers?</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <REST_OPTION>never</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
  </test>

  <test NAME="SBN">
    <DESC>smoke build-namelist test (just run preview_namelist and check_input_data)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <DOUT_S>FALSE</DOUT_S>
  </test>

  <test NAME="SEQ">
    <DESC>sequencing bfb test (10 day seq,conc tests)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <BFBFLAG>TRUE</BFBFLAG>
    <STOP_OPTION>ndays</STOP_OPTION>
    <STOP_N>10</STOP_N>
    <REST_OPTION>never</REST_OPTION>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
  </test>

  <test NAME="SMS">
    <DESC>smoke startup test (default length)</DESC>
    <INFO_DBUG>1</INFO_DBUG>
    <DOUT_S>FALSE</DOUT_S>
    <CONTINUE_RUN>FALSE</CONTINUE_RUN>
    <REST_OPTION>none</REST_OPTION>
    <HIST_OPTION>$STOP_OPTION</HIST_OPTION>
    <HIST_N>$STOP_N</HIST_N>
  </test>

</config_test>
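
Once create_test has generated a test case, the settings defined above become
ordinary variables in that case's env_*.xml files and can be inspected like
any other case variable. A sketch, assuming an ERS test case created by
create_test; the case directory name is hypothetical:

  cd ERS.f19_g16.X.mymachine_mycompiler.mytest
  ./xmlquery STOP_OPTION,STOP_N,REST_OPTION,REST_N
  # for ERS this reports an 11 day run with a restart written after
  # STOP_N / 2 + 1 = 6 days, i.e. the "6 days + 5 days" behavior described above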