GaussView Job Submission

On Greenplanet, GaussView can submit jobs directly through the Slurm queuing system. Once you have signed the Gaussian Confidentiality Agreement, you will be added to the "gaussian" auxiliary group.

To load the Gaussian environment, type this into the terminal:

ml gaussian

To then start GaussView, enter:

gv &
after which a few windows should pop up on your local X11 display. The "&" runs GaussView in the background, so you can continue to use the terminal.

Most resource parameters (memory, number of processors, etc.) can be specified by filling in the relevant fields in the GUI, but the time limit must be handled specially. The only free-form text area is the "title" line, so in addition to your title, add your job's time limit there in the format "timelimit=days-hours:minutes:seconds". If you do not specify a time limit, your job will be terminated after 3 hours.
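For example, an input file whose title line requests a limit of 1 day and 12 hours might look like the following fragment (the route section and resource values here are placeholders, not a recommendation):

```
%mem=8GB
%nprocshared=16
#p b3lyp/6-31g(d) opt

water optimization timelimit=1-12:00:00

0 1
...
```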

Our Gaussian license does not cover running a single job across multiple nodes, only multiple processors on a single node.
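In Slurm terms, this means the generated batch script always requests exactly one node. The relevant header lines look something like this (illustrative values, not the exact contents of the Greenplanet template):

```shell
#SBATCH --nodes=1              # Gaussian license: single node only
#SBATCH --ntasks-per-node=16   # should match %nprocshared in the input
#SBATCH --mem=64G              # should cover %mem plus some overhead
#SBATCH --time=1-12:00:00      # from timelimit= in the title line
```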

Once you have prepared your calculation in GaussView and click "Submit", the following happens:

  1. GV asks you to save the input command file (jobname.gjf for g16, jobname.com for g09). This should ideally go in its own directory, though other jobs running in the same directory will not interfere as long as they have different names.
  2. GV runs a script that scans jobname.gjf for resource settings such as memory, disk, and timelimit=days-hours:minutes:seconds (or mf_medium, mf_long, etc., from the old method).
  3. GV copies the template slurm script /sopt/Gaussian/16/run_slurm.g16 to the directory that contains jobname.gjf, then edits it to include the info from step 2. The slurm script gets renamed to "run_slurm.jobname" to keep jobs separate.
  4. GV then submits the job to slurm with the command "sbatch run_slurm.jobname".
  5. Slurm looks at all the lines in run_slurm.jobname that start with #SBATCH to see what CPU/Memory/Disk/Timelimit resources are needed.
  6. When a node with those resources is ready, slurm will copy the entire directory that contains run_slurm.jobname and jobname.gjf (the submission directory) to a scratch disk on that node, then run the commands in run_slurm.jobname. For Gaussian, this is basically "g16 jobname.gjf".
  7. When the Gaussian calculation finishes normally, commands in run_slurm.jobname copy the files in the scratch directory back to the submission directory.

    Note: If the job is cancelled before finishing normally (over the time limit, out of memory, or other errors), the final commands in run_slurm.jobname will not run. A separate cleanup script then runs and tries to move everything back to the submission directory.
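The scan in step 2 can be sketched roughly as follows. This is a minimal illustration, not the actual script shipped in /sopt/Gaussian/16/; the function name and the default value are assumptions (the 3-hour default matches the limit described above):

```shell
#!/bin/sh
# Sketch: pull a "timelimit=D-HH:MM:SS" token out of a Gaussian input file.
# extract_timelimit is a hypothetical helper, not part of the real tooling.

extract_timelimit() {
    # Print the first timelimit= value found, or a 3-hour default if absent.
    limit=$(grep -o 'timelimit=[0-9:-]*' "$1" | head -n1 | cut -d= -f2)
    echo "${limit:-0-03:00:00}"
}

# Demo with a throwaway input file.
tmp=$(mktemp)
printf '%%mem=4GB\n%%nprocshared=8\n#p b3lyp/6-31g(d) opt\n\nwater opt timelimit=1-12:00:00\n\n0 1\n' > "$tmp"
extract_timelimit "$tmp"   # prints 1-12:00:00
rm -f "$tmp"
```

The extracted value can be dropped straight into an "#SBATCH --time=" line, since Slurm accepts the same days-hours:minutes:seconds format.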