pyiron.testing package

Submodules

pyiron.testing.executable module

class pyiron.testing.executable.ExampleExecutable[source]

Bases: object

Simple Python executable that resembles the behavior of a real job, i.e., it processes input files, generates output files and can perform restarts. Mainly used to test the GenericJob class.

get_energy(alat)[source]

Based on the lattice constant, a random energy is calculated.

Parameters:alat (float) – lattice constant
Returns:list of n random energy values, where n equals self._count
Return type:(list)
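
The behavior described above can be sketched in plain Python. This is a minimal stand-in, not the actual pyiron implementation; the `count` and `alpha` parameters are assumed knobs standing in for the executable's internal `self._count` and energy amplitude:

```python
import random

def get_energy(alat, count=10, alpha=0.5):
    # Toy analog of ExampleExecutable.get_energy: the lattice constant
    # biases a list of random energy values. `count` and `alpha` are
    # assumed names, not the real pyiron attributes.
    random.seed(42)  # fixed seed so this sketch is reproducible
    return [alpha * alat ** 2 + random.random() for _ in range(count)]

energies = get_energy(4.05, count=5)
```

Each value is the deterministic term `alpha * alat ** 2` plus a random fluctuation, mimicking the energy noise the real executable produces.
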
read_restart()[source]

Read a restart file for a continuous simulation divided into multiple jobs.

run()[source]

Run executes the job as part of a subprocess: the input is written to an input file, the executable is run, and finally the output file is collected and converted to HDF5 format for further processing.

run_lib(input_dict)[source]

Run lib executes the job directly in Python instead of calling a separate subprocess. This is faster for pure-Python jobs but is not available for non-Python jobs. No input or output files are generated when running in library mode; instead, all input is provided as an input dictionary and the output is returned as a list.

Parameters:input_dict (dict) – input consisting of [“alpha”, “alat”, “count”]
Returns:alat(float), count(int), energy(list)
Return type:list
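
The file-free library mode can be illustrated with a self-contained sketch. The function below is a hypothetical analog of `run_lib`, not the pyiron source; only the input keys `"alpha"`, `"alat"`, `"count"` and the `[alat, count, energy]` return shape come from the documentation above:

```python
import random

def run_lib(input_dict):
    # Toy analog of ExampleExecutable.run_lib: no files are written;
    # input arrives as a dictionary, results come back as a list.
    alpha = input_dict["alpha"]
    alat = input_dict["alat"]
    count = input_dict["count"]
    random.seed(0)  # reproducibility for this sketch only
    energy = [alpha * alat ** 2 + random.random() for _ in range(count)]
    return [alat, count, energy]

alat, count, energy = run_lib({"alpha": 0.5, "alat": 4.0, "count": 3})
```
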
write_restart()[source]

Write a restart file for a continuous simulation divided into multiple jobs.
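
The restart round trip (one job writes its final state, a follow-up job reads it back) can be sketched as follows. The JSON file format and the state keys are assumptions for illustration; the real executable's restart format is not specified here:

```python
import json
import os
import tempfile

def write_restart(path, state):
    # Persist the simulation state so a follow-up job can continue from it.
    with open(path, "w") as f:
        json.dump(state, f)

def read_restart(path):
    # Restore the state written by a previous job in the series.
    with open(path) as f:
        return json.load(f)

tmp = os.path.join(tempfile.mkdtemp(), "restart.json")
write_restart(tmp, {"alat": 4.0, "step": 100})
state = read_restart(tmp)
```
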

pyiron.testing.randomatomistic module

class pyiron.testing.randomatomistic.AtomisticExampleJob(project, job_name)[source]

Bases: pyiron.atomistics.job.atomistic.AtomisticGenericJob, pyiron.testing.randomatomistic.ExampleJob

ExampleJob generating a list of random numbers to simulate energy fluctuations.

Parameters:
  • project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in
  • job_name (str) – name of the job, which has to be unique within the project
.. attribute:: job_name

name of the job, which has to be unique within the project

.. attribute:: status

execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
.. attribute:: job_id

unique id to identify the job in the pyiron database

.. attribute:: parent_id

job id of the predecessor job - the job which was executed before the current one in the current job series

.. attribute:: master_id

job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.

.. attribute:: child_ids

list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master

.. attribute:: project

Project instance the job is located in

.. attribute:: project_hdf5

ProjectHDFio instance which points to the HDF5 file the job is stored in

.. attribute:: job_info_str

short string describing the job by its job_name and job ID - mainly used for logging

.. attribute:: working_directory

working directory the job is executed in - outside the HDF5 file

.. attribute:: path

path to the job as a combination of absolute file system path and path within the HDF5 file.

.. attribute:: version

Version of the Hamiltonian, which is also the version of the executable unless a custom executable is used.

.. attribute:: executable

Executable used to run the job - usually the path to an external executable.

.. attribute:: library_activated

For job types which offer a Python library, pyiron can use the Python library instead of an external executable.

.. attribute:: server

Server object to handle the execution environment for the job.

.. attribute:: queue_id

the ID returned from the queuing system - it is most likely not the same as the job ID.

.. attribute:: logger

logger object to monitor the external execution and internal pyiron warnings.

.. attribute:: restart_file_list

list of files which are used to restart the calculation.

.. attribute:: job_type

Job type object with all the available job types: [‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]

from_hdf(hdf=None, group_name=None)[source]

Restore the ExampleJob object in the HDF5 File

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object - optional
  • group_name (str) – HDF5 subgroup name - optional
get_structure(iteration_step=-1)[source]

Gets the structure from a given iteration step of the simulation (MD/ionic relaxation). For static calculations there is only one ionic iteration step.

Parameters:iteration_step (int) – step for which the structure is requested
Returns:structure at the requested iteration step
Return type:atomistics.structure.atoms.Atoms

to_hdf(hdf=None, group_name=None)[source]

Store the ExampleJob object in the HDF5 File

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object - optional
  • group_name (str) – HDF5 subgroup name - optional
class pyiron.testing.randomatomistic.ExampleInput(input_file_name=None)[source]

Bases: pyiron.base.generic.parameters.GenericParameters

Input class for the ExampleJob based on the GenericParameters class.

Parameters:input_file_name (str) – name of the input file - optional
load_default()[source]

Loading the default settings for the input file.
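
A minimal sketch of what a `load_default`-style method does, assuming a dict-backed parameter store; the class name `ToyInput` and the default keys below are illustrative, not the real `ExampleInput` defaults from pyiron:

```python
class ToyInput(dict):
    # Minimal stand-in for a GenericParameters-backed input class.
    def load_default(self):
        # Populate the store with assumed default settings; the real
        # ExampleInput defaults live in pyiron, not here.
        self.update({"alpha": 0.5, "alat": 4.0, "count": 10})

inp = ToyInput()
inp.load_default()
```
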

class pyiron.testing.randomatomistic.ExampleJob(project, job_name)[source]

Bases: pyiron.base.job.generic.GenericJob

ExampleJob generating a list of random numbers to simulate energy fluctuations.

Parameters:
  • project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in
  • job_name (str) – name of the job, which has to be unique within the project
.. attribute:: job_name

name of the job, which has to be unique within the project

.. attribute:: status

execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
.. attribute:: job_id

unique id to identify the job in the pyiron database

.. attribute:: parent_id

job id of the predecessor job - the job which was executed before the current one in the current job series

.. attribute:: master_id

job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.

.. attribute:: child_ids

list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master

.. attribute:: project

Project instance the job is located in

.. attribute:: project_hdf5

ProjectHDFio instance which points to the HDF5 file the job is stored in

.. attribute:: job_info_str

short string describing the job by its job_name and job ID - mainly used for logging

.. attribute:: working_directory

working directory the job is executed in - outside the HDF5 file

.. attribute:: path

path to the job as a combination of absolute file system path and path within the HDF5 file.

.. attribute:: version

Version of the Hamiltonian, which is also the version of the executable unless a custom executable is used.

.. attribute:: executable

Executable used to run the job - usually the path to an external executable.

.. attribute:: library_activated

For job types which offer a Python library, pyiron can use the Python library instead of an external executable.

.. attribute:: server

Server object to handle the execution environment for the job.

.. attribute:: queue_id

the ID returned from the queuing system - it is most likely not the same as the job ID.

.. attribute:: logger

logger object to monitor the external execution and internal pyiron warnings.

.. attribute:: restart_file_list

list of files which are used to restart the calculation.

.. attribute:: job_type

Job type object with all the available job types: [‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]
collect_logfiles()[source]

Collect the errors from the info.log file and store them in the HDF5 file

collect_output()[source]

Parse the output files of the example job and store the results in the HDF5 File.

collect_output_log(file_name='output.log')[source]

General-purpose routine to extract output from the log file.

Parameters:file_name (str) – name of the log file, defaults to “output.log” - optional
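
The kind of extraction this routine performs can be sketched as a toy log parser. The `"energy <value>"` line format is an assumption made for this example; the real `output.log` layout produced by the example executable is not documented here:

```python
def collect_output_log(lines):
    # Toy parser in the spirit of collect_output_log: pull
    # "energy <value>" entries out of output.log-style text.
    energies = []
    for line in lines:
        parts = line.split()
        if len(parts) == 2 and parts[0] == "energy":
            energies.append(float(parts[1]))
    return energies

log = ["alat 4.0", "energy 8.31", "energy 8.47"]
energies = collect_output_log(log)
```
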
collect_warnings()[source]

Collect the warnings if any were written to the info.log file and store them in the HDF5 file

from_hdf(hdf=None, group_name=None)[source]

Restore the ExampleJob object in the HDF5 File

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object - optional
  • group_name (str) – HDF5 subgroup name - optional
interactive_close()[source]

run_if_interactive()[source]

Run the job as Python library and store the result in the HDF5 File.

Returns:job ID
Return type:int
to_hdf(hdf=None, group_name=None)[source]

Store the ExampleJob object in the HDF5 File

Parameters:
  • hdf (ProjectHDFio) – HDF5 group object - optional
  • group_name (str) – HDF5 subgroup name - optional
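
The store/restore pair above can be sketched with a nested dict standing in for the HDF5 group; treating a `ProjectHDFio` group as a mapping is an assumption of this sketch, and the functions below are hypothetical analogs, not the pyiron methods:

```python
def to_hdf(obj, hdf, group_name):
    # Toy analog of to_hdf: serialize the object's state into a
    # dict-backed "HDF5 group".
    hdf[group_name] = dict(obj)

def from_hdf(hdf, group_name):
    # Toy analog of from_hdf: restore the state from that group.
    return dict(hdf[group_name])

store = {}
to_hdf({"alat": 4.0, "count": 10}, store, "input")
restored = from_hdf(store, "input")
```

The round trip restores exactly what was stored, which is the contract the real `to_hdf`/`from_hdf` pair provides for job objects.
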
write_input()[source]

Call routines that generate the code-specific input files

Module contents