pyiron.base.job package¶
Submodules¶
pyiron.base.job.core module¶
-
class pyiron.base.job.core.DatabaseProperties(job_dict=None)[source]¶ Bases: object
Access the database entry of the job
-
class pyiron.base.job.core.HDF5Content(project_hdf5)[source]¶ Bases: object
Access the HDF5 file of the job
-
class pyiron.base.job.core.JobCore(project, job_name)[source]¶ Bases: pyiron.base.generic.template.PyironObject
The JobCore is the most fundamental pyiron job class. From this class both the GenericJob and the reduced JobPath class are derived. While JobPath only provides access to the HDF5 file, it is about an order of magnitude faster.
Parameters: - project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in
- job_name (str) – name of the job, which has to be unique within the project
-
.. attribute:: job_name
name of the job, which has to be unique within the project
-
.. attribute:: status
execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
-
.. attribute:: job_id
unique id to identify the job in the pyiron database
-
.. attribute:: parent_id
job id of the predecessor job - the job which was executed before the current one in the current job series
-
.. attribute:: master_id
job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.
-
.. attribute:: child_ids
list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master
-
.. attribute:: project
Project instance the job is located in
-
.. attribute:: project_hdf5
ProjectHDFio instance which points to the HDF5 file the job is stored in
-
.. attribute:: job_info_str
short string to describe the job by its job_name and job ID - mainly used for logging
-
.. attribute:: working_directory
working directory the job is executed in - outside the HDF5 file
-
.. attribute:: path
path to the job as a combination of absolute file system path and path within the HDF5 file.
-
check_if_job_exists
(job_name=None, project=None)[source]¶ Check if a job already exists in a specific project.
Parameters: - job_name (str) – Job name (optional)
- project (ProjectHDFio, Project) – Project path (optional)
Returns: True / False
Return type: (bool)
-
child_ids
¶ list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master
Returns: list of child job ids Return type: list
-
compress
(files_to_compress=None)[source]¶ Compress the output files of a job object.
Parameters: files_to_compress (list) –
-
content
¶
-
copy
()[source]¶ Copy the JobCore object which links to the HDF5 file
Returns: New JobCore object pointing to the same HDF5 file Return type: JobCore
-
copy_to
(project, new_database_entry=True, copy_files=True)[source]¶ Copy the content of the job including the HDF5 file to a new location
Parameters: - project (ProjectHDFio) – project to copy the job to
- new_database_entry (bool) – [True/False] to create a new database entry - default True
- copy_files (bool) – [True/False] copy the files inside the working directory - default True
Returns: JobCore object pointing to the new location. Return type: JobCore
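A minimal, hedged sketch of how copy_to() is typically used; the project and job names below are placeholders, and the target is passed as a plain Project object, which is assumed to be accepted in place of a ProjectHDFio:
from pyiron import Project  # assumes a standard pyiron installation

pr = Project("source_project")        # placeholder project names
target = Project("target_project")
job = pr.load("my_job")               # assumes a job named "my_job" already exists

# Copy the job including its HDF5 file and working directory into the target
# project and register a new database entry for the copy.
copied = job.copy_to(project=target, new_database_entry=True, copy_files=True)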
-
database_entry
¶
-
from_hdf
(hdf, group_name='group')[source]¶ Restore object from hdf5 format - The function has to be implemented by the derived classes - usually the GenericJob class
Parameters: - hdf (ProjectHDFio) – Optional hdf5 file, otherwise self is used.
- group_name (str) – Optional hdf5 group in the hdf5 file.
-
get
(name)[source]¶ Internal wrapper function for __getitem__() - self[name]
Parameters: key (str, slice) – path to the data or key of the data object Returns: data or data object Return type: dict, list, float, int
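A short, hedged example of item access; the HDF5 node path used here is typical for atomistic jobs and is only an assumption, not guaranteed to exist for every job type:
# job is an existing, finished JobCore/GenericJob instance (assumed)
energies = job["output/generic/energy_tot"]   # read a node from the HDF5 file (example path)
input_group = job["input"]                    # groups are returned as HDF5 sub-objects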
-
get_from_table
(path, name)[source]¶ Get a specific value from a pandas.Dataframe
Parameters: - path (str) – relative path to the data object
- name (str) – parameter key
Returns: the value associated to the specific parameter key
Return type: dict, list, float, int
-
get_job_id
(job_specifier=None)[source]¶ Get the job_id for the job named job_name in the local project path from the database
Parameters: job_specifier (str, int) – name of the job or job ID Returns: job ID of the job Return type: int
-
get_pandas
(name)[source]¶ Load a dictionary from the HDF5 file and display the dictionary as pandas Dataframe
Parameters: name (str) – HDF5 node name Returns: The dictionary is returned as pandas.Dataframe object Return type: pandas.Dataframe
-
id
¶ Unique id to identify the job in the pyiron database - use self.job_id instead
Returns: job id Return type: int
-
inspect
(job_specifier)[source]¶ Inspect an existing pyiron object - most commonly a job - from the database
Parameters: job_specifier (str, int) – name of the job or job ID Returns: Access to the HDF5 object - not a GenericJob object - use load() instead. Return type: JobCore
-
is_compressed
()[source]¶ Check if the job is already compressed or not.
Returns: [True/False] Return type: bool
-
is_master_id
(job_id)[source]¶ Check if the job ID job_id is the master ID for any child job
Parameters: job_id (int) – job ID of the master job Returns: [True/False] Return type: bool
-
job_id
¶ Unique id to identify the job in the pyiron database
Returns: job id Return type: int
-
job_info_str
¶ Short string to describe the job by its job_name and job ID - mainly used for logging
Returns: job info string Return type: str
-
job_name
¶ Get name of the job, which has to be unique within the project
Returns: job name Return type: str
-
list_all
()[source]¶ List all groups and nodes of the HDF5 file - where groups are equivalent to directories and nodes to files.
Returns: {‘groups’: [list of groups], ‘nodes’: [list of nodes]} Return type: dict
-
list_childs
()[source]¶ List child jobs as JobPath objects - not loading the full GenericJob objects for each child
Returns: list of child jobs Return type: list
-
list_files
()[source]¶ List files inside the working directory
Parameters: extension (str) – filter by a specific extension Returns: list of file names Return type: list
-
list_groups
()[source]¶ Equivalent to os.listdir (consider groups as equivalent to dirs)
Returns: list of groups in pytables for the path self.h5_path Return type: (list)
-
list_nodes
()[source]¶ List all nodes of the HDF5 file
Returns: list of nodes Return type: list
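A hedged sketch contrasting the listing helpers above, assuming an existing job object:
content = job.list_all()      # {'groups': [...], 'nodes': [...]}
groups = job.list_groups()    # groups only, similar to directories
nodes = job.list_nodes()      # nodes only, similar to files
files = job.list_files()      # files in the working directory, outside the HDF5 file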
-
load
(job_specifier, convert_to_object=True)[source]¶ Load an existing pyiron object - most commonly a job - from the database
Parameters: - job_specifier (str, int) – name of the job or job ID
- convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster, but only provides limited functionality. Compare the GenericJob object to the JobCore object.
Returns: Either the full GenericJob object or just a reduced JobCore object Return type: GenericJob, JobCore
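The speed/functionality trade-off described above as a hedged sketch; "other_job" is a placeholder name for a job in the same project:
full_job = job.load("other_job", convert_to_object=True)   # full GenericJob, slower
lean_job = job.inspect("other_job")                        # JobCore with HDF5 access only, faster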
-
load_object
(convert_to_object=True, project=None)[source]¶ Load object to convert a JobPath to a GenericJob object.
Parameters: - convert_to_object (bool) – convert the object to a pyiron object or only access the HDF5 file - default=True. Accessing only the HDF5 file is about an order of magnitude faster, but only provides limited functionality. Compare the GenericJob object to the JobCore object.
- project (ProjectHDFio) – ProjectHDFio to load the object with - optional
Returns: GenericJob or JobCore object, depending on convert_to_object Return type: GenericJob, JobCore
-
master_id
¶ Get job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.
Returns: master id Return type: int
-
move_to
(project)[source]¶ Move the content of the job including the HDF5 file to a new location
Parameters: project (ProjectHDFio) – project to move the job to Returns: JobCore object pointing to the new location. Return type: JobCore
-
name
¶ Get name of the job, which has to be unique within the project
Returns: job name Return type: str
-
parent_id
¶ Get job id of the predecessor job - the job which was executed before the current one in the current job series
Returns: parent id Return type: int
-
path
¶ Absolute path of the HDF5 group starting from the system root - combination of the absolute system path plus the absolute path inside the HDF5 file starting from the root group.
Returns: absolute path Return type: str
-
project
¶ Project instance the job is located in
Returns: project the job is located in Return type: Project
-
project_hdf5
¶ Get the ProjectHDFio instance which points to the HDF5 file the job is stored in
Returns: HDF5 project Return type: ProjectHDFio
-
remove
(_protect_childs=True)[source]¶ Remove the job - this removes the HDF5 file, all data stored in the HDF5 file and the corresponding database entry.
Parameters: _protect_childs (bool) – [True/False] by default child jobs cannot be deleted, to maintain consistency - default=True
-
remove_child
()[source]¶ Internal function to remove the job together with its child jobs. Never use this command directly, since it will destroy the integrity of your project.
-
rename
(new_job_name)[source]¶ Rename the job - by changing the job name
Parameters: new_job_name (str) – new job name
-
save
()[source]¶ The save function has to be implemented by the derived classes - usually the GenericJob class
-
status
¶ Execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
Returns: status Return type: (str)
-
to_hdf
(hdf, group_name='group')[source]¶ Store object in hdf5 format - The function has to be implemented by the derived classes - usually the GenericJob class
Parameters: - hdf (ProjectHDFio) – Optional hdf5 file, otherwise self is used.
- group_name (str) – Optional hdf5 group in the hdf5 file.
-
to_object
(object_type=None, **qwargs)[source]¶ Load the full pyiron object from an HDF5 file
Parameters: - object_type – if the ‘TYPE’ node is not available in the HDF5 file a manual object type can be set - optional
- **qwargs – optional parameters [‘job_name’, ‘project’] - to specify the location of the HDF5 path
Returns: pyiron object
Return type:
-
working_directory
¶ Working directory the job is executed in - outside the HDF5 file
Returns: working directory Return type: str
pyiron.base.job.executable module¶
-
class pyiron.base.job.executable.Executable(path_binary_codes, codename=None, module=None, code=None, overwrite_nt_flag=False)[source]¶ Bases: object
-
available_versions
¶ List all available executables in the path_binary_codes for the specified codename.
Returns: list of the available versions Return type: list
-
executable_path
¶ Get the executable path
Returns: absolute path Return type: str
-
mpi
¶ Check if the message passing interface (MPI) is activated.
Returns: [True/False] Return type: bool
-
version
¶ Version of the Executable
Returns: version Return type: str
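A hedged sketch of the Executable wrapper based on the signature above; the resource path and codename are placeholders and have to match the local pyiron resource layout:
from pyiron.base.job.executable import Executable

exe = Executable(path_binary_codes=["/path/to/pyiron/resources/bin"],  # placeholder resource path
                 codename="lammps")                                    # placeholder code name
print(exe.available_versions)   # executables found for the codename
print(exe.version)              # currently selected version
print(exe.executable_path)      # absolute path of the selected run script/binary
print(exe.mpi)                  # True if an MPI-parallel version is selected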
-
pyiron.base.job.generic module¶
-
class pyiron.base.job.generic.GenericJob(project, job_name)[source]¶ Bases: pyiron.base.job.core.JobCore
The GenericJob class extends the JobCore class with all the functionality to run the job object. From this class all specific Hamiltonians are derived. Therefore it should contain the properties/routines common to all jobs. The functions in this module should be as generic as possible.
Parameters: - project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in
- job_name (str) – name of the job, which has to be unique within the project
-
.. attribute:: job_name
name of the job, which has to be unique within the project
-
.. attribute:: status
execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
-
.. attribute:: job_id
unique id to identify the job in the pyiron database
-
.. attribute:: parent_id
job id of the predecessor job - the job which was executed before the current one in the current job series
-
.. attribute:: master_id
job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.
-
.. attribute:: child_ids
list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master
-
.. attribute:: project
Project instance the job is located in
-
.. attribute:: project_hdf5
ProjectHDFio instance which points to the HDF5 file the job is stored in
-
.. attribute:: job_info_str
short string to describe the job by its job_name and job ID - mainly used for logging
-
.. attribute:: working_directory
working directory the job is executed in - outside the HDF5 file
-
.. attribute:: path
path to the job as a combination of absolute file system path and path within the HDF5 file.
-
.. attribute:: version
Version of the hamiltonian, which is also the version of the executable unless a custom executable is used.
-
.. attribute:: executable
Executable used to run the job - usually the path to an external executable.
-
.. attribute:: library_activated
For job types which offer a Python library pyiron can use the python library instead of an external executable.
-
.. attribute:: server
Server object to handle the execution environment for the job.
-
.. attribute:: queue_id
the ID returned from the queuing system - it is most likely not the same as the job ID.
-
.. attribute:: logger
logger object to monitor the external execution and internal pyiron warnings.
-
.. attribute:: restart_file_list
list of files which are used to restart the calculation from these files.
-
.. attribute:: job_type
Job type object with all the available job types: [‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]
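A minimal, hedged sketch of the typical job life cycle built from members documented on this page; it assumes the ExampleJob type is available and uses placeholder project/job names:
from pyiron import Project

pr = Project("demo_project")
job = pr.create_job("ExampleJob", "toy_job")   # (job_type, job_name), see create_job() below
job.run()                                      # modal execution by default
print(job.status)                              # e.g. 'finished'
print(job.list_all())                          # inspect what was written to the HDF5 file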
-
append
(job)[source]¶ Metajobs like GenericMaster, ParallelMaster, SerialMaster or ListMaster allow other jobs to be appended. In the GenericJob definition this is only a template function.
-
clear_job
()[source]¶ Convenience function to clear job info after suspend. Mimics deletion of all the job info after suspend in a local test environment.
-
collect_logfiles
()[source]¶ Collect the log files of the external executable and store the information in the HDF5 file. This method has to be implemented in the individual hamiltonians.
-
collect_output
()[source]¶ Collect the output files of the external executable and store the information in the HDF5 file. This method has to be implemented in the individual hamiltonians.
-
convergence_check
()[source]¶ Validate the convergence of the calculation.
Returns: If the calculation is converged Return type: (bool)
-
copy
()[source]¶ Copy the GenericJob object which links to the job and its HDF5 file
Returns: New GenericJob object pointing to the same job Return type: GenericJob
-
copy_file_to_working_directory
(file)[source]¶ Copy a specific file to the working directory before the job is executed.
Parameters: file (str) – path of the file to be copied.
-
copy_template
(project, new_job_name=None)[source]¶ Copy the content of the job including the HDF5 file but without the output data to a new location
Parameters: - project (ProjectHDFio) – project to copy the job to
- new_job_name (str) – to duplicate the job within the same project it is necessary to modify the job name - optional
Returns: GenericJob object pointing to the new location. Return type: GenericJob
-
copy_to
(project=None, new_job_name=None, input_only=False, new_database_entry=True)[source]¶ Copy the content of the job including the HDF5 file to a new location
Parameters: - project (ProjectHDFio) – project to copy the job to
- new_job_name (str) – to duplicate the job within the same project it is necessary to modify the job name - optional
- input_only (bool) – [True/False] to copy only the input - default False
- new_database_entry (bool) – [True/False] to create a new database entry - default True
Returns: GenericJob object pointing to the new location. Return type: GenericJob
-
create_job
(job_type, job_name)[source]¶ Create one of the following jobs: ‘StructureContainer’, ‘StructurePipeline’, ‘AtomisticExampleJob’ (example job just generating random numbers), ‘ExampleJob’ (example job just generating random numbers), ‘Lammps’, ‘KMC’, ‘Sphinx’, ‘Vasp’, ‘GenericMaster’, ‘SerialMaster’ (series of jobs run in serial), ‘AtomisticSerialMaster’, ‘ParallelMaster’ (series of jobs run in parallel), ‘KmcMaster’, ‘ThermoLambdaMaster’, ‘RandomSeedMaster’, ‘MeamFit’, ‘Murnaghan’, ‘MinimizeMurnaghan’, ‘ElasticMatrix’, ‘ConvergenceVolume’, ‘ConvergenceEncutParallel’, ‘ConvergenceKpointParallel’, ‘PhonopyMaster’, ‘DefectFormationEnergy’, ‘LammpsASE’, ‘PipelineMaster’, ‘TransformationPath’, ‘ThermoIntEamQh’, ‘ThermoIntDftEam’, ‘ScriptJob’ (Python script or Jupyter notebook job container), ‘ListMaster’ (list of jobs).
Parameters: - job_type (str) – job type can be [‘StructureContainer’, ‘StructurePipeline’, ‘AtomisticExampleJob’, ‘ExampleJob’, ‘Lammps’, ‘KMC’, ‘Sphinx’, ‘Vasp’, ‘GenericMaster’, ‘SerialMaster’, ‘AtomisticSerialMaster’, ‘ParallelMaster’, ‘KmcMaster’, ‘ThermoLambdaMaster’, ‘RandomSeedMaster’, ‘MeamFit’, ‘Murnaghan’, ‘MinimizeMurnaghan’, ‘ElasticMatrix’, ‘ConvergenceVolume’, ‘ConvergenceEncutParallel’, ‘ConvergenceKpointParallel’, ’PhonopyMaster’, ‘DefectFormationEnergy’, ‘LammpsASE’, ‘PipelineMaster’, ’TransformationPath’, ‘ThermoIntEamQh’, ‘ThermoIntDftEam’, ‘ScriptJob’, ‘ListMaster’]
- job_name (str) – name of the job
Returns: job object depending on the job_type selected
Return type:
-
db_entry
()[source]¶ Generate the initial database entry for the current GenericJob
Returns: database dictionary {“username”, “projectpath”, “project”, “job”, “subjob”, “hamversion”, “hamilton”, “status”, “computer”, “timestart”, “masterid”, “parentid”}
Return type: (dict)
-
executable
¶ Get the executable used to run the job - usually the path to an external executable.
Returns: executable path Return type: (str)
-
from_hdf
(hdf=None, group_name=None)[source]¶ Restore the GenericJob from an HDF5 file
Parameters: - hdf (ProjectHDFio) – HDF5 group object - optional
- group_name (str) – HDF5 subgroup name - optional
-
job_file_name
(file_name, cwd=None)[source]¶ combine the file name file_name with the path of the current working directory
Parameters: - file_name (str) – name of the file
- cwd (str) – current working directory - this overwrites self.project_hdf5.working_directory - optional
Returns: absolute path to the file in the current working directory
Return type: str
-
job_type
¶ Job type object with all the available job types: [‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]
Returns: Job type object Return type: JobTypeChoice
-
logger
¶ Get the logger object to monitor the external execution and internal pyiron warnings.
Returns: logger object Return type: logging.getLogger()
-
queue_id
¶ Get the queue ID, the ID returned from the queuing system - it is most likely not the same as the job ID.
Returns: queue ID Return type: int
-
refresh_job_status
()[source]¶ Refresh job status by updating the job status with the status from the database if a job ID is available.
-
remove_child
()[source]¶ Internal function to remove the job together with its child jobs. Never use this command directly, since it will destroy the integrity of your project.
-
reset_job_id
(job_id=None)[source]¶ Reset the job id - sets the job_id to None in the GenericJob as well as in all connected modules like the JobStatus.
-
restart
(snapshot=-1, job_name=None, job_type=None)[source]¶ Create a restart calculation from the current calculation - in the GenericJob this is the same as create_job(). A restart is only possible after the current job has finished. If you want to run the same job again with different input parameters use job.run(run_again=True) instead.
Parameters: - snapshot (int) – time step from which to restart the calculation - default=-1 - the last time step
- job_name (str) – job name of the new calculation - default=<job_name>_restart
- job_type (str) – job type of the new calculation - default is the same type as the existing calculation
Returns: the new job object for the restart calculation
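A hedged sketch of restarting a finished calculation; job is assumed to be a finished job object and the new name is a placeholder (per the defaults above it would otherwise be <job_name>_restart):
restart_job = job.restart(snapshot=-1, job_name="my_job_restart")
restart_job.run()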
-
restart_file_dict
¶ A dictionary of the new name of the copied restart files
-
restart_file_list
¶ Get the list of files which are used to restart the calculation from these files.
Returns: list of files Return type: list
-
run
(run_again=False, repair=False, debug=False, run_mode=None)[source]¶ This is the main run function, depending on the job status [‘initialized’, ‘created’, ‘submitted’, ‘running’, ‘collect’,’finished’, ‘refresh’, ‘suspended’] the corresponding run mode is chosen.
Parameters: - run_again (bool) – Delete the existing job and run the simulation again.
- repair (bool) – Set the job status to created and run the simulation again.
- debug (bool) – Debug Mode - defines the log level of the subprocess the job is executed in.
- run_mode (str) – [‘modal’, ‘non_modal’, ‘queue’, ‘manual’] overwrites self.server.run_mode
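A hedged illustration of the run() arguments listed above; in practice only one of these calls would be used for a given job, and some require the corresponding setup (e.g. a configured queuing system):
job.run()                        # use the run mode configured on job.server
job.run(run_mode="non_modal")    # execute in the background via subprocess.Popen()
job.run(run_mode="queue")        # submit to the queuing system
job.run(run_again=True)          # delete the existing results and run the job again
job.run(repair=True)             # reset the status to 'created' and run again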
-
run_if_interactive
()[source]¶ For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job.
-
run_if_interactive_non_modal
()[source]¶ For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job.
-
run_if_manually
(_manually_print=True)[source]¶ The run if manually function is called by run if the user decides to execute the simulation manually - this might be helpful to debug a new job type or test updated executables.
Parameters: _manually_print (bool) – Print explanation how to run the simulation manually - default=True.
-
run_if_modal
()[source]¶ The run if modal function is called by run to execute the simulation, while waiting for the output. For this we use subprocess.check_output()
-
run_if_non_modal
()[source]¶ The run if non modal function is called by run to execute the simulation in the background. For this we use subprocess.Popen()
-
run_if_scheduler
()[source]¶ The run if queue function is called by run if the user decides to submit the job to a queuing system. The job is submitted to the queuing system using subprocess.Popen()
Returns: Returns the queue ID for the job. Return type: int
-
save
()[source]¶ Save the object, by writing the content to the HDF5 file and storing an entry in the database.
Returns: Job ID stored in the database Return type: (int)
-
send_to_database
()[source]¶ If the job should be stored in an external/public database this could be implemented here, but currently it is just a placeholder.
-
server
¶ Get the server object to handle the execution environment for the job.
Returns: server object Return type: Server
-
suspend
()[source]¶ Suspend the job by storing the object and its state persistently in the HDF5 file and exiting.
-
to_hdf
(hdf=None, group_name=None)[source]¶ Store the GenericJob in an HDF5 file
Parameters: - hdf (ProjectHDFio) – HDF5 group object - optional
- group_name (str) – HDF5 subgroup name - optional
-
update_master
()[source]¶ After a job is finished it checks whether it is linked to any metajob - meaning the master ID is pointing to this job's job ID. If this is the case and the master job is in status suspended, the child wakes up the master job, sets its status to refresh and executes run on the master job. During the execution the master job is set to status refresh. If another child calls update_master while the master is in refresh, the status of the master is set to busy, and if the master is in status busy at the end of the update_master process another update is triggered.
-
validate_ready_to_run
()[source]¶ Validate that the calculation is ready to be executed. By default no generic checks are performed, but one could check that the input information is complete or validate the consistency of the input at this point.
-
version
¶ Get the version of the hamiltonian, which is also the version of the executable unless a custom executable is used.
Returns: version number Return type: str
-
working_directory
¶ Get the working directory the job is executed in - outside the HDF5 file. The working directory equals the path, but it is represented by the filesystem: /absolute/path/to/the/file.h5/path/inside/the/hdf5/file becomes /absolute/path/to/the/file_hdf5/path/inside/the/hdf5/file
Returns: absolute path to the working directory Return type: str
pyiron.base.job.interactive module¶
-
class pyiron.base.job.interactive.InteractiveBase(project, job_name)[source]¶ Bases: pyiron.base.job.generic.GenericJob
The InteractiveBase class extends the GenericJob class with all the functionality to run the job object interactively. From this class all interactive Hamiltonians are derived. Therefore it should contain the properties/routines common to all interactive jobs. The functions in this module should be as generic as possible.
Parameters: - project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in
- job_name (str) – name of the job, which has to be unique within the project
-
.. attribute:: job_name
name of the job, which has to be unique within the project
-
.. attribute:: status
execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
-
.. attribute:: job_id
unique id to identify the job in the pyiron database
-
.. attribute:: parent_id
job id of the predecessor job - the job which was executed before the current one in the current job series
-
.. attribute:: master_id
job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.
-
.. attribute:: child_ids
list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master
-
.. attribute:: project
Project instance the job is located in
-
.. attribute:: project_hdf5
ProjectHDFio instance which points to the HDF5 file the job is stored in
-
.. attribute:: job_info_str
short string to describe the job by its job_name and job ID - mainly used for logging
-
.. attribute:: working_directory
working directory the job is executed in - outside the HDF5 file
-
.. attribute:: path
path to the job as a combination of absolute file system path and path within the HDF5 file.
-
.. attribute:: version
Version of the hamiltonian, which is also the version of the executable unless a custom executable is used.
-
.. attribute:: executable
Executable used to run the job - usually the path to an external executable.
-
.. attribute:: library_activated
For job types which offer a Python library pyiron can use the python library instead of an external executable.
-
.. attribute:: server
Server object to handle the execution environment for the job.
-
.. attribute:: queue_id
the ID returned from the queuing system - it is most likely not the same as the job ID.
-
.. attribute:: logger
logger object to monitor the external execution and internal pyiron warnings.
-
.. attribute:: restart_file_list
list of files which are used to restart the calculation from these files.
-
.. attribute:: job_type
Job type object with all the available job types: [‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]
-
from_hdf
(hdf=None, group_name=None)[source]¶ Restore the InteractiveBase object in the HDF5 File
Parameters: - hdf (ProjectHDFio) – HDF5 group object - optional
- group_name (str) – HDF5 subgroup name - optional
-
interactive_flush
(path='interactive', include_last_step=False)[source]¶ Parameters: - path –
- include_last_step –
Returns:
-
interactive_flush_frequency
¶
-
interactive_write_frequency
¶
-
run_if_interactive
()[source]¶ For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job.
-
run_if_interactive_non_modal
()[source]¶ For jobs whose executables are available as a Python library, the job can be executed with a library call instead of calling an external executable. This is usually faster than a single-core Python job.
-
to_hdf
(hdf=None, group_name=None)[source]¶ Store the InteractiveBase object in the HDF5 File
Parameters: - hdf (ProjectHDFio) – HDF5 group object - optional
- group_name (str) – HDF5 subgroup name - optional
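A hedged sketch of the interactive bookkeeping exposed above; assigning to the two frequency properties is assumed to be supported, and their semantics (how often cached interactive output is flushed/written to the HDF5 file) are inferred from their names:
job.interactive_flush_frequency = 10   # assumed: flush cached output every 10 interactive steps
job.interactive_write_frequency = 1    # assumed: record output after every interactive step
job.interactive_flush(path="interactive", include_last_step=True)  # explicit flush, per the signature above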
pyiron.base.job.jobstatus module¶
-
class pyiron.base.job.jobstatus.JobStatus(initial_status='initialized', db=None, job_id=None)[source]¶ Bases: object
The JobStatus object handles the different states a job could have. The available states are:
- initialized: The object for the corresponding job was just created.
- appended: The job was appended to a master job.
- created: The files required for the simulation were written to the hard disk.
- submitted: The job was submitted to the job scheduler and is waiting to be executed.
- running: The job is currently executed.
- aborted: The job failed to execute.
- collect: The job finished successfully and the written files are being collected.
- suspended: The job was set to sleep, waiting until other related jobs are finished, before it continues.
- refresh: The job was suspended before and is currently checking if there are new tasks it can execute.
- busy: The job is refreshing, but during the refresh more related jobs finished, so another refresh is necessary.
- finished: The job and all connected sub jobs are finished.
Parameters: - initial_status (str) – If no initial status is provided the status is set to ‘initialized’
- db (DatabaseAccess) – The database which is responsible for this job.
- job_id (int) – job ID
-
.. attribute:: database
the database which is responsible for this job.
-
.. attribute:: job_id
Job ID
-
.. attribute:: string
job status as string
-
database
¶ Get the database which is responsible for this job. If no database is linked it returns None.
Returns: The database which is responsible for this job. Return type: DatabaseAccess
-
job_id
¶ Get the job id of the job this JobStatus is associated with.
Returns: job id Return type: int
-
refresh_status
()[source]¶ Refresh the job status - check if the database and job_id are set and if this is the case load the job status from the database.
-
string
¶ Get the current status as a string - one of the states described in the class documentation above.
Returns: status [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished] Return type: (str)
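A small, hedged example of the JobStatus object in isolation, i.e. without a linked database:
from pyiron.base.job.jobstatus import JobStatus  # module path per this page

status = JobStatus(initial_status="created")   # defaults to 'initialized' when omitted
print(status.string)                           # 'created'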
pyiron.base.job.jobtype module¶
-
class pyiron.base.job.jobtype.JobType[source]¶ Bases: object
The JobType class creates a new object of a given class type.
-
class pyiron.base.job.jobtype.JobTypeChoice[source]¶ Bases: object
Helper class to choose the job type directly from the project, autocompletion is enabled by overwriting the __dir__() function.
-
job_class_dict
¶
-
class pyiron.base.job.jobtype.Singleton[source]¶ Bases: type
Implemented with suggestions from
http://stackoverflow.com/questions/6760685/creating-a-singleton-in-python
pyiron.base.job.path module¶
-
class pyiron.base.job.path.JobPath(db, job_id=None, db_entry=None, user=None)[source]¶ Bases: pyiron.base.job.core.JobCore
The JobPath class is derived from the JobCore and is used as a lean version of the GenericJob class. Instead of loading the full pyiron object, the JobPath class only provides access to the HDF5 file, which should be enough for most analyses.
Parameters: - db (DatabaseAccess) – database object
- job_id (int) – Job ID - optional, but either a job ID or a database entry db_entry has to be provided.
- db_entry (dict) – database entry {“job”:, “subjob”:, “projectpath”:, “project”:, “hamilton”:, “hamversion”:, “status”:} and optional entries are {“id”:, “masterid”:, “parentid”:}
- user (str) – current unix/linux/windows user who is running pyiron
-
.. attribute:: job_name
name of the job, which has to be unique within the project
-
.. attribute:: status
execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
-
.. attribute:: job_id
unique id to identify the job in the pyiron database
-
.. attribute:: parent_id
job id of the predecessor job - the job which was executed before the current one in the current job series
-
.. attribute:: master_id
job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.
-
.. attribute:: child_ids
list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master
-
.. attribute:: project
Project instance the job is located in
-
.. attribute:: project_hdf5
ProjectHDFio instance which points to the HDF5 file the job is stored in
-
.. attribute:: job_info_str
short string to describe the job by its job_name and job ID - mainly used for logging
-
.. attribute:: working_directory
working directory the job is executed in - outside the HDF5 file
-
.. attribute:: path
path to the job as a combination of absolute file system path and path within the HDF5 file.
-
.. attribute:: is_root
boolean if the HDF5 object is located at the root level of the HDF5 file
-
.. attribute:: is_open
boolean if the HDF5 file is currently opened - if an active file handler exists
-
.. attribute:: is_empty
boolean if the HDF5 file is empty
-
.. attribute:: base_name
name of the HDF5 file but without any file extension
-
.. attribute:: file_path
directory where the HDF5 file is located
-
.. attribute:: h5_path
path inside the HDF5 file - also stored as absolute path
-
base_name
¶ Name of the HDF5 file - but without the file extension .h5
Returns: file name without the file extension Return type: str
-
create_group
(name)[source]¶ Create an HDF5 group - similar to a folder in the filesystem - the HDF5 groups allow the users to structure their data.
Parameters: name (str) – name of the HDF5 group Returns: FileHDFio object pointing to the new group Return type: FileHDFio
-
file_path
¶ Path where the HDF5 file is located - posixpath.dirname()
Returns: HDF5 file location Return type: str
-
groups
()[source]¶ Filter HDF5 file by groups
Returns: an HDF5 file which is filtered by groups Return type: FileHDFio
-
h5_path
¶ Get the path in the HDF5 file starting from the root group - meaning this path starts with ‘/’
Returns: HDF5 path Return type: str
-
is_empty
¶ Check if the HDF5 file is empty
Returns: [True/False] Return type: bool
-
is_root
¶ Check if the current h5_path is pointing to the HDF5 root group.
Returns: [True/False] Return type: bool
-
items
()[source]¶ List all keys and values as items of all groups and nodes of the HDF5 file
Returns: list of sets (key, value) Return type: list
-
keys
()[source]¶ List all groups and nodes of the HDF5 file - where groups are equivalent to directories and nodes to files.
Returns: all groups and nodes Return type: list
-
list_dirs
()[source]¶ Equivalent to os.listdir (consider groups as equivalent to dirs)
Returns: list of groups in pytables for the path self.h5_path Return type: (list)
-
listdirs
()[source]¶ Equivalent to os.listdir (consider groups as equivalent to dirs)
Returns: list of groups in pytables for the path self.h5_path Return type: (list)
-
nodes
()[source]¶ Filter HDF5 file by nodes
Returns: an HDF5 file which is filtered by nodes Return type: FileHDFio
-
open
(h5_rel_path)[source]¶ Create an HDF5 group and enter this specific group. If the group exists in the HDF5 path only the h5_path is set correspondingly otherwise the group is created first.
Parameters: h5_rel_path (str) – relative path from the current HDF5 path - h5_path - to the new group Returns: FileHDFio object pointing to the new group Return type: FileHDFio
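A hedged sketch of navigating the HDF5 file through such a lean job object; it assumes inspect() (documented for JobCore above) returns a JobPath-like object and that a job named "my_job" already exists:
from pyiron import Project

pr = Project("demo_project")
lean = pr.inspect("my_job")     # lightweight HDF5 access, no full GenericJob
print(lean.h5_path)             # path inside the HDF5 file, starting at '/'
print(lean.list_dirs())         # groups at the current level
print(lean.list_all())          # {'groups': [...], 'nodes': [...]}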
pyiron.base.job.script module¶
-
class pyiron.base.job.script.ScriptJob(project, job_name)[source]¶ Bases: pyiron.base.job.generic.GenericJob
The ScriptJob class allows submitting Python scripts and Jupyter notebooks to the pyiron job management system.
Parameters: - project (ProjectHDFio) – ProjectHDFio instance which points to the HDF5 file the job is stored in
- job_name (str) – name of the job, which has to be unique within the project
-
.. attribute:: job_name
name of the job, which has to be unique within the project
-
.. attribute:: status
execution status of the job, can be one of the following: [initialized, appended, created, submitted, running, aborted, collect, suspended, refresh, busy, finished]
-
.. attribute:: job_id
unique id to identify the job in the pyiron database
-
.. attribute:: parent_id
job id of the predecessor job - the job which was executed before the current one in the current job series
-
.. attribute:: master_id
job id of the master job - a meta job which groups a series of jobs, which are executed either in parallel or in serial.
-
.. attribute:: child_ids
list of child job ids - only meta jobs have child jobs - jobs which list the meta job as their master
-
.. attribute:: project
Project instance the job is located in
-
.. attribute:: project_hdf5
ProjectHDFio instance which points to the HDF5 file the job is stored in
-
.. attribute:: job_info_str
short string to describe the job by its job_name and job ID - mainly used for logging
-
.. attribute:: working_directory
working directory the job is executed in - outside the HDF5 file
-
.. attribute:: path
path to the job as a combination of absolute file system path and path within the HDF5 file.
-
.. attribute:: version
Version of the hamiltonian, which is also the version of the executable unless a custom executable is used.
-
.. attribute:: executable
Executable used to run the job - usually the path to an external executable.
-
.. attribute:: library_activated
For job types which offer a Python library pyiron can use the python library instead of an external executable.
-
.. attribute:: server
Server object to handle the execution environment for the job.
-
.. attribute:: queue_id
the ID returned from the queuing system - it is most likely not the same as the job ID.
-
.. attribute:: logger
logger object to monitor the external execution and internal pyiron warnings.
-
.. attribute:: restart_file_list
list of files which are used to restart the calculation from these files.
-
.. attribute:: job_type
Job type object with all the available job types: [‘ExampleJob’, ‘SerialMaster’, ‘ParallelMaster’, ‘ScriptJob’, ‘ListMaster’]
-
.. attribute:: script_path
the absolute path to the python script
-
collect_output
()[source]¶ The collect output function updates the master ID entries for all the child jobs created by this script job; if a child job is already assigned to a master job nothing happens - master IDs are not overwritten.
-
from_hdf
(hdf=None, group_name=None)[source]¶ Restore the ScriptJob from an HDF5 file
Parameters: - hdf (ProjectHDFio) – HDF5 group object - optional
- group_name (str) – HDF5 subgroup name - optional
-
script_path
¶ Python script path
Returns: absolute path to the python script Return type: str
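A hedged usage sketch; the project name, job name and script path are placeholders, and the script path must point to an existing Python script or Jupyter notebook:
from pyiron import Project

pr = Project("script_demo")
script_job = pr.create_job("ScriptJob", "my_script_job")
script_job.script_path = "/path/to/analysis_script.py"   # placeholder path
script_job.run()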
-
to_hdf
(hdf=None, group_name=None)[source]¶ Store the ScriptJob in an HDF5 file
Parameters: - hdf (ProjectHDFio) – HDF5 group object - optional
- group_name (str) – HDF5 subgroup name - optional
pyiron.base.job.wrapper module¶
-
class pyiron.base.job.wrapper.JobWrapper(working_directory, job_id, debug=False)[source]¶ Bases: object
The job wrapper is called from the run_job.py script; it restores the job from HDF5 and executes it.
Parameters: - working_directory (str) – working directory of the job
- job_id (int) – job ID
- debug (bool) – enable debug mode [True/False] (optional)
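A hedged sketch of what the run_job.py helper effectively does; the job ID and working directory are placeholders, and the run() call is an assumption since the execution entry point is not documented on this page:
from pyiron.base.job.wrapper import JobWrapper  # module path per this page

wrapper = JobWrapper(working_directory="/path/to/job_wd", job_id=123, debug=False)
wrapper.run()   # assumed entry point that restores the job from HDF5 and executes it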