Describe the problem
When integrating Deadline Cloud job submission into an existing pipeline, the pipeline code must be able to access the same data the Submitter is collecting. For example, pipeline code may want to validate that the correct render settings are used, or that the frame range matches the shot defined in project tracking. It might also want to inject something into the submitter, such as automatically setting the fleet or queue for a specific project context. Right now, the pipeline code would have to replace large chunks of the submitter logic, essentially creating the job bundle itself; that would lead to unnecessary code duplication, making the code difficult to maintain.
Proposed Solution
While I am creating this issue against the Maya Submitter, this concept should be available in the other submitters as well.
There are two parts to the solution:
Exposing functions in a public library
Expose the functions currently used in `on_create_job_bundle_callback()` and elsewhere, such as getting and processing render layers, cameras, etc. This could mean either providing callbacks to retrieve the exact data that will be processed into the job, or at least the ability to call the same logic, to achieve functional parity between the Submitter and pipeline code. This function library interface could be shared between all submitters: in the end, almost all DCCs use similar concepts (with a few exceptions and different naming, but this could help to standardize it across the repositories). A function like `get_render_layers()` could return render layers in Maya or ROPs in Houdini, for example.
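To illustrate the idea, here is a minimal sketch of what such a shared interface could look like. All names here (`SubmitterSceneInterface`, `FakeMayaScene`, `collect_renderable_items`) are hypothetical, not existing submitter API; a real Maya implementation would call `maya.cmds` instead of returning static data.

```python
from typing import Protocol


class SubmitterSceneInterface(Protocol):
    """Hypothetical shared interface each DCC submitter could implement."""

    def get_render_layers(self) -> list[str]: ...
    def get_cameras(self) -> list[str]: ...


class FakeMayaScene:
    """Stand-in for a Maya-backed implementation (would query maya.cmds)."""

    def get_render_layers(self) -> list[str]:
        return ["defaultRenderLayer", "beautyLayer"]

    def get_cameras(self) -> list[str]:
        return ["perspShape", "shotCamShape"]


def collect_renderable_items(scene: SubmitterSceneInterface) -> dict:
    # Both the submitter UI and pipeline code call the same logic,
    # so neither has to duplicate the scene-collection code.
    return {
        "layers": scene.get_render_layers(),
        "cameras": scene.get_cameras(),
    }
```

A Houdini implementation of the same protocol could return ROP paths from `get_render_layers()`, which is what would make the interface shareable across repositories.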
Getting access to the job bundle data
One could argue that the submitter UI is just a simplified frontend for the resulting job bundle: the data filled in the UI plus the scene state make up the job bundle files. If pipeline code has access to the job bundle before submission, it can pull data out to validate that it is correct and/or add additional data based on the pipeline configuration. This would give the pipeline another level of control over job submission.
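A pre-submission hook is one possible shape for this. The sketch below assumes the submitter would pass the job bundle as plain dictionaries before writing it to disk; the key names (`parameter_values`, `Frames`, `queue_id`) and the queue ID value are illustrative, not the actual bundle schema.

```python
# The expected range would in practice come from project tracking (e.g. a shot DB).
EXPECTED_FRAME_RANGE = "1001-1100"


def pre_submission_hook(job_bundle: dict) -> dict:
    """Validate and enrich the job bundle before it is submitted."""
    params = job_bundle.get("parameter_values", {})
    frames = params.get("Frames")
    if frames != EXPECTED_FRAME_RANGE:
        # Validation: stop the submission if the frame range disagrees
        # with the shot definition in project tracking.
        raise ValueError(
            f"Frame range {frames!r} does not match shot range "
            f"{EXPECTED_FRAME_RANGE!r}"
        )
    # Injection: force the project-specific queue so artists cannot
    # accidentally submit to the wrong one.
    job_bundle["queue_id"] = "queue-0123456789abcdef"
    return job_bundle
```

The submitter would call such a hook between collecting the bundle data and writing/submitting it, covering both the validation and the injection use cases from the problem description.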
Example Use Cases
- Delegate most of the settings to existing pipeline code (in the case of AYON, for example, to the AYON Publisher).
- Enable data validation prior to job submission: check render settings and additional scene data.
- Add additional jobs (such as a publishing job) based on the render jobs being submitted.
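For the last use case, a sketch of appending a publish step to the job bundle template, assuming the template is available as a dictionary following the Open Job Description structure (`steps`, `dependsOnStep`). The `Publish` step contents (the `ayon publish` command) are hypothetical.

```python
def add_publish_step(template: dict) -> dict:
    """Append a publish step that depends on every render step in the template."""
    render_step_names = [step["name"] for step in template.get("steps", [])]
    publish_step = {
        "name": "Publish",
        # Run only after every render step has finished.
        "dependencies": [
            {"dependsOnStep": name} for name in render_step_names
        ],
        "script": {
            "actions": {"onRun": {"command": "ayon", "args": ["publish"]}},
        },
    }
    template.setdefault("steps", []).append(publish_step)
    return template
```

With job bundle access exposed, pipeline code could apply this kind of transformation in a pre-submission hook instead of reimplementing the submitter's bundle generation.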