KFP charms use Jinja2 templates to store manifests that are applied during deployment. These manifests live under `src/templates` in each charm's directory. The process for updating them is:
1. Install `kustomize` using the official documentation instructions
2. Clone the Kubeflow manifests repo locally
3. `cd` into the repo and check out the branch or tag of the target version
4. Build the manifests with `kustomize` according to the instructions in https://github.com/kubeflow/manifests?tab=readme-ov-file#kubeflow-pipelines
5. Check out the branch or tag of the version of the current manifest
6. Build the manifest with `kustomize` (see step 4) and save the file
7. Compare both files to spot the differences, e.g. `diff kfp-manifests-vX.yaml kfp-manifests-vY.yaml > kfp-vX-vY.diff`
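To make the comparison step concrete, the sketch below creates two small stand-in manifest files (in a real run these would be the two `kustomize build` outputs) and diffs them; all file contents, keys, and version names here are hypothetical:

```shell
# Stand-in for the manifest built from the current version
cat > kfp-manifests-vX.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipeline-install-config
data:
  dbHost: mysql
EOF

# Stand-in for the manifest built from the target version
cat > kfp-manifests-vY.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipeline-install-config
data:
  dbHost: mariadb
EOF

# diff exits with status 1 when the files differ; '|| true' keeps the script going
diff kfp-manifests-vX.yaml kfp-manifests-vY.yaml > kfp-vX-vY.diff || true
cat kfp-vX-vY.diff
```

Each `<` line in the resulting diff is the current value and each `>` line the target one, which is what you then mirror into the charm templates.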
Apart from its workload container image, kfp-api also uses two extra images: `driver` and `launcher`. These are updated on every release, but the change is not visible when comparing manifests. To update them, grab their sources from the corresponding comments in the `config.yaml` file, switch to the target version of that file, and use the new images to update the config options' default values.
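For orientation, the relevant part of kfp-api's `config.yaml` might look roughly like the sketch below; the option names come from this guide, but the image references and descriptions are placeholders, with the real values taken from the upstream comments at the target version:

```yaml
# Hypothetical excerpt from kfp-api's config.yaml; image values are placeholders
options:
  driver-image:
    type: string
    default: <new-driver-image>:<target-version>
    description: Image used by the KFP driver
  launcher-image:
    type: string
    default: <new-launcher-image>:<target-version>
    description: Image used by the KFP launcher
```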
Once the comparison is done, add any changes to the relevant aggregated `ClusterRole`s to the `templates/auth_manifests.yaml.j2` file and remember to:
- Use the current model as the namespace
- Use the application name in the name of any `ClusterRole`s, `ClusterRoleBinding`s, or `ServiceAccount`s
- Add the label `app: {{ app_name }}`
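Putting those conventions together, an entry in `auth_manifests.yaml.j2` might look roughly like this sketch; the rule contents are illustrative (the real ones come from the manifest diff), and the `namespace` template variable name is an assumption:

```yaml
# Illustrative sketch only; actual rules come from the upstream manifest diff
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ app_name }}-role          # application name in the resource name
  labels:
    app: {{ app_name }}
rules:
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ app_name }}
  namespace: {{ namespace }}          # current model as the namespace (variable name assumed)
  labels:
    app: {{ app_name }}
```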
Note that non-aggregated `ClusterRole`s are skipped, since charms are deployed with the `--trust` argument. CRDs that have updates are copied as-is into a `crds.yaml.j2` file in the corresponding charm, and there can be changes in other resources as well, e.g. `Secret`s or `ConfigMap`s.
- To copy the kfp-profile-controller CRDs, follow the instructions at the top of its `crd_manifests.yaml.j2` file.
- We do not have a `cache-server` component, so related manifests are skipped.
- We do not keep a `pipeline-runner` `ServiceAccount` (and related manifests): even though the api-server is configured to use it by default, the manifests update it to use a different one.
- For argo-related manifests, we only keep the aggregate `ClusterRole`s.
- Apart from the changes shown in the `diff` above, the kfp-api charm also requires updating the `driver-image` and `launcher-image` values in the config file. The source for those can be found in the charm's `config.yaml` file.
- Changes for the envoy charm may also be included in the aforementioned `diff`.
- We do not keep a `pipeline-install-config` `ConfigMap` as upstream does, since charms define those configurations either in their `config.yaml` or directly in their pebble layer. However, we should pay attention to changes in that `ConfigMap`'s values, since they could be used in other places via the `valueFrom` field in an `env` definition.
tox is the only tool required locally, as tox internally installs and uses poetry, both to manage Python dependencies and to run tox environments. To install it: `pipx install tox`.
Optionally, poetry can also be installed independently, just for running Python commands locally outside of tox during debugging/development. To install it: `pipx install poetry`.
To add/update/remove any dependencies and/or to upgrade Python, simply:
- add/update/remove such dependencies to/in/from the desired group(s) below `[tool.poetry.group.<your-group>.dependencies]` in `pyproject.toml`, and/or upgrade Python itself in `requires-python` under `[project]`. ⚠️ Dependencies for the charm itself are also defined as dependencies of a dedicated group called `charm`, specifically below `[tool.poetry.group.charm.dependencies]`, and not as project dependencies below `[project.dependencies]` or `[tool.poetry.dependencies]` ⚠️
- run `tox -e update-requirements` to update the lock file. By this point, poetry, through tox, will let you know if there are any dependency conflicts to solve.
- optionally, if you also want to update your local environment for running Python commands/scripts yourself and not through tox, see Running Python Environments below.
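As a rough sketch of the layout described above, a `pyproject.toml` following this convention might look like the fragment below; the group members, versions, and project name are placeholders, not this repo's actual pins:

```toml
[project]
name = "kfp-operators"            # placeholder name
requires-python = ">=3.10,<4.0"   # upgrade Python here

# Charm dependencies live in a dedicated 'charm' group,
# not under [project.dependencies] or [tool.poetry.dependencies]
[tool.poetry.group.charm.dependencies]
ops = "^2.0"

[tool.poetry.group.unit.dependencies]
pytest = "^8.0"
```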
To run tox environments, either locally for development or in CI workflows for testing, ensure you have tox installed first and then simply run your tox environments natively (e.g. `tox -e lint`). tox will internally first install poetry and then rely on it to install and run its environments.
To run Python commands locally for debugging/development, from any environment built from any combination of dependency groups, without relying on tox:
- ensure you have poetry installed
- install any required dependency groups: `poetry install --only <your-group-a>,<your-group-b>` (or all groups, if you prefer: `poetry install --all-groups`)
- run Python commands via poetry: `poetry run python3 <your-command>`
Each charm directory has both unit and integration tests that can be executed with `tox -e unit` or `tox -e integration` respectively. This repository also includes bundle integration tests that can be executed with `tox -e bundle-integration` in the root directory of the project. The bundle integration tests expect all kfp charms to have been built in their respective charm directories. You can either do this by running `charmcraft pack` in the directory of each charm, or use `charmcraftcache`, which downloads cached charms to speed up the process:
# Use charmcraft pack
for dir in charms/*/; do (cd "$dir" && charmcraft pack); done
# Use charmcraftcache
for dir in charms/*/; do (cd "$dir" && ccc pack); done

After you have packed all charms, run the bundle integration tests by passing the --charms-path option with the path to the charms directory of the project:
# Make sure you pass the full path
tox -e bundle-integration -- --charms-path=<full-path-to-charms-subdirectory>

If you have already deployed the bundle and want to rerun the bundle tests only, you can pass the --no-deploy flag to skip the deploy step:
tox -e bundle-integration -- --model=kubeflow --no-deploy