# questions
c
Hello, I have trouble understanding how config patterns work for the OmegaConfigLoader. Based on the documentation (https://docs.kedro.org/en/1.0.0/configure/advanced_configuration/#how-to-ensure-non-default-configuration-files-get-loaded) I tried to do something similar, but whenever I specify agents and tasks in my node it returns the error: `ValueError: Pipeline input(s) {'agents', 'tasks'} not found in the DataCatalogWithCatalogCommandsMixin`. The documentation is not clear on how to access the agents and tasks dicts in my node. Thanks for your help
i
I think loading a config like that is meant for loading new configuration in your `conf/` folder that isn't called `parameters`, `credentials`, or `catalog`. So it's loading catalog entries from files that look like `agents.yml` (for example). When you use the name `"agents"` in a node definition, it's looking for a catalog entry called `agents`. So inside one of your catalog files, you would need an entry that looks like:
```yaml
agents:
    type: <my-dataset-type>
    filepath: <my-path-to-agent>
```
m
Hi @Clément Franger, I would love to understand a bit more about what you're trying to build. In your case, what do `agents` and `tasks` look like? Is it Python code, YAML config, something else?
c
By reading the docs, I was under the impression that we could specify any yml file in the conf folder as long as we change the OmegaConfigLoader args accordingly, and that we could load it in the node inputs the same way as parameters, or `params:my_key`. Ideally, I would like to have an `agents.yml` file in my conf folder (that contains any config related to my agents). Thanks
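For reference, registering a custom config key like `agents` is done through `CONFIG_LOADER_ARGS` in `settings.py`, roughly like this (a sketch based on the linked docs; the exact glob patterns shown are an assumption):

```python
# conf settings sketch -- assumes Kedro's documented CONFIG_LOADER_ARGS mechanism
from kedro.config import OmegaConfigLoader

CONFIG_LOADER_CLASS = OmegaConfigLoader
CONFIG_LOADER_ARGS = {
    "config_patterns": {
        # "agents" becomes a new config key; files under conf/<env>/
        # matching these globs are merged into config_loader["agents"]
        "agents": ["agents*", "agents*/**", "**/agents*"],
    }
}
```

Note this only makes the files loadable via the config loader; as discussed below, it does not by itself turn `agents` into a node input.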
m
You can indeed specify any yml file to be loaded by the config loader; however, `parameters` are a special type of config and are the only ones that get passed to nodes like that. See for example this section on how custom Spark configuration can be used in Kedro: https://docs.kedro.org/en/1.0.0/integrations-and-plugins/pyspark_integration/#initialise-a-sparksession-using-a-hook