# questions
Is there a way to use environment variables for file paths in the catalog.yml, or are paths always relative to the directory you execute the command from? Everything works fine while I'm developing the pipeline after initialising the project; my question is about what happens after I run
kedro package
and deploy the wheel and the conf tar.gz to another machine. If I could set an environment variable for the path prefix, the wheel could be executed from anywhere on the system and the catalog would still know where to read data and where to write it. This could be one or more environment variables too. For example:
raw_daily_data:
  type: kedro_mlflow.io.artifacts.MlflowArtifactDataSet
  data_set:
    type: PartitionedDataSet
    path: "{{ ANOM_DETECT_DATADIR }}/data/01_raw"  # path to the location of partitions
    dataset: pickle.PickleDataSet
  layer: raw
+1 on what @marrrcin said. Ideally you shouldn’t use environment variables anywhere except for loading credentials. You can do it with OmegaConfigLoader and a custom implementation right now. With the next release (coming soon!) you will be able to do it this way - https://docs.kedro.org/en/latest/configuration/advanced_configuration.html#how-to-use-resolvers-in-the-omegaconfigloader
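For reference, once that resolver support lands, registering a resolver in settings.py might look roughly like this (a sketch only: the "env" resolver name is just an example, and the exact CONFIG_LOADER_ARGS options depend on your Kedro release, so check the docs page above):

# settings.py: a sketch, assuming the resolver support described in the docs link above.
# "env" is an illustrative resolver name, not a Kedro built-in.
import os

from kedro.config import OmegaConfigLoader

CONFIG_LOADER_CLASS = OmegaConfigLoader
CONFIG_LOADER_ARGS = {
    "custom_resolvers": {
        # Look up an environment variable, falling back to the current directory if unset.
        "env": lambda var, default=".": os.environ.get(var, default),
    },
}

A catalog entry could then interpolate it, e.g. path: "${env:ANOM_DETECT_DATADIR}/data/01_raw", but as said above, keeping environment variables for credentials only is the recommended practice.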
@Ankita Katiyar and @marrrcin, thank you for the feedback! I will stick with best practices and not use the environment variables. 🙂 Good to know about the OmegaConfigLoader and the upcoming features too!