Dimitri Accad 06/13/2023, 2:06 PM
When uploading our pipeline to Kubeflow, either with the kedro kubeflow upload-pipeline command or by uploading the file directly, I get the following error:

Error creating pipeline: Create pipeline failed: Invalid input error: The input parameter length exceed maximum size of 10000.

Our input parameter length is over 22000 characters. Now, I saw that some other people got this issue before, as in https://github.com/kubeflow/pipelines/issues/4828. They were talking about a possible fix in KFP v2, but I can see that it's not yet stable. Do you have any recommendations on how to bypass/modify this limitation? Thank you very much!
marrrcin 06/13/2023, 2:53 PM
Are you passing parameters as an input to the Kedro node directly? Maybe try narrowing down the params to specific subkeys by passing "params:<key used in the node>" as an input instead.
Dimitri Accad 06/13/2023, 4:11 PM
We use "params:<key defined in parameters.yml>" and direct calls to the keys defined in catalog.yml when the input is either a saved dataset or, more generally, the output of a previous node, such as a trained model sent into the evaluation pipeline. I checked, but none of the calls to the keys defined in catalog.yml can be replaced with "params:", as they are generated at run time and can't be written in the parameters.yml file before the run. There are over 45 files that we keep track of during a run (mostly intermediate results); those are all defined in the catalog.yml file. Could this be part of the issue? Thank you very much!
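To make that constraint concrete, a rough sketch (all function and dataset names are illustrative): the trained model below only exists after the first node runs, so it has to flow through a catalog.yml entry rather than a "params:" key.

from kedro.pipeline import node, pipeline

def train(features, train_options):
    ...

def evaluate(trained_model, test_set):
    ...

pipeline([
    node(train, inputs=["features", "params:train_options"], outputs="trained_model"),
    # "trained_model" is produced at run time and persisted via its
    # catalog.yml entry (e.g. a pickle dataset), so it cannot be written
    # into parameters.yml ahead of the run
    node(evaluate, inputs=["trained_model", "test_set"], outputs="evaluation_metrics"),
])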
marrrcin 06/14/2023, 9:37 AM
Dimitri Accad 06/14/2023, 11:43 AM
marrrcin 06/14/2023, 1:49 PM
Dimitri Accad 06/14/2023, 2:03 PM
We tracked it down to the parameters.yml file. It's in this file that the value of one of the keys is over 22000 characters. We found a workaround by replacing the parameters.yml file with a dataset that we load; however, it requires us to modify our code extensively. By replacing this file, we are able to comply with the 10000-character limit. A setting that would allow us to lift the 10000-character restriction would be quite useful.
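A rough sketch of that workaround, assuming the oversized key is moved out of parameters.yml into a file registered in catalog.yml (e.g. as a JSON dataset named large_config; all names here are hypothetical, not from this thread):

from kedro.pipeline import node, pipeline

def train(model_input_table, large_config):
    # large_config is loaded from the catalog at run time, so its 22000+
    # characters never appear in the compiled Kubeflow pipeline's input
    # parameters
    ...

pipeline([
    node(
        func=train,
        # a catalog dataset input replaces the oversized "params:" key
        inputs=["model_input_table", "large_config"],
        outputs="trained_model",
    ),
])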
marrrcin 06/14/2023, 2:09 PM