# plugins-integrations
Hello again @marrrcin and Kedro community! Thank you for your answer to my previous question; I was able to communicate with the server following your instructions. I'm now facing a new issue when uploading the pipeline, either with the
kedro kubeflow upload-pipeline
command or by uploading the compiled pipeline file directly. I get the following error:
Error creating pipeline: Create pipeline failed: Invalid input error: The input parameter length exceed maximum size of 10000.
Our input parameter length being over 22000 characters. Now, I saw that some other people have run into this issue before, as in https://github.com/kubeflow/pipelines/issues/4828 They mention a possible fix in KFP v2, but I can see that it's not yet stable. Do you have any recommendations on how to bypass or modify this limitation? Thank you very much!
Do you pass
"parameters"
as an input to the Kedro node directly? Maybe try narrowing down the params to specific subkeys by passing
"params:<key used in the node>"
as an input instead
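A minimal sketch of why narrowing to a subkey helps (parameter names here are hypothetical, not from the original project): Kubeflow inlines the serialized parameters into the compiled pipeline spec, so passing only the subkey a node actually uses shrinks that string:

```python
import json

# Hypothetical parameters.yml contents, loaded as a dict
# (the real file is ~550 lines; these names are made up for illustration).
parameters = {
    "preprocessing": {"dropna": True, "columns": ["a", "b", "c"]},
    "model_options": {"n_estimators": 500, "max_depth": 12},
    "evaluation": {"metrics": ["f1", "roc_auc"]},
}

# Passing "parameters" to a node serializes the whole dict
# into the compiled pipeline spec...
full_payload = json.dumps(parameters)

# ...while passing "params:model_options" serializes only that subkey.
subkey_payload = json.dumps(parameters["model_options"])

print(len(full_payload), len(subkey_payload))
```

With a ~550-line parameters file, most nodes only touching one or two subkeys, this can bring each node's inlined parameter string well under the limit.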
Hello marrrcin, Thank you for your quick answer! The inputs of my Kedro nodes are a mix of
"params:<key defined in parameters.yml>"
and direct references to keys defined in the catalog, when the input is either a saved dataset or, more generally, the output of a previous node, such as a trained model passed into the evaluation pipeline. I checked, but none of the references to catalog keys can be replaced with
"params:<key>"
as they are generated at run time and can't be written in the
parameters.yml
file before the run. There are over 45 files that we keep track of during a run (mostly intermediate results), all defined in the catalog file. Could this be part of the issue? Thank you very much!
How many lines does your
parameters.yml
file have?
We have about 550 lines in the parameters.yml file.
I don't understand the "Our input parameter length being over 22000 characters" part then
It's after we compile the pipeline that we obtain the pipeline definition file. It's in this file that the value of the parameters key is over 22000 characters. We found a workaround by replacing the
parameters.yml
file with a dataset that we load. However, it requires us to modify our code extensively. By replacing this file we are able to comply with the 10000-character limit. A setting that would allow us to lift the 10000-character restriction would be quite useful.
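For reference, the workaround described above can be sketched as a catalog entry; the dataset name, type, and path here are hypothetical, assuming the values that previously lived in parameters.yml are stored in a JSON file:

```yaml
# Hypothetical catalog entry: the large parameter blob is loaded as a
# regular dataset at run time instead of being inlined into the compiled
# pipeline spec as a node parameter.
runtime_params:
  type: json.JSONDataSet
  filepath: data/01_raw/runtime_params.json
```

Each node then declares this dataset as a regular input instead of a params entry, so the compiled spec only carries the dataset reference, not the serialized values.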
This is strictly related to Kubeflow then, not Kedro / plugin