# plugins-integrations
g
Anyone know what can cause this error running the space tutorial on Kubeflow?
time="2022-11-02T21:13:40.417Z" level=info msg="capturing logs" argo=true
Error: fork/exec /bin/sh: exec format error
fork/exec /bin/sh: exec format error
Happens for data-volume-init pipeline stage.
e
My guess is there’s an issue with a Docker container Kubeflow tries to run. Maybe it’s been built by Docker on a different architecture than the one Kubeflow is running on? (like a container built on an M1 Mac - arm - while Kubeflow runs on x86 Ubuntu nodes)
💯 1
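(A quick way to confirm an architecture mismatch like that, assuming the image name below is replaced with your own and that you have kubectl access to the cluster:)
# Show the OS/architecture the image was built for; "exec format error" usually
# means this doesn't match the architecture of the nodes running the pod.
docker image inspect my-registry/my-project:v0.1 --format '{{.Os}}/{{.Architecture}}'
# Compare with the architecture reported by the Kubernetes nodes.
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'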
g
Thanks. Is there any way to do the equivalent of docker buildx with kedro docker?
e
kedro docker is just a wrapper here - once the Dockerfile is in place you can build it any way you want.
👍 1
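(A minimal sketch of that flow, assuming the default kedro-docker workflow and a registry you can push to; image name, tag, and target platform are placeholders:)
# Generate the Dockerfile for the project with the kedro-docker plugin.
kedro docker init
# Build and push the image for the platform the Kubeflow nodes actually run on.
docker buildx build --platform linux/amd64 -t my-registry/my-project:v0.1 --push .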
g
OK, I got a multi-platform Docker build working and bumped the version of my image. I changed kubeflow.yaml to point to that newer version, and did:
kedro kubeflow upload-pipeline -i "$REPO_PUSH"/"$IMAGE_VER"
kedro kubeflow run-once -en myuser
When I go look in Kubeflow, the run fails since it is still using the old version of the pipeline. I deleted the pipeline from Kubeflow and repeated this cycle, but the old version of the image is still being used. What do I need to do to get the latest version of the image used here?
@marrrcin Might you have any insight here?
m
The upload-pipeline command is only required if you want to re-use the pipeline from Kubeflow Pipelines directly. If your intent is only to run the pipeline in an ad-hoc manner (e.g. once you make some changes and re-build the image), then you don’t have to use upload-pipeline every time, just the run-once command. The run-once command will take the config from the Kedro environment you’re launching it from (which by default is local) and run the pipeline once for you, directly in Kubeflow, without the need to manually upload it first.
“I changed kubeflow.yaml to point to that newer version, and did”
In which Kedro env did you make this change?
g
conf/base/kubeflow.yaml (the only one I’m using)
m
What about the local one? Is it empty?
g
Pretty much. It only contains an effectively empty credentials.yaml file
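(For context: Kedro merges conf/base with the active environment, so an image set in conf/local/kubeflow.yaml would shadow the one in conf/base/kubeflow.yaml. Below is a hypothetical excerpt of where the image reference typically lives, assuming the default layout generated by kedro-kubeflow; all values are placeholders:)
# conf/base/kubeflow.yaml
host: https://kubeflow.example.com/pipelines
run_config:
  image: my-registry/my-project:v0.11
  experiment_name: my-project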
m
OK, try running:
kedro kubeflow run-once -en myuser -i "$REPO_PUSH"/"$IMAGE_VER"
g
Unfortunately that still runs using the old image version despite the image version being different on the command line (when I echo it).
m
Do you use the latest tag for the Docker image?
g
No, I could not get that to update, so I’ve been using semantic tags (v0.1, v0.11, etc.). Looking closer, when I click on a data-volume-init() node in the KFP UI, I am seeing mixed versions in the YAML:
containerStatuses > name: main > image: (old version number)
But I see that the current version is being used for the init containers further down, under:
name: ARGO_TEMPLATE: value: (current version)
Any idea where the stale image version would be coming from?
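(One way to cross-check which image a step actually ran with, assuming kubectl access to the namespace where Kubeflow schedules the workflow pods; the pod name and namespace below are placeholders:)
# The image the pod spec asked for (what Argo put into the workflow template).
kubectl get pod data-volume-init-pod -n kubeflow -o jsonpath='{.spec.containers[*].image}'
# The image the node actually reports for the container, which is where a
# cached or stale tag would show up.
kubectl get pod data-volume-init-pod -n kubeflow -o jsonpath='{.status.containerStatuses[*].image}'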
I was able to get past all this. Thanks for the assistance!
m
Can you share what the solution was?
We can update our docs with some “gotchas” if that’s the case here