# questions
Hello, I was going through the tutorial on visualizing pipelines in notebooks (YouTube link) and found it quite insightful, especially for data teams that primarily work within notebook environments. However, I'm a bit unsure about the recommended approach for running a pipeline when it's defined directly within a notebook. I understand that normally, executing a pipeline requires initializing a `KedroSession`. But in cases where the `Pipeline` and `DataCatalog` are already defined in the notebook itself, what would be the best practice for running it?
I managed to get it working by instantiating an `OmegaConfigLoader` and using `DataCatalog.from_config`. The issue, however, is that `DataCatalog` does not include parameters by default, so I had to create a function to build the `feed_dict`, similar to the `get_feed_dict` method from the `Context` object, I believe. With that setup, I was able to instantiate a `SequentialRunner` and call its `run` method successfully. The only part I'm not entirely satisfied with is how I'm passing the parameters to the `DataCatalog`; not sure if there's a better way to do it.
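A minimal sketch of such a feed-dict builder, mirroring what the context's `get_feed_dict` does when it flattens nested parameters into `params:`-prefixed entries (the function name and structure here are my own illustration, not Kedro's public API):

```python
def build_feed_dict(parameters: dict) -> dict:
    """Flatten a nested parameters dict into 'params:'-prefixed keys.

    Illustrative helper, not part of Kedro's public API: it emits one
    entry per nesting level ("params:a", "params:a.b", ...) plus the
    full dict under "parameters", which is what nodes that declare
    "params:..." inputs expect to find in the catalog.
    """
    feed_dict = {"parameters": parameters}

    def _add(name: str, value) -> None:
        feed_dict[f"params:{name}"] = value
        # Recurse so nested keys are also addressable, e.g. "params:a.b".
        if isinstance(value, dict):
            for key, val in value.items():
                _add(f"{name}.{key}", val)

    for name, value in parameters.items():
        _add(name, value)
    return feed_dict
```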
Hi @Felipe Monroy, you can add them one by one with `catalog[param_name] = param_value`, as long as you're able to build a `params_dict`, which should look something like this:
```python
params_dict = {
  "params:a": {"b": 1},
  "params:a.b": 1
}
```
This is done automatically via the context, but there's no separate public API for it.
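Registering those entries is then just a loop over the dict. The sketch below uses a plain `dict` as a stand-in for the catalog so it runs anywhere; with a real Kedro catalog in recent versions, the same subscript assignment registers each value as an in-memory dataset:

```python
# Stand-in for the catalog so this sketch runs without Kedro installed;
# with a real DataCatalog the same `catalog[...] = ...` assignment works.
catalog = {}

params_dict = {
    "params:a": {"b": 1},
    "params:a.b": 1,
}

# Each entry becomes addressable by its "params:" key, which is how
# nodes declaring "params:a.b" as an input will look it up.
for name, value in params_dict.items():
    catalog[name] = value
```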
Thank you! I watched the video and thought about the possibility of using Kedro entirely within a notebook. I believe the only missing part is the feed dictionary; everything else works perfectly.