# questions
hey, i am using the latest version of Kedro (0.18.4). i am trying to use the session.load_context() function and the function returns:
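(for context, a minimal sketch of the call being discussed, assuming a standard Kedro 0.18 project; project_path here is a placeholder, not from the thread:)

from pathlib import Path

from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project

project_path = Path.cwd()  # placeholder: root of the Kedro project
bootstrap_project(project_path)  # reads pyproject.toml and loads project settings

with KedroSession.create(project_path=project_path) as session:
    context = session.load_context()  # the call that fails for the user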
have you got any custom hooks enabled?
no
i was able to run it two days ago
i just installed the pyarrow package
and something broke
That shouldn’t have affected things, could you post the stack trace?
[attachment: Untitled.cpp]
I think this bit of dynamic pipelining here is causing the issues:
/Users/dorzazon/Documents/workspace/egged-kedro/egged/src/egged/pipelines/data_processing/pipeline.py:28 in create_pipeline

   25             )])
   26     pipelines = []
   27     # get catalog
❱  28     catalog = _get_catalog()
   29     for dataset in catalog.load('params:dataset_names'):
   30         pipelines.append(pipeline(pipe=template,
   31                                         inputs={"dataset": dataset},
it’s best to access the catalog live via hooks; create_pipeline() runs while Kedro is still bootstrapping the session, so building your own catalog inside it is fragile
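(a minimal sketch of what that suggestion could look like, assuming a hooks.py in the project; ProjectHooks and the dataset_names attribute are illustrative names, not Kedro API:)

from kedro.framework.hooks import hook_impl
from kedro.io import DataCatalog

class ProjectHooks:
    @hook_impl
    def after_catalog_created(self, catalog: DataCatalog) -> None:
        # Kedro passes in the fully built catalog here, so nothing
        # has to be constructed by hand inside create_pipeline()
        self.dataset_names = catalog.load("params:dataset_names")

registered in settings.py:

HOOKS = (ProjectHooks(),)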
how can i find information on how to access the catalog live via hooks?
but, what is the problem with accessing the catalog like i did?
it worked for me two days ago
now something is broken
and, which hook should i use to mimic what i did?
this is my create_pipeline function:
from kedro.pipeline import Pipeline, node, pipeline

from .nodes import preprocess_df  # project-local node function


def create_pipeline(**kwargs) -> Pipeline:
    template = pipeline(
        [
            node(
                func=preprocess_df,
                inputs=["dataset", "params:dataset_config", "params:col_names_config"],
                outputs="preprocessed_dataset_name",
                name="preprocess_df_node",
            )
        ]
    )
    pipelines = []
    # get catalog (project-local helper defined earlier in this module)
    catalog = _get_catalog()
    for dataset in catalog.load("params:dataset_names"):
        pipelines.append(
            pipeline(
                pipe=template,
                inputs={"dataset": dataset},
                parameters={
                    "params:dataset_config": f"params:{dataset}",
                    "params:col_names_config": "params:col_names_config",
                },
                outputs={"preprocessed_dataset_name": f"preprocessed_{dataset}"},
                namespace=f"preprocessed_{dataset}",
            )
        )
    # return all pipelines combined into one
    final_pipeline = pipelines[0]
    for pipe in pipelines[1:]:
        final_pipeline += pipe
    return final_pipeline
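(a small side note on the combining step above: Kedro Pipeline objects support +, so the final loop can also be written as a one-liner, behaviour-identical to the loop, a sketch:)

from kedro.pipeline import Pipeline

final_pipeline = sum(pipelines, Pipeline([]))  # fold all per-dataset pipelines into one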