# questions
hey kedroids, what's the rationale / approach with the spark versioning on kedro-datasets? Do we need to re-test with later spark versions, or is there a known breaking issue past 3.4? I'm hoping to use more recent versions of delta & spark, but short of forking kedro-datasets we are a little stuck here, right?
```
The user requested pyspark==3.5.1
    delta-spark 3.2.0 depends on pyspark<3.6.0 and >=3.5.0
    kedro-datasets[pandas-csvdataset,pandas-exceldataset,pandas-parquetdataset,spark-deltatabledataset,spark-sparkdataset] 1.6.0 depends on pyspark<3.4 and >=2.2; extra == "spark-deltatabledataset"
```
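The conflict in the resolver output can be sketched with a naive version-tuple comparison (not a full PEP 440 parser; the pins are copied from the error above): pyspark 3.5.1 satisfies delta-spark's range but not the old kedro-datasets pin, so no single pyspark version can satisfy both.

```python
# Naive check of the two pyspark ranges from the resolver error.
# Not PEP 440-compliant; just enough for plain X.Y.Z versions.
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def in_range(version: str, lo: str, hi: str) -> bool:
    # lo is inclusive (>=), hi is exclusive (<), matching the pins above
    return parse(lo) <= parse(version) < parse(hi)

requested = "3.5.1"
# kedro-datasets 1.6.0 [spark-deltatabledataset]: pyspark>=2.2,<3.4
print(in_range(requested, "2.2", "3.4"))    # fails the old kedro-datasets pin
# delta-spark 3.2.0: pyspark>=3.5.0,<3.6.0
print(in_range(requested, "3.5.0", "3.6.0"))  # satisfies delta-spark
```

Since the two ranges don't overlap at 3.5.1, pip reports ResolutionImpossible rather than backtracking to a working combination.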
We've upped the dependency bound in the newer versions 🙂
ah weird, i hadn't pinned the kedro-datasets version yet, it was loading 2.1. must be some shenanigans in my environment. looks like
is most recent supported
It seems to be because of this dependency, which affects the Databricks dataset.