Hi all! In my organization, we run a Databricks Workflow on a daily basis that executes a Kedro pipeline from GitHub repository source code.
I'd like to know which nodes take the most time to process. What are the best practices for measuring how long each node takes to run in this scenario?
Hey guys, sorry to bother you here, but I have almost the same question. I'm just a little confused: were you (@datajoely) referring to this part about hooks? (screenshot) And if so, it wasn't clear to me whether using Grafana is actually necessary...
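For what it's worth, here is a minimal sketch of how per-node timing can be done with Kedro's `before_node_run`/`after_node_run` hook specs alone, with plain logging and no Grafana. The `NodeTimerHooks` class name is hypothetical; the `try`/`except` import fallback is only there so the sketch runs standalone:

```python
import logging
import time

try:
    from kedro.framework.hooks import hook_impl
except ImportError:  # fallback so the sketch runs without Kedro installed
    def hook_impl(func):
        return func

class NodeTimerHooks:
    """Hypothetical hook class: records wall-clock duration of each node."""

    def __init__(self):
        self._starts = {}    # node name -> start timestamp
        self.durations = {}  # node name -> elapsed seconds

    @hook_impl
    def before_node_run(self, node):
        # Kedro's pluggy-based hooks let you declare only the args you need.
        self._starts[node.name] = time.perf_counter()

    @hook_impl
    def after_node_run(self, node):
        elapsed = time.perf_counter() - self._starts.pop(node.name)
        self.durations[node.name] = elapsed
        logging.getLogger(__name__).info("Node %r took %.2fs", node.name, elapsed)
```

If this matches your Kedro version's hook specs, registering it should just be a matter of adding `HOOKS = (NodeTimerHooks(),)` in your project's `settings.py`; the durations then show up in the driver logs of the Databricks Workflow run, so no external dashboard is strictly required.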