Oscar Villa (02/06/2023, 8:22 PM):
I ran gcloud auth login, but when I run kedro run (and its slicing variations) or kedro ipython, it raises permission problems.

datajoely (02/07/2023, 9:46 AM):

Oscar Villa
(02/07/2023, 1:13 PM):
When I run kedro ipython, a permission issue arises, and the same when I run kedro run. But it works if I run:

from kedro.extras.datasets.pandas import GBQQueryDataSet

sql = "SELECT * FROM dataset_1.table_a"
data_set = GBQQueryDataSet(sql, project='my-project')
sql_data = data_set.load()
datajoely (02/07/2023, 1:33 PM):
The GBQQueryDataSet class.

Oscar Villa (02/07/2023, 2:08 PM):

datajoely (02/07/2023, 2:08 PM):
This example only works if you've authenticated at an env level:
data_set = GBQQueryDataSet(sql, project='my-project')
credentials (Union[Dict[str, Any], Credentials]) – Credentials for accessing Google APIs. Either None, or a dictionary with the parameters required to instantiate google.auth.credentials.Credentials. Here you can find all the arguments for google.oauth2.credentials.Credentials: https://google-auth.readthedocs.io/en/latest/reference/google.oauth2.credentials.html
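[For later readers: that credentials dict can also live in Kedro's configuration instead of in code. A minimal sketch, assuming the standard conf/local/credentials.yml lookup; every dataset name and secret value here is a placeholder, and the dict keys simply mirror the google.oauth2.credentials.Credentials constructor arguments linked above:]

```yaml
# conf/local/credentials.yml -- placeholder values only; never commit real secrets
gbq_creds:
  token: my-access-token
  refresh_token: my-refresh-token
  client_id: my-client-id.apps.googleusercontent.com
  client_secret: my-client-secret
  token_uri: https://oauth2.googleapis.com/token

# conf/base/catalog.yml -- the dataset entry references the credentials key above
my_gbq_query:
  type: pandas.GBQQueryDataSet
  sql: "SELECT * FROM dataset_1.table_a"
  project: my-project
  credentials: gbq_creds
```

[Kedro resolves the credentials key from credentials.yml when the catalog is loaded, so the secrets stay out of the versioned catalog file.]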
Oscar Villa (02/07/2023, 2:13 PM):
If I instantiate google.auth.credentials.Credentials in the node code in Python, the dataset class will find the credentials in the env. I'd prefer something like that, which doesn't imply putting the credentials in the .yaml.

datajoely (02/07/2023, 2:15 PM):

Oscar Villa
(02/07/2023, 2:45 PM):
I ran gcloud auth application-default login. So there's no need to store the credentials in the .yaml, nor to do anything else. The data is now ingested in kedro ipython, and also when I run the pipeline with kedro run :+1:
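[For completeness: with application-default credentials in place, the catalog entry needs no credentials key at all, since the underlying Google client falls back to the environment's default credentials. A sketch; the dataset name, table, and project are placeholders:]

```yaml
# conf/base/catalog.yml -- no credentials key needed after running
# `gcloud auth application-default login`; names below are placeholders
vehicles_query:
  type: pandas.GBQQueryDataSet
  sql: "SELECT * FROM dataset_1.table_a"
  project: my-project
```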
datajoely (02/07/2023, 2:45 PM):

Oscar Villa (02/07/2023, 2:52 PM):

datajoely (02/07/2023, 3:00 PM):