# questions
Mark Einhorn
hi all, had a quick question when trying to run a pipeline on Databricks, with `spark.SparkJDBCDataset` entries in the catalog. This is the error we are getting:
DatasetError: Failed while loading data from data set SparkJDBCDataset(load_args={'properties': {'driver': com.simba.athena.jdbc.Driver}}, save_args={'properties': {'driver': com.simba.athena.jdbc.Driver}}, table=ticket_master.vw_fact_trans_data, url=jdbc:awsathena://AwsRegion=XXX;S3OutputLocation=s3://XXX/;profile=default;).
An error occurred while calling o831.jdbc.
: java.sql.SQLException: [JDBC Driver]profile file cannot be null
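(For context, the catalog entry behind this error would look roughly like the sketch below. The dataset name is illustrative only, and the XXX placeholders mirror the error message rather than the real project config.)
```yaml
# conf/base/catalog.yml -- rough sketch of the entry being discussed.
# "athena_trans_data" is an illustrative name; region and bucket stay as the
# XXX placeholders from the error message above.
athena_trans_data:
  type: spark.SparkJDBCDataset
  table: ticket_master.vw_fact_trans_data
  url: jdbc:awsathena://AwsRegion=XXX;S3OutputLocation=s3://XXX/;profile=default
  load_args:
    properties:
      driver: com.simba.athena.jdbc.Driver
  save_args:
    properties:
      driver: com.simba.athena.jdbc.Driver
```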
datajoely
Hi @Mark Einhorn - I’m not an expert in Athena, but is there a profile called “default” in the config? You’re specifying it in the URL: jdbc:awsathena://AwsRegion=XXX;S3OutputLocation=s3://XXX/;profile=default;
Mark Einhorn
Hey @datajoely, thanks very much for the reply. Yeah, that’s our issue: when we run the pipeline locally we point it at our `default` AWS creds, but we don’t pass those to Databricks, so it’s looking for a profile that isn’t there. And if we drop `profile` from the catalog entry, it complains that `profile` can’t be null.
datajoely
Again, not an AWS expert, but I think there might be some env vars you can apply.
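Something along these lines might work for the Databricks run, i.e. drop the named profile and let the driver pick up credentials from the environment. This is only a sketch: the `AwsCredentialsProviderClass` connection property and the exact shaded class path are assumptions here and should be checked against the Simba Athena JDBC driver docs for your version, and `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` would need to be set as env vars on the cluster.
```yaml
# conf/databricks/catalog.yml -- sketch only. Assumes the Simba Athena driver
# accepts the AwsCredentialsProviderClass connection property (verify the
# property name and class path in the driver docs) and that AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY are set as Databricks cluster environment variables.
athena_trans_data:
  type: spark.SparkJDBCDataset
  table: ticket_master.vw_fact_trans_data
  url: jdbc:awsathena://AwsRegion=XXX;S3OutputLocation=s3://XXX/;AwsCredentialsProviderClass=com.simba.athena.amazonaws.auth.EnvironmentVariableCredentialsProvider
  load_args:
    properties:
      driver: com.simba.athena.jdbc.Driver
  save_args:
    properties:
      driver: com.simba.athena.jdbc.Driver
```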