Jacques Vergine
02/11/2025, 11:27 AM

Hall
02/11/2025, 11:28 AM

Jacques Vergine
02/11/2025, 11:28 AM
from kedro_datasets.spark import SparkDataset


class MyDataset(SparkDataset):
    def __init__(  # noqa: PLR0913
        self,
        *,
        filepath: str,
        table: str,
    ):
        ...

We then tried to use it in our catalog, but this entry was failing:

integration.int.{source}.data1:
  type: MyDataset
  filepath: ${globals:integration_source_path}/int/{source}/data1
  table: {source}_data1

with the following error, pointing at the table: {source}_data1 line:
An error has occurred: Invalid YAML or JSON file .../catalog.yml, unable to read line 20, position 17.
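The same failure can be reproduced outside Kedro with a plain YAML parser, since the unquoted value starts with "{" (a minimal sketch assuming PyYAML is available; the exact wording of Kedro's error may differ):

import yaml  # assumes PyYAML is installed

# The unquoted value opens a flow mapping "{source}" and the parser then
# chokes on the trailing "_data1", much like the catalog load above.
try:
    yaml.safe_load("table: {source}_data1")
except yaml.YAMLError as exc:
    print(f"parse error: {exc}")
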
We managed to solve it by putting {source} at the end of the table name, like this:

integration.int.{source}.data1:
  type: MyDataset
  filepath: ${globals:integration_source_path}/int/{source}/data1
  table: data1_{source}

Is this expected behaviour, or should we raise it as an issue?
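For reference, the workaround parses cleanly because "{" only acts as a flow-mapping indicator at the start of a plain scalar, not in the middle of one (again a PyYAML sketch, not Kedro itself):

import yaml  # assumes PyYAML is installed

# A "{" in the middle of a plain scalar is ordinary text in block context,
# so the suffix form is read as a plain string.
print(yaml.safe_load("table: data1_{source}"))  # {'table': 'data1_{source}'}
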
Jitendra Gundaniya
02/11/2025, 11:43 AM
The YAML parser sees the { and tries (and fails) to parse it as a mapping. So table: data1_{source} or table: "{source}_data1" should work, and I think there's no need to raise an issue.
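Quoting, as suggested, would also let the original prefix naming be kept, since the quoted value is read as a plain string (PyYAML sketch for illustration):

import yaml  # assumes PyYAML is installed

# Quoting keeps the leading "{" as literal text, so the original
# "{source}_data1" naming also parses as a plain string.
print(yaml.safe_load('table: "{source}_data1"'))  # {'table': '{source}_data1'}
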
Jacques Vergine
02/11/2025, 12:00 PM

Jacques Vergine
02/11/2025, 3:59 PM