Has anyone successfully saved a `SQLTableDataset` ...
# questions
r
Has anyone successfully saved a `SQLTableDataset` while specifying data types? From what I'm seeing, the pandas API requires a dictionary of SQLAlchemy type objects. Would this require a custom resolver to pass the types through the catalog? FYI, I'm attempting to save tables to SQL Server/Azure DWH.
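For context, a minimal sketch of the pandas `to_sql` call underneath: its `dtype` argument takes a dict mapping column names to SQLAlchemy type objects, which is what the catalog entry would need to supply via `save_args`. An in-memory SQLite engine stands in for SQL Server/Azure DWH here; table and column names are illustrative:

```python
import pandas as pd
import sqlalchemy
from sqlalchemy.types import Integer, String

# In-memory SQLite stands in for SQL Server/Azure DWH in this sketch.
engine = sqlalchemy.create_engine("sqlite://")
df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})

# dtype maps column names to SQLAlchemy type *objects*, not strings.
df.to_sql("t", engine, index=False, dtype={"id": Integer(), "name": String(32)})
```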
j
> Would this require a custom resolver to pass the types through the catalog?
yes! here's an example for Polars:
r
Thanks. What would be a good way to add arguments for these? For instance, `sqlalchemy.types.String(32)`.
j
Maybe `"${sqlalchemy:String:32}"`, and you split the value by `:`?
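The colon-splitting idea can be sketched without SQLAlchemy installed; `_String` and `_fake_types` below are hypothetical stand-ins for `sqlalchemy.types`, which a real project would pass as the namespace instead:

```python
from types import SimpleNamespace

# Hypothetical stand-in for sqlalchemy.types so the sketch is self-contained.
class _String:
    def __init__(self, length=None):
        self.length = int(length) if length is not None else None

_fake_types = SimpleNamespace(String=_String)

def resolve_sql_type(value, namespace=_fake_types):
    """Resolve 'String:32' to String(32); a bare 'String' gives String()."""
    name, *args = value.split(":")
    type_cls = getattr(namespace, name)
    return type_cls(*args) if args else type_cls()

resolve_sql_type("String:32").length  # → 32
```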
r
```python
import sqlalchemy

CONFIG_LOADER_ARGS = {
    "custom_resolvers": {
        "sql_types": lambda x: getattr(sqlalchemy.types, x),
    }
}
```
I'm not sure how the nested keys would work with that, though
Alternatively,
```python
import sqlalchemy

CONFIG_LOADER_ARGS = {
    "custom_resolvers": {
        "sql_string": lambda x: sqlalchemy.types.String(x),
    }
}
```
j
I was thinking
```python
import sqlalchemy

CONFIG_LOADER_ARGS = {
    "custom_resolvers": {
        "sql_types": lambda value: getattr(sqlalchemy.types, value.split(":")[0])(value.split(":")[1]),
    }
}
```
or rather
```python
import sqlalchemy

def get_sql_type(value):
    type_cls_name, arg = value.split(":")
    type_cls = getattr(sqlalchemy.types, type_cls_name)
    return type_cls(arg)
```
Does it make sense, @Richard Purvis?
r
Yes, thanks.
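For reference, the pieces above can be combined into a project's `settings.py`. This is a sketch assuming SQLAlchemy is installed; the resolver name `sql_type` and the catalog entry in the trailing comment are illustrative, following the colon-splitting convention proposed in the thread:

```python
import sqlalchemy.types

def get_sql_type(value: str):
    """Resolve e.g. 'String:32' to sqlalchemy.types.String(32)."""
    type_cls_name, *args = value.split(":")
    type_cls = getattr(sqlalchemy.types, type_cls_name)
    # Cast purely numeric arguments (e.g. lengths) to int.
    typed_args = [int(a) if a.isdigit() else a for a in args]
    return type_cls(*typed_args) if typed_args else type_cls()

CONFIG_LOADER_ARGS = {
    "custom_resolvers": {
        "sql_type": get_sql_type,
    }
}

# A catalog entry might then reference it like this (names illustrative):
#
# my_table:
#   type: pandas.SQLTableDataset
#   table_name: my_table
#   credentials: dwh
#   save_args:
#     dtype:
#       name: ${sql_type:String:32}
```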