# write_catalog_table

## WriteCatalogTableAction

Bases: `PipelineAction`

Writes a DataFrame to a specified catalog table using `CatalogWriter`.
### Example
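The snippet below is a minimal usage sketch, not taken from the library's documentation: the import path is assumed from the source location shown below, and the `PipelineContext` is assumed to carry the DataFrame produced by a previous pipeline action.

```python
# Assumed import path, based on the source file location shown below.
from cloe_nessy.pipeline.actions.write_catalog_table import WriteCatalogTableAction

# `context` is a PipelineContext holding the DataFrame to write; in a real
# pipeline it would be produced by a previous action (e.g. a read action).
context = ...  # PipelineContext from a previous pipeline action

# Append the DataFrame to a Unity Catalog table, partitioned by a date column.
context = WriteCatalogTableAction.run(
    context,
    table_identifier="my_catalog.my_schema.my_table",
    mode="append",
    partition_by="ingestion_date",
    options={"mergeSchema": "true"},
)
```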
Source code in `src/cloe_nessy/pipeline/actions/write_catalog_table.py`
### `run(context, *, table_identifier=None, mode='append', partition_by=None, options=None, **_)` (staticmethod)
    Writes a DataFrame to a specified catalog table.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `context` | `PipelineContext` | Context in which this Action is executed. | *required* |
| `table_identifier` | `str \| None` | The table identifier in Unity Catalog, in the format `catalog.schema.table`. If not provided, the action attempts to use the table metadata from the context. | `None` |
| `mode` | `str` | The write mode. One of `append`, `overwrite`, `error`, `errorifexists`, or `ignore`. | `'append'` |
| `partition_by` | `str \| list[str] \| None` | Names of the partitioning columns. | `None` |
| `options` | `dict[str, str] \| None` | PySpark options for the `DataFrame.saveAsTable` operation (e.g. `mergeSchema: true`). | `None` |
Raises:
| Type | Description | 
|---|---|
| ValueError | If the table name is not specified or cannot be inferred from the context. | 
Returns:
| Type | Description | 
|---|---|
| PipelineContext | Context after the execution of this Action. |
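As a hedged illustration of the `ValueError` path in the Raises table: if no `table_identifier` is passed and the context carries no table metadata, the call fails instead of writing anywhere.

```python
# Sketch of the failure mode documented above: no table identifier is
# supplied and the context (assumed here) has no table metadata to fall
# back on, so run() raises ValueError.
try:
    WriteCatalogTableAction.run(context)  # table_identifier left as None
except ValueError as err:
    print(f"write_catalog_table failed: {err}")
```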