pyspark.pipelines.create_streaming_table#
- pyspark.pipelines.create_streaming_table(name, *, comment=None, table_properties=None, partition_cols=None, cluster_by=None, schema=None, format=None)[source]#
Creates a table that can be targeted by append flows.
- Example:
create_streaming_table("target")
- Parameters
name – The name of the table.
comment – Description of the table.
table_properties – A dict where the keys are the property names and the values are the property values. These properties will be set on the table.
partition_cols – A list of column names to partition the table by.
cluster_by – A list of column names to cluster the table by.
schema – Explicit Spark SQL schema to materialize this table with. Supports either a PySpark StructType or a SQL DDL string, such as "a INT, b STRING".
format – The format of the table, e.g. "parquet".
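A fuller sketch of a call using the optional parameters might look like the following. The table name, column names, and property values here are hypothetical, chosen only for illustration; the call is expected to run inside a pipeline definition, where append flows can then target the table by name, rather than as a standalone script.

```python
from pyspark.pipelines import create_streaming_table

# Hypothetical streaming table; the name, schema, and property values
# below are illustrative, not part of the API.
create_streaming_table(
    "events",
    comment="Raw events ingested from the stream",
    table_properties={"owner": "data-eng"},
    partition_cols=["event_date"],
    schema="event_id BIGINT, event_date DATE, payload STRING",
    format="parquet",
)
```

Note that `partition_cols` and `schema` accept either style shown in the parameter list above: a plain list of names for partitioning, and either a DDL string (as here) or a `StructType` for the schema.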