Partition transform function: partitions data into hours based on a timestamp column. Supports Spark Connect.
Warning
Deprecated in 4.0.0. Use partitioning.hours instead.
Syntax
```python
from pyspark.databricks.sql import functions as dbf
dbf.hours(col=<col>)
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| col | pyspark.sql.Column or str | Target date or timestamp column to work on. |
Returns
pyspark.sql.Column: Data partitioned by hours.
Examples
```python
from pyspark.databricks.sql import functions as dbf

df.writeTo("catalog.db.table").partitionedBy(
    dbf.hours("ts")
).createOrReplace()
```
Note
This function can be used only in combination with the partitionedBy method of DataFrameWriterV2.
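Conceptually, an hour-based partition transform buckets each timestamp by the number of whole hours elapsed since the Unix epoch, so all rows whose timestamps fall within the same hour land in the same partition. The sketch below illustrates that bucketing in plain Python; the helper name hours_bucket is illustrative only and is not part of the PySpark API.

```python
from datetime import datetime, timezone

# Illustrative sketch (assumption): hour partition transforms bucket a
# timestamp by whole hours since the Unix epoch, so rows in the same
# hour share a partition value.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def hours_bucket(ts: datetime) -> int:
    """Return the hour-partition bucket for a timestamp."""
    return int((ts - EPOCH).total_seconds() // 3600)

a = datetime(2024, 5, 1, 10, 15, tzinfo=timezone.utc)
b = datetime(2024, 5, 1, 10, 59, tzinfo=timezone.utc)
c = datetime(2024, 5, 1, 11, 0, tzinfo=timezone.utc)
# a and b fall in the same hourly bucket; c starts the next one.
```

In the actual writer, Spark evaluates this transform on the target column at write time to decide which physical partition each row belongs to.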