bigframes.streaming.StreamingDataFrame.to_pubsub#
- StreamingDataFrame.to_pubsub(*, topic: str, service_account_email: str, job_id: str | None = None, job_id_prefix: str | None = None, start_timestamp: int | float | str | datetime | date | None = None) QueryJob#
Exports the StreamingDataFrame as a continuous query job and returns a QueryJob object that provides some job-management functionality.
This method requires an existing Pub/Sub topic. For instructions on creating a Pub/Sub topic, see https://cloud.google.com/pubsub/docs/samples/pubsub-quickstart-create-topic?hl=en
Note that a service account is required for continuous queries that export to Pub/Sub.
- Parameters:
topic (str) – The name of the Pub/Sub topic to export to. For example: “taxi-rides”
service_account_email (str) – Email address of the service account that runs the continuous query. Example: accountname@projectname.iam.gserviceaccount.com
job_id (str, default None) – If specified, replaces the default job id for the query; see the job_id parameter of https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.client.Client#google_cloud_bigquery_client_Client_query
job_id_prefix (str, default None) – If specified, a job id prefix for the query; see the job_id_prefix parameter of https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.client.Client#google_cloud_bigquery_client_Client_query
start_timestamp (int, float, str, datetime, date, default None) – The starting timestamp for the query. Possible values reach up to 7 days in the past. If you don’t specify a timestamp (None), the query defaults to the earliest possible time, 7 days ago. A time-zone-naive timestamp is treated as UTC.
- Returns:
The ongoing query job, which can be managed using the returned object; for example, the job can be cancelled or its error status can be examined. See https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.job.QueryJob
- Return type:
google.cloud.bigquery.QueryJob
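The start_timestamp semantics above (a 7-day lookback window, with time-zone-naive values treated as UTC) can be sketched with a small hypothetical helper, followed by an illustrative call to to_pubsub. The helper is not part of bigframes, and the topic name, service account, and export_to_pubsub function are placeholders for this sketch, since a real call needs live Google Cloud resources:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper (not part of bigframes) mirroring the documented
# start_timestamp semantics: None falls back to the earliest allowed
# time (7 days in the past), and a naive datetime is treated as UTC.
def normalize_start_timestamp(value=None):
    now = datetime.now(timezone.utc)
    earliest = now - timedelta(days=7)  # values may reach at most 7 days back
    if value is None:
        return earliest
    if value.tzinfo is None:
        value = value.replace(tzinfo=timezone.utc)  # naive -> UTC
    return max(value, earliest)  # clamp to the 7-day window


# Illustrative usage; not invoked here because it needs a real
# StreamingDataFrame, Pub/Sub topic, and service account.
def export_to_pubsub(sdf):
    job = sdf.to_pubsub(
        topic="taxi-rides",  # placeholder topic name
        service_account_email="accountname@projectname.iam.gserviceaccount.com",
        start_timestamp=normalize_start_timestamp(),
    )
    # The returned google.cloud.bigquery.QueryJob supports management
    # calls such as job.cancel() and inspection via job.error_result.
    return job
```

The helper clamps timestamps to the documented window rather than raising, which matches the documented default of falling back to the earliest possible time.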