bigframes.streaming.StreamingDataFrame.to_bigtable#
- StreamingDataFrame.to_bigtable(*, instance: str, table: str, service_account_email: str | None = None, app_profile: str | None = None, truncate: bool = False, overwrite: bool = False, auto_create_column_families: bool = False, bigtable_options: dict | None = None, job_id: str | None = None, job_id_prefix: str | None = None, start_timestamp: int | float | str | datetime | date | None = None, end_timestamp: int | float | str | datetime | date | None = None) QueryJob#
Export the StreamingDataFrame as a continuous query job and return a QueryJob object for job-management functionality.
This method requires an existing Bigtable table preconfigured to accept the continuous query export statement. For instructions on exporting to Bigtable, see https://cloud.google.com/bigquery/docs/export-to-bigtable.
- Parameters:
instance (str) – The name of the Bigtable instance to export to.
table (str) – The name of the Bigtable table to export to.
service_account_email (str, default None) – Full email of the service account to run the continuous query. Example: accountname@projectname.gserviceaccounts.com. If not provided, the user account will be used, which limits the lifetime of the continuous query.
app_profile (str, default None) – The Bigtable app profile to export to. If None, no app profile is used.
truncate (bool, default False) – The export truncate option, see https://cloud.google.com/bigquery/docs/reference/standard-sql/other-statements#bigtable_export_option
overwrite (bool, default False) – The export overwrite option, see https://cloud.google.com/bigquery/docs/reference/standard-sql/other-statements#bigtable_export_option
auto_create_column_families (bool, default False) – The auto_create_column_families option, see https://cloud.google.com/bigquery/docs/reference/standard-sql/other-statements#bigtable_export_option
bigtable_options (dict, default None) – The Bigtable options dict, which will be converted to JSON using json.dumps; see https://cloud.google.com/bigquery/docs/reference/standard-sql/other-statements#bigtable_export_option. If None, no bigtable_options parameter is passed.
job_id (str, default None) – If specified, replaces the default job ID for the query; see the job_id parameter of https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.client.Client#google_cloud_bigquery_client_Client_query
job_id_prefix (str, default None) – If specified, a job ID prefix for the query; see the job_id_prefix parameter of https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.client.Client#google_cloud_bigquery_client_Client_query
start_timestamp (int, float, str, datetime, date, default None) – The starting timestamp for the query. Values may reach up to 7 days into the past. If no timestamp is specified (None), the query defaults to the earliest possible time, 7 days ago. A time-zone-naive timestamp is treated as UTC.
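The start_timestamp rules above can be illustrated with a small standard-library sketch. The helper below is not part of bigframes; it only mirrors the documented behavior for datetime inputs (the int, float, str, and date forms are omitted for brevity):

```python
from datetime import datetime, timedelta, timezone

def normalize_start_timestamp(ts):
    # Illustrative helper, not a bigframes API: applies the documented
    # start_timestamp rules to a datetime (or None) input.
    if ts is None:
        # Default: the earliest possible time, 7 days in the past.
        return datetime.now(timezone.utc) - timedelta(days=7)
    if ts.tzinfo is None:
        # A time-zone-naive timestamp is treated as UTC.
        return ts.replace(tzinfo=timezone.utc)
    # Time-zone-aware timestamps pass through unchanged.
    return ts

naive = datetime(2024, 6, 1, 12, 0, 0)
print(normalize_start_timestamp(naive).tzinfo)  # timezone.utc
```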
- Returns:
See https://cloud.google.com/python/docs/reference/bigquery/latest/google.cloud.bigquery.job.QueryJob. The ongoing query job can be managed using this object; for example, the job can be cancelled or its error status can be examined.
- Return type:
google.cloud.bigquery.QueryJob
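A minimal usage sketch follows. All project, instance, table, and service-account names are placeholders, and the read_gbq_table constructor is an assumption (how the StreamingDataFrame is created may vary by bigframes version); running this requires a configured Google Cloud project and a Bigtable table preconfigured as described above:

```python
def start_bigtable_export():
    # Placeholder names throughout; this sketch is not runnable without
    # a configured Google Cloud project and an existing Bigtable table.
    import bigframes.streaming

    # Assumption: a StreamingDataFrame built from a BigQuery table.
    sdf = bigframes.streaming.read_gbq_table("my_project.my_dataset.my_table")

    job = sdf.to_bigtable(
        instance="my-instance",
        table="my-table",
        # A dedicated service account avoids the lifetime limit of
        # user-account-based continuous queries.
        service_account_email="exporter@my-project.iam.gserviceaccount.com",
        truncate=True,
        overwrite=True,
        auto_create_column_families=True,
        job_id_prefix="bt-export-",
    )
    # The returned QueryJob can be managed, e.g. cancelled:
    # job.cancel()
    return job
```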