bigframes.pandas.DataFrame.to_csv#
- DataFrame.to_csv(path_or_buf=None, sep=',', *, header: bool = True, index: bool = True, allow_large_results: bool | None = None) str | None[source]#
Write object to a comma-separated values (csv) file on Cloud Storage.
- Parameters:
path_or_buf (str, path object, file-like object, or None, default None) –
String, path object (implementing os.PathLike[str]), or file-like object implementing a write() function. If None, the result is returned as a string. If a non-binary file object is passed, it should be opened with newline='', disabling universal newlines. If a binary file object is passed, mode might need to contain a 'b'.
Alternatively, a destination URI of Cloud Storage file(s) in which to store the extracted dataframe, in the format
gs://<bucket_name>/<object_name_or_glob>. If the data size is more than 1 GB, you must use a wildcard to export the data into multiple files; the sizes of those files will vary.
None, file-like objects, and local file paths are not yet supported.
sep (str, default ',') – String of length 1. Field delimiter for the output file.
header (bool, default True) – If True, write out the column names.
index (bool, default True) – If True, write row names (index).
allow_large_results (bool, default None) – If not None, overrides the global setting to allow or disallow large query results over the default size limit of 10 GB. This parameter has no effect when results are saved to Google Cloud Storage (GCS).
- Returns:
If path_or_buf is None, returns the resulting csv format as a string. Otherwise returns None.
- Return type:
None or str
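A minimal sketch of the two usage modes. The path_or_buf=None case mirrors pandas semantics, so it is shown here with plain pandas; the Cloud Storage destination (bucket name and wildcard path are placeholders) is shown as a comment, since it requires a GCP session:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# With path_or_buf=None, the CSV content is returned as a string.
csv_text = df.to_csv(index=False)
print(csv_text)
# a,b
# 1,3
# 2,4

# With a bigframes DataFrame, a gs:// URI writes to Cloud Storage instead;
# a wildcard is required when the result exceeds 1 GB (placeholder bucket):
# bf_df.to_csv("gs://my-bucket/result_*.csv", index=False)
```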