bigframes.pandas.read_csv#

bigframes.pandas.read_csv(filepath_or_buffer: str | IO[bytes], *, sep: str | None = ',', header: int | None = 0, names: MutableSequence[Any] | ndarray[Any, Any] | Tuple[Any, ...] | range | None = None, index_col: int | str | Sequence[str | int] | DefaultIndexKind | Literal[False] | None = None, usecols: MutableSequence[str] | Tuple[str, ...] | Sequence[int] | Series | Index | ndarray[Any, Any] | Callable[[Any], bool] | None = None, dtype: Dict | None = None, engine: Literal['c', 'python', 'pyarrow', 'python-fwf', 'bigquery'] | None = None, encoding: str | None = None, write_engine: Literal['default', 'bigquery_inline', 'bigquery_load', 'bigquery_streaming', 'bigquery_write', '_deferred'] = 'default', **kwargs) DataFrame[source]#

Loads data from a comma-separated values (csv) file into a DataFrame.

The CSV file data will be persisted as a temporary BigQuery table, which can be automatically recycled after the Session is closed.

Note

Using engine="bigquery" does not guarantee the same row ordering as the file. Instead, set a serialized index column as the index and sort by it in the resulting DataFrame. Only files stored on your local machine or in Google Cloud Storage are supported.

Note

For non-BigQuery engines, data is inlined in the query SQL if it is small enough (roughly 5 MB or less in memory). Larger data is loaded to a BigQuery table instead.

Examples:

>>> import bigframes.pandas as bpd
>>> gcs_path = "gs://cloud-samples-data/bigquery/us-states/us-states.csv"
>>> df = bpd.read_csv(filepath_or_buffer=gcs_path)
>>> df.head(2)
      name post_abbr
0  Alabama        AL
1   Alaska        AK

[2 rows x 2 columns]
Parameters:
  • filepath_or_buffer (str) – A local or Google Cloud Storage (gs://) path with engine="bigquery"; otherwise passed to pandas.read_csv.

  • sep (Optional[str], default ",") – The separator for fields in a CSV file. For the BigQuery engine, the separator can be any ISO-8859-1 single-byte character. To use a character in the range 128-255, you must encode the character as UTF-8. Both engines support sep="\t" to specify the tab character as the separator. The default engine supports any number of whitespace characters as the separator via sep="\s+". Separators longer than one character are interpreted as regular expressions by the default engine. The BigQuery engine only supports single-character separators.
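
    The separator behaviors described for the default engine can be sketched with pandas.read_csv directly, since that is where these options are forwarded (no BigQuery session needed for this illustration):

    ```python
    import io

    import pandas as pd  # the default engine forwards sep to pandas.read_csv

    # Tab-separated data read with sep="\t"
    tsv = io.StringIO("name\tpost_abbr\nAlabama\tAL\nAlaska\tAK\n")
    df_tab = pd.read_csv(tsv, sep="\t")

    # Whitespace-separated data read with the regex separator sep=r"\s+"
    # (multi-character separators are treated as regular expressions)
    ws = io.StringIO("name  post_abbr\nAlabama   AL\nAlaska    AK\n")
    df_ws = pd.read_csv(ws, sep=r"\s+")

    print(df_tab.columns.tolist())  # ['name', 'post_abbr']
    print(df_ws.shape)              # (2, 2)
    ```

    Neither form is accepted by the BigQuery engine, which only supports single-character separators.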

  • header (Optional[int], default 0) – Row number to use as the column names.
    - None: Instructs autodetect that there are no headers and data should be read starting from the first row.
    - 0: With engine="bigquery", autodetect tries to detect headers in the first row. If they are not detected, the row is read as data; otherwise data is read starting from the second row. With the default engine, pandas assumes the first row contains column names unless the names argument is specified. If names is provided, the first row is ignored, the second row is read as data, and column names are inferred from names.
    - N > 0: With engine="bigquery", autodetect skips N rows and tries to detect headers in row N+1. If headers are not detected, row N+1 is just skipped; otherwise row N+1 is used to extract column names for the detected schema. With the default engine, pandas skips N rows and assumes row N+1 contains column names unless the names argument is specified. If names is provided, row N+1 is ignored, row N+2 is read as data, and column names are inferred from names.
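
    The default-engine behavior of header=0 versus header=None can be sketched with pandas.read_csv, to which the option is forwarded:

    ```python
    import io

    import pandas as pd  # the default engine forwards header to pandas.read_csv

    data = "name,post_abbr\nAlabama,AL\nAlaska,AK\n"

    # header=0 (the default): the first row supplies the column names
    df_header = pd.read_csv(io.StringIO(data), header=0)

    # header=None: there is no header row, so every row is read as data
    # and the columns are labeled 0, 1, ...
    df_no_header = pd.read_csv(io.StringIO(data), header=None)

    print(df_header.shape)                    # (2, 2)
    print(df_no_header.shape)                 # (3, 2)
    print(df_no_header.columns.tolist())      # [0, 1]
    ```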

  • names (default None) – A list of column names to use. If the file contains a header row and you want to pass this parameter, then header=0 should be passed as well so the first (header) row is ignored.
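
    Combining names with header=0, as described above, can be sketched with pandas.read_csv (which the default engine uses under the hood):

    ```python
    import io

    import pandas as pd  # the default engine forwards names to pandas.read_csv

    data = "name,post_abbr\nAlabama,AL\nAlaska,AK\n"

    # header=0 discards the file's own header row; names supplies the labels
    df = pd.read_csv(io.StringIO(data), header=0, names=["state", "abbr"])

    print(df.columns.tolist())  # ['state', 'abbr']
    print(len(df))              # 2
    ```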

  • index_col (default None) – Column(s) to use as the row labels of the DataFrame, given either as a string name or a column index. index_col=False can be used with the default engine only, to enforce that the first column is not used as the index. Using a column index instead of a column name is only supported with the default engine. The BigQuery engine only supports a single column name as the index_col. Neither engine supports a multi-column index.
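
    A minimal sketch of the default-engine index_col behaviors, shown with pandas.read_csv since the option is forwarded there:

    ```python
    import io

    import pandas as pd  # the default engine forwards index_col to pandas.read_csv

    data = "name,post_abbr\nAlabama,AL\nAlaska,AK\n"

    # Use the "name" column as the row labels
    df = pd.read_csv(io.StringIO(data), index_col="name")

    # index_col=False keeps a default positional index (default engine only)
    df_no_index = pd.read_csv(io.StringIO(data), index_col=False)

    print(df.index.tolist())            # ['Alabama', 'Alaska']
    print(df_no_index.columns.tolist()) # ['name', 'post_abbr']
    ```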

  • usecols (default None) – List of columns to use. The BigQuery engine only supports a list of string column names. Column indices and callable functions are only supported with the default engine. With the default engine, the column names in usecols can be defined to correspond to column names provided with the names parameter (ignoring the document's header row of column names). The order of the column indices/names in usecols is ignored by the default engine; the order of column names provided to the BigQuery engine is preserved in the resulting DataFrame. If a callable is used with the default engine, only the columns for which it returns True appear in the resulting DataFrame.
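
    The callable form of usecols (default engine only) can be sketched with pandas.read_csv, to which the option is forwarded:

    ```python
    import io

    import pandas as pd  # callable usecols is a default-engine (pandas) feature

    data = "name,post_abbr,population\nAlabama,AL,5024279\nAlaska,AK,733391\n"

    # Keep only the columns for which the callable returns True
    df = pd.read_csv(io.StringIO(data), usecols=lambda col: col != "population")

    print(df.columns.tolist())  # ['name', 'post_abbr']
    ```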

  • dtype (data type for data or columns) – Data type for data or columns. Only supported with the default engine.
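
    A common use of dtype with the default engine, sketched via pandas.read_csv: mapping a column to str so values with leading zeros are not parsed as integers.

    ```python
    import io

    import pandas as pd  # the default engine forwards dtype to pandas.read_csv

    data = "name,fips\nAlabama,01\nAlaska,02\n"

    # Without dtype, "01" would be parsed as the integer 1; reading the
    # "fips" column as str preserves the leading zero
    df = pd.read_csv(io.StringIO(data), dtype={"fips": str})

    print(df["fips"].tolist())  # ['01', '02']
    ```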

  • engine (Optional[str], default None) – Type of engine to use. If engine="bigquery" is specified, BigQuery's load API is used. Otherwise, the engine is passed to pandas.read_csv.

  • encoding (Optional[str], default None) – The character encoding of the data. The default encoding is UTF-8 for both engines. The default engine accepts a wide range of encodings; refer to the Python documentation for a comprehensive list: https://docs.python.org/3/library/codecs.html#standard-encodings. The BigQuery engine only supports UTF-8 and ISO-8859-1.

  • write_engine (str) – How data should be written to BigQuery (if at all). See bigframes.pandas.read_pandas() for a full description of supported values.

  • **kwargs – Keyword arguments for pandas.read_csv when not using the BigQuery engine.

Returns:

A BigQuery DataFrames DataFrame.

Return type:

bigframes.pandas.DataFrame

Raises:

bigframes.exceptions.DefaultIndexWarning – Using the default index is discouraged, such as with clustered or partitioned tables without primary keys.