bigframes.bigquery.ai.generate_table
- bigframes.bigquery.ai.generate_table(model: BaseEstimator | str | Series, data: DataFrame | Series, *, output_schema: str, temperature: float | None = None, top_p: float | None = None, max_output_tokens: int | None = None, stop_sequences: List[str] | None = None, request_type: str | None = None) → DataFrame
Generates a table using a BigQuery ML model.
See the AI.GENERATE_TABLE function syntax for additional reference.
Examples:
>>> import bigframes.pandas as bpd
>>> import bigframes.bigquery as bbq
>>> # The user is responsible for constructing a DataFrame that contains
>>> # the necessary columns for the model's prompt. For example, a
>>> # DataFrame with a 'prompt' column for text classification.
>>> df = bpd.DataFrame({'prompt': ["some text to classify"]})
>>> result = bbq.ai.generate_table(
...     "project.dataset.model_name",
...     data=df,
...     output_schema="category STRING"
... )
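A further, hedged sketch (illustrative only; the project, dataset, model path, and column names are placeholders): it renames an existing column to ‘prompt’ as required by the data parameter, requests a multi-column output schema, and passes several of the optional generation parameters.
>>> reviews = bpd.DataFrame({'review_text': ["great battery life", "screen cracked on day one"]})
>>> # generate_table expects a 'prompt' column, so rename the column holding the prompt text.
>>> reviews = reviews.rename(columns={'review_text': 'prompt'})
>>> result = bbq.ai.generate_table(
...     "project.dataset.model_name",   # placeholder model path
...     data=reviews,
...     output_schema="sentiment STRING, rating INT64",
...     temperature=0.2,
...     max_output_tokens=64,
...     stop_sequences=["END"]
... )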
- Parameters:
model (bigframes.ml.base.BaseEstimator or str) – The model to use for table generation.
data (bigframes.pandas.DataFrame or bigframes.pandas.Series) – The data used to generate the table. If a Series is provided, it is treated as the ‘prompt’ column. If a DataFrame is provided, it must contain a ‘prompt’ column; if it does not, rename the column that holds the prompt text to ‘prompt’ (see the second example above).
output_schema (str) – A string defining the output schema (e.g., “col1 STRING, col2 INT64”).
temperature (float, optional) – A FLOAT64 value that controls the degree of randomness in token selection during sampling. The value must be in the range [0.0, 1.0].
top_p (float, optional) – A FLOAT64 value that changes how the model selects tokens for output.
max_output_tokens (int, optional) – An INT64 value that sets the maximum number of tokens in the generated table.
stop_sequences (List[str], optional) – An ARRAY<STRING> value that contains the stop sequences for the model.
request_type (str, optional) – A STRING value that contains the request type for the model.
- Returns:
The generated table.
- Return type:
bigframes.pandas.DataFrame
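As a minimal sketch of the model-object form, assuming bigframes.ml.llm.GeminiTextGenerator is available in your bigframes version and is accepted here as a BaseEstimator (an assumption, not verified against every release):
>>> from bigframes.ml import llm
>>> model = llm.GeminiTextGenerator()  # assumes a default BigQuery connection/session is configured
>>> df = bpd.DataFrame({'prompt': ["Summarize: BigQuery DataFrames pushes pandas-style code down to BigQuery."]})
>>> summary = bbq.ai.generate_table(
...     model,
...     data=df,
...     output_schema="summary STRING"
... )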