bigframes.bigquery.ml.generate_text

bigframes.bigquery.ml.generate_text(model: BaseEstimator | str | Series, input_: DataFrame | str, *, temperature: float | None = None, max_output_tokens: int | None = None, top_k: int | None = None, top_p: float | None = None, flatten_json_output: bool | None = None, stop_sequences: List[str] | None = None, ground_with_google_search: bool | None = None, request_type: str | None = None) → DataFrame

Generates text using a BigQuery ML model.

See the BigQuery ML GENERATE_TEXT function syntax for additional reference.

Parameters:
  • model (bigframes.ml.base.BaseEstimator or str) – The model to use for text generation.

  • input_ (bigframes.pandas.DataFrame or str) – The DataFrame or query to use for text generation.

  • temperature (float, optional) – A FLOAT64 value that controls the degree of randomness in token selection. The value must be in the range [0.0, 1.0]. A lower temperature works well for prompts that expect a more deterministic and less open-ended or creative response, while a higher temperature can lead to more diverse or creative results. A temperature of 0 is deterministic, meaning that the highest probability response is always selected.

  • max_output_tokens (int, optional) – An INT64 value that sets the maximum number of tokens in the generated text.

  • top_k (int, optional) – An INT64 value that changes how the model selects tokens for output. A top_k of 1 means the next selected token is the most probable among all tokens in the model’s vocabulary. A top_k of 3 means that the next token is selected from among the three most probable tokens by using temperature. The default value is 40.

  • top_p (float, optional) – A FLOAT64 value that changes how the model selects tokens for output. Tokens are selected from most probable to least probable until the sum of their probabilities equals the top_p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top_p value is 0.5, then the model will select either A or B as the next token by using temperature. The default value is 0.95.

  • flatten_json_output (bool, optional) – A BOOL value that determines whether the JSON content returned by the function is parsed into separate columns.

  • stop_sequences (List[str], optional) – An ARRAY<STRING> value that specifies the stop sequences for the model; generation halts when one of these strings is produced.

  • ground_with_google_search (bool, optional) – A BOOL value that determines whether to ground the model's responses with Google Search results.

  • request_type (str, optional) – A STRING value that specifies the request type sent to the model.
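The top_k and top_p descriptions above can be sketched as plain Python. This is an illustrative model of the documented selection semantics, not library code: both parameters narrow the candidate set before temperature-based sampling picks the next token.

```python
# Illustrative sketch of the documented top_k / top_p semantics.
# Not library code: generate_text applies these server-side.

def candidate_tokens(probs, top_k=None, top_p=None):
    """Return tokens eligible for sampling, most probable first."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]      # keep only the k most probable tokens
    if top_p is not None:
        kept, total = [], 0.0
        for token, p in ranked:
            kept.append(token)
            total += p
            if total >= top_p:       # stop once cumulative mass reaches top_p
                break
        ranked = [(t, probs[t]) for t in kept]
    return [t for t, _ in ranked]

# The worked example from the top_p description: probabilities
# A=0.3, B=0.2, C=0.1 with top_p=0.5 leave A and B as candidates.
print(candidate_tokens({"A": 0.3, "B": 0.2, "C": 0.1}, top_p=0.5))  # ['A', 'B']
```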

Returns:

A DataFrame containing the generated text.

Return type:

bigframes.pandas.DataFrame
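A minimal usage sketch follows. The model name `us.gemini_model` and the prompt DataFrame are placeholders, and the actual call needs an active BigQuery session, so it is shown commented out; the runnable part is a small helper that collects keyword arguments while enforcing the documented parameter ranges.

```python
# Hedged usage sketch. `us.gemini_model` is a hypothetical remote model name;
# the generate_text call itself requires a BigQuery connection.

def generation_options(temperature=None, top_k=None, top_p=None,
                       max_output_tokens=None):
    """Collect non-None options, checking the documented ranges."""
    if temperature is not None and not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be in [0.0, 1.0]")
    if top_p is not None and not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be in [0.0, 1.0]")
    opts = {
        "temperature": temperature,
        "top_k": top_k,
        "top_p": top_p,
        "max_output_tokens": max_output_tokens,
    }
    return {k: v for k, v in opts.items() if v is not None}

opts = generation_options(temperature=0.2, max_output_tokens=128)
print(opts)  # {'temperature': 0.2, 'max_output_tokens': 128}

# With an active session, the call would look like:
# import bigframes.pandas as bpd
# import bigframes.bigquery as bbq
# prompts = bpd.DataFrame({"prompt": ["Summarize BigQuery ML in one sentence."]})
# result = bbq.ml.generate_text("us.gemini_model", prompts, **opts)
```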