# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Format LLM output using an output schema


This notebook shows you how to create structured LLM output by specifying an output schema when generating predictions with a Gemini model.

Costs

This tutorial uses billable components of Google Cloud:

  • BigQuery (compute)

  • BigQuery ML

  • Generative AI support on Vertex AI

Learn about BigQuery compute pricing, Generative AI support on Vertex AI pricing, and BigQuery ML pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.

Before you begin

Complete the tasks in this section to set up your environment.

Set up your Google Cloud project

The following steps are required, regardless of your notebook environment.

  1. Select or create a Google Cloud project. When you first create an account, you get a $300 credit towards your compute/storage costs.

  2. Make sure that billing is enabled for your project.

  3. Enable the following APIs:

  • BigQuery API

  • BigQuery Connection API

  • Vertex AI API

  4. If you are running this notebook locally, install the Google Cloud SDK.

Authenticate your Google Cloud account

Depending on your Jupyter environment, you might have to manually authenticate. Follow the relevant instructions below.

BigQuery Studio or Vertex AI Workbench

Do nothing; you are already authenticated.

Local JupyterLab instance

Uncomment and run the following cell:

# ! gcloud auth login

Colab

Uncomment and run the following cell:

# from google.colab import auth
# auth.authenticate_user()

Set up your project

Set your project and import necessary modules. If you don’t know your project ID, see Locate the project ID.

PROJECT = "" # replace with your project
import bigframes
bigframes.options.bigquery.project = PROJECT
bigframes.options.display.progress_bar = None

import bigframes.pandas as bpd
from bigframes.ml import llm

Create a DataFrame and a Gemini model

Create a simple DataFrame of several cities:

df = bpd.DataFrame({"city": ["Seattle", "New York", "Shanghai"]})
df
/usr/local/google/home/garrettwu/src/bigframes/bigframes/core/global_session.py:103: DefaultLocationWarning: No explicit location is set, so using location US for the session.
  _global_session = bigframes.session.connect(
city
0 Seattle
1 New York
2 Shanghai

3 rows × 1 columns


Connect to a Gemini model using the GeminiTextGenerator class:

gemini = llm.GeminiTextGenerator()
/usr/local/google/home/garrettwu/src/bigframes/bigframes/core/log_adapter.py:175: FutureWarning: Since upgrading the default model can cause unintended breakages, the
default model will be removed in BigFrames 3.0. Please supply an
explicit model to avoid this message.
  return method(*args, **kwargs)

Generate structured output data

Previously, LLMs could only generate unstructured text output. For example, you could generate output that identifies whether a given city is a US city:

result = gemini.predict(df, prompt=[df["city"], "is a US city?"])
result[["city", "ml_generate_text_llm_result"]]
/usr/local/google/home/garrettwu/src/bigframes/bigframes/core/array_value.py:109: PreviewWarning: JSON column interpretation as a custom PyArrow extention in
`db_dtypes` is a preview feature and subject to change.
  warnings.warn(msg, bfe.PreviewWarning)
city ml_generate_text_llm_result
0 Seattle Yes, Seattle is a city in the United States. I...
1 New York Yes, New York City is a city in the United Sta...
2 Shanghai No, Shanghai is not a US city. It is a major c...

3 rows × 2 columns


The output is text that a human can read. For analysis, however, structured data is more useful, especially when you want Boolean, integer, or float values to work with instead of strings. Previously, formatting the output this way wasn't easy.
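For instance, a manual approach might parse the free text with string heuristics. The following is a sketch with plain pandas, assuming the result has been materialized locally (for example via `to_pandas()`); the column values are copied from the output above for illustration:

```python
import pandas as pd

# Illustrative local copy of the free-text predictions shown above.
local = pd.DataFrame({
    "city": ["Seattle", "New York", "Shanghai"],
    "ml_generate_text_llm_result": [
        "Yes, Seattle is a city in the United States.",
        "Yes, New York City is a city in the United States.",
        "No, Shanghai is not a US city.",
    ],
})

# Fragile heuristic: treat answers that start with "Yes" as True.
local["is_us_city"] = local["ml_generate_text_llm_result"].str.startswith("Yes")
```

This breaks as soon as the model phrases an answer differently (for example, "Indeed, ..."), which is exactly the problem an output schema solves.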

Now, you can get structured output out-of-the-box by specifying the output_schema parameter when calling the Gemini model’s predict method. In the following example, the model output is formatted as Boolean values:

result = gemini.predict(df, prompt=[df["city"], "is a US city?"], output_schema={"is_us_city": "bool"})
result[["city", "is_us_city"]]
city is_us_city
0 Seattle True
1 New York True
2 Shanghai False

3 rows × 2 columns

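Because `is_us_city` is a real Boolean column, you can use it directly in DataFrame operations, with no string parsing. The following is a minimal sketch with plain pandas, assuming the result has been materialized locally; values are copied from the output above:

```python
import pandas as pd

# Illustrative local copy of the structured result shown above.
local = pd.DataFrame({
    "city": ["Seattle", "New York", "Shanghai"],
    "is_us_city": [True, True, False],
})

# A Boolean mask selects the US cities directly.
us_cities = local[local["is_us_city"]]["city"].tolist()
```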

You can also format model output as float or integer values. In the following example, the model output is formatted as float values to show the city’s population in millions:

result = gemini.predict(df, prompt=["what is the population in millions of", df["city"]], output_schema={"population_in_millions": "float64"})
result[["city", "population_in_millions"]]
city population_in_millions
0 Seattle 0.75
1 New York 19.68
2 Shanghai 26.32

3 rows × 2 columns


In the following example, the model output is formatted as integer values to show the count of the city’s rainy days:

result = gemini.predict(df, prompt=["how many rainy days per year in", df["city"]], output_schema={"rainy_days": "int64"})
result[["city", "rainy_days"]]
city rainy_days
0 Seattle 152
1 New York 123
2 Shanghai 123

3 rows × 2 columns


Format output as multiple data types in one prediction

Within a single prediction, you can generate multiple columns of output that use different data types.

The input doesn't have to contain dedicated prompts, as long as the output column names are informative enough for the model.

result = gemini.predict(df, prompt=[df["city"]], output_schema={"is_US_city": "bool", "population_in_millions": "float64", "rainy_days_per_year": "int64"})
result[["city", "is_US_city", "population_in_millions", "rainy_days_per_year"]]
city is_US_city population_in_millions rainy_days_per_year
0 Seattle True 0.75 152
1 New York True 8.8 121
2 Shanghai False 26.32 115

3 rows × 4 columns


Format output as a composite data type

You can generate composite data types like arrays and structs. The following example generates a places_to_visit column as an array of strings and a gps_coordinates column as a struct of floats:

result = gemini.predict(df, prompt=[df["city"]], output_schema={"is_US_city": "bool", "population_in_millions": "float64", "rainy_days_per_year": "int64", "places_to_visit": "array<string>", "gps_coordinates": "struct<latitude float64, longitude float64>"})
result[["city", "is_US_city", "population_in_millions", "rainy_days_per_year", "places_to_visit", "gps_coordinates"]]
city is_US_city population_in_millions rainy_days_per_year places_to_visit gps_coordinates
0 Seattle True 0.74 150 ['Space Needle' 'Pike Place Market' 'Museum of... {'latitude': 47.6062, 'longitude': -122.3321}
1 New York True 8.4 121 ['Times Square' 'Central Park' 'Statue of Libe... {'latitude': 40.7128, 'longitude': -74.006}
2 Shanghai False 26.32 115 ['The Bund' 'Yu Garden' 'Shanghai Museum' 'Ori... {'latitude': 31.2304, 'longitude': 121.4737}

3 rows × 6 columns

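Downstream, the composite columns can be flattened with ordinary DataFrame operations. The following is a sketch with plain pandas, assuming the result has been materialized locally so that array values arrive as Python lists and struct values as dicts; the row is abbreviated from the output above:

```python
import pandas as pd

# Illustrative local copy of one row of the composite result shown above.
local = pd.DataFrame({
    "city": ["Seattle"],
    "places_to_visit": [["Space Needle", "Pike Place Market"]],
    "gps_coordinates": [{"latitude": 47.6062, "longitude": -122.3321}],
})

# Explode the array column: one row per place of interest.
places = local.explode("places_to_visit")

# Expand the struct column: its fields become ordinary float columns.
coords = pd.json_normalize(local["gps_coordinates"].tolist())
flat = pd.concat([local[["city"]], coords], axis=1)
```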

Clean up

To clean up all Google Cloud resources, you can delete the Google Cloud project that you used for this tutorial.

Otherwise, run the following cell to delete the temporary cloud artifacts created during the BigFrames session:

bpd.close_session()

Next steps

Learn more about BigQuery DataFrames in the documentation and find more sample notebooks in the GitHub repo.