{ "cells": [ { "cell_type": "code", "execution_count": 1, "id": "acd53f9d", "metadata": {}, "outputs": [], "source": [ "# Copyright 2025 Google LLC\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "id": "e75ce682", "metadata": {}, "source": [ "# BigQuery DataFrames (BigFrames) AI Functions\n", "\n", "\n", "\n", " \n", " \n", " \n", "
\n", " \n", " \"Colab Run in Colab\n", " \n", " \n", " \n", " \"GitHub\n", " View on GitHub\n", " \n", " \n", " \n", " \"BQ\n", " Open in BQ Studio\n", " \n", "
" ] }, { "cell_type": "markdown", "id": "aee05821", "metadata": {}, "source": [ "This notebook provides a brief introduction to AI functions in BigQuery DataFrames." ] }, { "cell_type": "markdown", "id": "1232f400", "metadata": {}, "source": [ "## Preparation\n", "\n", "First, set up your BigFrames environment:" ] }, { "cell_type": "code", "execution_count": null, "id": "c9f924aa", "metadata": {}, "outputs": [], "source": [ "import bigframes.pandas as bpd\n", "\n", "PROJECT_ID = \"\"  # Your project ID here\n", "\n", "bpd.options.bigquery.project = PROJECT_ID\n", "bpd.options.bigquery.ordering_mode = \"partial\"\n", "bpd.options.display.progress_bar = None" ] }, { "cell_type": "markdown", "id": "e2188773", "metadata": {}, "source": [ "## ai.generate\n", "\n", "The `ai.generate` function lets you analyze any combination of text and unstructured data from BigQuery. You can mix BigFrames or pandas Series with string literals, passing them together as a tuple to form your prompt, or provide a single Series on its own. 
Here is an example:" ] }, { "cell_type": "code", "execution_count": 3, "id": "471a47fe", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/usr/local/google/home/sycai/src/python-bigquery-dataframes/bigframes/core/global_session.py:103: DefaultLocationWarning: No explicit location is set, so using location US for the session.\n", " _global_session = bigframes.session.connect(\n" ] }, { "data": { "text/plain": [ "0 {'result': 'Salad\\n', 'full_response': '{\"cand...\n", "1 {'result': 'Sausageroll\\n', 'full_response': '...\n", "dtype: struct>, status: string>[pyarrow]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import bigframes.bigquery as bbq\n", "\n", "ingredients1 = bpd.Series([\"Lettuce\", \"Sausage\"])\n", "ingredients2 = bpd.Series([\"Cucumber\", \"Long Bread\"])\n", "\n", "prompt = (\"What's the food made from \", ingredients1, \" and \", ingredients2, \" One word only\")\n", "bbq.ai.generate(prompt)" ] }, { "cell_type": "markdown", "id": "03953835", "metadata": {}, "source": [ "The function returns a series of structs. The `'result'` field holds the answer, while more metadata can be found in the `'full_response'` field. The `'status'` field tells you whether LLM made a successful response for that specific row. " ] }, { "cell_type": "markdown", "id": "b606c51f", "metadata": {}, "source": [ "You can also include additional model parameters into your function call, as long as they conform to the structure of `generateContent` [request body format](https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.endpoints/generateContent#request-body). In the next example, you use `maxOutputTokens` to limit the length of the generated content." 
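,
"\n",
"Beyond the single field used in the next cell, several `generationConfig` fields can be combined in one dictionary. The following is a rough sketch; the field values are illustrative assumptions, not recommendations:\n",
"\n",
"```python\n",
"# Hypothetical model_params combining two generationConfig fields;\n",
"# see the generateContent request body reference for the full list.\n",
"model_params = {\n",
"    \"generationConfig\": {\n",
"        \"maxOutputTokens\": 2,  # cap the length of each generated answer\n",
"        \"temperature\": 0.0,    # reduce randomness in the responses\n",
"    }\n",
"}\n",
"```"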
] }, { "cell_type": "code", "execution_count": 4, "id": "4a3229a8", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0 Lettuce\n", "1 The food\n", "Name: result, dtype: string" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model_params = {\n", "    \"generationConfig\": {\"maxOutputTokens\": 2}\n", "}\n", "\n", "ingredients1 = bpd.Series([\"Lettuce\", \"Sausage\"])\n", "ingredients2 = bpd.Series([\"Cucumber\", \"Long Bread\"])\n", "\n", "prompt = (\"What's the food made from \", ingredients1, \" and \", ingredients2)\n", "bbq.ai.generate(prompt, model_params=model_params).struct.field(\"result\")" ] }, { "cell_type": "markdown", "id": "3acba92d", "metadata": {}, "source": [ "The answers are cut short, as expected.\n", "\n", "In addition to `ai.generate`, you can use `ai.generate_bool`, `ai.generate_int`, and `ai.generate_double` for other output types." ] }, { "cell_type": "markdown", "id": "0bf9f1de", "metadata": {}, "source": [ "## ai.if_\n", "\n", "`ai.if_` generates a Series of booleans. It's a handy tool for joining and filtering your data, not only because it directly returns boolean values, but also because it enables additional query optimizations during data processing. Here is an example of using `ai.if_`:" ] }, { "cell_type": "code", "execution_count": 5, "id": "718c6622", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
creaturecategory
0Catmammal
1Salmonfish
\n", "

2 rows × 2 columns

\n", "
[2 rows x 2 columns in total]" ], "text/plain": [ "creature category\n", " Cat mammal\n", " Salmon fish\n", "\n", "[2 rows x 2 columns]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "creatures = bpd.DataFrame({\"creature\": [\"Cat\", \"Salmon\"]})\n", "categories = bpd.DataFrame({\"category\": [\"mammal\", \"fish\"]})\n", "\n", "joined_df = creatures.merge(categories, how=\"cross\")\n", "condition = bbq.ai.if_((joined_df[\"creature\"], \" is a \", joined_df[\"category\"]))\n", "\n", "# Filter our dataframe\n", "joined_df = joined_df[condition]\n", "joined_df" ] }, { "cell_type": "markdown", "id": "bb0999df", "metadata": {}, "source": [ "## ai.score" ] }, { "cell_type": "markdown", "id": "63b5a59f", "metadata": {}, "source": [ "`ai.score` ranks your input based on the prompt and assigns a double value (i.e. a score) to each item. You can then sort your data based on their scores. For example:" ] }, { "cell_type": "code", "execution_count": 6, "id": "6875fe36", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
animalsrelative_weight
1spider1.0
0tiger8.0
2blue whale10.0
\n", "

3 rows × 2 columns

\n", "
[3 rows x 2 columns in total]" ], "text/plain": [ " animals relative_weight\n", "1 spider 1.0\n", "0 tiger 8.0\n", "2 blue whale 10.0\n", "\n", "[3 rows x 2 columns]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = bpd.DataFrame({'animals': ['tiger', 'spider', 'blue whale']})\n", "\n", "df['relative_weight'] = bbq.ai.score((\"Rank the relative weight of \", df['animals'], \" on a scale from 1 to 10\"))\n", "df.sort_values(by='relative_weight')" ] }, { "cell_type": "markdown", "id": "1ed0dff1", "metadata": {}, "source": [ "## ai.classify" ] }, { "cell_type": "markdown", "id": "c56b91cf", "metadata": {}, "source": [ "`ai.classify` categorizes your inputs into the specified categories." ] }, { "cell_type": "code", "execution_count": 7, "id": "8cfb844b", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
animalcategory
0tigermammal
1spideranthropod
2blue whalemammal
3salmonfish
\n", "

4 rows × 2 columns

\n", "
[4 rows x 2 columns in total]" ], "text/plain": [ " animal category\n", "0 tiger mammal\n", "1 spider anthropod\n", "2 blue whale mammal\n", "3 salmon fish\n", "\n", "[4 rows x 2 columns]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = bpd.DataFrame({'animal': ['tiger', 'spider', 'blue whale', 'salmon']})\n", "\n", "df['category'] = bbq.ai.classify(df['animal'], categories=['mammal', 'fish', 'anthropod'])\n", "df" ] }, { "cell_type": "markdown", "id": "9e4037bc", "metadata": {}, "source": [ "Note that this function can only return values that are provided in the `categories` argument. If your categories do not cover all cases, you may get incorrect answers:" ] }, { "cell_type": "code", "execution_count": 9, "id": "2e66110a", "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
animalcategory
0tigermammal
1spidermammal
\n", "

2 rows × 2 columns

\n", "
[2 rows x 2 columns in total]" ], "text/plain": [ " animal category\n", "0 tiger mammal\n", "1 spider mammal\n", "\n", "[2 rows x 2 columns]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = bpd.DataFrame({'animal': ['tiger', 'spider']})\n", "\n", "df['category'] = bbq.ai.classify(df['animal'], categories=['mammal', 'fish']) # Spider belongs to neither category\n", "df" ] } ], "metadata": { "kernelspec": { "display_name": "venv (3.10.17)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.17" } }, "nbformat": 4, "nbformat_minor": 5 }