Plan restrictions apply: bulk export is only available on LangSmith Plus or Enterprise tiers.
This guide covers:
- Setting up a GCS bucket and HMAC credentials for LangSmith.
- Creating a bulk export destination and export job.
- Loading the exported data into BigQuery.
Prerequisites
- Data in your LangSmith Tracing project.
- gcloud CLI installed. (You can also use the Google Cloud console for setup.)
1. Create a GCS bucket
Create a dedicated GCS bucket for LangSmith exports. Using a dedicated bucket makes it easier to grant scoped permissions without affecting other data.
2. Create a service account and grant access
Create a GCP service account that LangSmith will use to write data to GCS. At minimum, the service account needs storage.objects.create on the bucket. Granting storage.objects.delete is optional, but recommended: LangSmith uses it to clean up a temporary test file created during destination validation. If this permission is absent, a tmp/ folder may remain in your bucket.
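Steps 1 and 2 can be sketched with gcloud. The bucket name, region, project ID, and service account name below are placeholders, not values from this guide:

```shell
# Create a dedicated bucket for LangSmith exports (name and location are examples).
gcloud storage buckets create gs://YOUR_BUCKET_NAME --location=us-central1

# Create the service account LangSmith will write with.
gcloud iam service-accounts create langsmith-export \
  --display-name="LangSmith bulk export"

# Grant Storage Object Admin on this bucket only; the role covers the
# required and recommended object permissions listed below.
gcloud storage buckets add-iam-policy-binding gs://YOUR_BUCKET_NAME \
  --member="serviceAccount:langsmith-export@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```

Scoping the binding to the bucket (rather than the project) keeps the service account from touching any other data.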
The “Storage Object Admin” predefined role covers all required and recommended permissions:
- storage.objects.create (required)
- storage.objects.delete (optional, for test file cleanup)
- storage.objects.get (optional but recommended, for file size verification)
- storage.multipartUploads.create (optional but recommended, for large file uploads)
3. Generate HMAC keys
LangSmith connects to GCS using the S3-compatible XML API, which requires HMAC keys rather than a service account JSON key. Generate HMAC keys for your service account and save the accessId and secret from the output. You can also generate HMAC keys in the GCP Console under Cloud Storage → Settings → Interoperability → Create a key for a service account.
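With the gcloud CLI, HMAC key generation looks like this (the service account email is a placeholder matching the earlier examples):

```shell
# Create HMAC keys tied to the service account.
# The output includes the accessId and secret; store both securely.
gcloud storage hmac create \
  langsmith-export@YOUR_PROJECT_ID.iam.gserviceaccount.com
```

The secret is shown only once at creation time, so capture it immediately.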
4. Create a bulk export destination
Create a destination in LangSmith pointing to your GCS bucket. Set endpoint_url to https://storage.googleapis.com to use the GCS S3-compatible API.
You will need your LangSmith API key and workspace ID.
prefix is a path within the bucket where LangSmith will write exported files. For example, langsmith-exports or data/traces. Choose any value that works for your bucket layout.
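A sketch of the destination request with curl. The endpoint path and field names follow the LangSmith bulk export API as I understand it; verify them against the current API reference before use:

```shell
# Create a bulk export destination pointing at the GCS bucket.
# All YOUR_* values are placeholders.
curl -X POST "https://api.smith.langchain.com/api/v1/bulk-exports/destinations" \
  -H "X-API-Key: YOUR_LANGSMITH_API_KEY" \
  -H "X-Tenant-Id: YOUR_WORKSPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "destination_type": "s3",
    "display_name": "GCS export destination",
    "config": {
      "bucket_name": "YOUR_BUCKET_NAME",
      "prefix": "YOUR_PREFIX",
      "endpoint_url": "https://storage.googleapis.com"
    },
    "credentials": {
      "access_key_id": "YOUR_HMAC_ACCESS_ID",
      "secret_access_key": "YOUR_HMAC_SECRET"
    }
  }'
```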
LangSmith validates the credentials by performing a test write before saving the destination. If the request returns a 400 error, refer to Debug destination errors.
Save the id from the response; you will need it in the next step.
Temporary validation file
During destination creation (and credential rotation), LangSmith writes a temporary .txt file to YOUR_PREFIX/tmp/ to verify write access, then attempts to delete it. The deletion is best-effort: if the service account lacks storage.objects.delete, the file is not deleted and the tmp/ folder remains in your bucket.
The tmp/ folder does not affect exports, but it will be included in broad GCS URI globs (e.g., gs://YOUR_BUCKET_NAME/YOUR_PREFIX/*).
5. Create a bulk export job
Create an export targeting a specific project. Use format_version: v2_beta for BigQuery compatibility—it produces UTC timezone-aware timestamps that BigQuery handles correctly.
You will need the project ID (session_id), which you can copy from the project view in the Tracing Projects list.
One-time export:
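A hedged sketch of a one-time export request; the endpoint path and body fields follow the LangSmith bulk export API as I understand it, so check them against the current API reference:

```shell
# Create a one-time export of a project over a fixed time window.
# YOUR_DESTINATION_ID is the id saved from the destination response;
# all other YOUR_* values are placeholders.
curl -X POST "https://api.smith.langchain.com/api/v1/bulk-exports" \
  -H "X-API-Key: YOUR_LANGSMITH_API_KEY" \
  -H "X-Tenant-Id: YOUR_WORKSPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "bulk_export_destination_id": "YOUR_DESTINATION_ID",
    "session_id": "YOUR_PROJECT_ID",
    "start_time": "2024-06-01T00:00:00Z",
    "end_time": "2024-06-15T00:00:00Z",
    "format_version": "v2_beta"
  }'
```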
Output file structure
Exported files land in GCS using a Hive-partitioned path structure. The partition keys (export_id, tenant_id, session_id, resource, year, month, day) are available as queryable columns in BigQuery when Hive partition detection is enabled.
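An illustrative path under this layout (the partition values and file name are placeholders):

```
gs://YOUR_BUCKET_NAME/YOUR_PREFIX/export_id=<uuid>/tenant_id=<uuid>/session_id=<uuid>/resource=runs/year=2024/month=6/day=15/part-0.parquet
```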
6. Load data into BigQuery
BigQuery offers two ways to access your exported data. Both require first granting the BigQuery service account read access to your GCS bucket. Choose based on your needs:
- External table: data stays in GCS and BigQuery queries it in place. No storage costs in BigQuery, but query performance is slower than native storage. Refer to Required roles.
- Native table: data is copied into BigQuery storage. Faster queries and full support for BigQuery features, but incurs BigQuery storage costs. Refer to Required permissions.
Create the table
- External table
- Native table
An external table queries data directly from GCS without copying it into BigQuery.
- In the BigQuery console, expand your project and dataset in the Explorer pane.
- Click the dataset’s Actions menu (three dots) and select Create table.
- Under Source:
- Set Create table from to Google Cloud Storage.
- Set the file path to gs://YOUR_BUCKET_NAME/YOUR_PREFIX/export_id=*. Using export_id=* scopes BigQuery to Hive-partitioned export directories and excludes the tmp/ folder that LangSmith writes during destination validation (see Temporary validation file).
- Set File format to Parquet.
- Check Source data partitioning, then:
- Set Source URI prefix to gs://YOUR_BUCKET_NAME/YOUR_PREFIX.
- Set Partition inference mode to Automatically infer types.
- Under Destination:
- Select your project and dataset.
- Enter a table name, for example langsmith_runs.
- Set Table type to External table.
- Under Schema, enable Auto-detect.
- Click Create table.
The partition keys (export_id, tenant_id, session_id, resource, year, month, day) are available as queryable columns. Filter on year, month, or day in your queries to enable partition pruning.
Credential rotation
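For example, a pruned query via the bq CLI. The project, dataset, and table names are placeholders, and the partition column types depend on what BigQuery inferred (integers are assumed here):

```shell
# Filtering on the inferred partition columns lets BigQuery skip
# files outside the requested day.
bq query --nouse_legacy_sql '
SELECT COUNT(*) AS run_count
FROM `YOUR_PROJECT.YOUR_DATASET.langsmith_runs`
WHERE year = 2024 AND month = 6 AND day = 15'
```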
To rotate your HMAC keys without interrupting active exports:
- Generate new HMAC keys in GCP for the same service account.
- Call the PATCH endpoint with the new credentials. LangSmith validates the new credentials with a test write before saving. A new tmp/ file may appear in your bucket during this validation (see Temporary validation file).
- Keep old HMAC keys active until all in-flight export runs complete. Both credential sets are valid simultaneously during the transition window.
- Delete the old HMAC keys in GCP once you have confirmed no in-flight runs are using them.
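The PATCH call for rotation can be sketched as follows; the endpoint path and body shape follow the same assumed API structure as destination creation, so verify them against the current API reference:

```shell
# Rotate credentials on an existing destination (all YOUR_*/NEW_* values
# are placeholders).
curl -X PATCH "https://api.smith.langchain.com/api/v1/bulk-exports/destinations/YOUR_DESTINATION_ID" \
  -H "X-API-Key: YOUR_LANGSMITH_API_KEY" \
  -H "X-Tenant-Id: YOUR_WORKSPACE_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "credentials": {
      "access_key_id": "NEW_HMAC_ACCESS_ID",
      "secret_access_key": "NEW_HMAC_SECRET"
    }
  }'
```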
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| 400 Access denied on destination creation | HMAC credentials lack write permission | Verify the service account has storage.objects.create on the bucket |
| 400 Key ID you provided does not exist | HMAC access ID is invalid | Regenerate HMAC keys in GCP |
| 400 Invalid endpoint | Endpoint URL is malformed | Use exactly https://storage.googleapis.com |
| BigQuery table shows no rows | Export not yet complete | Check export status with GET /api/v1/bulk-exports/{export_id} |
| BigQuery partition pruning not working | Incorrect source URI prefix | Ensure the source URI prefix ends before the first partition key, e.g. gs://BUCKET/PREFIX |
| BigQuery picks up tmp/ files | Broad file path glob | Use export_id=* in your file path instead of * |