Generate personalized Databricks newsletters for customers. Queries Salesforce for account/UCO context, searches for relevant public Databricks content, generates a professional HTML email, and creates a Gmail draft ready to send.
Generate a personalized Databricks newsletter email for a customer account by combining Salesforce context with curated public Databricks content.
EVERY link in the newsletter MUST be curl-verified as HTTP 200 before inclusion. No exceptions. Databricks blog URLs frequently 404. Never guess or fabricate URLs. Always batch-verify with:
```shell
for url in <all_candidate_urls>; do
  code=$(curl -s -o /dev/null -w "%{http_code}" -L "$url")
  echo "$code - $url"
done
```
Only include links returning 200. If a link fails, find a replacement from the seed list or another verified source. Do NOT include any link that has not been verified in the current session.
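The batch-verify loop above can be wrapped in a small helper that keeps only the 200s. This is a sketch; `verify_links` and `verified.txt` are illustrative names, not part of the command spec:

```shell
# Sketch: keep only URLs that return HTTP 200 (assumes curl is available).
# verify_links and verified.txt are illustrative names.
verify_links() {
  local url code
  : > verified.txt                      # start a fresh allowlist each session
  for url in "$@"; do
    code=$(curl -s -o /dev/null -w "%{http_code}" -L "$url" || true)
    if [ "$code" = "200" ]; then
      echo "$url" >> verified.txt       # safe to include in the newsletter
    else
      echo "SKIP ($code): $url" >&2     # find a replacement instead
    fi
  done
}
```

Run it once over all candidate links, then build the email exclusively from verified.txt.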
When the user asks to create a customer newsletter, Databricks update email, or account newsletter for a specific account name.
The user provides:
- The account name (required) and optional flags.
- Optionally --since 2025-01-01 to widen the search window beyond the default.

Inline help: If the user passes help, --help, or ?, display:
```text
Usage: /customer-newsletter <Account Name> [--to [email protected]] [--from "Name,email"] [--since YYYY-MM-DD] [--reset] [--date YYYY-MM-DD] [--domain "topic"]

Generates a personalized Databricks newsletter for a customer:
  1. Queries Salesforce for account info and active Use Cases
  2. Searches web for relevant Databricks content (blogs, docs, case studies)
  3. Creates a professional HTML email with curated links
  4. Saves as Gmail draft ready to send
  5. Records sent links to avoid repeats in future newsletters

The sender (CTA button + footer) defaults to the Account's AE from Salesforce.
Override with --from to use a different sender.

Options:
  --since YYYY-MM-DD   Only include content after this date (default: 3 months ago)
  --reset              Clear send history for this account (fresh start)
  --date YYYY-MM-DD    Generate as if today were this date (for replays)
  --domain "topic"     Focus newsletter on a specific domain topic (6-8 links grouped by category)

Examples:
  /customer-newsletter SITA
  /customer-newsletter SITA --to [email protected]
  /customer-newsletter SITA --to [email protected] --from "Laurent Prat,[email protected]"
  /customer-newsletter SITA --since 2025-06-01             # wider content window
  /customer-newsletter SITA --reset --to [email protected]  # fresh start, no dedup
  /customer-newsletter SITA --date 2025-12-01              # replay Q4 2025 newsletter
  /customer-newsletter SITA --domain "Predictive Maintenance in Aviation"
  /customer-newsletter "MSC CARGO" --domain "Supply Chain Optimization" --to [email protected]
  /customer-newsletter help
```
Use the Salesforce CLI (/opt/homebrew/bin/sf) to query account and UCO data.
Step 1a: Get Account Info (including AE)
```shell
/opt/homebrew/bin/sf data query --query "SELECT Id, Name, Industry, Owner.Name, Owner.Email FROM Account WHERE Name LIKE '<account_name>%'" --json
```
Use a prefix match (Name LIKE 'SITA%'), not a contains match ('%SITA%'), to avoid returning too many accounts.
Selecting the right account when multiple matches are returned:
- Prefer the match whose Industry is non-null.

The Owner.Name and Owner.Email fields give us the Account Executive (AE). Use these as the sender in the CTA button and footer unless --from was explicitly provided.
Step 1b: Get Active Use Cases (U2-U6)
```shell
/opt/homebrew/bin/sf data query --query "SELECT Id, Name, Stages__c, Use_Case_Description__c, Implementation_Status__c, Demand_Plan_Next_Steps__c FROM UseCase__c WHERE Account__c = '<account_id>' AND Stages__c IN ('U2','U3','U4','U5','U6')" --json
```
Note on field names: The Account and UseCase__c objects do NOT have Products__c, Workloads__c, Platform__c, Cloud_Type__c, ACV__c, or ARR__c fields. Extract workload/product context from the UCO Name and Use_Case_Description__c fields instead.
Extract from the results: the Account Id, Name, Industry, and the AE's name and email (Owner.Name / Owner.Email), plus each active use case's name, description, stage, and implementation status.
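Pulling these fields out of the CLI output can be sketched with jq. This assumes the usual sf --json envelope ({"result":{"records":[...]}}) and that jq is installed; the sample record below is fabricated for illustration:

```shell
# Sketch: extract needed fields from the sf CLI's --json envelope.
# Assumes the {"result":{"records":[...]}} shape; sample data is fabricated.
sf_output='{"status":0,"result":{"records":[{"Id":"001XXEXAMPLE","Name":"SITA","Industry":"Travel","Owner":{"Name":"Jane Doe","Email":"jane.doe@example.com"}}],"totalSize":1}}'

account_id=$(printf '%s' "$sf_output" | jq -r '.result.records[0].Id')
ae_name=$(printf '%s' "$sf_output" | jq -r '.result.records[0].Owner.Name')
ae_email=$(printf '%s' "$sf_output" | jq -r '.result.records[0].Owner.Email')
echo "$account_id / $ae_name <$ae_email>"
```

The same pattern applies to the UseCase__c query; iterate over .result.records[] instead of taking index 0.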
Previously sent newsletters are stored in a shared Google Drive folder. Each newsletter HTML file contains a <!-- newsletter-meta --> comment with the full list of links used. This is the source of truth for deduplication — no local history files.
Drive structure: SA Customer Newsletters / <Account Name> / in Google Drive (shared with databricks.com domain as writer, so any SA can generate newsletters and dedup across the team).
Each customer gets its own subfolder (e.g., SA Customer Newsletters/SITA/, SA Customer Newsletters/MSC Cargo/). This keeps newsletters organized per account even if the AE changes.
Known Drive folder ID: The parent folder "SA Customer Newsletters" has ID 1AdcDxj-gA55j1ihLt0Ohbjyd3WtpISbN. Use this directly to skip the search step when possible.
Steps:
1. Find the "SA Customer Newsletters" parent folder (or use the known ID 1AdcDxj-gA55j1ihLt0Ohbjyd3WtpISbN directly):

```shell
TOKEN=$(gcloud auth application-default print-access-token)
curl -s "https://www.googleapis.com/drive/v3/files?q=name%3D'SA+Customer+Newsletters'+and+mimeType%3D'application/vnd.google-apps.folder'+and+trashed%3Dfalse&fields=files(id,name)" \
  -H "Authorization: Bearer $TOKEN" \
  -H "x-goog-user-project: gcp-sandbox-field-eng"
```
If the folder doesn't exist, create it (see Phase 4b).
2. Find the customer subfolder under the parent:

```shell
PARENT_FOLDER="<parent_folder_id>"
ACCOUNT_NAME="<Account Name>"   # URL-encode spaces in the query, e.g. MSC%20Cargo

# Search for existing customer subfolder
curl -s "https://www.googleapis.com/drive/v3/files?q='${PARENT_FOLDER}'+in+parents+and+name%3D'${ACCOUNT_NAME}'+and+mimeType%3D'application/vnd.google-apps.folder'+and+trashed%3Dfalse&fields=files(id,name)" \
  -H "Authorization: Bearer $TOKEN" \
  -H "x-goog-user-project: gcp-sandbox-field-eng"
```
If not found, create it as a child of the parent folder.
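Creating the subfolder uses a Drive v3 files.create request. The sketch below is a dry run under the assumption that $TOKEN is set as in the earlier step: it builds the JSON payload and prints the curl command rather than executing it:

```shell
# Sketch: build the Drive v3 payload for creating a per-account subfolder.
# Dry run: the curl command is printed, not executed.
ACCOUNT_NAME="SITA"
PARENT_FOLDER="1AdcDxj-gA55j1ihLt0Ohbjyd3WtpISbN"

payload=$(printf '{"name":"%s","mimeType":"application/vnd.google-apps.folder","parents":["%s"]}' \
  "$ACCOUNT_NAME" "$PARENT_FOLDER")

echo "curl -s -X POST 'https://www.googleapis.com/drive/v3/files'" \
     "-H 'Authorization: Bearer \$TOKEN'" \
     "-H 'x-goog-user-project: gcp-sandbox-field-eng'" \
     "-H 'Content-Type: application/json'" \
     "-d '$payload'"
```

The response's id field becomes CUSTOMER_FOLDER for the following steps.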
3. List previous newsletters in the customer folder:

```shell
CUSTOMER_FOLDER="<customer_folder_id>"
curl -s "https://www.googleapis.com/drive/v3/files?q='${CUSTOMER_FOLDER}'+in+parents+and+trashed%3Dfalse&fields=files(id,name,createdTime)" \
  -H "Authorization: Bearer $TOKEN" \
  -H "x-goog-user-project: gcp-sandbox-field-eng"
```
4. For each newsletter file, download its content and read the <!-- newsletter-meta --> comment:

```shell
FILE_CONTENT=$(curl -s "https://www.googleapis.com/drive/v3/files/${FILE_ID}?alt=media" \
  -H "Authorization: Bearer $TOKEN" \
  -H "x-goog-user-project: gcp-sandbox-field-eng")

# Extract links from metadata comment
echo "$FILE_CONTENT" | grep -o '<!-- newsletter-meta:.*-->' | head -1
```
5. Parse the links= field from each metadata comment to build the previously sent links set. If --reset was passed, skip loading history (treat it as an empty set).

Note: The newsletter HTML file IS the history record; no separate JSON files are needed. This makes history shareable across SAs and machines.
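The parsing and dedup can be sketched with standard text tools. The comma-separated links= format inside the meta comment is assumed from the convention above, and the sample HTML is fabricated:

```shell
# Sketch: parse links= out of the newsletter-meta comment, then drop
# already-sent URLs from a candidate list (sample HTML is fabricated).
html='<html><!-- newsletter-meta: links=https://a.example/1,https://a.example/2 --></html>'

# Previously sent links, one per line, sorted for comm(1)
sent=$(printf '%s\n' "$html" \
  | grep -o '<!-- newsletter-meta:[^>]*-->' \
  | sed -n 's/.*links=\([^ ]*\).*/\1/p' \
  | tr ',' '\n' | sort -u)

candidates=$(printf '%s\n' 'https://a.example/2' 'https://b.example/3' | sort -u)

# comm -13 keeps lines unique to candidates, i.e. never sent before
fresh=$(comm -13 <(printf '%s\n' "$sent") <(printf '%s\n' "$candidates"))
echo "$fresh"
```

With multiple previous newsletters, concatenate all parsed links before the sort -u so the sent set covers the whole history.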
Based on the customer context from Phase 1, curate 4-6 relevant public Databricks links.
IMPORTANT - Deduplication: Exclude any URL found in the previously sent links set (from Phase 1b). If a candidate link was already sent in a previous newsletter for this account, skip it and find a different one. This ensures each quarterly newsletter has fresh content.
Strategy: Try WebSearch first, fall back to known URLs if unavailable.
Date window: Determine the content search window:
- If --since was provided: use that date as the start.
- If --date was provided: default to 3 months before that date.
- Otherwise: default to 3 months before today.

Step A: Try WebSearch for content matching the customer's products, industry, and workloads.
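The default-window computation can be sketched portably; DATE_OVERRIDE is an illustrative variable standing in for a parsed --date value:

```shell
# Sketch: window start = 3 months before the reference date.
# DATE_OVERRIDE stands in for a parsed --date value (illustrative name).
ref_date="${DATE_OVERRIDE:-$(date +%F)}"

# GNU date first ("N months ago" applied to a base date), BSD/macOS fallback
since=$(date -d "$ref_date 3 months ago" +%F 2>/dev/null \
  || date -j -v-3m -f %F "$ref_date" +%F)
echo "Content window starts: $since"
```

An explicit --since simply replaces $since and skips this computation.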
When --domain is set: Replace UCO-based search strategy with domain-focused searches across 6 content categories. Run WebSearch queries targeting each category:
| Category | Search query pattern | Section title in email |
|---|---|---|
| Reference Architecture | databricks "<domain>" reference architecture | Reference Architectures |
| Tutorials & Guides | databricks "<domain>" tutorial OR guide OR how-to | Tutorials & How-To Guides |
| Public Demos & Notebooks | databricks "<domain>" demo OR notebook OR example site:github.com OR site:databricks.com | Demos & Examples |
| Blog Articles | databricks "<domain>" blog site:databricks.com/blog | Articles & Insights |
| Case Studies | databricks "<domain>" case study OR customer story site:databricks.com | Customer Stories |
| Documentation | databricks "<domain>" site:docs.databricks.com | Documentation |
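Expanding the table's query patterns for a given --domain value is plain string substitution; a minimal sketch, showing only two of the six categories:

```shell
# Sketch: substitute the --domain value into per-category query patterns
# (two of the six categories shown).
domain="Predictive Maintenance in Aviation"

queries=$(for pattern in \
    'databricks "%s" reference architecture' \
    'databricks "%s" site:docs.databricks.com'; do
  # $pattern intentionally holds the printf format string
  printf "${pattern}\n" "$domain"
done)
echo "$queries"
```

Each expanded line is then issued as one WebSearch query, and the results are grouped under the matching section title in the email.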
Apply the --since date window where relevant.

When --domain is NOT set (standard mode):
Step B: If WebSearch fails, use known-good Databricks URLs from this seed list (all verified 200 as of Mar 2026):
# Product pages