Key Concepts
- Decomposition - The core skill of breaking any recurring marketing workflow into its parts: what triggers it, what data it needs, where it forks, and who gets the output. This skill transfers across domains, teams, and tools.
- Multi-source pattern - Pulling data from two or more sources in parallel (e.g., Google Search Console + AI citation data), then merging them into a single LLM node for cross-referencing and judgment.
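A minimal Python sketch of the multi-source pattern: two pulls run in parallel, then merge into one payload for a single LLM node. Both fetchers are hypothetical stand-ins with hard-coded values, not real Search Console or citation-data clients.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_search_console(urls):
    # Stub: a real node would call the Google Search Console API
    return {url: {"clicks": 120, "impressions": 4300} for url in urls}

def fetch_ai_citations(urls):
    # Stub: a real node would query an AI citation-share source
    return {url: {"citation_share": 0.04} for url in urls}

def collect_in_parallel(urls):
    # Run both pulls concurrently, then merge for the judgment node
    with ThreadPoolExecutor(max_workers=2) as pool:
        gsc = pool.submit(fetch_search_console, urls)
        ai = pool.submit(fetch_ai_citations, urls)
        return {"search_console": gsc.result(), "ai_citations": ai.result()}

merged = collect_in_parallel(["https://example.com/pricing"])
prompt = f"Cross-reference these sources and flag significant shifts:\n{merged}"
```

The key design point: both sources land in one prompt, so the LLM can compare them directly instead of judging each in isolation.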
Quick Reference
The repeating agent architecture
Every agent in this lesson follows the same skeleton:
- Trigger - scheduled or manual
- Inputs - 3-4 parameters scoped to what matters (not everything)
- Data collection - one or more nodes pulling from integrations, knowledge bases, or scraped content
- LLM judgment node - cross-references the collected data and returns structured output (always includes a Boolean + supporting detail fields)
- Conditional branch - the Boolean drives a fork
- Left path (action needed) - LLM formats a report, brief, or alert → delivers via Slack, Gmail, or Google Docs
- Right path (all clear) - code node returns a clean "nothing to report" message
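The skeleton above can be sketched as one generic function: collect, judge, then fork on the Boolean. All the wired-in lambdas are hypothetical stubs, and the thresholds are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Judgment:
    action_needed: bool                 # the Boolean that drives the fork
    details: dict = field(default_factory=dict)

def run_agent(inputs, collect, judge, act, all_clear):
    """Generic skeleton: data collection -> LLM judgment -> conditional branch."""
    data = collect(inputs)
    verdict = judge(inputs, data)
    if verdict.action_needed:           # left path: format and deliver a report
        return act(verdict)
    return all_clear()                  # right path: plain "nothing to report"

# Example wiring with stubs (a made-up traffic-drop check)
result = run_agent(
    inputs={"urls": ["https://example.com"]},
    collect=lambda i: {"clicks": 40, "last_week": 120},
    judge=lambda i, d: Judgment(d["clicks"] < 0.5 * d["last_week"],
                                {"drop": d["last_week"] - d["clicks"]}),
    act=lambda v: f"ALERT: traffic dropped by {v.details['drop']} clicks",
    all_clear=lambda: "Nothing to report",
)
```

Swapping the `collect`, `judge`, and `act` callables is all it takes to turn this into any of the four agents below.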
Four agents, same building blocks
- Content Visibility Monitor (content team): Inputs are 3 high-value page URLs. Pulls Google Search Console data + AI citation share in parallel. LLM cross-references traditional search performance against AI visibility. Outputs a Slack report only when a significant shift is detected.
- Post-Event Lead Router (demand gen): Inputs are attendee data, product name, webinar title, and the sales Slack channel. A knowledge base provides product context. The LLM scores attendees into hot/warm/no-show tiers. Hot leads get personalized context briefs posted to the sales Slack channel, and tiered follow-up email drafts are saved to Google Docs.
- Brand Coverage Monitor (comms): Inputs are brand name, competitor names, monitoring keywords. Two Google search nodes (brand + competitor, filtered to last 7 days) plus a knowledge base pull for messaging pillars. LLM classifies coverage by sentiment. Negative coverage triggers an urgent Slack alert with a response brief; routine coverage gets a daily digest.
- Brand Consistency Auditor (brand): Inputs are 3 page URLs. Firecrawl scrapes the pages, an LLM node cleans the raw HTML, knowledge base provides brand guidelines. Audit LLM compares content against guidelines and returns findings + severity. Inconsistencies generate a Google Doc report emailed to the brand manager via Gmail.
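The Brand Coverage Monitor's fork can illustrate how classification output drives routing: negative items go to an urgent alert, everything else rolls into the daily digest. The sentiment labels here are assumed to come from the upstream LLM node; the article data and channel names are invented.

```python
def route_coverage(articles):
    """Fork coverage by sentiment: negative -> urgent alert, rest -> digest."""
    urgent = [a for a in articles if a["sentiment"] == "negative"]
    digest = [a for a in articles if a["sentiment"] != "negative"]
    messages = []
    if urgent:
        messages.append(("slack_urgent",
                         f"{len(urgent)} negative stories need a response brief"))
    if digest:
        messages.append(("daily_digest",
                         f"{len(digest)} routine mentions summarized"))
    return messages

coverage = [
    {"headline": "Acme outage angers users", "sentiment": "negative"},
    {"headline": "Acme ships new feature", "sentiment": "positive"},
]
routed = route_coverage(coverage)
```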
Explaining marketing engineering to a non-technical person
Three value pillars to hit:
- Capacity recovered - recurring manual work (pulling reports, triaging coverage, comparing week-over-week) runs automatically
- Quality ingrained - output consistency doesn't depend on who did the work or how rushed they were
- New capabilities unlocked - things that weren't feasible manually now exist (daily competitor pricing monitoring, hundreds of pages audited in minutes, personalized lead briefs two hours after a webinar)