AEO Foundation
The first certification where you put the work from Profound 101 into practice. Completing it does not require access to Profound's platform.

Target Audience: Marketers and Strategists
Skills: AEO Theory, Diagnostic Reasoning
Submission Type: Theoretical Assessment
Rubric
Submissions are evaluated across six dimensions, each scored on a 1–3 scale.

| Category | Weight | What we're evaluating | 3 - Excellent | 2 - Meets expectations | 1 - Needs improvement |
|---|---|---|---|---|---|
| Prompt Construction | 10% | Evaluated in: Diagnostic Walkthrough. The compound-job prompt is the foundation: if the prompt is weak, the entire diagnosis collapses. | Compound-job prompt with a clear product, specific situation, and at least one constraint that forces the engine to reason. Tight enough that different brands could plausibly win or lose depending on their fit. You can tell the person thought about what kind of answer this would produce before they typed it. | Has a product and a situation, but the constraint is weak or generic. Produces a usable answer but doesn't really test the engine's decision logic. | Single-intent query ("best CRM," "top project management tools"). No situation, no constraint. The engine can answer with a ranked list and no reasoning. |
| Diagnostic Precision | 30% | Evaluated in: Diagnostic Walkthrough (winner and loser diagnosis). This is the most heavily weighted dimension because it's the core skill: can you look at a page and see the structure underneath it? | Navigates to both the winner's and the loser's websites on camera. For the winner: points to specific structural elements (the billboard, extractable answer blocks, page architecture) and names the gate or gates they're passing. For the loser: identifies the specific structural failure and names the failing gate, with a clear line from the page problem to the AI answer outcome. The diagnosis feels like one connected argument, not two separate observations: "This page works because X, and that page fails because Y, and that's why the AI answer looks the way it does." | Visits both pages and makes generally correct observations, but the evidence is thin on at least one side. Gates are named but not tightly connected to specific page elements. The evaluator believes the candidate sees the right things, but the reasoning isn't fully articulated. Or one diagnosis is sharp and the other is vague. | Doesn't visit one or both websites. Gives surface-level explanations ("their content is good" / "they need to optimize"). Gates are named as vocabulary words rather than diagnostic findings. No connection between page structure and AI answer outcome. |
| Data Reading | 20% | Evaluated in: Citation Table Read; Answer Reading portion of Diagnostic Walkthrough. Can you look at data and read it strategically, in the right order, without treating it as a scoreboard? | In the Citation Table Read: follows the prescribed reading sequence, narrates each step, and explains why the order matters. Interprets the domain mix and the "Mentioned on Page" patterns to draw a strategic conclusion about HubSpot's position in the citation supply chain. Cross-references citation share with presence/absence to identify where the real gaps and opportunities are. In the Diagnostic Walkthrough: reads the AI answer before jumping to any website. Narrates what they observe about citation patterns, brand positioning, and answer structure. The evaluator can hear that this person reads data as a map, not a scoreboard. | Right general approach in both pieces, but the interpretation stays surface-level. Follows roughly the right sequence but doesn't explain why it matters. Notes the domain mix or the answer structure without drawing a strategic conclusion from it. Gets to the right neighborhood but doesn't arrive at a sharp insight. | Reads the citation table as a ranking ("the page with the most citations is..."). Starts with counts rather than source types. In the walkthrough, skims the AI answer without narrating observations, or skips straight to a website. No evidence of strategic reading in either piece. |
| Strategic Prescription | 15% | Evaluated in: Diagnostic Walkthrough (fix); Citation Table Read (next action). Can you translate what you see into something someone could act on tomorrow? | In the Diagnostic Walkthrough: proposes one specific fix for the loser that follows logically from the diagnosis. "Rewrite the H1 and first 200 words to address the compound job, add a structured comparison block. Generate-phase fix." Placed in the correct SAGE phase. Executable. In the Citation Table Read: states a specific next action grounded in the data. "The highest-cited pages where HubSpot isn't mentioned are these review sites. Next move is earned media outreach to get included on those pages." Specific enough to hand to a team member. | Fix and next action are both directionally correct but lack specificity. "They need to redo their landing page" or "we should focus on third-party coverage," without naming what, where, or how. Or one is sharp and the other is vague. SAGE phase may be missing or slightly off. | Generic in both pieces. "Improve their AEO." "Create more content." "Optimize for AI." No SAGE phase. Not executable. Or the recommendation doesn't follow from the diagnosis. |
| Conceptual Reasoning | 15% | Evaluated in: Conceptual Defense (written). Can you engage with a plausible-sounding strategic argument and identify exactly where the logic breaks down? | Correct conclusion, stated clearly. Names the specific conceptual distinction the course draws about fan-out transformations. Explains why the colleague's logic fails even though parts of it seem right. Proposes a corrected approach that uses the same data but applies it differently. Specific enough that the colleague could change their plan based on this response. | Correct conclusion, but the explanation of the distinction is fuzzy. Uses the right vocabulary without fully articulating the concept. Or the corrected approach is present but vague. The evaluator can tell the candidate is close but hasn't nailed the distinction cleanly. | Wrong conclusion. Or agrees with the colleague because the recommendation sounds like good SEO practice. Or gives a binary answer without engaging with the nuance. No evidence that the candidate understands what fan-out transformations actually represent in the course's framework. |
| Framework Fluency | 10% | This dimension is scored holistically across all three pieces. It asks: does this person use the Profound 101 frameworks as thinking tools, or as vocabulary words? | The Three Gates, SAGE, and course concepts show up naturally throughout the submission. The candidate uses them to structure their reasoning, not to decorate it. You can tell the frameworks shaped how they thought about the problem rather than being bolted on after the fact. They might not use the exact course language every time, but the logic is clearly there. | Frameworks are present and correctly applied, but feel somewhat mechanical. The candidate names the right concepts at the right moments, but it sounds more like a checklist than a way of thinking. The evaluator believes the candidate knows the frameworks but hasn't fully internalized them yet. | Frameworks are absent, misapplied, or used as vocabulary words without understanding. "This is a Gate 2 problem" with no explanation of what that means in context. SAGE phase is named without connection to the diagnosis. Or the candidate doesn't reference the frameworks at all and relies on general marketing intuition. |
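
The table gives each dimension's weight but doesn't state how the six scores roll up into an overall result. As a rough illustration only, here is a minimal sketch assuming a simple weighted average on the 1–3 scale; the aggregation method, the `weighted_score` helper, and the example scores are assumptions for intuition, not part of the official rubric.

```python
# Minimal sketch: combining per-dimension rubric scores.
# ASSUMPTION: a weighted average of 1-3 scores; the rubric itself
# does not specify how dimension scores combine.

WEIGHTS = {
    "Prompt Construction": 0.10,
    "Diagnostic Precision": 0.30,
    "Data Reading": 0.20,
    "Strategic Prescription": 0.15,
    "Conceptual Reasoning": 0.15,
    "Framework Fluency": 0.10,
}  # weights sum to 1.00

def weighted_score(scores: dict[str, int]) -> float:
    """Roll six 1-3 dimension scores into one total on the same 1-3 scale."""
    assert set(scores) == set(WEIGHTS), "score every dimension exactly once"
    assert all(1 <= s <= 3 for s in scores.values()), "scores use the 1-3 scale"
    return sum(WEIGHTS[dim] * s for dim, s in scores.items())

# Hypothetical submission: sharp diagnosis and conceptual defense,
# "meets expectations" everywhere else.
example = {
    "Prompt Construction": 2,
    "Diagnostic Precision": 3,
    "Data Reading": 2,
    "Strategic Prescription": 2,
    "Conceptual Reasoning": 3,
    "Framework Fluency": 2,
}
print(round(weighted_score(example), 2))  # 2.45
```

Even under this simple assumption, the weighting is visible: an "Excellent" in Diagnostic Precision moves the total three times as much as the same score in Prompt Construction or Framework Fluency, which is why the diagnosis is where submissions are won or lost.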