I2A Intelligence Engine · Adobe Summit 2026
Content Compass

The intelligence
layer that makes
content personal.

1,589 pieces of content. 9 buyer personas. 25 product intelligence rubrics. 158 themes that connect what a buyer cares about to what they should read next.

1,589
Content records
9
Buyer personas
25
Product Intelligence Rubrics
Unique journeys
possible
Scroll to explore

Agents are smart. Content libraries aren't.

AI agents can reason, synthesise and act — but when they go looking for content to inform a research journey, they hit a wall. Thousands of assets with no structured signal about who they're for, what stage they serve, or which priorities they address. Content Compass is the bridge.

AGENT WITHOUT CONTENT INTELLIGENCE
AGENT QUERY "Find Adobe content relevant to a CMO evaluating journey orchestration"
Adobe Experience Cloud Overview
What is Journey Orchestration? — Blog
Adobe Summit 2024 Keynote Highlights
AJO Product Page
Marketo Engage Customer Stories

No persona signal. No stage awareness. No play alignment. Generic results for a specific need.

CONTENT COMPASS · WHAT YOU GET

Recommendations customised to your priorities — not a list of results, but a structured research journey built around the dimensions of what matters to you.

CX capabilities you care about
Content mapped to the specific Adobe capabilities most relevant to your priorities — journey orchestration, unified data, content at scale, AI — not the full catalogue.
Proof points that match your context
Customer success stories from companies like yours, third-party research and analyst reports — surfaced alongside product content so you're building credibility, not just awareness.
Paced to how you consume content
Quick reads under 5 minutes. Focused articles of 5–15 minutes. Deep research for 15+. Each week's plan fits the time you have — 30 minutes, a focused session, or a 2-hour block.
Validated by peer perspectives
Recommendations shaped by how similar buyers with similar priorities have navigated their research — persona-aligned signals that reflect what people like you actually find valuable.
Persona-matched  ·  Stage-aware  ·  Play-aligned  ·  Time-aware  ·  Peer-validated

From priorities to a personalised journey in under two seconds.

Themes are the secret sauce — the connective tissue between what a buyer says and what content will actually matter to them.

01 · Buyer input
What they tell us
"I need to prove marketing ROI to the board, scale content with AI, and build personalisation without adding headcount."
CMO · B2B Tech · Initial Research
02 · Theme matching
Scale content creation with Gen-AI
Maximize ROI of content
Use/implement AI solutions that are business-safe
Streamline content & creative workflows
Manage customer engagement across lifecycle
Drive revenue growth
03 · Product Intelligence Rubrics
GenStudio for Performance Marketing
Adobe Workfront
AEM Assets
Overall AI
Agentic AI
04 · Ranked content
Prudential transforms creative workflows with Workfront
How Adobe transformed its content supply chain
Scale content personalization for high-tech success
Adobe marketers use generative AI to create on-brand content
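The four-step flow above can be sketched in miniature. This is an illustrative JavaScript sketch, not the production logic: the `rubricScores` values and the `rankContent` helper are invented for the example, and the real system scores against the full 25-rubric set.

```javascript
// Illustrative only: a tiny library with hypothetical per-rubric scores.
const library = [
  { title: "Prudential transforms creative workflows with Workfront",
    rubricScores: { "Adobe Workfront": 9, "AEM Assets": 3 } },
  { title: "Adobe marketers use generative AI to create on-brand content",
    rubricScores: { "Overall AI": 8, "GenStudio for Performance Marketing": 7 } },
];

// 04 · Rank content by its combined score against the rubrics
// selected for this buyer in steps 02–03.
function rankContent(items, rubrics) {
  return items
    .map(item => ({
      title: item.title,
      score: rubrics.reduce((sum, r) => sum + (item.rubricScores[r] || 0), 0),
    }))
    .sort((a, b) => b.score - a.score);
}
```

With the rubrics matched for the CMO above, the generative-AI story outranks the Workfront case study; flip the rubric list and the order flips with it.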

Themes are the connective tissue.

158 themes mapped across 9 personas and 25 product intelligence rubrics. Each theme is a bridge — connecting what a buyer cares about to the plays that address it and the content that proves it. The rubric isn't keywords. It's structured intelligence.

"Scale content creation with Gen-AI"
GenStudio · Overall AI · Brand Concierge
"Maximize ROI of content"
Workfront · Content Analytics · GenStudio
"Use/implement AI solutions that are business-safe & enterprise-ready"
Agentic AI · Generative AI · Overall AI
"Establish a unified view of customers"
RTCDP · CJA
"Ensure brand consistency"
GenStudio · AEM Assets · Workfront
"Expand searchability in AI-native channels"
LLM Optimizer · AEM Sites Optimizer
"Improve performance with real-time analytics"
CJA · Mix Modeler · RTCDP
"Automate manual tasks with AI"
Agentic AI · Overall AI · Workfront
"Prove marketing ROI to the board"
CJA · Mix Modeler · Content Analytics
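A minimal sketch of this mapping as a lookup table, using a handful of the pairings above. The `themeRubrics` structure and the `rubricsForThemes` helper are hypothetical names for illustration; the real rubric carries richer structure than a flat list of product names.

```javascript
// Hypothetical sketch of the theme → product-rubric mapping.
// Pairings mirror the examples above; the production schema may differ.
const themeRubrics = {
  "Scale content creation with Gen-AI": ["GenStudio", "Overall AI", "Brand Concierge"],
  "Maximize ROI of content": ["Workfront", "Content Analytics", "GenStudio"],
  "Establish a unified view of customers": ["RTCDP", "CJA"],
  "Prove marketing ROI to the board": ["CJA", "Mix Modeler", "Content Analytics"],
};

// Given the themes matched from a buyer's input, collect the
// product rubrics to score content against (deduplicated).
function rubricsForThemes(themes) {
  const seen = new Set();
  for (const theme of themes) {
    for (const rubric of themeRubrics[theme] || []) seen.add(rubric);
  }
  return [...seen];
}
```

Two themes that share a rubric (here, Content Analytics) contribute it once, so downstream scoring never double-counts a product.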

Scoring logic that turns content into intelligence.

Every piece of content is evaluated against the rubrics and assigned scores for persona fit, journey stage, and product alignment. Those scores power every recommendation.

01
Content is evaluated
Every piece of content is assessed against the full theme library. Does it address the CX capabilities buyers care about? How deeply? How specifically?
02
~200 Intelligence Scores are created per item
Each item is scored across 9 personas, 25 product intelligence rubrics, and 3 journey stages — generating ~200 individual scores per piece of content, based on theme alignment and specificity.
03
Fit tiers are assigned
Content that scores in the top 10% for each Intelligence Score is classified as Best Fit — strongly aligned to this persona and product. The next 20% become Good Match — relevant and worth exploring.
04
Logic powers the journey
Best Fit and Good Match items are ranked, filtered by content tier (IR / GD / GS), and assembled into a progressive research journey — the right content at the right stage.
A design decision worth knowing
Longer content naturally scores higher — more words means more theme coverage. So Best Fit tiers are calculated within content groupings and length bands, ensuring short reads, articles, and deep-dives all have equal representation in recommendations. A 3-minute blog post competes fairly with a 20-minute report.
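The tiering rule (top 10% Best Fit, next 20% Good Match) combined with the length-band correction can be sketched as follows. Function names and the band partition are illustrative assumptions, not the shipped code.

```javascript
// Sketch of tier assignment within one peer group:
// top 10% of scores → "Best Fit", next 20% → "Good Match".
function assignTiers(items) {
  // Rank items by score, highest first.
  const ranked = [...items].sort((a, b) => b.score - a.score);
  const n = ranked.length;
  const bestCut = Math.max(1, Math.floor(n * 0.10)); // top 10%
  const goodCut = bestCut + Math.floor(n * 0.20);    // next 20%
  return ranked.map((item, i) => ({
    ...item,
    tier: i < bestCut ? "Best Fit" : i < goodCut ? "Good Match" : "Other",
  }));
}

// Tiering per length band keeps a 3-minute blog competing only
// against other short reads, never against 20-minute reports.
function tierByLengthBand(items, bandOf) {
  const bands = new Map();
  for (const item of items) {
    const band = bandOf(item.minutes);
    if (!bands.has(band)) bands.set(band, []);
    bands.get(band).push(item);
  }
  return [...bands.values()].flatMap(assignTiers);
}
```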

Research structured to support how buying decisions actually unfold.

Every journey follows a progressive arc — from awareness to proof to evaluation. The system detects where a buyer is and adjusts accordingly. While today we offer a focused set of journeys, customers and their agents can tailor the arc to their own priorities, timelines, and ways of working.

Today: curated journeys across 9 personas & 3 formats  ·  Tomorrow: fully agent-tailored journeys built around your specific needs
WEEK 01
Build your foundation
Reports, thought leadership, problem framing. Understand the landscape before evaluating solutions.
Initial Research
WEEK 02
Go deeper on what matters
Customer success stories, articles, proof points. See how others have solved the same problems.
Go Deeper
WEEK 03
Evaluate your options
Product-specific content, demos, solution briefs. Build the business case and compare approaches.
Get Specific
WEEK 04
Refine & share
Decision-stage content, ROI proof, executive summaries. Prepare to act and get stakeholders aligned.
Get Specific

BUILT FOR 9 PERSONAS

Chief Marketing Officer
Chief Information Officer
Marketing Leader
IT Leader
MarTech Leader
Marketing Practitioner
IT Practitioner
Creative Leader
Creative Practitioner

9 months of R&D. 4 days to build the demo.

The Content Compass 1.0 solution is the product of nine months of research, experiments and pilots. What you're seeing today — this agentic experience — was built from scratch in four days with Claude. Two very different timelines. One intelligence layer.

1,589
Content records scored after quality-auditing 88 mismatched records
IR: 696 · GD: 437 · GS: 456 · source-validated BF/GM tiers
158
Unique buyer themes across persona and product intelligence rubrics
113 persona · 149 product plays · 158 unique
120+
JavaScript functions powering scoring, journeys, compare and theme logic
Single HTML file · no backend
20+
Versions built and iterated in real-time with Claude
~500 dev-hour equivalent
1,375
Content items enriched with theme detection data
Each read 10× by AI model · score stored per theme · threshold ≥ 5/10 reads
25
Product Intelligence Rubrics — scored and comparable side by side
GenStudio · RTCDP · Agentic AI · and more
9mo
R&D behind Content Compass 1.0 — research, experiments and pilots
GTM Strategy × Experience Intelligence × Data & Tech
4d
To build this agentic experience from scratch with Claude
Single HTML file · no backend · fully live

Built for humans and agents — from day one.

Thousands of Adobe GTM practitioners make content strategy decisions every day — and increasingly, so do their agents. GTM Strategy partnered with Experience Intelligence to build a universal intelligence layer for both: humans and agents making better, faster, more consistent decisions at scale.

Built by
GTM Strategy
Enterprise Marketing
×
In partnership with
Experience Intelligence
Adobe
&
In partnership with
Data & Tech
Enterprise Marketing
Built to serve
Marketers · BDRs · AEs · Web Team · Internal Agents
Content Compass is not an official Adobe product.
Crawl
Helping GTM practitioners make better content decisions
Walk
Helping GTM practitioners make better decisions with agents
Run
Helping agents make better decisions with human coaches & guides

We’re here
to learn from you.

We built this for internal agents. Now we’re asking: can it serve external agents too? We don’t have the answer — and we want to hear yours.

We are in learning mode. Your perspective shapes what we build next.
Open questions we’d love your take on
01
Can Content Compass rubrics and themes serve external agents as well as internal ones?
02
How should Adobe develop an approach where agents are an audience — not just a channel?
03
What Adobe intelligence do your agents wish they could access today?

Key choices behind how this works.

Intelligent recommendations are only partly about scoring. The rest is judgment — choices about what matters, what to fix, and what comes next. Here are the ones that shaped Content Compass.

Design Choices
Foundational decisions that define how the system thinks
🎯
Play specificity beats broad appeal
Content that scores well across many product plays isn't necessarily the best recommendation. A piece deeply relevant to one play is more valuable than one vaguely relevant to all of them. We reward specificity and penalise breadth — so Content Compare surfaces focused, differentiated content rather than the same universally-high-scoring pieces every time.
📐
Best Fit is relative to your peer group
A play score of 72 among case studies can be Good Match — because that's a competitive group with hundreds of strong pieces. The same score among blog posts would be Best Fit. We calculate tiers within each content format and length band, so every piece of content competes fairly against its true peers, not the whole library.
⚖️
Content length is a fairness problem
Longer content naturally covers more themes and scores higher. Left unaddressed, every Best Fit recommendation would be a 15+ minute report. We calculate fit tiers within content length bands — so a 3-minute blog competes fairly with a deep-dive whitepaper. Both earn their place.
🧩
Theme detection is a confidence score, not a flag
Each piece of content was read ten times by an AI model. A theme detected in 10/10 reads is unmistakably present. Detected in 5/10 — it's there, but softer. We store the actual reliability score, not just a binary flag, so Content Compare shows not just which themes are covered but how confidently — darker checkmarks for stronger signals.
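A minimal sketch of confidence-based theme detection, using the figures from the text: ten reads per item, themes kept at five or more detections. The function and field names are hypothetical.

```javascript
const READS = 10;    // each piece is read 10 times by the model
const THRESHOLD = 5; // keep a theme detected in ≥ 5 of 10 reads

// detections: { themeName: numberOfReadsThatDetectedIt }
// Returns only themes above the threshold, with their reliability
// stored as a fraction rather than a binary flag.
function themeConfidence(detections) {
  const confident = {};
  for (const [theme, hits] of Object.entries(detections)) {
    if (hits >= THRESHOLD) confident[theme] = hits / READS; // e.g. 0.7
  }
  return confident;
}
```

Storing the fraction is what lets the UI render darker checkmarks for a 10/10 theme than for a 6/10 one.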
Improvements Adopted
Issues we found through testing and chose to fix
🗂️
Data quality is non-negotiable
We identified 88 records where title and URL didn't match — a site migration artefact. And we found a 16.4% misclassification rate where the prototype's scoring formula diverged from the source methodology. Both fixed. Serving a buyer a mislabelled piece, or calling something Best Fit when it isn't, breaks trust faster than any gap in coverage.
🔍
Get Specific needs a different lens
Our audit found the Get Specific tier defaulting to the same broad product overview pages for every persona. We fixed this by requiring product pages at GS tier to match the active plays — so a Creative Director sees Firefly and GenStudio pages, not an AJO overview. Late-stage content should narrow, not repeat.
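The fix can be sketched as a simple filter: product pages survive at the Get Specific tier only if they match one of the buyer's active plays. Item shapes and names here are assumptions for illustration.

```javascript
// Hypothetical sketch of the Get Specific fix: non-product content
// passes through, but product pages must match an active play.
function filterGetSpecific(items, activePlays) {
  const plays = new Set(activePlays);
  return items.filter(
    item => item.type !== "product-page" || item.plays.some(p => plays.has(p))
  );
}
```

For a Creative Director with Firefly and GenStudio plays active, an AJO overview page is dropped while articles and matching product pages remain.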
🤝
Human judgment can't be automated away
We audited the pre-built journeys with a synthetic expert reviewer. The scoring model surfaces the right content. What it can't do yet is understand that a B2B SaaS buyer doesn't relate to a B2C consumer brand story, even if the themes overlap. Industry rubrics, buyer context signals, and human curation are all still part of the roadmap.
Roadmap
What comes next — the intelligence still being built
🏭
What's still ahead: industry context
A financial services CIO and a retail CMO have different pressures even when their persona rubric is the same. We're building industry rubrics — a third intelligence layer that will let the journey adapt not just to who you are, but where you work.
💬
Richer prompt intelligence
Right now the system reads your priorities but doesn't parse them for signals about your stack, your stage, or your competition. The next version will extract those signals — so mentioning Salesforce or a specific challenge shapes which content surfaces first.
TYPE html> Content Compass — The Intelligence Layer
I2A Intelligence Engine · Adobe Summit 2026
Content Compass

The intelligence
layer that makes
content personal.

1,589 pieces of content. 9 buyer personas. 25 product intelligence rubrics. 158 themes that connect what a buyer cares about to what they should read next.

1,589
Content records
9
Buyer personas
25
Product Intelligence Rubrics
Unique journeys
possible
Scroll to explore

Agents are smart. Content libraries aren't.

AI agents can reason, synthesise and act — but when they go looking for content to inform a research journey, they hit a wall. Thousands of assets with no structured signal about who they're for, what stage they serve, or which priorities they address. Content Compass is the bridge.

AGENT WITHOUT CONTENT INTELLIGENCE
AGENT QUERY "Find Adobe content relevant to a CMO evaluating journey orchestration"
Adobe Experience Cloud Overview
What is Journey Orchestration? — Blog
Adobe Summit 2024 Keynote Highlights
AJO Product Page
Marketo Engage Customer Stories

No persona signal. No stage awareness. No play alignment. Generic results for a specific need.

CONTENT COMPASS · WHAT YOU GET

Recommendations customised to your priorities — not a list of results, but a structured research journey built around three dimensions of what matters to you.

CX capabilities you care about
Content mapped to the specific Adobe capabilities most relevant to your priorities — journey orchestration, unified data, content at scale, AI — not the full catalogue.
Proof points that match your context
Customer success stories from companies like yours, third-party research and analyst reports — surfaced alongside product content so you're building credibility, not just awareness.
Paced to how you consume content
Quick reads under 5 minutes. Focused articles of 5–15 minutes. Deep research for 15+. Each week's plan fits the time you have — 30 minutes, a focused session, or a 2-hour block.
Validated by peer perspectives
Recommendations shaped by how similar buyers with similar priorities have navigated their research — persona-aligned signals that reflect what people like you actually find valuable.
Persona-matched  ·  Stage-aware  ·  Play-aligned  ·  Time-aware  ·  Peer-validated

From priorities to a personalised journey in under two seconds.

Themes are the secret sauce — the connective tissue between what a buyer says and what content will actually matter to them.

01 · Buyer input
What they tell us
"I need to prove marketing ROI to the board, scale content with AI, and build personalisation without adding headcount."
CMO B2B Tech Initial Research
02 · Theme matching
Scale content creation with Gen-AI
Maximize ROI of content
Use/implement AI solutions that are business-safe
Streamline content & creative workflows
Manage customer engagement across lifecycle
Drive revenue growth
03 · Product Intelligence Rubrics
GenStudio for Performance Marketing
Adobe Workfront
AEM Assets
Overall AI
Agentic AI
04 · Ranked content
Prudential transforms creative workflows with Workfront
How Adobe transformed its content supply chain
Scale content personalization for high-tech success
Adobe marketers use generative AI to create on-brand content

Themes are the connective tissue.

158 themes mapped across 9 personas and 25 product intelligence rubrics. Each theme is a bridge — connecting what a buyer cares about to the plays that address it and the content that proves it. The rubric isn't keywords. It's structured intelligence.

"Scale content creation with Gen-AI"
GenStudio Overall AI Brand Concierge
"Maximize ROI of content"
Workfront Content Analytics GenStudio
"Use/implement AI solutions that are business-safe & enterprise-ready"
Agentic AI Generative AI Overall AI
"Establish a unified view of customers"
RTCDP CJA
"Ensure brand consistency"
GenStudio AEM Assets Workfront
"Expand searchability in AI-native channels"
LLM Optimizer AEM Sites Optimizer
"Improve performance with real-time analytics"
CJA Mix Modeler RTCDP
"Automate manual tasks with AI"
Agentic AI Overall AI Workfront
"Prove marketing ROI to the board"
CJA Mix Modeler Content Analytics

Scoring logic that turns content into intelligence.

Every piece of content is evaluated against the rubrics and assigned scores for persona fit, journey stage, and product alignment. Those scores power every recommendation.

01
Content is evaluated
Every piece of content is assessed against the full theme library. Does it address the CX capabilities buyers care about? How deeply? How specifically?
02
~200 Intelligence Scores are created per item
Each item is scored across 9 personas, 25 product intelligence rubrics, and 3 journey stages — generating ~200 individual scores per piece of content, based on theme alignment and specificity.
03
Fit tiers are assigned
Content that scores in the top 10% for each Intelligence Score is classified as Best Fit — strongly aligned to this persona and product. The next 20% become Good Match — relevant and worth exploring.
04
Logic powers the journey
Best Fit and Good Match items are ranked, filtered by content tier (IR / GD / GS), and assembled into a progressive research journey — the right content at the right stage.
A design decision worth knowing
Longer content naturally scores higher — more words means more theme coverage. So Best Fit tiers are calculated within content groupings and length bands, ensuring short reads, articles, and deep-dives all have equal representation in recommendations. A 3-minute blog post competes fairly with a 20-minute report.

Research structured to support how buying decisions actually unfold.

Every journey follows a progressive arc — from awareness to proof to evaluation. The system detects where a buyer is and adjusts accordingly. While today we offer a focused set of journeys, customers and their agents can tailor the arc to their own priorities, timelines, and ways of working.

Today: curated journeys across 9 personas & 3 formats  ·  Tomorrow: fully agent-tailored journeys built around your specific needs
WEEK 01
Build your foundation
Reports, thought leadership, problem framing. Understand the landscape before evaluating solutions.
Initial Research
WEEK 02
Go deeper on what matters
Customer success stories, articles, proof points. See how others have solved the same problems.
Go Deeper
WEEK 03
Evaluate your options
Product-specific content, demos, solution briefs. Build the business case and compare approaches.
Get Specific
WEEK 04
Refine & share
Decision-stage content, ROI proof, executive summaries. Prepare to act and get stakeholders aligned.
Get Specific
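The four-week arc can be sketched as a simple assembly step. The stage codes follow the IR / GD / GS content tiers above; the field names and the no-repeat rule are assumptions for illustration, not the production logic:

```javascript
// Weeks 1-4 draw from the IR → GD → GS → GS stages in order, preferring
// Best Fit items over Good Match and never repeating a recommendation.
const WEEK_STAGES = ["IR", "GD", "GS", "GS"];

function buildJourney(items, itemsPerWeek = 3) {
  const rank = { "Best Fit": 0, "Good Match": 1 };
  const used = new Set();
  return WEEK_STAGES.map((stage, week) => {
    const picks = items
      .filter(i => i.stage === stage && i.tier in rank && !used.has(i))
      .sort((a, b) => rank[a.tier] - rank[b.tier] || b.score - a.score)
      .slice(0, itemsPerWeek);
    picks.forEach(i => used.add(i));
    return { week: week + 1, stage, titles: picks.map(i => i.title) };
  });
}
```

Note that a Best Fit item outranks a higher-scoring Good Match item within a week, and weeks 3 and 4 share the GS pool without duplicating picks.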

BUILT FOR 9 PERSONAS

Chief Marketing Officer
Chief Information Officer
Marketing Leader
IT Leader
MarTech Leader
Marketing Practitioner
IT Practitioner
Creative Leader
Creative Practitioner

9 months of R&D. 4 days to build the demo.

The Content Compass 1.0 solution is the product of nine months of research, experiments and pilots. What you're seeing today — this agentic experience — was built from scratch in four days with Claude. Two very different timelines. One intelligence layer.

1,589
Content records scored after quality-auditing 88 mismatched records
IR: 696 · GD: 437 · GS: 456 · source-validated BF/GM tiers
158
Unique buyer themes across persona and product intelligence rubrics
113 persona themes · 149 product-play themes · 158 unique
120+
JavaScript functions powering scoring, journeys, compare and theme logic
Single HTML file · no backend
20+
Versions built and iterated in real-time with Claude
~500 dev-hour equivalent
1,375
Content items enriched with theme detection data
Each read 10× by AI model · score stored per theme · threshold ≥ 5/10 reads
25
Product Intelligence Rubrics — scored and comparable side by side
GenStudio · RTCDP · Agentic AI · and more
9mo
R&D behind Content Compass 1.0 — research, experiments and pilots
GTM Strategy × Experience Intelligence × Data & Tech
4d
To build this agentic experience from scratch with Claude
Single HTML file · no backend · fully live

Built for humans and agents — from day one.

Adobe has thousands of GTM practitioners making content strategy decisions every day — and increasingly, so are their agents. GTM Strategy partnered with Experience Intelligence to build a universal intelligence layer for both: humans and agents making better, faster, more consistent decisions at scale.

Built by
GTM Strategy
Enterprise Marketing
×
In partnership with
Experience Intelligence
Adobe
&
In partnership with
Data & Tech
Enterprise Marketing
Built to serve
Marketers · BDRs · AEs · Web Team · Internal Agents
Content Compass is not an official Adobe product.
Crawl
Helping GTM practitioners make better content decisions
Walk
Helping GTM practitioners make better decisions with agents
Run
Helping agents make better decisions with human coaches & guides

We’re here
to learn from you.

We built this for internal agents. Now we’re asking: can it serve external agents too? We don’t have the answer — and we want to hear yours.

We are in learning mode. Your perspective shapes what we build next.
Open questions we’d love your take on
01
Can Content Compass rubrics and themes serve external agents as well as internal ones?
02
How should Adobe develop an approach where agents are an audience — not just a channel?
03
What Adobe intelligence do your agents wish they could access today?

Key choices behind how this works.

Intelligent recommendations are only partly about scoring. The rest is judgment — choices about what matters, what to fix, and what comes next. Here are the ones that shaped Content Compass.

Design Choices
Foundational decisions that define how the system thinks
🎯
Play specificity beats broad appeal
Content that scores well across many product plays isn't necessarily the best recommendation. A piece deeply relevant to one play is more valuable than one vaguely relevant to all of them. We reward specificity and penalise breadth — so Content Compare surfaces focused, differentiated content rather than the same universally high-scoring pieces every time.
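One way to express that preference (an illustrative formula, not the production scoring) is to scale a play score by how far it stands above the item's average across all plays:

```javascript
// A piece that spikes on one play outranks a piece that is uniformly
// mediocre across every play, even when its raw play score is lower.
function specificityAdjusted(playScores, play) {
  const values = Object.values(playScores);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return playScores[play] * (playScores[play] / mean);
}
```

Under this weighting a focused piece scoring 70 on one play and near zero elsewhere beats a broad piece scoring 75 on everything.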
📐
Best Fit is relative to your peer group
A play score of 72 among case studies can be Good Match — because that's a competitive group with hundreds of strong pieces. The same score among blog posts would be Best Fit. We calculate tiers within each content format and length band, so every piece of content competes fairly against its true peers, not the whole library.
⚖️
Content length is a fairness problem
Longer content naturally covers more themes and scores higher. Left unaddressed, every Best Fit recommendation would be a 15+ minute report. We calculate fit tiers within content length bands — so a 3-minute blog competes fairly with a deep-dive whitepaper. Both earn their place.
🧩
Theme detection is a confidence score, not a flag
Each piece of content was read ten times by an AI model. A theme detected in 10/10 reads is unmistakably present. Detected in 5/10 — it's there, but softer. We store the actual reliability score, not just a binary flag, so Content Compare shows not just which themes are covered but how confidently — darker checkmarks for stronger signals.
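A minimal sketch of that model, using the ≥ 5/10 threshold from the stats above (the function shape and field names are assumptions):

```javascript
// Ten independent AI reads per item; the detection rate is stored as a
// score rather than collapsed to a boolean, so the UI can shade by strength.
function themeConfidence(reads, theme) {
  const hits = reads.filter(themes => themes.includes(theme)).length;
  return {
    score: hits / reads.length,          // e.g. 0.7 for 7/10 reads
    present: hits >= reads.length * 0.5, // threshold: ≥ 5 of 10 reads
  };
}
```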
Improvements Adopted
Issues we found through testing and chose to fix
🗂️
Data quality is non-negotiable
We identified 88 records where title and URL didn't match — a site migration artefact. And we found a 16.4% misclassification rate where the prototype's scoring formula diverged from the source methodology. Both fixed. Serving a buyer a mislabelled piece, or calling something Best Fit when it isn't, breaks trust faster than any gap in coverage.
🔍
Get Specific needs a different lens
Our audit found the Get Specific tier defaulting to the same broad product overview pages for every persona. We fixed this by requiring product pages at GS tier to match the active plays — so a Creative Director sees Firefly and GenStudio pages, not an AJO overview. Late-stage content should narrow, not repeat.
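Sketched as an eligibility gate (hypothetical field names; the real filter sits inside the journey logic):

```javascript
// Only GS-tier product pages are gated: they must match at least one of the
// journey's active plays. All other content passes through unchanged.
function gsEligible(item, activePlays) {
  if (item.tier !== "GS" || item.type !== "product-page") return true;
  return item.plays.some(play => activePlays.includes(play));
}
```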
🤝
Human judgment can't be automated away
We audited the pre-built journeys with a synthetic expert reviewer. The scoring model surfaces the right content. What it can't do yet is understand that a B2B SaaS buyer doesn't relate to a B2C consumer brand story, even if the themes overlap. Industry rubrics, buyer context signals, and human curation are all still part of the roadmap.
Roadmap
What comes next — the intelligence still being built
🏭
What's still ahead: industry context
A financial services CIO and a retail CMO have different pressures even when their persona rubric is the same. We're building industry rubrics — a third intelligence layer that will let the journey adapt not just to who you are, but where you work.

💬
Richer prompt intelligence
Right now the system reads your priorities but doesn't parse them for signals about your stack, your stage, or your competition. The next version will extract those signals — so mentioning Salesforce or a specific challenge shapes which content surfaces first.
🔗
From prototype to platform
Content Compass today is a single HTML file. The roadmap leads to a hosted intelligence API — exposing scores, themes, and journey logic to external agents, BDR tools, and the BACOM web experience. The intelligence layer becomes infrastructure.

Intelligence for agents and humans
to personalise your research journey.

Content Compass is a proof of capability — a demonstration of the intelligence layer that could power any AI agent researching on behalf of a buyer. The rubric is the product.

⇄ Compare Journeys 9 Personas 25 Product Intelligence Rubrics API-ready · Post Summit