Checklist: What to Ask When Testing a New CRM for Your Marketing Stack
Prioritized vendor evaluation checklist for CRMs in 2026—focus on ad integrations, data portability, AI governance, and cost‑per‑contact.
Stop Guessing — Test CRMs Like a Growth Team
Marketing teams in 2026 face the same three bottlenecks: fragmented ad platforms, fragile data portability, and an explosion of AI features that are impressive on demos but risky in production. If you can’t quickly prove ROI, automate reliably, and keep control of customer data, the CRM becomes another costly silo. This checklist prioritizes the vendor questions and practical tests you must run when evaluating a CRM for your marketing stack — with an emphasis on ad platform integration, data portability, AI capabilities, and true cost‑per‑contact economics.
Executive summary (most important first)
Before deep dives, get answers to these four gating items from every vendor. If a vendor fails any one of these, deprioritize them until the vendor resolves it.
- Native ad platform integrations: Direct, server‑side integrations with Google Ads, Meta Conversions API, TikTok, LinkedIn, and DSPs — not just Zapier connectors.
- Data ownership & portability: Exportable, well‑documented customer data in open formats; support for marketing clean rooms and reverse ETL.
- AI governance: Transparent model provenance, retrain policies, and explainable lead scores aligned with your privacy rules.
- Clear cost‑per‑contact math: A repeatable way to compute acquisition and ongoing cost per opted‑in contact that maps to vendor pricing tiers.
Why these priorities in 2026?
2024–2025 accelerated three irreversible shifts: third‑party cookie deprecation forced marketers to invest in first‑party systems; vendors standardized server‑side ad integrations and clean‑room access; and CRM vendors shipped generative AI features that require governance. In late 2025 many marketing teams moved to server‑side conversion APIs and privacy‑first identity graphs. That makes ad integration, data portability, and AI controls the most immediate risk‑and‑reward factors when selecting a CRM in 2026.
How to use this checklist
Run the checklist in three phases: Discovery (vendor questionnaires and demos), Proof‑of‑Concept (PoC) (2–6 week hands‑on test), and Decision (scoring and procurement). For each item we include: what to test, success criteria, sample vendor questions, and red flags.
Phase 1 — Discovery: Must‑ask vendor questions
1. Ad platform integration depth
- What to test: Ask for a demo of your specific ad accounts (Google, Meta, TikTok, LinkedIn, Amazon DSP). Request logs that show server‑side event delivery, deduplication, and match rates.
- Success criteria: Native server‑to‑server integration with event deduplication, 70%+ match rate to primary ad platforms on your test list, and a documented failure/retry mechanism.
- Sample question: “Can you show live logs for events sent from our CRM to Meta Conversions API and Google Ads, including match rates and time to delivery?”
- Red flags: Only client‑side tags or reliance on Zapier/Make for ad platform writes.
2. Identity resolution & privacy
- What to test: Review how the CRM ingests emails, hashed identifiers, phone numbers, and server‑side conversions. Ask about hashed matching, deterministic vs probabilistic approaches, and PII handling.
- Success criteria: Configurable identity resolution rules, native hashing and PII tokenization, and compatibility with your consent management platform (CMP).
- Sample question: “How does your identity graph respect consent, and can we scope or exclude IDs for regional compliance (GDPR/CCPA/US state laws)?”
- Red flags: Vendor cannot show how consent flags are enforced across outbound ad writes.
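To make the hashed-matching test concrete during a demo, you can verify that the vendor normalizes identifiers before hashing. The sketch below shows the common deterministic pattern (trim, lowercase, SHA‑256) that major ad platforms document for match keys; exact normalization rules vary by platform and field, so confirm against each platform's own documentation.

```python
import hashlib


def normalize_and_hash(value: str) -> str:
    """Lowercase, trim, and SHA-256 hash an identifier before it is sent
    to an ad platform's server-side API. The raw value never leaves your
    system in clear text; only the 64-character hex digest does."""
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


# Messy CRM input and the clean canonical form should hash identically.
assert normalize_and_hash("  Jane.Doe@Example.com ") == normalize_and_hash("jane.doe@example.com")
```

If a vendor's outbound logs show hashes that differ for trivially different spellings of the same email, their normalization step is broken and match rates will suffer.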
3. Data portability & exports
- What to test: Ask for the export formats (CSV, Parquet, JSON), available schemas, and APIs for bulk export as well as change data capture (CDC).
- Success criteria: Complete exports of contacts, activity streams, and custom fields via API and storage connectors (S3, BigQuery, Snowflake) within SLA.
- Sample question: “Provide an example API call that returns all contact attributes (including custom fields) and a full activity history for a customer.”
- Red flags: Export limited to UI downloads or vendor‑locked formats; no CDC support.
4. AI features & governance
- What to test: Request details on the AI stack (in‑house models vs third‑party LLMs), training data sources, and options to disable or sandbox generative features.
- Success criteria: Ability to log model inputs/outputs, set boundaries on PII usage, and export model decisions (explainability) for audits.
- Sample question: “Which models power lead scoring and content generation? Can we review logs and disable the feature for specific cohorts?”
- Red flags: Vague answers about model provenance or no option to opt out of model training with customer data.
5. Pricing clarity and cost‑per‑contact
- What to test: Ask for a full TCO example: licensing, API call charges, ad integration fees, data storage, and professional services. Request sample invoices for a company of your size.
- Success criteria: Vendor provides a clear cost model you can map to a projected cost‑per‑contact, broken down by acquisition vs. retention contacts.
- Sample question: “Show the all‑in monthly cost for 50K contacts with 1M monthly events and 5 ad platform integrations.”
- Red flags: Hidden fees for API usage, exports, or integrations that will inflate cost‑per‑contact later.
Phase 2 — PoC: A four‑week practical test plan
Run an explicit PoC with a fixed success metric; the four‑week plan below (a setup week plus three test weeks) fits the 2–6 week window recommended above. Example PoC for marketing teams:
Week 0 — Setup & baseline
- Import a 5–10K contact sample with 3 months of activity.
- Baseline: record current ad match rates, audience sync lag, MQL rate, and cost‑per‑contact (see formulas below).
Week 1 — Ad integration test
- Enable server‑side conversion sends to at least two ad platforms. Track match rate, deduplication events, and conversion attribution.
- Success criteria: Reach or exceed baseline match rates and achieve sub‑5 second median delivery time to ad platforms.
Week 2 — AI & automation test
- Run automated lead scoring and a content generation workflow (email or ad creative) with governance enabled. Review model logs and explainability outputs.
- Success criteria: Automated lead scoring lifts MQL conversion by an agreed % (e.g., +10%) without introducing compliance issues.
Week 3 — Export & portability test
- Execute full exports (contacts, activities, and segments) to S3 or your data warehouse. Validate schema and run a rehydrate import into a staging system.
- Success criteria: Exports complete within SLA and rehydration preserves identity keys and timestamps with zero field loss for critical marketing fields.
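The rehydration check in Week 3 can be automated. This is a minimal sketch, assuming records keyed on a `contact_id` field; the field names are illustrative, so substitute your own identity keys and critical marketing fields.

```python
def validate_rehydration(source_rows, restored_rows, critical_fields):
    """Compare exported rows against rows re-imported into staging,
    keyed on contact_id. Returns a list of (contact_id, problem) pairs:
    'missing' for contacts lost in transit, or the field name for any
    critical field whose value or timestamp changed."""
    restored = {r["contact_id"]: r for r in restored_rows}
    problems = []
    for row in source_rows:
        match = restored.get(row["contact_id"])
        if match is None:
            problems.append((row["contact_id"], "missing"))
            continue
        for field in critical_fields:
            if row.get(field) != match.get(field):
                problems.append((row["contact_id"], field))
    return problems


source = [{"contact_id": "c1", "email": "a@x.com", "created_at": "2025-01-01T00:00:00Z"}]
restored = [{"contact_id": "c1", "email": "a@x.com", "created_at": "2025-01-01T00:00:00Z"}]
# An empty problems list means zero field loss for the critical fields.
issues = validate_rehydration(source, restored, ["email", "created_at"])
```

A PoC pass is an empty problems list across the full sample, not a spot check of a few contacts in the UI.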
Checklist: Questions, tests and success criteria (Prioritized)
1. Ad platform integration (Priority: High)
- Does the CRM support server‑side event APIs for major ad platforms? (Test: live event logs)
- Can it forward first‑party hashed IDs (email/phone) without exposing PII? (Test: verify hashing/tokenization)
- Is there an audience sync API and how quickly do segments appear in the ad platform? (Success: <24 hours for standard segments)
- Does the CRM support offline conversion uploads and dedup logic? (Test: simulate offline sale flows)
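When simulating the offline sale flow above, you can check the vendor's dedup logic against a simple reference: server-side APIs such as Meta's Conversions API deduplicate on a shared event ID, so a browser-sent and a server-sent copy of the same conversion should collapse to one event. A minimal sketch of that expectation:

```python
def dedupe_events(events):
    """Keep the first occurrence of each (event_name, event_id) pair,
    mirroring how a shared event_id lets browser- and server-sent
    copies of the same conversion be counted once."""
    seen = set()
    unique = []
    for ev in events:
        key = (ev["event_name"], ev["event_id"])
        if key not in seen:
            seen.add(key)
            unique.append(ev)
    return unique


events = [
    {"event_name": "Purchase", "event_id": "ord-1001", "source": "browser"},
    {"event_name": "Purchase", "event_id": "ord-1001", "source": "server"},
    {"event_name": "Purchase", "event_id": "ord-1002", "source": "server"},
]
# Two unique conversions should survive: ord-1001 (once) and ord-1002.
unique = dedupe_events(events)
```

If the CRM cannot show you which duplicate was dropped and why, treat its dedup claims as unverified.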
2. Data portability & integration (Priority: High)
- Are exports available via API, and do they include CDC? (Success: hourly CDC or faster)
- Are schemas documented and stable? (Test: sample Parquet/JSON schema)
- Is reverse ETL supported to push data back to warehouse or BI? (Test: push segment to BigQuery/Tableau)
3. AI features & governance (Priority: High)
- Is there transparent model provenance (which model, training data, update cadence)?
- Can you audit and export model decisions used in lead scoring and content personalization?
- Is there an option to disable or sandbox generative outputs for regulated campaigns?
4. Cost‑per‑contact (Priority: High)
Ask for an example TCO and calculate these metrics during the PoC:
- Cost per acquired contact = (Ad spend for period + list purchase cost + onboarding fees) / number of contacts acquired.
- Effective monthly cost per contact = (Monthly subscription + data storage + API fees + average ad sync cost) / active contacts.
- Cost per MQL = (total monthly costs allocated to acquisition and nurturing) / MQLs.
Sample calculation (blended cost per new opted‑in contact, combining platform costs with acquisition ad spend):
- Monthly subscription: $2,500
- Data storage & API fees: $800
- Ad sync & integrations: $200
- Monthly ad spend (acquisition portion): $10,000
- New opted‑in contacts: 2,000
- Blended cost per new contact = ($2,500 + $800 + $200 + $10,000) / 2,000 = $6.75
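During the PoC it helps to recompute this figure as vendor pricing assumptions change. A small helper, using the sample numbers above (the function name and argument names are our own):

```python
def blended_cost_per_contact(subscription, storage_and_api, ad_sync, ad_spend, new_contacts):
    """Blended monthly cost per new opted-in contact: fixed platform
    costs plus the acquisition portion of ad spend, divided by the
    number of new contacts acquired in the period."""
    if new_contacts <= 0:
        raise ValueError("new_contacts must be positive")
    return (subscription + storage_and_api + ad_sync + ad_spend) / new_contacts


# Sample figures from the worked example above.
cost = blended_cost_per_contact(2_500, 800, 200, 10_000, 2_000)
# cost == 6.75 dollars per new opted-in contact
```

Rerun it with each vendor's line-item quote; hidden per-API-call or export fees show up quickly as a rising cost per contact.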
5. Reporting, attribution & analytics (Priority: Medium)
- Does the CRM produce multi‑touch attribution that matches your data warehouse reports? (Test: reconcile CRM attribution with warehouse within margin)
- Can the CRM export raw event streams to your analytics stack? (Success: raw events in BigQuery within the PoC window)
6. Security, compliance & vendor viability (Priority: Medium)
- Does the vendor provide SOC2/ISO27001 certificates and GDPR DPA? (Request documentation)
- Data residency options — can customer data be stored in your preferred region? (Test: regional export & residency controls)
- Check financial viability and roadmap — vendor should publicly share roadmap tied to ad platform standards (e.g., CAPI, SKAdNetwork variants).
7. Implementation & support (Priority: Medium)
- Does vendor include migration services and runbooks? (Test: request an onboarding checklist tailored to your stack)
- Service level agreements for API availability and export completion times?
Scoring framework (simple & actionable)
Use a weighted scoring model. Example weights (total = 100):
- Ad integration: 25
- Data portability: 20
- AI features & governance: 20
- Cost‑per‑contact transparency: 15
- Reporting & attribution: 10
- Security & support: 10
Score vendors 0–5 on each axis. Multiply by weight. Compare totals and normalize to a 0–100 scale. Prioritize vendors scoring >75 for a pilot.
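The scoring arithmetic above can be sketched directly: with weights summing to 100 and ratings of 0–5, the raw weighted total is out of 500, so dividing by 5 normalizes to the 0–100 scale. The axis keys below are our own shorthand for the six weights listed.

```python
WEIGHTS = {
    "ad_integration": 25,
    "data_portability": 20,
    "ai_governance": 20,
    "cost_transparency": 15,
    "reporting_attribution": 10,
    "security_support": 10,
}


def vendor_score(ratings):
    """ratings maps each axis to a 0-5 score. The weighted sum is out
    of 500 (weights total 100, max rating 5), so divide by 5 to get a
    normalized 0-100 score."""
    raw = sum(WEIGHTS[axis] * ratings[axis] for axis in WEIGHTS)
    return raw / 5


# A vendor rated 4 on every axis normalizes to 80, clearing the 75 bar.
score = vendor_score({axis: 4 for axis in WEIGHTS})
```

Keep the weights in one place so procurement and marketing score against the same model.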
Real‑world example: Mid‑market SaaS marketer PoC (Campaigner.biz test)
We ran a 4‑week PoC on behalf of a mid‑market SaaS client in late 2025. The client needed improved ad match rates and governance for AI lead scoring. Outcome highlights:
- Vendor A (enterprise CRM): Excellent server‑side ad integration but opaque AI training — score 78. Required contract language to prevent model training on customer PII.
- Vendor B (mid‑market CRM with AI): Great explainability and low TCO but required additional engineering for clean room access — score 72.
- Vendor C (specialized martech stack): Best export and CDC support, but ad integrations were delayed — score 69.
Decision: Client chose Vendor A after negotiating an AI data usage addendum and a reduced initial integration fee. The PoC showed a 12% lift in MQLs from AI scoring and improved Google Ads match rate from 48% to 71% using server‑side conversions.
Red flags that should stop the process
- Vendor refuses to provide export APIs or sample data schemas.
- Vendor cannot demonstrate server‑side ad writes with your ad accounts.
- No option to disable AI training on your data, or inability to export model decisions.
- Hidden fees that make cost‑per‑contact unpredictable (e.g., per‑API call charges that can balloon).
2026 Trends to factor into your decision
- Privacy‑first advertising: Widespread adoption of clean rooms and cohort-based targeting makes server‑side integrations and identity graphs essential.
- AI regulation & transparency: Post‑EU AI Act rollout (enforcement increasing in 2025–26) means explainability and model audit trails are becoming contractually required for many enterprise deals.
- Composability: Best-in-class stacks are increasingly modular — expect to deploy CRMs as part of a platform mix rather than monolithically.
- Data gravity: Vendors that provide reliable exports and reverse ETL will win; lock‑in is now a corporate risk metric.
Actionable takeaways (what to run this week)
- Identify 3 vendors and ask for live access to your ad accounts, sample export schema, and an AI governance statement.
- Plan a 3–4 week PoC around the tests above and lock success metrics in writing.
- Compute projected cost‑per‑contact using vendor TCO and your acquisition forecasts; demand line‑item pricing.
- Negotiate contractual terms: data export SLA, AI data usage, and audit rights.
Closing: A final word on risk & reward
Choosing a CRM in 2026 is no longer just about features. It’s about orchestration: how well the CRM integrates with ad platforms, how easily your data leaves the vendor, and whether the AI features can be trusted in production. Use this prioritized checklist to structure a pragmatic PoC — and demand the transparency and contract terms that protect your data and ROI.
Ready to run a PoC with confidence? Download our one‑page PoC runbook and vendor questionnaire (designed for 2026 ad and AI realities), or book a 30‑minute vendor evaluation session with our team to apply the checklist to your stack.