Mishaal Murawala

The 90-Day GTM Diagnostic: What Actually Matters in Week One

What an operator-led GTM diagnostic looks like in week one — the questions that surface the truth, the data to pull first, and the signals that tell you what's actually broken.

Most GTM diagnostics are theater. A consultant shows up on Monday, asks for the org chart, requests a pipeline export, sits through four introductory calls, and produces a 30-page deck at the end of week three with headings like "Current State" and "Key Findings." The CEO nods. The deck gets filed. Nothing changes.

The problem is not the quality of the consultant. The problem is that week one is being used to produce an artifact instead of to build a diagnosis. When you only have three months to install discipline inside a PE-backed portfolio company, week one is the most expensive week of the engagement. You spend it the way a cardiologist spends the first ten minutes of an emergency — triaging, not documenting.

This post describes what that week looks like when it's done right. The questions asked, the data pulled, the people met with, and the signals that matter. If you are a PE operating partner evaluating a GTM engagement, this is the standard. If you are a portfolio CEO considering one, this is what to expect.

The Decks Lie. The Calendar Doesn't.

The first trap in any GTM diagnostic is the strategy deck. Every portfolio company has one. It will tell you that the ICP is "mid-market B2B SaaS decision-makers in verticals X, Y, and Z." It will tell you that the GTM motion is "inbound-led with strategic outbound to enterprise accounts." It will tell you there are five strategic priorities this year.

None of this is true in the way the deck claims. The deck describes the aspiration. The calendar describes reality.

On day one, I ask for three calendars: the CEO's, the CRO's, and the head of marketing's. Not to pry, but to read. Where is the time actually going. How many hours last week were spent on the priorities that the deck says matter versus the priorities that are actually consuming leadership. If the deck says "enterprise expansion" is priority one and the CRO has spent 38 minutes on enterprise accounts in the last two weeks, the deck is fiction. That gap is the first piece of the diagnosis.
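The calendar read reduces to a rough tally: export the last two weeks of events, tag each one against the deck's stated priorities, and sum the minutes per category. A minimal sketch in plain Python — the event categories and durations below are invented for illustration, and a real export (Google Calendar, Outlook) would need manual tagging:

```python
from collections import defaultdict

# Hypothetical calendar export: (priority tag, minutes). Illustrative only.
events = [
    ("enterprise expansion", 38),
    ("pipeline review", 240),
    ("hiring", 180),
    ("pipeline review", 120),
    ("board prep", 300),
]

minutes_by_priority = defaultdict(int)
for category, minutes in events:
    minutes_by_priority[category] += minutes

# Rank by time actually spent, then compare against the deck's stated ranking.
ranked = sorted(minutes_by_priority.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
# If "enterprise expansion" is priority one on the deck but lands at the
# bottom of this list, the deck is fiction.
```

The output, not the deck, is the first diagnostic artifact.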

The Data Pull That Actually Matters

Every GTM consultant will ask for a pipeline export. Most of them will then spend three days slicing it into reports that tell the CEO what they already know. That is wasted time.

In week one, there are exactly four data pulls that matter. Anything beyond these is context you can collect in parallel without blocking the diagnosis.

First, the win-rate-by-source cohort for the last four quarters. Not aggregate win rates — cohort. If inbound closes at 28% and outbound closes at 6%, and the company is investing evenly in both, that is a diagnosis by itself. It also tells you where the GTM leverage actually is versus where the team thinks it is.

Second, the deals that slipped or lost in the last two quarters, with the stated reason. Read the reasons. The pattern is almost always the same across three to five categories — pricing, competition, champion turnover, timing — and the distribution tells you which loop is broken. A team losing 40% of deals to "no decision" is a team with a qualification problem. A team losing 40% to a single competitor is a team with a positioning problem. They need different fixes.

Third, the time from lead creation to first meaningful activity, by channel. Lead response time is the most consistently ignored leading indicator in B2B GTM. If inbound leads are sitting for 48 hours before a rep touches them, the inbound investment is partially wasted before any conversation happens. The number is trivial to measure and telling when you do.

Fourth, the pipeline coverage ratio by segment. A CRO who can recite the top-line coverage number but cannot break it down by segment is a CRO operating on aggregate. Aggregate coverage hides segment-level rot until it's too late to fix it in quarter.
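All four pulls are simple group-by computations once the export is in hand. A minimal sketch in plain Python, using invented deal records — the field names (`source`, `stage`, `loss_reason`) are illustrative and do not correspond to any real CRM schema:

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical closed-deal export. Values are invented for illustration.
deals = [
    {"source": "inbound",  "stage": "won",  "loss_reason": None},
    {"source": "inbound",  "stage": "won",  "loss_reason": None},
    {"source": "inbound",  "stage": "lost", "loss_reason": "pricing"},
    {"source": "outbound", "stage": "lost", "loss_reason": "no decision"},
    {"source": "outbound", "stage": "lost", "loss_reason": "no decision"},
    {"source": "outbound", "stage": "won",  "loss_reason": None},
]

def win_rate_by_source(deals):
    """Pull 1: win rate per source cohort, not the blended aggregate."""
    closed = defaultdict(lambda: [0, 0])   # source -> [wins, closed deals]
    for d in deals:
        if d["stage"] in ("won", "lost"):
            closed[d["source"]][1] += 1
            closed[d["source"]][0] += d["stage"] == "won"
    return {src: wins / total for src, (wins, total) in closed.items()}

def loss_reason_distribution(deals):
    """Pull 2: the dominant loss category names the broken loop."""
    return Counter(d["loss_reason"] for d in deals if d["stage"] == "lost")

def hours_to_first_touch(created, first_touch):
    """Pull 3: lead response time in hours, per lead."""
    return (first_touch - created).total_seconds() / 3600

def coverage_by_segment(pipeline, quota):
    """Pull 4: coverage ratio per segment; the aggregate hides segment rot."""
    return {seg: pipeline.get(seg, 0) / quota[seg] for seg in quota}

print(win_rate_by_source(deals))                       # inbound 2/3, outbound 1/3
print(loss_reason_distribution(deals).most_common(1))  # [('no decision', 2)]
print(hours_to_first_touch(datetime(2024, 1, 1, 9, 0),
                           datetime(2024, 1, 3, 9, 0)))  # 48.0
print(coverage_by_segment({"mid-market": 1.2e6, "enterprise": 3.0e6},
                          {"mid-market": 0.8e6, "enterprise": 0.6e6}))
```

None of this requires a BI tool; the point of week one is that each pull fits in a few lines and answers a specific question, not that it populates a dashboard.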

Who You Talk To in Days 1 Through 7

The wrong people to talk to first are the executives. Executives describe the system they think they're running. Front-line operators describe the system that is actually running. The information gap between those two is the diagnosis.

In the first seven days, the sequence I run is:

Day 1-2: one-on-ones with the CEO, CRO, and head of marketing, 45 minutes each. Purpose: understand the stated priorities, the stated problems, and the stated plan. I am listening for contradictions across the three conversations. If the CEO says the problem is "outbound is not producing," the CRO says the problem is "we do not have enough SDRs," and the head of marketing says the problem is "the messaging needs a refresh," there are three different diagnoses of the same situation. That gap is diagnostic.

Day 3-4: three to four front-line conversations. Two AEs (one hitting quota, one missing). One SDR. One customer success lead. These are not sit-down interviews like the executive conversations. They are 30-minute ride-alongs where the operator walks me through their actual week — what they do, what slows them down, what the manager asks them to report, and what they actually report. The gap between what the manager asks for and what gets reported is almost always where the broken loop lives.

Day 5: one customer conversation. Current customer, not prospect. 30 minutes. Purpose: hear how the product is actually used and what the buying process actually looked like. Most portfolio companies have a mental model of their buyer that was correct three years ago and has not been updated. The customer conversation tests the current model.

Day 6-7: synthesis. Not a deck. A one-page document that names three things: the actual GTM motion (not the stated one), the one broken loop that matters most, and the installation recommendation.

The Operator Signals

After doing this enough times, certain signals are reliable. I trust them more than I trust the narrative the team offers, because they are structural rather than verbal. Four patterns in particular:

Seven open initiatives, three clear owners. If the leadership team can rattle off seven initiatives but only three of them have a named owner with decision rights, you have a discipline problem, not a strategy problem. The strategy is probably fine. The installation is missing. Adding a new initiative is the worst possible intervention. Closing two is the right one.

The CRO cannot name the top three deals expected to close this month. This is specific and telling. A CRO who knows the pipeline from the inside can name the top three by prospect name, champion, dollar amount, and current blocker, without notes. A CRO who has to pull up Salesforce to answer is operating at a distance from the pipeline. That distance is usually the real problem.

Marketing and sales have different definitions of "qualified." Ask marketing what an MQL is. Ask sales what an SQL is. If the handoff definition is not identical and verbally rehearsed in under ten seconds, the handoff is broken. This is the single most common failure mode in portfolio B2B GTM, and it is almost always downstream of no one owning the definition.

No one can name what got killed last quarter. A healthy GTM organization kills one or two initiatives per quarter deliberately. A GTM organization that cannot name anything that got killed is running with entropy as its prioritization mechanism — initiatives fade, nothing gets closed, the portfolio gets heavier over time. Ask the question. Watch the room.

Where Discipline Already Exists

Not every portfolio company is broken. About one in four I walk into already has some form of initiative discipline installed — weekly rhythm, clear owners, honest measurement. When that's the case, the diagnostic pivots. The question is no longer "install discipline" but "which initiatives are worth the existing discipline."

The tell for existing discipline is subtle but unmistakable. The weekly GTM leadership meeting has a fixed agenda and finishes on time. Initiative owners can name their leading indicator without looking. The CRO references the kill criteria for their current initiatives casually. Board updates are short because the operating rhythm underneath them is tight.

When discipline exists, the engagement shape is different — we work on the specific strategic question the company is facing rather than the operating system. That is rarer and faster and usually the higher-leverage engagement.

What Comes Out of Week One

At the end of week one, the deliverable is not a deck. It is a decision. One page. Three sections.

Section one: what's actually broken. One sentence, specific, backed by the data pulled and the conversations held. Not "GTM needs work." Something like: "The mid-market motion is broken because inbound leads sit for 36 hours before first touch, and the AE pod owning that segment is carrying 45 accounts per rep, which is the root cause of the coverage collapse."

Section two: what we're installing. One or two interventions. Named. Owner assigned. Window defined. Kill criterion stated.

Section three: what we're deliberately not doing. This section is harder to write than section two. It names the three or four other things that look broken but are not the priority for the next 90 days. This is the section that earns trust with operators, because it shows that the diagnosis did not default to "fix everything."

If you end week one with that one page, and the CEO signs off on it, you have something worth running. If week one produced a deck full of observations and no decision, week two is already compromised.

Ready to Run a Real Diagnostic?

If you are operating inside a PE-backed portfolio company and the quarterly reviews keep producing the same list of unresolved issues, the problem is probably not strategy. It is that no one has run a diagnostic that ends with a decision. Our engagements are three months minimum because installation takes that long. Week one is where the work actually starts.