I've been the person building KPI frameworks from scratch at three different startups. Every time it's the same thing: you set up tracking, write SQL queries, build dashboards, and then spend hours each week pulling numbers that should just be there. PMs at early-stage companies don't have a data team. They're doing this themselves, and most of the time the insights come too late to act on.
Pulse
An AI-powered analytics copilot that helps product teams at early-stage startups stop drowning in dashboards and start finding the signals that actually matter.
I interviewed 12 PMs and analysts at Series A through C startups. The pattern was consistent: they all had dashboards, but nobody trusted them to surface what mattered. 9 out of 12 said they still write ad-hoc SQL queries weekly because their dashboards don't answer the questions they actually have. The gap isn't data access. It's that the data doesn't talk back.
Tools like Amplitude and Mixpanel are powerful but designed for companies with dedicated analytics teams. Metabase and Mode are flexible but still require SQL fluency. The newer AI analytics tools (like Narrator, Zing) focus on visualization, not on the interpretive layer. None of them answer the question: "what changed this week, and should I care?"
I scoped the MVP around three high-confidence features using RICE: natural language querying (ask your data a question, get an answer without SQL), anomaly detection (flag when a metric moves outside its normal range), and weekly digest (an AI-generated summary of what changed and why it might matter). Pushed dashboards and custom alerts to v2.
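For readers unfamiliar with RICE, the scoring mechanics are simple: Reach times Impact times Confidence, divided by Effort. A minimal sketch below, with hypothetical scores for illustration only (the real inputs came from the interviews above):

```python
def rice(reach, impact, confidence, effort):
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical inputs, for illustration only.
candidates = {
    "natural language querying": rice(reach=100, impact=3, confidence=0.8, effort=5),
    "anomaly detection":         rice(reach=100, impact=2, confidence=0.8, effort=3),
    "weekly digest":             rice(reach=100, impact=2, confidence=0.7, effort=2),
    "custom dashboards":         rice(reach=60,  impact=1, confidence=0.5, effort=8),
}

for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Even with rough inputs, the shape of the result is what matters: the three MVP features cluster well above the dashboard work, which is why dashboards moved to v2.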
Pulse connects to a startup's existing data warehouse (Postgres, BigQuery, or Snowflake) and uses an LLM layer to translate natural language questions into SQL, run the queries, and return plain-English answers with supporting charts. Anomaly detection runs nightly, comparing each tracked metric against its trailing 30-day distribution and flagging anything beyond 2 standard deviations. The weekly digest combines the anomaly output, week-over-week deltas, and a summarization model prompted to write the way a PM would write standup notes. All queries are read-only, and schema mapping happens during onboarding, so users don't need to configure anything after setup.
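The nightly anomaly check is essentially a z-score test against the trailing window. A minimal sketch, assuming roughly normal daily metric values (function and data names are illustrative, not Pulse's actual code):

```python
from statistics import mean, stdev

def flag_anomaly(history, today, threshold=2.0):
    """Flag `today` if it falls more than `threshold` standard deviations
    from the mean of `history` (the trailing 30 days of daily values)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # flat metric: no variation, nothing to flag
    z = (today - mu) / sigma
    return abs(z) > threshold

# A metric hovering around 100 for 30 days, then a sudden move:
baseline = [100, 98, 103, 101, 97, 102, 99, 100, 104, 96] * 3
flag_anomaly(baseline, 160)  # -> True (well beyond 2 sigma)
flag_anomaly(baseline, 101)  # -> False (within normal range)
```

A fixed 2-sigma cutoff is a deliberate simplification; metrics with strong weekly seasonality would need a seasonal baseline, which is one reason custom alerting was pushed to v2.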
- Activation rate (target: 60% in 24h): % of signups who connect a data source and run their first natural language query within 24 hours. The onboarding flow is designed specifically around this: schema mapping, starter question suggestions, and a single-click first query lower the activation barrier as much as possible.
- Weekly engagement (target: 45%): % of active users who open the weekly digest and take at least one follow-up action in the same session. "Open only" doesn't count. The digest needs to be useful enough to pull people back into the product, not just readable.
- Query trust (target: 70%): % of natural language queries where the user accepts the AI-generated answer without immediately re-running it or switching to raw SQL. This is the core signal for whether the LLM layer is actually working. Tracked per query and trended over time as the model improves.
- Time to insight (target: under 30 seconds): Median time from question submitted to answer viewed. Baseline from research is ~25 minutes doing it manually. At launch I'm targeting sub-30 seconds end to end. This metric lives on the engineering side as much as the product side.
- 4-week retention (target: 50%): % of activated users still running at least one query per week at the 4-week mark. Early-stage analytics tools often see a "honeymoon drop" after week 1 when novelty fades. Retention at 4 weeks is the first real signal of habit formation.
- NPS (tracked at week 2 and week 8): Simple 0-10 survey with one follow-up question: "What would you miss most if Pulse disappeared?" The qualitative answers here matter more than the score. Week 2 captures first impressions; week 8 captures whether the product has built a real workflow dependency.
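To make the activation definition concrete, here is a minimal sketch of computing it from an event log. The event names (`signup`, `source_connected`, `nl_query`) and the tuple shape are hypothetical, not Pulse's actual schema:

```python
from datetime import datetime, timedelta

def activation_rate(events, window=timedelta(hours=24)):
    """% of signups who connect a data source AND run a first natural
    language query within `window` of signing up. `events` is a list of
    (user_id, event_name, timestamp) tuples."""
    signup_at, connected, queried = {}, set(), set()
    for user, name, ts in events:
        if name == "signup":
            signup_at[user] = ts
    for user, name, ts in events:
        t0 = signup_at.get(user)
        if t0 is None or not (t0 <= ts <= t0 + window):
            continue  # no signup on record, or outside the 24h window
        if name == "source_connected":
            connected.add(user)
        elif name == "nl_query":
            queried.add(user)
    activated = connected & queried  # must do both within the window
    return len(activated) / len(signup_at) if signup_at else 0.0

t = datetime(2025, 1, 1)
events = [
    ("a", "signup", t),
    ("a", "source_connected", t + timedelta(hours=1)),
    ("a", "nl_query", t + timedelta(hours=2)),
    ("b", "signup", t),
    ("b", "source_connected", t + timedelta(hours=30)),  # outside 24h
]
activation_rate(events)  # -> 0.5
```

The other metrics follow the same pattern (cohort in the denominator, qualifying behavior in the numerator), which keeps every number auditable from raw events rather than dashboard-derived.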
Why I'm building this
This isn't a hypothetical. I've set up KPI tracking at Digo, built analytics frameworks at Giri, and automated data pipelines at UC Davis for over two years. Every time, I wished something like Pulse existed so I could skip the plumbing and get to the part that actually matters: making product decisions with real data.
The case study above covers the research and product thinking. The product itself is in development.