Tim Davis

Account Executive - AI Infrastructure

Modular

Account Executive | Outbound Heavy | Enterprise
Deal Size: $150K-500K+ ACV
Sales Cycle: 4-9 months

Overview

You sell Modular's AI infrastructure stack to companies with serious ML workloads. Your buyers are ML platform engineers, MLOps leads, and engineering directors who currently run models on NVIDIA GPUs and want better performance, hardware portability, or easier production deployment. Post-acquisition of BentoML, you're now selling a full-stack story: optimization (MAX/Mojo) plus production serving (BentoML).


Role Snapshot

Role Type: Full-cycle AE (likely self-sourced + some inbound)
Sales Motion: Outbound-heavy with technical inbound from open source
Deal Complexity: Enterprise/Strategic - multi-stakeholder, technical eval
Sales Cycle: 4-9 months (POC required, infra changes are slow)
Deal Size: $150K-500K+ ACV (infrastructure spend)
Quota (est.): $1M-1.5M/year

Company Context

Stage: Series B+ (recently acquired BentoML - a sign of growth capital)

Size: 315 employees

Growth: Aggressively hiring post-acquisition. BentoML brings 10K+ orgs and 50+ F500 companies to the funnel. Recent big move signals momentum.

Market Position: Category creator. They're not selling "another ML tool" - they're pitching a hypervisor for AI compute. Competitors build on top of the NVIDIA/AMD stacks; Modular offers a hardware-agnostic layer underneath them.


GTM Reality

Pipeline Sources:

  • 30% Inbound - Open source users (BentoML community, Mojo early adopters) who hit scale/performance problems and want the commercial product
  • 60% Outbound - You target companies with large ML infrastructure spend (evident from job postings, tech blogs, conference talks)
  • 10% Referrals - Warm intros from technical advisors, investors, existing customers

SDR/AE Structure: Likely self-sourcing with some SDR support for high-value accounts. Post-acquisition, BentoML inbound creates warm leads but you still do most prospecting.

SE Support: You'll have Solutions Engineers/Architects - the product is too technical to demo alone. You coordinate POCs, but SEs run the technical deep-dives.


Competitive Landscape

Main Competitors:

  • Status quo (companies building their own serving infrastructure on NVIDIA/AMD)
  • Ray/Anyscale (distributed compute for ML)
  • Traditional ML platforms (SageMaker, Vertex AI, Databricks)
  • Open source serving frameworks (TorchServe, TensorFlow Serving)

How They Differentiate: Full-stack control (optimization + serving), true hardware portability (NVIDIA AND AMD without rewriting code), enterprise BYOC (your cloud, your VPC). Not a hosted service - it's your infrastructure.

Common Objections:

  • "We already have a serving stack" (why rebuild?)
  • "We're locked into NVIDIA" (switching cost concerns)
  • "Open source BentoML is fine for us" (why pay?)
  • "Too early in market" (risk of betting on new category)

Win Themes: Performance gains (quantified), avoiding hardware vendor lock-in, faster deployment cycles, enterprise support for open-source stack


What You'll Actually Do

Time Breakdown

Prospecting (35%) | Active Deals (40%) | Internal/Admin (25%)

Key Activities

  • Outbound prospecting to ML platform teams: You research companies hiring ML engineers at scale, read their tech blogs, find LinkedIn posts about their infrastructure challenges. Then cold email/LinkedIn saying "saw you're running [X models] on [Y infrastructure] - here's how we helped [similar company] cut inference costs 60%."

  • Running technical discovery calls: Your job is to understand their current stack (what models, what hardware, what serving framework, what pain points). You're qualifying for: scale (need 20+ models in production), technical fit (using PyTorch/TensorFlow), budget authority (infrastructure teams with $500K+ spend).

  • Coordinating POCs with Solutions Engineers: Once you get a technical champion interested, you tee up a 30-60 day proof of concept. You're project managing: getting them environment access, weekly check-ins on progress, navigating their security review, pushing to get results benchmarked against their current setup.

  • Multi-threading to close: Technical champion loves it, but you need procurement, infra lead, maybe CTO sign-off. You're booking executive briefings, building business cases (cost savings, deployment velocity), and chasing stakeholders across departments. Deals slip because "we need to finish Q4 roadmap first" or "security review is backed up."


The Honest Reality

What's Hard

  • Long, technical sales cycles: These aren't quick wins. Companies have existing infrastructure that works. You're asking them to rip-and-replace or layer in new tooling. POCs take 60+ days, then procurement takes another 30-60. Most of your pipeline is 6+ months out.

  • You're selling a category that doesn't exist yet: "Hypervisor for AI compute" isn't in anyone's budget. You're educating while selling. Lots of conversations die at "interesting but not a priority right now" because they don't have a burning problem you solve.

  • Open source cannibalization: BentoML is Apache 2.0 open source. Many prospects will say "we'll just use the free version." You're selling enterprise features (support, SLAs, advanced optimization) to teams who are used to DIY-ing their infrastructure.

What Success Looks Like

  • You close 6-8 deals per year at $150K-500K ACV (mix of new logos and expansion)
  • Your POC-to-close rate is 40-50% (high technical bar means most POCs are real opportunities)
  • Customers deploy to production within 90 days and start expanding usage (seat-based or consumption pricing kicks in)

Who You're Selling To

Primary Buyers:

  • ML Platform Engineers / MLOps Leads (IC level, technical champions)
  • Engineering Directors / VPs Infrastructure (budget holders)
  • Sometimes CTO in smaller companies (strategic infrastructure decisions)

What They Care About:

  • Performance: "Will this actually make our models faster/cheaper to run?" They want benchmark data vs. their current stack.
  • Portability: "Can we avoid NVIDIA lock-in?" Especially relevant with AMD/other chips emerging.
  • Production readiness: "Is this battle-tested or will it break our prod environment?" BentoML's 10K+ org adoption helps here.
  • Migration cost: "How hard is it to move our existing models over?" If it requires rewriting everything, it's a no-go.
  • Support/SLAs: "What happens when this breaks at 2am?" Open source is great until you need enterprise support.

Requirements

  • 3-5+ years selling infrastructure software (DevOps, MLOps, cloud, data platforms)
  • Comfortable talking to engineers - you don't need to code but you need to understand ML concepts (inference, training, model serving, GPU compute)
  • Experience running technical POCs and coordinating with Solutions Engineers
  • Track record of 6-figure ACV deals with 3-9 month sales cycles
  • Familiarity with ML/AI ecosystem (PyTorch, TensorFlow, model deployment challenges)
  • Bonus: sold developer tools or open-source commercial products before