Kriztian Dela Cruz

AI Sales Specialist

Google

Sales Engineer · Inbound-Heavy · Strategic · On-site 📍 Sunnyvale or San Francisco, CA
Deal Size: $250K-$2M+ annual cloud spend
Sales Cycle: 6-12 months
Posted by Kriztian Dela Cruz

Overview

You sell Google Cloud's AI/ML platform to enterprises already using cloud infrastructure or evaluating a move. You're part sales, part solutions architect - running technical demos of Vertex AI, designing POCs for custom AI applications, and proving Google's infrastructure (TPUs, Gemini models, vector databases) can handle production ML workloads. You work with AEs who own the commercial relationship while you handle all technical validation.


Role Snapshot

Role Type: Sales Engineer / Solutions Architect (AI/ML focused)
Sales Motion: Inbound-heavy (existing cloud customers + strategic accounts)
Deal Complexity: Enterprise / Strategic
Sales Cycle: 6-12 months (POC + procurement)
Deal Size: $250K-$2M+ annual cloud spend
Quota (est.): Influenced revenue, not direct quota - likely supporting $3-5M/year in team bookings

Company Context

Stage: Public (Alphabet) - Google Cloud is a $33B+ revenue business unit

Size: 180,000+ employees (Cloud org is ~30K)

Growth: Cloud growing 25%+ YoY, massive AI investment post-ChatGPT launch

Market Position: #3 in cloud infrastructure behind AWS and Azure, but positioned as the "AI-native cloud" with proprietary models and custom silicon


GTM Reality

Pipeline Sources:

  • 60% Existing Google Cloud customers expanding into AI workloads
  • 30% Strategic outbound to enterprises with known ML initiatives (working with dedicated AE)
  • 10% Inbound from developer teams who've used Vertex AI and need enterprise support

SDR/AE Structure: You support 2-4 Strategic Account Executives. They own the commercial relationship, you own technical validation. No SDRs - AEs prospect into strategic accounts or work named territories.

SE Support: You ARE the SE support. Some deals get help from specialized data engineers or ML researchers for deep technical questions, but you're the primary technical resource.


Competitive Landscape

Main Competitors:

  • AWS (SageMaker, Bedrock) - market leader with deepest cloud footprint
  • Microsoft Azure (Azure ML, OpenAI partnership) - strong enterprise relationships, bundled with Office 365
  • Databricks (ML platform, not infrastructure) - popular with data science teams
  • Snowflake (data + ML features) - owns the data warehouse, adding AI

How They Differentiate:

  • Proprietary models (Gemini) vs reselling OpenAI like Azure
  • Custom AI chips (TPUs) optimized for training and inference
  • Vertex AI unified platform vs stitching together AWS services
  • "We invented Transformers" credibility with ML teams

Common Objections:

  • "We're already on AWS/Azure and don't want multi-cloud complexity"
  • "Your AI models are behind OpenAI's GPT-4"
  • "AWS has more ML services and better documentation"
  • "We need hybrid/on-prem and you're cloud-only"

Win Themes:

  • Cost efficiency on training large models (TPU performance/price)
  • Vertex AI simplifies ML Ops vs building on AWS primitives
  • Access to Google's latest models and research
  • Security/compliance for regulated industries (healthcare, finance)

What You'll Actually Do

Time Breakdown

Customer Meetings (35%) | POC Work (30%) | Demos/Presentations (20%) | Internal (15%)

Key Activities

  • Technical Discovery: You join AE calls to understand what they're trying to build (fraud detection, chatbots, recommendation engines, document processing). You ask about data volume, latency requirements, accuracy targets, existing ML infrastructure. You're scoping whether this is a $100K prototype or a $2M production deployment.

  • Product Demos: You run live demos of Vertex AI - training a model, deploying to production, monitoring drift, A/B testing different approaches. Most demos are customized to their use case ("here's how you'd fine-tune Gemini on your customer support tickets"). You're screensharing into Jupyter notebooks and GCP console, not clicking through slides.

  • POC Architecture: You design proofs-of-concept that prove Google's stack works for their problem. This means writing architecture docs, sizing compute needs (how many TPUs, what instance types), estimating costs, and identifying integration points with their existing systems. Some POCs you build yourself, others you hand off to their engineering team with detailed specs.

  • Competitive Positioning: When they're evaluating AWS or Azure, you build comparison spreadsheets on price/performance, explain why Vertex AI's unified platform beats SageMaker's 15 different services, and arrange meetings with Google ML researchers if needed for credibility.

  • Internal Coordination: You pull in Google specialists - security reviewers for compliance questions, data engineers for data pipeline design, ML researchers for cutting-edge use cases, customer engineers for post-sale support planning. Lots of Slack/email coordinating who does what.
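
The compute-sizing and cost-estimation step in the POC work above reduces to back-of-envelope arithmetic. A minimal sketch, where every rate is a hypothetical placeholder (real TPU and storage pricing varies by region, generation, and commitment, and should always come from the GCP pricing page):

```python
# Back-of-envelope POC cost estimate for a training workload.
# All rates below are HYPOTHETICAL placeholders, not actual GCP pricing.

def estimate_training_cost(chip_hours: float, rate_per_chip_hour: float,
                           storage_gb: float = 0.0,
                           storage_rate_gb_month: float = 0.0) -> float:
    """Return an estimated monthly cost in USD: compute plus storage."""
    compute = chip_hours * rate_per_chip_hour
    storage = storage_gb * storage_rate_gb_month
    return round(compute + storage, 2)

# Example sizing: 8 chips running 200 hours at a placeholder $1.50/chip-hour,
# plus 500 GB of training data at a placeholder $0.02/GB-month.
cost = estimate_training_cost(chip_hours=8 * 200, rate_per_chip_hour=1.50,
                              storage_gb=500, storage_rate_gb_month=0.02)
print(cost)  # 2410.0
```

In a real architecture doc this estimate would also carry inference serving costs, networking egress, and a committed-use discount scenario.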


The Honest Reality

What's Hard

  • Most enterprises already have cloud infrastructure (usually AWS or Azure) and asking them to adopt a second cloud just for AI is a tough sell. You spend a lot of energy justifying why they should deal with multi-cloud complexity.
  • AI projects have high failure rates. Customers start POCs, realize their data quality is terrible or the use case doesn't have ROI, and quietly kill the project. Your influenced pipeline looks great until deals evaporate.
  • You're competing against AWS, which has 3x the market share and more ML/AI services. Even when Google's tech is better, procurement prefers the safe choice of staying on AWS.
  • Google Cloud has a reputation for deprioritizing products. Customers ask "will this service exist in 3 years?" and you have to reassure them while knowing Google has killed products before.
  • Long sales cycles with lots of stakeholders. You'll do a brilliant POC, then wait 4 months for procurement, budget approval, security review. Deals constantly slip quarters.
  • The AI landscape changes every month. A model you demoed 60 days ago is now outdated. You're constantly learning new products, keeping up with research papers, and updating demo scripts.

What Success Looks Like

  • You support AEs who close $3-5M in new cloud consumption revenue annually, with your technical work being critical to winning deals
  • Customers actually deploy to production and expand usage (not just POCs that go nowhere)
  • You build POCs that work on the first try and address the customer's actual constraints (latency, cost, accuracy)
  • Internal reputation as the person who can explain complex AI concepts to non-technical executives and architect solutions that actually ship

Who You're Selling To

Primary Buyers:

  • VP/Director of Data Science or ML Engineering (technical buyer, evaluates platform)
  • VP/Director of Engineering or CTO (approves architecture decisions, owns production systems)
  • Cloud Infrastructure leads (deal with procurement, security, compliance)
  • Chief Data Officer or Chief AI Officer (at larger enterprises, owns AI strategy)

What They Care About:

  • ML teams: Can we train models faster and cheaper than our current setup? Does Vertex AI reduce operational overhead vs building on AWS primitives? Can we access state-of-the-art models?
  • Engineering leaders: Will this integrate with our existing systems? What's the migration risk? Who supports this in production?
  • Infrastructure teams: Security, compliance, cost controls, multi-cloud strategy, vendor lock-in risk
  • Executives: Time to value, competitive advantage from AI, cost vs building in-house, strategic partnership with Google

Requirements

  • 5+ years in technical sales, solutions architecture, or ML engineering roles - you need credibility with data science teams
  • Hands-on experience with ML platforms (SageMaker, Azure ML, Databricks, or similar) - you can't fake your way through technical conversations
  • Ability to code in Python, understand ML concepts (training vs inference, fine-tuning, embeddings, RAG), and debug technical issues during demos
  • Experience selling into enterprise accounts with long sales cycles and multiple stakeholders
  • Comfortable presenting to both technical audiences (data scientists) and executives (explaining ROI without jargon)
  • Willingness to live in the Bay Area, with regular customer visits and on-site meetings (not a remote role, despite being tech sales)
  • Track record of designing POCs that actually lead to production deployments, not science projects that die in pilot phase
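
The concept fluency the requirements call for (embeddings, RAG) comes down to ideas like the one below: retrieve the most relevant document by cosine similarity between embeddings. This is a toy sketch - the 3-dimensional vectors are hand-written stand-ins for what a real embedding model would produce.

```python
import math

# Toy RAG retrieval step: rank documents by cosine similarity of their
# embeddings to a query embedding. The 3-d vectors are illustrative
# stand-ins for real model-generated embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

docs = {
    "refund policy":   [0.9, 0.1, 0.0],
    "shipping times":  [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

query = [0.8, 0.2, 0.1]  # e.g. "how do I get my money back?"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # refund policy
```

In a production RAG system the retrieved passages would then be injected into the model's prompt as grounding context; being able to explain that flow on a whiteboard is the bar this role sets.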