AI hiring is our native practice, and at AI-native companies the bar is not optional.
We place at AI-native companies, from frontier labs to applied-ML scale-ups: ML engineers, research engineers, AI product managers, ML-adjacent infrastructure engineers, and executive leadership, calibrated to the specific research-vs-applied split of each engagement.
The AI / ML market, honestly.
AI hiring is our flagship specialty. The market is both larger and narrower than most employers realize — larger because every venture-backed company is trying to ship AI features, narrower because the true senior ML engineers who can own a model end-to-end (research through production) are a small fraction of the self-labeled pool. We distinguish frontier-research candidates from applied-ML candidates, applied from MLOps, and LLM-infra specialists from classical ML. Our recruiters run technical screens with real production scenarios, not take-homes.
What we see in AI / ML.
- Frontier research vs applied ML are distinct candidate pools with different comp and different screens
- LLM infrastructure (inference serving, fine-tuning pipelines, RAG systems) is its own specialty
- Agentic AI engineering is a fast-growing sub-specialty as of 2026
- Evaluation engineering (evals, harnesses, benchmarks) is increasingly a distinct senior role
- AI product managers calibrate differently from SaaS PMs — deeper model literacy required
- Research-hire comp is distorted at frontier labs — typical salary bands don't apply
Roles we place in AI / ML.
- Senior ML Engineer (applied + production)
- Research Engineer (frontier + labs)
- LLM Infrastructure Engineer
- Staff ML Engineer
- MLOps / ML Platform Engineer
- AI Product Manager
- VP Engineering / CTO (AI-native product)
- Head of ML / VP AI
How AI / ML compensation differs.
- Frontier-lab research comp in the U.S. regularly clears $1M all-in at senior IC level
- Applied ML engineer comp is 15–30% above comparable senior backend roles
- LLM infra specialists carry premiums because the pool is still small
- Equity components are often the dominant comp line at venture-backed AI companies
Who we work with in AI / ML.
- AI-native Series A–D scale-ups shipping AI as the core product
- Enterprise software companies adding AI capabilities through a dedicated team
- Research labs and foundation-model companies
- Applied-AI companies in vertical markets (healthcare AI, legal AI, fintech AI)
AI / ML hiring — questions we hear.
Do you place candidates at research labs?
Yes, though it's a smaller portion of our AI practice. Research-lab placements run differently — compensation is in a different band, the candidate pool is global, and the interview process is research-first. We run these as retained executive or retained IC searches.
How do you tell LLM-infra specialists from classical ML candidates?
Through the technical screen. We ask candidates about specific LLM-infra primitives (vLLM, Triton, inference optimization, evaluation loops, RAG architectures) that classical ML candidates typically haven't worked with. The delta is visible in the first 15 minutes.
Hiring across verticals?
Most scale-up hiring spans multiple verticals — a healthtech company hires SaaS-native engineers, a fintech needs enterprise-grade security. We calibrate per role.