Title of Role: AI Engineer
Location: San Francisco, CA
Company Stage of Funding: Seed Stage (YC-backed), Profitable
Office Type: Onsite (5–6 days per week)
Salary: $160,000 – $220,000 + Equity
Our client is a fast-growing, venture-backed AI infrastructure startup building one of the most widely adopted open-source LLM gateways in the ecosystem. Their platform unifies 100+ LLM APIs behind a single, consistent OpenAI-compatible interface, enabling developers and enterprises to integrate seamlessly across providers.
Backed by leading early-stage investors, the company is profitable, generating multi-million-dollar ARR, and scaling quickly. With a small, high-caliber team based in San Francisco, they are defining the interoperability layer for the modern AI stack.
This is a rare opportunity to join an early team shaping core AI infrastructure used by thousands of developers worldwide.
As an AI Engineer, you will play a foundational role in building and scaling the interoperability layer for large language models.
You will:
Design and implement transformations that map OpenAI-compatible API requests to provider-specific LLM APIs
Add and maintain support for new LLM providers and evolving API specifications
Handle provider-specific edge cases, performance considerations, and streaming constraints
Build scalable logging, cost tracking, and spend aggregation systems across millions of API calls
Improve reliability and performance of high-throughput backend services
Contribute directly to a widely adopted open-source project
Collaborate closely with the founding team on architecture and product direction
Engage with users and developers to understand real-world integration needs
This role combines deep backend engineering, API design, distributed systems thinking, and direct impact on AI developer tooling.
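To give a flavor of the transformation work described above, here is a minimal, hypothetical sketch of mapping an OpenAI-style chat request onto an Anthropic-style payload. The field names mirror the public API shapes, but the function, defaults, and model name are illustrative assumptions, not the client's actual implementation.

```python
# Hypothetical sketch: translating an OpenAI-style chat-completions request
# into an Anthropic Messages-style payload. Illustrative only.

def to_anthropic(openai_request: dict) -> dict:
    """Map an OpenAI chat request dict to an Anthropic-shaped payload."""
    # Anthropic takes the system prompt as a top-level field, not a message.
    system_parts = [m["content"] for m in openai_request["messages"]
                    if m["role"] == "system"]
    chat = [m for m in openai_request["messages"] if m["role"] != "system"]

    payload = {
        "model": openai_request["model"],
        "messages": chat,
        # Anthropic requires max_tokens; assume a default when unset.
        "max_tokens": openai_request.get("max_tokens", 1024),
    }
    if system_parts:
        payload["system"] = "\n".join(system_parts)
    if "temperature" in openai_request:
        payload["temperature"] = openai_request["temperature"]
    return payload

request = {
    "model": "claude-sonnet",  # placeholder model name
    "messages": [
        {"role": "system", "content": "Be concise."},
        {"role": "user", "content": "Hello!"},
    ],
}
translated = to_anthropic(request)
```

Much of the day-to-day work is exactly this kind of edge-case handling: fields that are required on one provider and optional on another, system prompts that live in different places, and streaming formats that differ per vendor.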
What you'll bring:
1+ years of backend engineering experience building production systems
Strong proficiency in Python and experience with modern backend frameworks (e.g., FastAPI)
Experience designing and integrating APIs at scale
Exposure to distributed systems, performance optimization, or high-throughput services
Comfortable working in small, fast-moving teams with high ownership
Experience maintaining or contributing to open-source projects is a plus
Background in AI/ML infrastructure, developer tooling, or API platforms is highly valued
Startup experience (founder or early employee) is a strong plus
You thrive in environments where speed, ownership, and accountability matter. You enjoy solving infrastructure-level problems and care deeply about clean abstractions and developer experience.
Nice to have:
Experience working with LLM APIs (OpenAI, Anthropic, Bedrock, Azure, Vertex, etc.)
Familiarity with asynchronous networking (e.g., httpx, aiohttp)
Experience with Postgres, Redis, cloud storage (S3, GCS), or observability tools
Background in API standardization, data pipelines, or developer SDKs
Experience scaling systems handling millions of events or logs
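The cost-tracking and spend-aggregation work mentioned above can be sketched roughly as follows. This is an assumed toy version: the per-1K-token rates, key names, and log schema are invented for illustration, not the client's real pricing tables or pipeline.

```python
# Hypothetical sketch: aggregating per-key spend across logged LLM calls.
# Rates and log schema are illustrative assumptions.

from collections import defaultdict

PRICE_PER_1K_TOKENS = {"gpt-4o": 0.005, "claude-sonnet": 0.003}  # assumed rates

def aggregate_spend(call_logs: list[dict]) -> dict[str, float]:
    """Sum estimated spend (USD) per API key from a stream of call logs."""
    totals: dict[str, float] = defaultdict(float)
    for log in call_logs:
        rate = PRICE_PER_1K_TOKENS.get(log["model"], 0.0)
        totals[log["api_key"]] += rate * log["total_tokens"] / 1000
    return dict(totals)

logs = [
    {"api_key": "team-a", "model": "gpt-4o", "total_tokens": 2000},
    {"api_key": "team-a", "model": "claude-sonnet", "total_tokens": 1000},
    {"api_key": "team-b", "model": "gpt-4o", "total_tokens": 500},
]
spend = aggregate_spend(logs)
```

At the scale described (millions of calls), the real system would batch and persist these aggregates rather than compute them in memory, but the core accounting logic looks like this.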
What's on offer:
Base Salary: $160,000 – $220,000
Equity: Meaningful early-stage equity (0.5% – 3% range depending on experience)
Full-time position
Onsite collaboration in San Francisco
High ownership and direct exposure to leadership
Opportunity to shape foundational AI infrastructure used globally
This role is ideal for engineers who want to work at the core of AI infrastructure, influence open standards, and build systems that power the next generation of AI products.
If you're excited about shaping the interoperability layer of the LLM ecosystem and working alongside a highly technical founding team, we’d love to connect.