AI engineer
Ghent, Belgium
Full-Time (On-site / Hybrid)
$100k – $140k / year + 0.50% – 1% Equity
We're hiring an AI engineer to build the intelligence that powers Bond's AI Chief of Staff — the agents, briefing pipelines, prompt systems, and LLM infrastructure that turn raw company data into executive-ready insight. You'll work directly with the founders and own our entire AI layer from day one.
Why Bond?
At Bond, we are building the world's first AI Chief of Staff. After speaking with over 2,000 executives, we identified a universal pain point: information fragmentation and overload. Even the best teams spend countless hours chasing updates, untangling loose threads, and sitting in status meetings, only to find the data they need is buried in a silo.
We're building a system that understands what's happening across a company and tells leaders what truly deserves their time. You don't just save a few hours. You change how companies operate. This is a new layer of infrastructure. Whoever solves this becomes the operating system for modern organizations. This is a generational engineering problem: massive data ingestion, state modeling, prioritization logic, real-time inference, and agentic action loops.
And nobody has cracked it.
Backed by Y Combinator, Fellows Fund, Goodwater Capital, and E14, we are building the intelligence layer that connects a company's scattered information automatically. We have raised our seed round and are now expanding our engineering team.
What You'll Do
Build agentic systems that actually work: You will own BondBot, our conversational AI agent. Today it orchestrates 30+ skillsets across 15 platforms — searching Slack threads, triaging Linear issues, drafting replies, managing todos — all through natural language. You will push this from "useful assistant" to "indispensable Chief of Staff" by improving tool orchestration, multi-turn reasoning, and autonomous action loops.
Ship the daily briefing pipeline: Every morning, Bond generates a personalized executive briefing by running five specialized AI agents in parallel — extracting todos, summarizing updates, prepping meetings, tracking objectives, and surfacing what matters. You will own this pipeline end-to-end: the preprocessing, the agent orchestration, the prompt engineering, and the output quality.
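The fan-out pattern behind a briefing like this can be sketched in a few lines of async Python. Everything here is illustrative, not Bond's actual implementation: the agent names, stub return values, and briefing shape are assumptions standing in for real LLM-backed agents.

```python
import asyncio

# Illustrative agent stubs; in a real pipeline each would wrap an LLM call.
async def extract_todos(ctx: dict) -> dict:
    return {"todos": ["review Q3 plan"]}

async def summarize_updates(ctx: dict) -> dict:
    return {"updates": ["deploy shipped"]}

async def prep_meetings(ctx: dict) -> dict:
    return {"meetings": ["9:30 standup"]}

async def track_objectives(ctx: dict) -> dict:
    return {"objectives": ["hit seed milestones"]}

async def surface_highlights(ctx: dict) -> dict:
    return {"highlights": ["customer escalation"]}

AGENTS = [extract_todos, summarize_updates, prep_meetings,
          track_objectives, surface_highlights]

async def build_briefing(ctx: dict) -> dict:
    # Fan out: each agent reads the shared context and returns its own
    # section, so no agent mutates shared state while others run.
    sections = await asyncio.gather(*(agent(ctx) for agent in AGENTS))
    briefing: dict = {}
    for section in sections:
        briefing.update(section)
    return briefing

briefing = asyncio.run(build_briefing({"user": "exec@example.com"}))
print(sorted(briefing))  # → ['highlights', 'meetings', 'objectives', 'todos', 'updates']
```

Having each agent return its section rather than write into a shared object is what keeps five parallel coroutines from corrupting each other's state.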
Engineer prompts like software: We treat LLM prompts as production code. You will maintain our library of 80+ prompt templates with semantic versioning, structured output schemas, and a rigorous TDD evaluation loop (write failing tests first, then fix the prompt). You will know when a prompt is overfitting versus genuinely improving.
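A minimal sketch of that test-first loop, with stubbed model responses instead of live calls (the template name, schema, and responses are hypothetical):

```python
import json

PROMPT_VERSION = "todo-extractor@2.1.0"  # semantic versioning per template

def validate_output(raw: str) -> dict:
    """Parse a model response and check it against the expected schema."""
    data = json.loads(raw)
    assert isinstance(data.get("todos"), list), "todos must be a list"
    for todo in data["todos"]:
        assert isinstance(todo.get("title"), str) and todo["title"], \
            "each todo needs a non-empty title"
    return data

# Step 1: write the failing case first...
bad_response = '{"todos": [{"title": ""}]}'
try:
    validate_output(bad_response)
    passed = True
except AssertionError:
    passed = False
print(passed)  # → False: the test fails until the prompt is fixed

# Step 2: ...then iterate on the prompt until the model's output passes.
good_response = '{"todos": [{"title": "Reply to the board update"}]}'
print(validate_output(good_response)["todos"][0]["title"])
```

The failing case pins down the defect before any prompt edit, which is what separates data-driven iteration from re-rolling and hoping.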
Orchestrate multi-provider LLM infrastructure: Bond runs across OpenAI, Anthropic, Google, and AWS Bedrock with intelligent fallback chains, rate limiting, and circuit breakers. You will optimize for cost, latency, and quality — picking the right model for each of our 49+ generation functions.
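The fallback-chain-plus-circuit-breaker pattern named above can be sketched as follows. This is a generic illustration under assumed thresholds, with stub callables standing in for real provider SDKs, not Bond's implementation:

```python
import time

class CircuitBreaker:
    """Opens after `threshold` consecutive failures; retries after `cooldown` seconds."""
    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def available(self) -> bool:
        if self.opened_at is None:
            return True
        return time.monotonic() - self.opened_at >= self.cooldown

    def record(self, ok: bool) -> None:
        if ok:
            self.failures, self.opened_at = 0, None
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()

def generate(providers, prompt: str) -> str:
    """Try providers in preference order, skipping any with an open breaker."""
    for name, call, breaker in providers:
        if not breaker.available():
            continue
        try:
            result = call(prompt)
            breaker.record(ok=True)
            return result
        except Exception:
            breaker.record(ok=False)
    raise RuntimeError("all providers exhausted")

# Stub providers standing in for real SDK calls.
def flaky(prompt):
    raise TimeoutError("429: rate limited")  # simulated failing primary

def stable(prompt):
    return f"answer to: {prompt}"

providers = [("primary", flaky, CircuitBreaker()),
             ("fallback", stable, CircuitBreaker())]
print(generate(providers, "summarize today"))  # → answer to: summarize today
```

The breaker keeps a misbehaving provider from dragging latency down on every request; after the cooldown it is probed again rather than abandoned.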
Close the feedback loop: Great AI isn't just about generation — it's about measurement. You will build and maintain our evaluation framework, trace every LLM call through Langfuse, and turn qualitative "this briefing felt off" into quantitative pass rates that guide prompt iteration.
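At its simplest, "quantitative pass rates" means reducing a batch of rubric judgments to a number you can gate prompt changes on. A toy sketch with hypothetical eval results:

```python
def pass_rate(results: list) -> float:
    """Fraction of eval cases that passed; 0.0 for an empty run."""
    return sum(results) / len(results) if results else 0.0

# Hypothetical eval run: each bool is one briefing judged against a rubric.
baseline = [True, True, False, True, False, True, True, True]
candidate = [True, True, True, True, False, True, True, True]

improved = pass_rate(candidate) > pass_rate(baseline)
print(pass_rate(baseline), pass_rate(candidate), improved)  # → 0.75 0.875 True
```

In practice the booleans would come from schema checks, LLM-as-judge scores, or human review pulled out of traces, but the gate is the same: a prompt change ships only if the candidate's rate beats the baseline.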
Our Stack
We trust you can pick up new tools quickly, but here is what we are building with today:
AI/LLM: LangChain, LangGraph, OpenAI, Anthropic, Google Gemini, AWS Bedrock
Backend: Python, FastAPI, ARQ, asyncio
Data: PostgreSQL, Qdrant, S3
Observability: Langfuse, Sentry, PostHog
Infra: AWS, Terraform, Docker, GitHub Actions
Who You Are
3+ years building LLM-powered applications in production, or a proven history of "high slope" engineering (you've shipped complex AI side projects that actually work under real conditions).
Prompt engineer, not prompt gambler: You understand structured generation, tool calling, and output validation. You iterate on prompts with data, not vibes. You know the difference between a prompt that passes 12 test cases and one that generalizes.
Agent builder: You have experience with agentic architectures — tool orchestration, multi-step reasoning, memory management, graceful failure. You understand why "just call GPT" isn't an architecture.
Async-native Python developer: Our entire AI layer is async. You are comfortable with asyncio, concurrent pipelines, and the sharp edges of running five LLM-backed agents in parallel without corrupting shared state.
Pragmatic & low ego: You step up in incidents to unblock teammates. You believe the best idea wins, not the loudest voice. You know when to use Claude Opus and when Haiku will do.
Product mindset: You don't just optimize F1 scores; you optimize for the feeling of opening your briefing at 7am and knowing exactly what deserves your attention. You have deep empathy for the user and care about the last mile.
Startup ready: You are autonomous, ownership-driven, and able to navigate ambiguity. When the agent hallucinates, you don't file a ticket — you trace the Langfuse span, find the bad prompt, write a failing test, and ship the fix.
Global communicator: You are proactive and reliable, capable of coordinating effectively across our Ghent office and our San Francisco presence.
Driven: You are ready to make this your life's work.
If you're looking for work-life balance, predictable routines, or a well-defined roadmap… this isn't your place. If you want to build something that outlives you, welcome home :)
The Bond Offer
Ghent Roots, Silicon Valley Access: You will work out of the Wintercircus in the beautiful city center of Ghent. We regularly travel to San Francisco to work with our Co-Founder & CEO, investors, clients, and partners. You are expected to join us. This is your ticket to the global tech ecosystem.
Competitive Pay: $100k – $140k / year. We aim to pay top-tier rates for the Belgian market.
Ownership: 0.50% – 1% Equity. You are a builder, and you should own what you build.