Software Engineer, AgentOps (Remote) at Fieldguide
Job Description
What You'll Own
- Agent infrastructure and tooling. Build and maintain the internal platform that makes agentic workflows reliable and easy for our team members to adopt: MCP integrations, prompt/skill libraries, shared configurations, knowledge bases, and the tooling that connects agents to our codebase, docs, and internal services.
- Developer experience for AI workflows. Make the right way to use AI the easy way for all team members. Onboarding, documentation, clear paths for common workflows (planning, testing, ticket creation, code review, production support), and the feedback loops that tell you what's working and what isn't.
- Measurement and attribution. Build the instrumentation to track efficiency metrics, agent effectiveness, and team productivity.
- Experimentation and evaluation. The AI tooling landscape changes quickly. Run structured trials of new tools, synthesize learnings, and maintain a living point of view that the team can follow.
- Enablement and culture. Trainings, office hours, internal demos, and coaching that raise the floor across the org. You're making sure every engineer can be AI-native from day one.
What You Bring
Must-haves
- You are an excellent engineer first. You can ship reliable internal tools, build integrations, and debug production systems. You love building things.
- Deep, hands-on experience with AI coding tools. You've used Claude Code, Cursor, Codex, and more in real engineering work, not toy projects. You have strong opinions about what works and what doesn't.
- Systems thinking. You see the connection between a developer's local tooling and org-wide velocity. You think in terms of platforms and impact.
- Ability to influence without authority. You'll be working across engineering, product, design, and more. You need to bring people along through great docs, practical examples, and results rather than mandates.
- Bias toward measurement. You're not satisfied with "it feels faster." You instrument, you measure, you iterate based on data.
Strong signals
- Platform engineering or developer experience background
- Experience with LLMOps practices: eval frameworks, prompt regression testing, cost/performance tracking
- Track record of rolling out internal tools across teams (adoption playbooks, change management)
- Security and compliance familiarity for AI-enabled workflows (data access boundaries, governance, observability)
- Experience building knowledge systems: search, RAG, internal documentation platforms