AI Engineer (Full-time, Remote) at Kata.ai


Kata.ai is Hiring

Job Info:
  • Company: Kata.ai
  • Position: AI Engineer (Full-time, Remote)
  • Location: South Jakarta, Indonesia
  • Source: SmartRecruiters
  • Published: April 08, 2026
  • Category: Development
  • Type: Full-Time


Job Description

You will design, build, and deploy production-grade AI systems, including LLM-powered conversational agents, RAG pipelines, NLP workflows, and voice AI integrations, delivering intelligent, reliable, and measurable solutions for enterprise clients across the government, financial services, healthcare, and telecommunications sectors. Your work enables Kata's clients to automate customer interactions at scale with high accuracy, low latency, and strong business impact.

Qualifications & Education:

  • Bachelor's degree in Computer Science, Artificial Intelligence, Data Science, Computational Linguistics, or related field
  • Master's degree in AI/ML is a plus
  • Relevant certifications (GCP AI/ML, DeepLearning.AI, etc.) are advantageous

Technical Skills:

  • LLM Integration: OpenAI GPT-4o, Anthropic Claude, Google Gemini, or open-source models (LLaMA, Mistral, Qwen)
  • AI Frameworks: LangChain, LlamaIndex, CrewAI, or similar agent/RAG orchestration frameworks
  • Prompt Engineering: System prompt design, few-shot prompting, chain-of-thought, structured output (JSON mode, function calling)
  • RAG Pipelines: Document chunking, embedding strategies, retrieval optimization, reranking
  • Vector Databases: Pinecone, Weaviate, Qdrant, or pgvector
  • Voice AI: LiveKit Agents SDK, STT integrations (Deepgram, Google Speech-to-Text, Whisper), TTS integrations (ElevenLabs, Google TTS)
  • Languages: Python (required); FastAPI for AI service exposure
  • Cloud: GCP or Azure for AI/ML workload deployment — Vertex AI, Azure OpenAI, Cloud Run
  • Evaluation Frameworks: RAGAS, DeepEval, custom eval pipelines, or LLM-as-judge approaches
  • Containerization: Docker; basic Kubernetes for deploying AI services
  • Monitoring: AI-specific observability — LangSmith, Langfuse, or custom logging for tracing LLM calls in production
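To give a concrete picture of the RAG pipeline skills listed above, here is a minimal, self-contained sketch of the chunking-and-retrieval step. The bag-of-words "embedding" and cosine ranking are illustrative stand-ins only; in production you would use a real embedding model, a vector database (Qdrant, Pinecone, Weaviate, or pgvector), and a reranker.

```python
# Minimal RAG retrieval sketch: naive fixed-size chunking, toy bag-of-words
# "embeddings", and cosine-similarity ranking. Illustrative only.
from collections import Counter
from math import sqrt

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (a naive chunking strategy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: a term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query; a reranker would refine this order."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]
```

The retrieved chunks would then be injected into the LLM prompt as grounding context, which is where retrieval optimization and reranking earn their keep.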

Experience

Associate Level (1–2 years)

  • 1–2 years of professional experience in AI/ML engineering or software development with a strong AI focus
  • Hands-on experience building or integrating LLM-powered applications using OpenAI, Anthropic Claude, Google Gemini, or equivalent
  • Practical exposure to conversational AI or chatbot development — prompt engineering, intent handling, or dialogue flow design
  • Familiarity with RAG pipeline concepts — document ingestion, embedding, vector search, and retrieval
  • Experience with Python and at least one AI orchestration framework (LangChain, LlamaIndex, or similar)
  • Exposure to cloud platforms (GCP or Azure) for deploying AI/ML workloads
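The prompt-engineering and structured-output skills above can be sketched in a few lines. `call_llm` is hypothetical; the point is the shape of a few-shot system prompt requesting JSON and the defensive parsing of the model's reply, which any real SDK integration (OpenAI, Anthropic, Gemini) would wrap around.

```python
# Sketch of structured-output handling for an intent classifier:
# a few-shot system prompt plus defensive JSON parsing of the model's reply.
# A real implementation would obtain `raw_reply` from an LLM API call.
import json

SYSTEM_PROMPT = (
    "Classify the user's intent. Reply with JSON only: "
    '{"intent": "<billing|support|other>", "confidence": <0.0-1.0>}\n'
    'Example: "my invoice is wrong" -> {"intent": "billing", "confidence": 0.9}'
)

def parse_intent(raw_reply: str) -> dict:
    """Validate the model's JSON output, falling back safely on malformed replies."""
    try:
        data = json.loads(raw_reply)
        if data.get("intent") in {"billing", "support", "other"}:
            return data
    except json.JSONDecodeError:
        pass
    # Malformed or out-of-schema reply: degrade gracefully instead of crashing.
    return {"intent": "other", "confidence": 0.0}
```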


Mid Level (3–5 years)

  • 3–5 years of experience in AI/ML or software engineering, with at least 2 years focused on production-grade LLM or GenAI systems
  • Proven experience designing and deploying RAG pipelines in production — including chunking strategies, embedding models, vector databases (Qdrant, Pinecone, Weaviate, or pgvector), and retrieval optimization
  • Hands-on experience building conversational AI systems for enterprise clients — chatbot, virtual assistant, or AI agent products in regulated industries
  • Demonstrated experience with Voice AI integrations — STT (Deepgram, Whisper, Google Speech-to-Text) and/or TTS (ElevenLabs, Google TTS) in a production environment, ideally with LiveKit Agents SDK or equivalent
  • Experience implementing AI evaluation frameworks (RAGAS, DeepEval, or custom eval pipelines) to measure and improve model quality
  • Experience with AI observability tooling — LangSmith, Langfuse, or custom LLM call tracing in production
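As a rough illustration of the "custom eval pipelines" mentioned above, here is a minimal offline eval loop scoring generated answers against references with a simple token-recall metric. Frameworks like RAGAS and DeepEval provide far richer metrics (faithfulness, answer relevance, LLM-as-judge); this only shows the loop's shape.

```python
# Sketch of a custom eval pipeline: score generated answers against
# reference answers with a token-overlap (recall) metric, then aggregate.
def token_recall(answer: str, reference: str) -> float:
    """Fraction of reference tokens that appear in the generated answer."""
    ref = set(reference.lower().split())
    got = set(answer.lower().split())
    return len(ref & got) / len(ref) if ref else 0.0

def run_eval(cases: list[dict], threshold: float = 0.5) -> dict:
    """Run all eval cases and report pass rate against a recall threshold."""
    scores = [token_recall(c["answer"], c["reference"]) for c in cases]
    passed = sum(s >= threshold for s in scores)
    return {"pass_rate": passed / len(cases), "mean_score": sum(scores) / len(scores)}
```

In practice such a harness runs on every prompt or retrieval change, so regressions in answer quality surface before deployment rather than in production traces.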

We offer flexible working hours for our employees.

Most importantly, we provide a rich learning experience in the conversational AI industry.

