Python AI Engineer (Prompt & Agentic Systems)
About the position
We’re looking for a hands-on engineer who can build AI-enabled applications end-to-end using Python, with strong skills in prompt engineering and agentic system design (multi-agent/orchestrated AI workflows). You’ll design, develop, and productionize intelligent features—ranging from retrieval-augmented generation (RAG) to autonomous tasking agents integrated with internal tools and APIs.
Responsibilities
• Design & Build AI Services: Develop Python-based back-end services that integrate LLMs for reasoning, extraction, summarization, and decision support.
• Prompt Engineering: Craft, version, and evaluate prompts/system instructions; design guardrails, test prompt variants, and optimize for reliability, latency, and cost.
• Agentic Systems: Architect and implement autonomous/multi-agent workflows—planning, tool-use, memory, error recovery, and human-in-the-loop controls.
• RAG Pipelines: Implement document ingestion, chunking, embeddings, vector search (semantic/re-ranking), and grounding strategies.
• Evaluation & Observability: Define quality metrics (accuracy, factuality, safety), build eval suites, and establish tracing/telemetry for LLM calls.
• API & Tool Integrations: Enable agents to use tools (internal APIs, search, databases, workflow engines); handle auth, rate limits, and fallbacks.
• MLOps / AIOps: Package, containerize, and deploy services (Docker/K8s); manage keys, secrets, CI/CD; support canary rollouts and cost governance.
• Security & Compliance: Apply data privacy principles, PII handling, redaction, prompt injection defenses, and audit logging.
• Cross-Functional Collaboration: Partner with product, data, and security teams to translate requirements into reliable AI features.
Requirements
• Strong Python (typing, async, testing, packaging) and experience building production APIs/services (FastAPI/Flask).
• Hands-on with LLMs (OpenAI, Azure OpenAI, Anthropic, etc.) and embedding/RAG workflows.
• Proven prompt engineering experience (few-shot strategies, tool-use instructions, output schemas, function/tool calling).
• Experience with agent frameworks or custom agent orchestration (e.g., LangGraph/LangChain/AutoGen, or in-house equivalents).
• Vector databases (e.g., FAISS, Chroma, Pinecone, Weaviate) and search relevance tuning.
• Familiar with MLOps/DevOps: Docker, CI/CD, monitoring (Prometheus/Grafana), logging (OpenTelemetry), secrets management.
• Testing & Evals: unit/integration tests, offline evals, golden datasets, regression checks.
• Practical understanding of AI safety/guardrails (prompt injection, data leakage, jailbreak prevention).
Nice-to-haves
• Experience with Azure (or AWS/GCP) AI services, key vaults, and networking.
• Knowledge of Model Context Protocol (MCP) or tool-server patterns for secure tool access.
• Experience with retrievers (BM25, hybrid search), re-rankers, or LlamaIndex/LangChain.
• Familiarity with streaming UIs and structured outputs (JSON, Pydantic schemas).
• Background in LLM finetuning, RLHF/DPO, or synthetic data generation.
• Front-end basics for AI UX (React/Next.js) or chat UI patterns.
• Domain knowledge in HR/ATS, customer support, or internal enterprise workflows.
Apply to this job