Location: Full-remote
Schedule: Full-time
As a Software Engineer, Artificial Intelligence, you will play a key role in building, operating, and continuously improving AI-driven systems that support internal business capabilities. You will focus on designing and maintaining AI-powered data pipelines, agent-based solutions, and LLM-driven services, working closely with modern data and ML platforms.
This role requires strong software engineering foundations combined with hands-on experience in large language models, vector stores, and ML lifecycle management. You will contribute to scalable, production-grade AI solutions while driving quality, reliability, and continuous improvement across the full model and data pipeline lifecycle.
Build and maintain AI-powered data pipelines and extraction processes (batch and streaming) from internal relational data sources and unstructured documents (PDF, Word, PowerPoint) into structured datasets within Databricks.
Own end-to-end delivery of AI solutions, from design through production.
Design and manage text embeddings and vector stores within Databricks for use with vector indexing and retrieval solutions.
Drive architecture and best practices for AI-powered systems.
Design, develop, and maintain custom tools implemented as MCP servers and Databricks applications to extend agent and model capabilities.
Design, develop, and implement AI agents using frameworks such as LangChain and LangGraph.
Implement LLM scorers to validate and monitor agents, applications and models. Prevent issues like hallucinations or unnecessary actions through structured testing and guardrails.
Drive continuous improvement through prompt engineering, pipeline optimization, vector store tuning, and scorer refinement to ensure high-quality LLM responses.
Collaborate on production deployments, monitoring, and scalability of ML and LLM-based services.
5-8 years of industry experience in software engineering or related roles.
Strong proficiency in Python, including production services, asynchronous programming, and testing.
Hands-on experience with AI agent development frameworks such as LangChain, LangGraph, and LlamaIndex.
Experience using MLflow for prompt engineering, experimentation, evaluation, model registry, and deployments.
Solid understanding of vector databases (e.g., FAISS, Pinecone, Weaviate, or Chroma), including serverless and managed options.
Experience implementing Retrieval-Augmented Generation (RAG) solutions, covering data ingestion, retrieval, and LLM generation.
Experience building and consuming REST APIs, model serving solutions, and CI/CD pipelines.
Experience working with cloud platforms (AWS), containerization (Docker), and modern deployment practices.
Hands-on experience with Databricks, Apache Spark, and Delta Lake (nice to have).
Advanced English level, both written and verbal.

We help tech leaders move faster, stay aligned, and build what matters.
After scaling and selling companies to eBay and Naspers, we built Eureka Labs to be the kind of partner we always wanted: fast-moving, deeply embedded, and focused on outcomes.
Our model: dedicated nearshore teams, tailored AI solutions, zero fluff.
Each team is custom-built to match your goals, culture, and stack—integrating smoothly and taking ownership from day one. With an NPS of 89, we’ve earned our partners’ trust by cutting through complexity and delivering real impact.
Behind it all is a culture we’re proud of: great people who love what they do and stay for the long run.
Scaling is hard. We make it work.