Work Schedule
Standard (Mon-Fri)
Environmental Conditions
Office
Job Description
Thermo Fisher Scientific Inc. is the world leader in serving science, with annual revenue exceeding $40 billion and extensive investment in R&D. Our Mission is to enable our customers to make the world healthier, cleaner and safer. Our customers include pharmaceutical and biotech companies, hospitals and clinical diagnostic labs, universities, research institutions, and government agencies. Our innovations drive scientific breakthroughs, from groundbreaking research to routine testing and real-world applications.
How will you make an impact?
You will play a pivotal role in designing and delivering reliable, robust AI applications, algorithms, and frameworks that elevate the quality and performance of our product offerings. You will collaborate with and learn from a dedicated team of algorithm and software developers, revolutionizing healthcare through low-cost, high-efficiency diagnostic systems.
What will you do?
- Architect, build, and deploy LLM-powered agent systems (chatbots, copilots, agents) that are safe, fast, and cost-efficient.
- Own the whole product development life cycle: build → prototype → evaluate → harden → monitor.
- Build retrieval-augmented generation (RAG) pipelines (indexing, chunking, embeddings, reranking, grounding).
- Apply context engineering (prompt design, tool calling, memory, compression, window strategy).
- Integrate tools through Model Context Protocol (MCP) and other agent frameworks.
- Design and deploy production-grade chatbots with multi-turn conversation flows, escalation mechanisms, and integrated safety guardrails for seamless use across web, mobile, and internal platforms.
- Implement risk controls (safety filters, jailbreak resistance, PII redaction, abuse detection, audit logs).
- Optimize performance (latency, efficiency, token/cost budgets, streaming, caching, model routing).
- Establish evaluation practices: golden sets, RAG/grounding scores, toxicity checks, A/B tests, and latency and cost benchmarks.
- Operate in production: tracing, prompt/version lineage, drift detection, incident response, SLOs.
- Collaborate with cross-functional teams, including software, hardware, and data science, to ensure algorithms meet product requirements and are well-integrated into production systems.
- Mentor junior AI engineers, set coding and documentation standards, and champion LLM engineering guidelines, including reproducibility, to ensure algorithm reliability and transparency.
- Stay informed on new technologies and industry standards to continuously improve development and evaluation methodologies.
How will you get here?
Education
Master’s degree in Computer Science, Mathematics, Statistics, Bioinformatics, or a related field; a Ph.D. or equivalent experience is highly preferred.
Experience and Skills Required
- 3+ years of hands-on experience in production-level chatbot development, including at least 1 year of experience building LLM-based agents.
- Hands-on experience with major LLM APIs and frameworks (OpenAI, Anthropic, LangChain, Hugging Face, etc.); expertise in prompt and context engineering for LLMs.
- Deep experience with RAG pipelines and vector/hybrid search (e.g., FAISS, pgvector, Pinecone), rerankers, and grounding/citation techniques.
- Experience developing and integrating tools using Model Context Protocol (MCP), including defining tool capabilities and managing access permissions.
- Demonstrated skills in developing resilient chatbots incorporating state management, tool/function integration, fallback strategies, and multilingual support.
- Proficient programming abilities in Python (mandatory); familiarity with TypeScript, Java, JavaScript, or MATLAB is advantageous.
- Experience managing AI agent safety, including content moderation, policy enforcement, red-teaming, and hallucination mitigation.
- Hands-on experience profiling the performance of AI agent systems, including batching and streaming strategies, asynchronous processing and concurrency, efficient caching mechanisms, and cost/latency optimization.
- Experience evaluating LLMs using tools like RAGAS, G-Eval, or similar; familiarity with offline/online metrics and A/B testing frameworks.
- Experience managing the lifecycle of LLMs in production, including versioning, rollback, and continuous improvement, as well as cloud-based CI/CD and containerized deployments.
- Strong communication skills with the ability to present work to both technical specialists and non-experts.
- Ability to work independently and collaboratively in cross-functional teams.
- Dedicated and motivated: capable of defining ambiguous tasks, establishing clear goals, iterating rapidly, requesting feedback, and consistently following through.
Preferred
- Familiarity with data systems: SQL/NoSQL, message queues, object storage, and schema design for documents and metadata.
- Understanding of AI system security, data privacy, and compliance considerations in production environments.
- Proven proficiency in coaching junior AI engineers and supporting team-level technical direction.
- Experience with observability tools and practices, including logging, distributed tracing (e.g., OpenTelemetry or equivalent), and metrics monitoring (e.g., Prometheus, Grafana).
- Hands-on experience with AWS SageMaker, Bedrock, and Step Functions, along with other relevant AWS services, to build, deploy, and orchestrate AI agents in scalable, production-grade workflows.
- Experience in the biotechnology industry is a plus.