ZharfaTech

AI Researcher – Large Language Models


About Us

At ZharfaTech, we are at the forefront of generative artificial intelligence research, specializing in Large Language Models (LLMs). Whether it’s pre-training foundational models, fine-tuning for specialized tasks, or developing novel RL-based alignment techniques, our team works across every stage of the LLM lifecycle.

About the Role

As an AI Researcher specializing in LLMs, you will lead and contribute to groundbreaking research in model architecture, training optimization, and alignment techniques. You’ll work on every stage of the LLM lifecycle—from pre-training models to fine-tuning for real-world applications—while collaborating with a multidisciplinary team of engineers, data scientists, and product specialists.

We’re looking for someone who thrives on solving complex technical challenges, has a deep understanding of modern LLM techniques, and is eager to publish influential research while building production-ready AI systems.

What You'll Do

Research & Model Development

  • Design and train state-of-the-art LLMs using PyTorch, Hugging Face Transformers, and DeepSpeed, optimizing for scalability, efficiency, and performance.
  • Innovate in LLM architectures, exploring Transformer variants, Mixture of Experts (MoE), and sparse models to push the limits of model capabilities.
  • Develop reasoning-based models, improving logical inference, multi-step problem-solving, and long-context understanding.
  • Implement and refine alignment techniques, including RLHF (PPO, DPO, GRPO), supervised fine-tuning (SFT), and novel reward modeling approaches.
  • Work with engineering teams to deploy models into production, ensuring scalability, latency optimization, and robustness.
  • Publish research findings at top-tier conferences (NeurIPS, ICLR, ACL, EMNLP) and contribute to the open-source AI community.
  • Mentor junior researchers, lead technical discussions, and drive innovation in LLM research.

What You'll Bring

Core Technical Skills

  • Deep expertise in LLM training stages:
    • Pre-training (data curation, tokenization, distributed training)
    • Supervised Fine-Tuning (SFT)
    • Reinforcement Learning from Human Feedback (RLHF), including DPO, PPO, and GRPO
  • Strong programming skills in Python, with hands-on experience in PyTorch, JAX, or TensorFlow.
  • Experience with LLM frameworks like Hugging Face Transformers, DeepSpeed, or vLLM.
  • Familiarity with large-scale training infrastructure (multi-GPU clusters, SLURM, Kubernetes).

Collaboration & Mindset

  • Passion for solving open-ended research problems with real-world impact.
  • Strong communication skills, able to explain complex concepts to both technical and non-technical stakeholders.
  • Entrepreneurial spirit—comfortable working in a fast-paced, iterative research environment.

Nice to Haves

  • Experience with multimodal models (vision + language) or autonomous AI agents (CrewAI, AutoGen).
  • Contributions to open-source LLM projects (e.g., LLaMA, Mistral, OLMo).
  • Knowledge of model compression techniques (pruning, distillation, LoRA adapters).

What We Offer

  • Cutting-edge projects: Work on high-impact, international-level research that shapes the future of AI.
  • World-class compute resources: Access to massive GPU/TPU clusters for large-scale training.
  • Collaboration with leading experts: Work alongside top researchers and engineers in AI.
  • Publication & conference support: Opportunities to present at major AI venues and contribute to the global research community.
