Who We Are
At Prem, we are building the self-sovereign AI infrastructure of tomorrow, combining privacy-preserving techniques, open-source models, and censorship-resistant protocols. Leveraging the capabilities of cutting-edge Large Language Models (LLMs), we provide an intuitive GenAI Developer Platform that lets you integrate GenAI into your product or business while retaining ownership and control.
Our team is a diverse group of individuals with deep expertise across technical domains, united by a shared vision of pushing the frontiers of AI.
The Role
We are looking for an experienced ML Engineer focused on reshaping the current AI landscape by exploring new model architectures that could overcome the transformer's limitations. As part of our team, you will have the opportunity to shape and drive the future of AI by fine-tuning open-source models and training foundation models using the latest techniques.
If you have a knack for problem-solving, possess excellent communication skills, and share our vision of redefining AI democratization, this role is for you!
Key Responsibilities
- Develop and implement state-of-the-art fine-tuning methodologies for pre-trained models
- Build and monitor controlled fine-tuning experiments while tracking key performance indicators
- Process high-quality datasets tailored to specific domains for fine-tuning tasks
- Debug and optimize the fine-tuning process by analyzing computational and model performance metrics
- Deploy fine-tuned models into production pipelines with clear success metrics
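Monitoring controlled fine-tuning experiments usually comes down to tracking a few simple metrics. As an illustrative sketch (the function names are our own, not part of any Prem tooling), here are two common ones in plain Python: converting a mean token-level loss to perplexity, and measuring exact-match accuracy on held-out answers:

```python
import math

def perplexity(mean_nll: float) -> float:
    """Convert a mean token-level negative log-likelihood (in nats) to perplexity."""
    return math.exp(mean_nll)

def exact_match(predictions, references) -> float:
    """Fraction of predictions that exactly match their reference answers."""
    assert len(predictions) == len(references), "need one reference per prediction"
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# Example: an eval loss that dropped from 2.3 to 1.7 nats per token.
before, after = perplexity(2.3), perplexity(1.7)
print(f"perplexity: {before:.2f} -> {after:.2f}")
```

Logging these per checkpoint makes it easy to spot regressions early and to tie a fine-tuning run to a clear, domain-specific success metric.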
Skills Needed
- Degree in Computer Science or related field with practical AI experience
- Hands-on experience with large-scale fine-tuning experiments that led to measurable improvements in domain-specific model performance
- Deep understanding of advanced fine-tuning methodologies, including transformer architectures and alternative approaches
- Strong expertise in PyTorch, Unsloth, TRL, and the Hugging Face libraries, with experience developing fine-tuning pipelines
- Strong experience with multi-GPU training
- Ability to apply empirical research to overcome fine-tuning bottlenecks and design evaluation frameworks
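To make the "fine-tuning pipeline" skills concrete, here is a deliberately tiny, hedged sketch of the loop structure such pipelines share. A real pipeline would load a pre-trained checkpoint (e.g. via Hugging Face Transformers or TRL) and fine-tune it on a curated domain dataset, possibly across multiple GPUs; here a toy model and synthetic data stand in so the structure is visible end to end:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in for a pre-trained model; a real run would load a checkpoint instead.
model = nn.Linear(8, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Synthetic "dataset": 64 examples with labels derived from the features.
x = torch.randn(64, 8)
y = (x.sum(dim=1) > 0).long()

losses = []
for step in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass
    loss.backward()               # backward pass
    optimizer.step()              # parameter update
    losses.append(loss.item())    # KPI tracking would log this per step

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The same skeleton scales up with gradient accumulation, mixed precision, and multi-GPU strategies (e.g. DDP or FSDP) layered on top.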