
Unveiling Colossal-AI Booth and Exciting Hiring Opportunities at EMNLP 2023!

Written by Team | Dec 7, 2023 3:18:55 PM
In a celebration of cutting-edge technology and groundbreaking advancements, Colossal-AI is delighted to announce our participation at EMNLP 2023! As proud sponsors of this premier conference in natural language processing (NLP), we will set up a booth that not only showcases our commitment to innovation but also offers attendees a unique opportunity to explore exciting career possibilities with us.

Booth #5 and Virtual Booth

EMNLP is a premier conference that brings together experts and enthusiasts from the fields of artificial intelligence (AI) and NLP.
We will be hosting both an in-person and a virtual booth at EMNLP from 8 Dec to 10 Dec at the West Foyer of the Resorts World Convention Centre. If you are attending the expo in person, find us at Booth 5 to learn more about our current achievements and career opportunities. If you are attending virtually, you can engage with our team at our virtual booth on GatherTown.

Career Opportunities

In tandem with our presence at EMNLP 2023, Colossal-AI is actively seeking skilled and dedicated individuals to join our dynamic team.
Currently, we are hiring for the following positions:

AI Large Model Training R&D Engineer

Job Responsibilities

• Participate in the development of the Colossal-AI distributed deep learning system, taking responsibility for the design, implementation, and optimization of various distributed training technologies.
• Participate in the integration of Colossal-AI with various community projects (such as PyTorch, Lightning, and Hugging Face).
• Maintain the open-source community: interact with community users and maintain the open-source project infrastructure.

Job Requirements

• Proficient in PyTorch and familiar with TensorFlow/Caffe.
• Experience with distributed training frameworks such as DeepSpeed/NVIDIA Megatron/Ray.
• Knowledge of current popular CV/NLP/audio models such as BERT, GPT, and diffusion models.
• Understanding of HPC topics such as parallel computing, CUDA, network communication, system optimization, and cluster hardware architecture.
• Strong programming skills in Python and C++, proficiency in data structures and algorithm design, and familiarity with Linux/Unix systems and shell programming.
• Familiar with version control systems (e.g., Git) and continuous integration/continuous delivery (CI/CD) pipelines.
• Minimum of 1 year of hands-on experience in distributed system development.
• Master's degree or above in computer science, AI, machine learning, or related fields. Candidates with a Bachelor's degree and strong relevant experience will also be considered.

Multimodal Algorithm Engineer

Job Responsibilities

• Develop and train large-scale multimodal models that integrate 2D/3D vision-language capabilities, utilizing distributed systems.
• Efficiently deploy compact neural architectures on various devices, ensuring optimal performance.
• Collaborate closely with cloud computing engineers, Colossal-AI software engineers, and cross-functional teams to enhance multimodal capabilities for real-world scenarios.
• Play a pivotal role in the continuous improvement of multimodal capabilities, staying up to date on the latest advancements in multimodal semantic representation and multimodal search technologies.
• Contribute to the evolution of cutting-edge technology in the field of multimodal algorithms.

Job Requirements

• Demonstrated practical project experience in the development of multimodal algorithms, with a focus on deep learning.
• Familiarity with large-scale NLP models, including GPT and LLaMA, and multimodal models such as BLIP-2 and EmbodiedGPT.
• Experience in creating AI agents for multimodal perception, incorporating visual and auditory signals.
• Strong problem-solving abilities and a proactive approach to staying informed about the latest advancements in NLP and multimodal algorithms.
• Proficient programming skills in Python and C++, with expertise in data structures and algorithm design.
• Familiarity with Linux/Unix systems and Shell programming.
• In-depth understanding of version control using Git.
• Master's degree or higher in computer science, NLP, AI, or a related field.

AI Large Model Algorithm Engineer

Job Responsibilities

• Engage in the research and application of NLP/multimodal machine learning and deep learning technologies, including but not limited to dialogue systems, information extraction, document summarization, and text generation.
• Explore the implementation and innovation of natural language and multimodal technologies in business scenarios.
• Research and implement the industry's most advanced multilingual NLP/multimodal large models.

Job Requirements

• Practical project experience in deep learning, conversational systems, text analysis, text generation, etc.
• Familiar with deep learning algorithms, frameworks, and toolchains for NLP (e.g., PyTorch, Hugging Face).
• Familiar with large NLP models such as BERT, GPT-3, BLOOM, and LLaMA, with experience in training and fine-tuning models at the billion- to trillion-parameter scale. Prompt design experience is a plus.
• Strong programming skills in Python, proficient in data structures and algorithm design.
• Familiar with Linux/Unix systems, Shell programming, and Git.
• Master's degree or higher in computer science, NLP, AI, or related fields.
• Minimum of 2 years of experience in NLP, with a deep understanding of NLP, machine learning, deep learning, and reinforcement learning algorithms.

AI Large Model Inference Engineer

Job Responsibilities

• Optimize the operator layer of the Colossal-AI deep learning framework and implement deep learning operators in CUDA.
• Lead and participate in the architecture design, system development, and high-performance optimization of the machine learning inference engine, building an infrastructure platform for large AI models.

Job Requirements

• Bachelor's degree or higher in computer science, mathematics, or related fields.
• Proficient in C/C++, with strong engineering skills, programming practices, and communication abilities.
• Expertise in high-performance computing optimization techniques on GPU platforms.
• Familiar with Transformer architectures and large language models (LLMs).
• More than 2 years of working experience in CUDA/Triton programming is preferred.

Cloud Computing R&D Engineer

Job Responsibilities

• Participate in the research and development of a cloud-native platform for AI training, supporting ultra-large-scale model training and inference and providing efficient, stable infrastructure.
• Optimize the cloud platform user experience, making it easy to monitor, use, manage, and scale.

Job Requirements

• Excellent coding ability, proficiency in at least one of Go, Python, C, or C++, and experience in back-end service development.
• Familiar with cloud-native development on Kubernetes and common back-end components such as Redis, MySQL, and Kafka.
• Relevant experience in AI platform development, familiarity with DevOps and MLOps concepts, and experience building and maintaining such systems are preferred.

About Us

HPC-AI Tech is revolutionizing AI productivity with Colossal-AI, a universal deep learning system for the large-model era. With over 35,000 GitHub stars and a top global ranking, our team of experts from prestigious institutions worldwide has secured significant Series A and A+ funding. Collaborating with Fortune 500 companies, Southeast Asian tech leaders, and national research institutions, we are driving the commercialization of large-scale AI models across diverse sectors. Join us in shaping the future of AI: submit your CV to hr@hpcaitech.com and be part of our dynamic team!
For other inquiries, please contact service@hpcaitech.com.