Software Engineer III


(Remote, USA; PST or EST)

Type: W2

Description

We are seeking a talented Software Engineer to join our team. Effective contributors will combine cloud infrastructure skills (particularly on AWS), strong proficiency in Python and JavaScript, expertise in NLP, and advanced knowledge of ML frameworks such as PyTorch and HuggingFace. They must also have practical experience using LLMs to build robust AI applications, including a solid understanding of agentic frameworks, RAG workflows, and prompt engineering.

By leveraging these tools and skills, the team will be well-equipped to develop an AI code generation platform that enhances engineering productivity, ensures code consistency, and embeds our security and compliance standards.

Responsibilities and Qualifications

1. Cloud Infrastructure: AWS

• The project will be built on Amazon Web Services (AWS), leveraging its robust cloud infrastructure for scalability, security, and reliability. Team members should be proficient in using AWS services such as EC2, S3, Lambda, RDS, and DynamoDB. Familiarity with Infrastructure as Code (IaC) tools such as AWS CloudFormation or HashiCorp Terraform is also essential to ensure automated and consistent deployment of resources.
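As a flavor of the serverless side of this stack, here is a minimal AWS Lambda handler sketch in Python. The event shape (an API Gateway proxy event) and the `name` query parameter are illustrative assumptions, not project specifics.

```python
import json

def handler(event, context):
    """Minimal Lambda handler: returns a JSON greeting for an API Gateway event.

    The 'queryStringParameters' shape and the 'name' field are assumptions
    chosen for illustration.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because a handler is just a function, it can be exercised locally before being deployed via CloudFormation or Terraform.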

2. Programming Languages: Python/JavaScript

• Python is required for developing core components, including AI and machine learning models, data processing pipelines, and backend services. Proficiency in Python libraries such as NumPy, Pandas, and scikit-learn is valuable.
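A small pandas sketch of the kind of data-pipeline step implied above; the column name and latency cutoff are hypothetical, not taken from the project.

```python
import pandas as pd

def clean_latencies(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing latency and discard outliers above 1000 ms.

    The 'latency_ms' column and the cutoff are illustrative assumptions.
    """
    out = df.dropna(subset=["latency_ms"])
    return out[out["latency_ms"] < 1000].reset_index(drop=True)

df = pd.DataFrame({"latency_ms": [120.0, None, 450.0, 2500.0]})
cleaned = clean_latencies(df)
```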

• JavaScript is needed for frontend development, particularly for building user interfaces and integrating with APIs. Familiarity with frameworks like React or Vue.js will help in developing responsive and dynamic UI components for the platform.

3. Natural Language Processing (NLP) Experience:

• Team members should have experience in Natural Language Processing (NLP) to understand and interpret various input sources such as Lucid diagrams, API specifications, and Figma designs. This includes expertise in text preprocessing, entity recognition, and language modeling techniques, which are crucial for extracting relevant information and generating code.
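To make the preprocessing and entity-recognition steps concrete, here is a deliberately naive sketch: real systems would use a trained NLP model, whereas this uses regular expressions, and the "entity" pattern (runs of capitalized words) is a simplification.

```python
import re

def tokenize(text: str) -> list:
    """Lowercase word tokenization (punctuation is discarded)."""
    return re.findall(r"[a-z0-9]+", text.lower())

def extract_candidates(text: str) -> list:
    """Naive candidate-entity extraction: runs of capitalized words.

    A stand-in for a real NER model, for illustration only.
    """
    return re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", text)

sample = "The Figma file maps to the Payments Service API."
tokens = tokenize(sample)
entities = extract_candidates(sample)
```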

4. Experience in Agentic Frameworks, Retrieval-Augmented Generation (RAG) Workflows, and Prompt Engineering

• Knowledge of agentic frameworks (frameworks that enable autonomous agent behavior in AI systems) is essential to develop sophisticated AI models that can autonomously generate code based on input data.
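The core of any agentic framework is a loop in which a model chooses an action, a tool executes it, and the observation is fed back. A minimal sketch, with a hard-coded stub standing in for the LLM and a single toy tool:

```python
def calculator(expression):
    """A trivial tool the agent can invoke (restricted eval, toy only)."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def stub_model(observation):
    """Stand-in for an LLM deciding the next action from the last observation."""
    if observation is None:
        return {"action": "calculator", "input": "6 * 7"}
    return {"action": "finish", "input": f"The answer is {observation}"}

def run_agent(max_steps=5):
    """Act -> observe -> act loop, capped at max_steps."""
    observation = None
    for _ in range(max_steps):
        decision = stub_model(observation)
        if decision["action"] == "finish":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
    return "step limit reached"
```

Production frameworks add planning, memory, and guardrails around the same loop.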

• Proficiency in Retrieval-Augmented Generation (RAG) workflows is required to ensure that the AI models can retrieve relevant information dynamically and generate accurate code snippets. Understanding how to implement RAG will help in building an efficient and context-aware AI platform.
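A bare-bones RAG sketch: retrieval here is keyword overlap rather than a real vector store, and the documents are invented examples, but the shape — retrieve, then stuff context into the prompt — is the workflow described above.

```python
DOCS = [
    "S3 buckets store build artifacts for the platform.",
    "Lambda functions handle asynchronous code-generation jobs.",
    "DynamoDB keeps per-user prompt history.",
]

def retrieve(query, docs, k=1):
    """Rank docs by shared lowercase words with the query (toy scoring)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble retrieved context and the question into one LLM prompt."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("Where are build artifacts stored?", DOCS)
```

In practice the keyword scorer would be replaced with embeddings and a vector index, but the prompt-assembly step is the same.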

• Prompt engineering skills are critical for designing and optimizing the prompts used to interact with the underlying large language models (LLMs). This involves crafting effective prompts that guide the model to generate the desired outputs, improving both accuracy and relevance.
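As an illustration of the levers prompt engineering works with — a role instruction, a few-shot example, and an output-format constraint — here is a hypothetical template; none of its wording comes from the project itself.

```python
# Hypothetical prompt template combining a role instruction,
# one few-shot example, and a format constraint.
TEMPLATE = """You are a code-generation assistant.
Follow the output format exactly.

Example:
Spec: add two integers
Code: def add(a, b): return a + b

Spec: {spec}
Code:"""

def render_prompt(spec):
    """Fill the template with a normalized task spec."""
    return TEMPLATE.format(spec=spec.strip())
```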

5. Proficiency in Machine Learning Frameworks: PyTorch, HuggingFace

• The team must be adept in using popular ML frameworks like PyTorch for building and training deep learning models. PyTorch is highly flexible and well-suited for experimentation and custom model development.

• Familiarity with the HuggingFace ecosystem, particularly its Transformers library, is crucial for working with pre-trained large language models (LLMs) and fine-tuning them for specific tasks. Experience with HuggingFace’s tools will enable the team to leverage state-of-the-art models and accelerate the development process.

6. Must-Have Experience: Building Applications on Top of Large Language Models (LLMs)

• Team members should have hands-on experience in creating applications that utilize Large Language Models (LLMs) like GPT-3, GPT-4, or other transformer-based models. This includes knowledge of integrating LLMs into applications, handling model inputs and outputs, and optimizing their performance for specific use cases.
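Much of the "handling model inputs and outputs" work is defensive plumbing around the model call. A sketch of that pattern — retries plus output validation — with `call_llm` as a stub where a real client (OpenAI, Bedrock, etc.) would go:

```python
import json

def call_llm(prompt, _attempts=[0]):
    """Stub model client: fails once, then returns valid JSON.

    The mutable-default counter is a test harness trick, not production style.
    """
    _attempts[0] += 1
    if _attempts[0] == 1:
        return "not json"
    return json.dumps({"code": "print('hello')"})

def generate_code(prompt, retries=3):
    """Call the model, validating that the reply is JSON with a 'code' key."""
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            payload = json.loads(raw)
            if "code" in payload:
                return payload["code"]
        except json.JSONDecodeError:
            continue
    raise RuntimeError("model did not return valid output")
```

The same wrapper shape also hosts the latency and cost controls (timeouts, caching, model fallback) mentioned below.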

• Practical experience in deploying LLM-based applications in a production environment, managing their scalability, latency, and cost-effectiveness, is also critical to ensure the success of the project.