(352) FASTTEK | (352) 327-8835
FASTTEK GLOBAL powered by Fast Switch - Great Lakes
info@fasttek.com
Role Description
Position Overview
We are looking for an AI Solutions Engineer to lead the design, development, and deployment of AI/ML-powered solutions within an enterprise environment. This is a hands-on role focused on applied AI — from deep learning and classical ML to LLM integration and agentic systems — with a strong emphasis on delivering production-ready solutions that solve real business problems.
The candidate will proactively identify opportunities where AI can add value, build proof-of-concepts, and take them to production. During early phases they may also contribute to building internal AI-powered developer productivity tools (code documentation agents, Copilot integrations, automated testing) to deliver value from day one.

Key Responsibilities
AI/ML Solution Design and Development
- Identify business problems suited for AI/ML and design end-to-end solution architectures
- Build, train, evaluate, and deploy ML models using Python — covering supervised/unsupervised learning, deep learning, and reinforcement learning as appropriate
- Implement deep learning solutions (CNNs, RNNs/LSTMs, Transformers) using PyTorch or TensorFlow for NLP, computer vision, or structured data problems
- Design and build RAG systems, AI agent workflows, and LLM integrations with proper prompt engineering, guardrails, and evaluation
- Develop and maintain ML pipelines: data ingestion, feature engineering, model training, versioning, and deployment
LLM and Generative AI
- Integrate LLM APIs (OpenAI, Anthropic Claude, Google Gemini, open-source models) into enterprise applications
- Build agentic systems using frameworks like LangChain, LangGraph, or CrewAI — including tool use, multi-agent orchestration, and MCP integration
- Implement vector databases, embedding pipelines, and semantic search for RAG systems
- Apply advanced prompt engineering techniques (chain-of-thought, ReAct, few-shot, structured outputs) to optimize LLM performance
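The retrieval step behind the RAG bullets above can be sketched in a few lines of plain Python, assuming document embeddings have already been computed (here they are tiny hand-made vectors; in practice they would come from an embedding model or API, and the store would be a vector database rather than a dict).

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical pre-computed embeddings for three documents.
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding; keep the top k.
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]),
                    reverse=True)
    return ranked[:k]

top = retrieve([0.85, 0.2, 0.05])
```

The retrieved text would then be injected into the LLM prompt as grounding context — the "retrieval" half of retrieval-augmented generation.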
MLOps and Production Delivery
- Establish experiment tracking, model versioning, and reproducibility practices (MLflow, Weights & Biases, or equivalent)
- Deploy and serve models in production with monitoring, drift detection, and automated retraining where needed
- Optimize model inference for latency and cost — quantization, distillation, and efficient serving strategies
- Build APIs and microservices to expose AI capabilities to consuming applications
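As one concrete example of the monitoring and drift-detection responsibilities above, a simple check compares a live feature window against its training-time baseline and flags drift when the mean shifts by more than a few baseline standard deviations. The threshold and data here are arbitrary illustrations; production systems typically use richer statistics (e.g., PSI or KS tests).

```python
import statistics

def mean_shift_drift(baseline, window, threshold=3.0):
    # Flag drift when the live window's mean is more than `threshold`
    # baseline standard deviations away from the baseline mean.
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(window) - mu) / sigma
    return shift > threshold

baseline = [10.0, 10.5, 9.5, 10.2, 9.8]   # feature values at training time
stable = [10.1, 9.9, 10.0]                # live window, no drift
drifted = [14.0, 14.5, 13.8]              # live window, clear shift
```

A check like this would run on a schedule against production traffic and, when it fires, trigger an alert or an automated retraining job.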
Technical Leadership
- Mentor teams on AI/ML best practices, integration patterns, and responsible AI principles
- Collaborate with product managers and business stakeholders to translate ambiguous requirements into AI solutions
- Drive AI adoption through documentation, tech talks, and hands-on workshops
Skills Required
LLM, AI, Google Cloud Platform (BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL), Terraform, Tekton, Airflow, PostgreSQL, PySpark, Python, APIs