AI Data Platform/Pipeline Engineer
Position Overview
*U.S. citizenship required due to the nature of our work*
We are seeking an AI Data Platform + MLOps engineer to build the pipelines, infrastructure, and automation that power our AI foundation models, computational chemistry, and operational tools. You will build the data backbone that accelerates foundation model development and drives AI deployment into real manufacturing and operational workflows—with a mandate to reduce friction, eliminate manual pipelines, and enable rapid iteration.
Key Responsibilities
- Build robust pipelines for chemistry, materials, and manufacturing data ingestion and transformation.
- Define data contracts, dataset versioning, and labeling/standardization workflows.
- Operate ETL/training orchestration, experiment tracking, model registry, and deployment automation.
- Manage cloud workflows (Azure + GCP), reproducible environments, monitoring, and cost/performance tuning.
- Connect R&D and Ops systems (ELN/LIMS/ERP/MES) into unified data flows.
Key Requirements
- 10+ years in data science, DevOps, MLOps, or data engineering in an operational setting.
- Expertise in data pipelines, data technology stacks, orchestration, and MLOps/DevOps tooling.
- Ability to design resilient, scalable pipelines in ambiguous, fast-moving environments.
- Strong systems thinking, requirements gathering, and cross-functional collaboration skills.
- Familiarity with scientific or manufacturing data is a plus.
Conditions of Employment
As a condition of employment, the candidate may need to successfully complete a medical evaluation and obtain clearance to wear respiratory personal protective equipment (PPE) as required by OSHA 29 CFR 1910.134.
Cambium is an Equal Opportunity Employer. All qualified applicants will be evaluated for employment without regard to race, color, religion, gender, sexual orientation, national origin, genetic information, age, disability, veteran status, or any other legally protected basis.