Pinnacle

Data Integration Engineering Lead

Pinnacle  •  Pasadena, TX (Onsite)  •  2 days ago

Job Description

We are building a team of trailblazers who embody growth, impact, and excellence.

The Data Integration Engineering Lead plays a crucial role in designing, implementing, and maintaining data integration solutions within our organization. The role collaborates with customers and cross-functional teams to ensure seamless data pipelines from customers’ systems, and its expertise contributes to the organization’s overall strategy and architecture for data acquisition.

Job Duties

  • Source System Extraction (This Is the Core of the Role)

  • Independently extract data from industrial source systems including OSIsoft PI historians, SAP PM/EAM, Maximo, eMaint, lab/LIMS systems, and other CMMS/ERP platforms.
  • Navigate customer IT environments to establish connectivity — VPNs, service accounts, firewall rules, read-only database access — often with limited or no documentation.
  • Reverse-engineer undocumented or poorly documented source schemas to identify the right data for integration.
  • Build and own the extraction layer: connectors, API calls, direct database queries, file-based ingestion from heterogeneous client environments.
  • Handle the reality that every customer’s data is messy in a different way — inconsistent tag naming, mismatched equipment IDs, unmaintained asset hierarchies.
  • Data Transformation and Pipeline Development

  • Design, build, and maintain data pipelines that clean, transform, and load extracted data into our reliability platform.
  • Develop integration architecture and blueprints tailored to each customer’s source system landscape.
  • Implement data quality checks, reconciliation processes, and monitoring to ensure ongoing accuracy.
  • Build and maintain master data mapping strategies — including change management processes as clients execute MOCs, add equipment, or decommission assets.
  • Own pipeline monitoring, alerting, and uptime SLAs for all production data extraction and integration systems. These are live production pipelines serving customers — when extraction fails, you are responsible for detecting the failure, diagnosing the root cause, and restoring the data flow within SLA.
  • Client Communication and Technical Leadership

  • Serve as the primary technical point of contact with customer IT teams for all data access and connectivity matters.
  • Respond to detailed technical inquiries from client IT leadership (architecture questions, data mapping strategies, security concerns) with clarity and confidence.
  • Lead discovery sessions with customers to understand their source systems, data flows, and integration requirements.
  • Create and maintain architecture documentation, integration runbooks, and data dictionaries for each client engagement.
  • Provide technical guidance and mentorship to team members and drive knowledge sharing across the data engineering team.
  • Manage integration project plans, timelines, and deliverables across multiple concurrent client engagements. Drive accountability on milestones, coordinate dependencies with client IT teams, and ensure integrations are completed on schedule.
  • Strategy and Team Building

  • Lead the enterprise data integration strategy and platform architecture across the organization.
  • Provide new ideas and approaches to the CTO and enterprise architecture team on data acquisition and integration best practices.
  • Drive recruitment to build and grow a high-performing data engineering team.
  • Continuously evaluate and adopt emerging data technologies and practices.
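The "data quality checks, reconciliation processes, and monitoring" duty above can be sketched in miniature. This is an illustrative example only, not Pinnacle's implementation; the function and field names (`reconcile`, `equipment_id`) are hypothetical, and a real pipeline would reconcile far richer attributes than row keys:

```python
# Illustrative only: a minimal source-to-target reconciliation check of the
# kind the data quality duty describes. All names here are hypothetical.

def reconcile(source_rows, target_rows, key="equipment_id"):
    """Compare a source extract to the loaded target and report drift."""
    src_keys = {row[key] for row in source_rows}
    tgt_keys = {row[key] for row in target_rows}
    return {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "missing_in_target": sorted(src_keys - tgt_keys),   # extracted, never loaded
        "unexpected_in_target": sorted(tgt_keys - src_keys),  # loaded, no source
    }

source = [{"equipment_id": "P-101"}, {"equipment_id": "P-102"}, {"equipment_id": "E-201"}]
target = [{"equipment_id": "P-101"}, {"equipment_id": "E-201"}]

report = reconcile(source, target)
print(report["missing_in_target"])  # → ['P-102']
```

In practice a check like this would run after every load and feed the alerting described in the pipeline-monitoring duty, so a silent extraction failure surfaces before the customer notices.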

Accountabilities/Results/Success for this role

  • Successful design and deployment of scalable and secure data architectures and data pipelines.
  • Enhanced data quality, efficiency, and accessibility across the organization.
  • Effective execution of data integration projects, demonstrating strong project management skills and consistent delivery on time and within scope.
  • Continuous improvement and adoption of emerging data technologies and practices.
  • Creation of innovative, customer-focused data solutions that set the organization apart and add measurable value.

Measures

  • Percentage of projects delivered on time: 80%
  • Team utilization: 80%
  • Data Solutions: 3
  • Production pipeline uptime: 99.5%

Required Qualifications/Skills/Competencies

  • Hands-on experience extracting data from at least two of: OSIsoft PI, SAP PM/EAM, Maximo, eMaint, or similar industrial/operational systems. This is non-negotiable.
  • Direct experience working with customer or client IT teams to negotiate and establish data access (firewall rules, VPN connectivity, service accounts, API credentials).
  • SQL proficiency — specifically the ability to explore unfamiliar database schemas and write extraction queries with little or no documentation.
  • Python for data extraction, transformation, and pipeline automation.
  • Experience with cloud-based data integration (Azure Data Factory, Azure Functions, or comparable).
  • Strong knowledge of data integration patterns, ETL/ELT, APIs, and messaging protocols (REST, SOAP, OPC).
  • Demonstrated experience with enterprise database technologies and data modeling.
  • Excellent communication skills — you’ll be the person answering detailed technical emails from client IT directors and leading discovery calls.
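The SQL bullet above — exploring an unfamiliar schema with little or no documentation — can be illustrated with a small sketch. This is an assumption-laden example, not part of the role: SQLite's `sqlite_master` catalog stands in for `INFORMATION_SCHEMA` on a real source system, and the tables (`asset`, `work_order`) are invented:

```python
# Illustrative only: programmatically mapping an undocumented schema before
# writing extraction queries. Tables and columns here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE asset (tag TEXT PRIMARY KEY, unit TEXT, installed_on TEXT);
    CREATE TABLE work_order (wo_id INTEGER PRIMARY KEY, tag TEXT, status TEXT);
""")

# Step 1: list every table the connection can see.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]

# Step 2: pull each table's column names and types before querying it.
schema = {
    t: [(col[1], col[2]) for col in conn.execute(f"PRAGMA table_info({t})")]
    for t in tables
}
print(schema["work_order"])  # column name/type pairs discovered at runtime
```

On SQL Server, Oracle, or the relational back end of a CMMS, the same discovery step would query `INFORMATION_SCHEMA.TABLES` and `INFORMATION_SCHEMA.COLUMNS` instead.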

Preferred Qualifications

  • Experience in oil and gas, refining, chemicals, or heavy industry environments.
  • Familiarity with reliability engineering concepts (RBI, CMMS workflows, asset hierarchy management, inspection data).
  • Experience with Cognite Data Fusion (CDF) or similar industrial data platforms.
  • Knowledge of PI Web API, PI SDK, or AF SDK for historian data extraction.
  • Experience with OPC-UA/DA protocols for real-time industrial data.
  • Background in data governance and compliance measures.
  • Understanding of microservices architecture and containerization (Docker, Kubernetes).
  • Experience with DevOps tools and practices (Azure DevOps, CI/CD pipelines).

Equipment and Software Knowledge

  • Expertise in data integration tools and platforms (e.g., Azure Data Factory, Informatica, Talend).
  • Proficiency in big data platforms (Hadoop, Spark, etc.) and analytics tools (Power BI, Tableau).
  • Familiarity with DevOps tools and practices (e.g., Azure DevOps).

Direct Reports

Data Engineers will report to this role.

Pinnacle is an equal employment opportunity employer and does not discriminate based on race, color, national origin, religion, gender identity, sexual orientation, sex, age, disability, veteran or military status, genetic information, or any other characteristic protected by applicable law.

About Pinnacle

Headquartered in Pasadena, Texas, Pinnacle is exclusively focused on helping industrial facilities in the oil and gas, chemical, mining, and water and wastewater industries better leverage their data to improve reliability performance, resulting in increased production, optimized reliability and maintenance spend, and improved process safety and environmental impact. Pinnacle is privately held and has been consistently recognized for its growth by Inc. Magazine, the Houston Business Journal, and more. For more information, visit pinnaclereliability.com.

Industry
Architecture & Engineering
Company Size
1,001-5,000 employees
Headquarters
Pasadena, Texas
Year Founded
2006