Data Engineer (Streaming & Analytics)
Information Technology
North Fort Myers, Florida | Direct Hire | Feb 4, 2026
Must sit onsite in Florida
Monday – Friday / 8:00am – 5:00pm
Salary: $115,000+

No Sponsorship Available / No C2C

The Data Engineer is responsible for building and maintaining the data pipelines, streaming systems, and transformation layers that power our new Microsoft-centered analytics ecosystem. This role is central to modernizing a data platform that integrates Apache Kafka, Apache Spark, Python, MongoDB, SQL Server, DataFrames, RAPIDS, Microsoft Fabric, Power BI, Copilot, and Purview.

Responsibilities:
  • Design, build, and maintain scalable batch and streaming data pipelines that support LCEC’s Microsoft Fabric–based analytics ecosystem.
  • Develop reliable data ingestion and transformation processes across layered architectures (e.g., Bronze/Silver/Gold) to enable operational analytics, BI, and advanced use cases.
  • Engineer high-performance, fault-tolerant solutions for both real-time and batch data processing.
  • Design and implement logical and physical data models that align with enterprise analytics, semantic layers, and Power BI consumption.
  • Collaborate closely with BI analysts, data consumers, and platform teams to ensure data products are well-modeled, discoverable, and trusted.
  • Work in a managed data environment that maintains lineage, metadata, and thorough documentation.
  • Apply engineering best practices, including code reviews, monitoring, optimization, and cost-aware design.
  • Contribute to emerging analytics capabilities, including AI-assisted and Copilot-enabled data experiences.
  • Ensure smooth operations, productive communication, and clear mutual understanding in all interpersonal contacts. Provide current and accurate information to all requesters, courteously and in a timely manner.
  • Support Storm Restoration efforts when needed. Work in emergency storm situations (e.g., hurricanes), including long hours (more than 12 per day) for many continuous days or weeks as needed.

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 6+ years of professional experience in data engineering.
  • Experience with Apache Kafka, including producers, consumers, topic design, and retention concepts.
  • Experience integrating data from MongoDB, SQL Server, APIs, and operational systems.
  • Experience with dimensional modeling, including star schemas, fact tables, and slowly changing dimensions.
  • Experience with Apache Spark / PySpark for scalable batch and streaming workloads.
  • Experience with Microsoft Fabric, including Lakehouse, Warehouse, OneLake, notebooks, and pipelines.
  • Demonstrated experience with Power Platform tools, including Power Apps and Power Automate.
  • Experience designing and operating ETL/ELT pipelines in production environments.
  • Experience operating in governed environments using Microsoft Purview.

Preferred Qualifications:     
  • Experience integrating data pipelines with machine learning or MLOps workflows.    
  • Experience implementing real-time monitoring, alerting, and observability.         
  • Experience optimizing data platforms for cost, performance, and scalability.
Category Code: JN008
#LI-LC1