Job Posting, Grand Rapids, MI

Data Engineer

About Mutually Human

At Mutually Human, we believe that innovation ensures relevance. We help companies harness the power of Artificial Intelligence, Data, and Software through a holistic approach that emphasizes People, Process, and Technology. Our services are designed to solve key business challenges like optimizing operational efficiency, driving data-driven decision-making, and enhancing customer experiences. Join our team to help businesses stay ahead in an ever-evolving digital landscape.

Position Overview

We are seeking a Data Engineer to join our growing team. The ideal candidate will play a critical role in designing, implementing, and optimizing data pipelines and databases with a focus on SQL-based solutions. You will work closely with cross-functional teams to develop scalable, high-performance systems, leveraging platforms like Microsoft Fabric, Azure Data Lake, Databricks, and Snowflake to create robust, efficient data environments.

Key Responsibilities

  • SQL Development & Optimization: Write, optimize, and manage complex SQL queries, stored procedures, and scripts to support data processing and analytics.
  • Data Pipeline Design: Build and maintain reliable ETL/ELT pipelines for data ingestion, transformation, and integration using SQL and modern data engineering tools.
  • Platform Expertise:
    • Create and manage dataflows and models in Microsoft Fabric, utilizing SQL-driven techniques for analytics.
    • Leverage Databricks for SQL-based big data processing, focusing on Delta Lake and advanced transformations.
    • Develop efficient data warehousing and analytics solutions in Snowflake using SQL and native features.
  • Data Modeling: Design and implement data models to support business intelligence and analytical reporting, emphasizing SQL best practices.
  • DevOps Integration: Implement CI/CD pipelines to automate data pipeline deployments, testing, and monitoring using tools like Git, Terraform, or Azure DevOps.
  • AI/ML Enablement: Collaborate with data scientists to integrate AI/ML models into production pipelines and workflows, enhancing analytics capabilities.
  • Performance Tuning: Optimize database performance and SQL queries for efficiency, reliability, and scalability.
  • Collaboration: Work closely with data scientists, analysts, and business stakeholders to gather requirements and deliver tailored data solutions.
  • Governance & Security: Implement and enforce best practices for data governance, security, and compliance.

Qualifications

Required Skills & Experience:

  • SQL Mastery: 3+ years of hands-on experience with advanced SQL for data analysis, modeling, and pipeline development.
  • Professional Experience: Strong background in data engineering, with expertise in Microsoft Fabric, Azure Data Lake, Databricks, or Snowflake.
  • Database Expertise: Experience designing and optimizing relational databases such as SQL Server, PostgreSQL, SAP HANA, Progress, or Oracle, and working with cloud-based databases like Snowflake or Azure SQL.
  • AI/ML Understanding: Familiarity with AI/ML concepts and tools, such as Azure Machine Learning or PyTorch, and their integration into data pipelines.
  • Programming: Proficiency in Python, C#, Java, or Scala for data transformations, integrations, AI/ML model integration, and automation.
  • DevOps Knowledge: Experience with CI/CD pipelines, version control systems like Git, and infrastructure-as-code tools like Terraform or Ansible.
  • Cloud Platforms: Familiarity with cloud services such as Azure, AWS, or Google Cloud.

Preferred Skills:

  • Microsoft Data and AI certifications.
  • Expertise in BI tools like Power BI, Tableau, or Looker.
  • Knowledge of Machine Learning and AI model development.
  • Knowledge of Azure and AWS networking concepts.
  • Knowledge of data governance frameworks and compliance standards.

Think you have what it takes? Apply today!