Data Engineer

Full Time
Madison, WI 53713
Job description

Offices located in Wisconsin: Madison, Milwaukee, Green Bay, Kenosha, Fond du Lac, Wausau, and Eau Claire
Office located in Illinois: Rockford

Essential Functions and Responsibilities:

  • Design and develop data architectures, including data models, data flows, and data storage solutions.
  • Build and maintain data pipelines that move data between source systems and the M3 data warehouse.
  • Develop ETL (Extract, Transform, Load) processes to ensure data is clean, accurate, and ready for analysis and use.
  • Optimize data pipelines and architectures for performance and scalability.
  • Collaborate with business users, data scientists, and other stakeholders to understand their data needs and design solutions to meet those needs.
  • Integrate new data sources that can generate additional analytical value.
  • Implement data governance policies and ensure compliance with data privacy regulations.
  • Develop and maintain documentation for data systems and processes.
  • Continuously monitor data systems to ensure data quality and reliability.
  • Stay up-to-date with the latest developments in data engineering technologies and tools.

Continuously Advance Enterprise Data Warehouse and Data Modeling Capabilities

Responsibility or Duty:

  • Design, implement, and maintain M3.ai’s data and analytics infrastructure to enable the creation of high-quality data visualizations, models, and other data products delivered through the M3.ai digital core experience layer.
  • Design, maintain and continuously improve the enterprise data warehouse, optimizing performance, scalability, cost, security and governance based on current and projected needs.
  • Build and optimize data pipelines to and from the data warehouse to ensure a complete customer analytic record for downstream systems of engagement, and to keep the systems of record current.
  • Develop, execute, and maintain ETL processes to ensure data is clean, accurate, and ready for analysis and use.
  • Optimize data pipelines and architectures for performance and scalability.
  • Create and optimize SDKs and APIs to connect data sources to the data warehouse.

Drive Business Results Through Data and Analytics Application

Responsibility or Duty:

  • Maintain and update the M3 enterprise data model according to external standards and internal needs.
  • Seek out and connect new data sources to the data warehouse to advance M3’s data benchmarking and modeling capabilities in collaboration with internal and external stakeholders and users.
  • Collaborate with business users, data scientists, and other stakeholders to understand their data needs and design solutions to meet those needs.
  • Collaborate directly with internal and external stakeholders to create data/analytics products that are extensible to other clients and brokers, so that M3 can maximize the reach and value of its data products.

Implement Data Governance

Responsibility or Duty:

  • Implement data governance policies and ensure compliance with data privacy regulations.
  • Develop and maintain documentation for data systems and processes.
  • Stay up-to-date with the latest developments in data engineering technologies and tools.

Qualifications

Education

Preferred

Bachelor's or better in Computer Science or a related field.

Experience

Required

  • Demonstrated proficiency in Microsoft Office applications, with emphasis on Outlook.
  • Experience with pub/sub streaming technologies such as Kafka, Kinesis, and Spark Streaming.
  • Experience building real-time streaming data pipelines and APIs.
  • Strong analytical and problem-solving skills, with the ability to represent complex algorithms in software.
  • Experience with database architecture testing methodology, including execution of test plans, debugging, and testing scripts and tools.
  • Strong understanding of data structures and algorithms.
  • Strong understanding of database technologies and management systems.
  • Fluency with relational systems and writing complex SQL.
  • AWS certifications such as Cloud Practitioner, Solutions Architect, DevOps Engineer, Developer, Database, Data Analytics, or Machine Learning.
  • Strong experience with coding languages such as Python, Scala, and Java.
  • Experience with CI/CD tools such as Jenkins and Codeway, and source control tools such as GitHub.
  • Experience with container management frameworks such as Docker, Kubernetes, and ECR.
  • Experience with monitoring tools/frameworks such as Splunk, Grafana, and CloudWatch.
  • 3+ years of experience with Big Data distributed systems such as Databricks, AWS EMR, and AWS Glue.
  • 3+ years of experience with Big Data distributed ecosystems (Hadoop, Spark, Hive, and Delta Lake).
  • 3+ years of experience with cloud-based data warehouses such as RDS, Redshift, and Snowflake.
  • 3-5+ years of relevant experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools on AWS and/or Azure platforms.
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent.

