Responsibilities:
• Design and build data pipelines from various cloud data sources for the enterprise data warehouse
• Partner with data engineers, data architects, domain experts, data analysts, and other teams to build foundational data sets that are trusted, well understood, aligned with business strategy, and enable self-service
• Own and document data pipelines and data lineage
• Support and maintain the analytics tech ecosystem (data warehouse, ETL, and BI tools)
Requirements:
• Bachelor's degree in Computer Science or another engineering discipline
• 2+ years of experience working as a Data Engineer or a similar role
• Strong experience writing complex SQL queries and with dimensional modelling
• Hands-on experience with data warehouse technologies (Snowflake, Redshift) and big data technologies (e.g. PySpark, Hadoop)
• Hands-on experience building data pipelines using ETL tools (AWS Glue)
• Proficiency with Python
• Ability to work across multiple areas, including ETL data pipelines, data modelling and design, and complex SQL queries
• Ability to build automation processes for data quality and data reconciliation
• Proficiency in Airflow is a big plus
• Passionate about a range of technologies, including but not limited to SQL, NoSQL, and MPP databases
• Excellent written and verbal communication and interpersonal skills; able to collaborate effectively with technical and business partners
Skills: SQL, Python, and PowerBI