Role: Data Engineer
Total Experience: 5 to 8 Years
Job Location: Gurgaon
Budget: 26-28 LPA
Must-Have Technical & Soft Skills:
- Python: data structures, lists, libraries, and data engineering basics
- SQL: joins, grouping, aggregations, window functions, analytic functions, etc.
- Experience with AWS services: S3, EC2, Glue, Data Pipeline, Athena, and Redshift
- Solid hands-on experience with Big Data technologies
- Strong hands-on experience with programming languages such as Python and Scala, with Spark
- Good command of and working experience with Hadoop/MapReduce, HDFS, Hive, HBase, and NoSQL databases
- Hands-on experience with any data engineering/analytics platform (AWS preferred)
- Hands-on experience with data ingestion tools: Apache NiFi, Apache Airflow, Sqoop, and Oozie
- Hands-on experience with data processing at scale using event-driven systems and message queues (Kafka, Flink, Spark Streaming)
- Hands-on experience with AWS services such as EMR, Kinesis, S3, CloudFormation, Glue, API Gateway, and Lake Formation
- Operationalization of ML models on AWS (e.g., deployment, scheduling, model monitoring)
- Feature engineering and data processing for model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, and writing SQL queries)
- Hands-on experience analyzing source-system data and data flows, and working with structured and unstructured data
Skills: Python, Data Structures, SQL, Amazon Web Services (AWS), Amazon S3, Amazon EC2, Athena, Scala, HDFS, Apache Hive, Apache HBase, NoSQL Databases, and Apache Sqoop
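For candidates gauging the SQL window-function requirement above, here is a minimal sketch of the kind of query involved, run through Python's built-in sqlite3 module (requires SQLite 3.25+ for window-function support). The table and column names are invented purely for illustration:

```python
import sqlite3

# Hypothetical orders table; all names and values are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-01', 100.0),
        ('alice', '2024-01-05', 250.0),
        ('bob',   '2024-01-02',  80.0),
        ('bob',   '2024-01-07', 120.0);
""")

# Window function: running total of spend per customer, ordered by date.
rows = conn.execute("""
    SELECT customer,
           order_date,
           amount,
           SUM(amount) OVER (
               PARTITION BY customer
               ORDER BY order_date
           ) AS running_total
    FROM orders
    ORDER BY customer, order_date
""").fetchall()

for row in rows:
    print(row)
```

A query like this tests exactly the PARTITION BY / ORDER BY reasoning the role calls for; the same pattern extends to ranking (ROW_NUMBER, RANK) and moving-average analytic functions.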