<![CDATA[Data Engineer]]>

<![CDATA[Freight Commerce Solutions Pvt. Ltd.]]>

Posted: about 2 months ago

Company Website
<![CDATA[https://cutsh...]]>
Position type
full time
Job source
Cutshort
Category
programming
Remote
No
Salary
<![CDATA[5 - 15 lacs/annum]]>
Job location
<![CDATA[Mumbai]]>
About

<![CDATA[

The Nitty-Gritties

Location: Bengaluru/Mumbai

About the Role:

Freight Tiger is growing exponentially, and technology is at the centre of it. Our Engineers love solving complex industry problems by building modular and scalable solutions using cutting-edge technology. Your peers will be an exceptional group of Software Engineers, Quality Assurance Engineers, DevOps Engineers, and Infrastructure and Solution Architects.

This role is responsible for developing data pipelines and data engineering components to support strategic initiatives and ongoing business processes. You will work with leads, analysts, and data scientists to understand requirements, develop technical solutions, and ensure the reliability and performance of data engineering solutions.

This role provides an opportunity to directly impact business outcomes for the sales, underwriting, claims, and operations functions across multiple use cases by providing the data they need for analytical modelling.

Key Responsibilities

  • Create and maintain data pipelines.
  • Build and deploy ETL infrastructure for optimal data delivery.
  • Work with various product, design and executive teams to troubleshoot data-related issues.
  • Create tools for data analysts and scientists to help them build and optimise the product.
  • Implement systems and processes for data access controls and guarantees.
  • Distil knowledge from external domain experts and apply it to optimise internal data systems.

Preferred Qualifications/Skills

  • 5+ years of relevant experience.
  • Strong analytical skills.
  • Degree in Computer Science, Statistics, Informatics, or Information Systems.
  • Strong project management and organisational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Expert-level SQL with hands-on experience across various databases.
  • Experience with NoSQL databases such as Cassandra and MongoDB.
  • Experience with data warehouses such as Snowflake and Redshift.
  • Experience with pipeline tools such as Airflow and Hevo.
  • Experience with Hadoop, Spark, Kafka, and Flink.
  • Programming experience in Python, Java, and Scala.
Skills: ETL, SQL, Python, Data Analytics, Data Visualization, Data-flow Analysis, Big Data, Hadoop, Apache Kafka, Snowflake]]>
