Roles & Responsibilities:
• Responsibilities range from being at the vanguard of solving technical problems to
venturing into uncharted areas of technology to solve complex problems.
• Implement and operate comprehensive data platform components that balance optimized
data access against batch loading and resource utilization, per customer
requirements.
• Develop robust data platform components for sourcing, loading, transforming, and
extracting data from various sources.
• Build metadata processes and frameworks.
• Create supporting documentation, such as metadata, entity-relationship diagrams,
business processes, and process flows.
• Maintain standards, such as organization, structure, or nomenclature, for data platform
elements, such as data architectures, pipelines, frameworks, models, tools, and databases.
• Implement business rules via scripts, middleware, or other technologies.
• Map data between source systems and the data lake.
• Work independently and produce high-quality code for components of the Data
Platform, demonstrating creativity, responsibility, and autonomy.
• Participate in the planning, design, and implementation of features, working with small
teams that consist of engineers, product managers, and marketing.
• Demonstrate strong technical talent throughout the organization and engineer products that
meet future scalability, performance, security, and quality goals while maintaining a
cohesive user experience across different components and products.
• Adopt and share best practices of software development methodology and frameworks
used in the data platform.
• Bring a passion for continuous learning, experimenting with, and applying cutting-edge
technologies and software paradigms, and foster this culture across the team.
Qualifications:
• 2+ years of production software experience.
• Experience with Cloud platforms, preferably AWS.
• Strong knowledge of popular database and data warehouse technologies and concepts,
such as BigQuery, Redshift, and Snowflake.
• Experience in data modeling, data design, and persistence on large complex datasets.
• Experience with object-oriented design and development (preferably Python/Java).
• Background in Spark or other Big Data technologies and non-relational databases
is a plus.
• Experience with software development best practices, including unit testing and
continuous delivery.
• Desire to apply agile development principles in a fast-paced startup environment.
• Strong teamwork and communication skills.
Skills: ETL, Spark, Amazon Web Services (AWS), Docker