Who we are:
SkyPoint’s mission is to bring people and data together.
We are the industry's first Modern Data Stack Platform, with a built-in data lakehouse, account 360, customer 360, entity resolution, data privacy vault, ELT / reverse ETL, data integration, privacy compliance automation, data governance, analytics, and managed services. We serve organizations across several industries, including healthcare, life sciences, senior living, retail, hospitality, business services and financial services.
Here is what you can expect to work on in this critical role:
You will lead the effort to extract maximum value from our data. Our platform processes billions of rows of data every month on behalf of millions of users.
How do our Senior Data Engineers spend their time?
You can expect to spend about 50% of your time building and scaling the SkyPoint Lakehouse and data pipelines, about 20% defining and implementing DataOps methodologies, another 20% writing and optimizing queries and algorithms, and the remaining 10% supporting and monitoring pipelines.
Our team values collaboration, a passion for learning and a desire to become a master of your craft. We thrive in asynchronous communication. You will have a lot of support from leadership when you communicate proactively with detailed information about any roadblocks you may encounter.
Qualities of Senior Data Engineers Who Thrive in This Role
🔥 You are a driven, self-starter type of person who isn’t afraid to dig for answers, stays up-to-date on industry trends and is always looking for ways to enhance your knowledge (yes, Databricks-related podcasts count! 🎧)
💡 Your skill set includes a blend of Databricks-related technologies on Azure or AWS
🖥️ Experience with Scala is a must! (you wear a software engineering hat)
💡 Hands-on work with Scala and Spark (Databricks), interacting with the Delta Lakehouse and Unity Catalog
Skills & Experience Required:
💡 3+ years of industry experience
🔥 Spark (Scala), Databricks
💡 Strong backend programming skills for data processing, with practical knowledge of availability, scalability, clustering, microservices, multi-threaded development and performance patterns
💡 Experience with a wide array of algorithms and data structures
💡 Experience with workflow orchestration platforms such as ADF, Glue or Airflow
💡 Strong distributed-systems fundamentals
💡 Strong handle on REST APIs
💡 Experience with NoSQL databases
💡 Most recent work experience MUST include Scala and Spark (Databricks)
🔥 BS / BE / MS in Computer Science from a top-tier school
💡 Data modeling: logical and physical data models
Perks of working with us:
§ Professional development and training opportunities
§ Company happy hours and fun team-building activities
§ Flexible work hours, plus the benefit of working from your home workstation
§ Internet reimbursement within the company's permissible limits
§ Opportunity to work with a US-based SaaS start-up on new tech stacks, including Azure Cloud
§ Meal cards, gift hampers and other incentives
§ Awards and recognition programs
§ Industry-focused certifications and ongoing training opportunities
§ Competitive total compensation package (salary + equity) with performance-based bonus plans