SkyPoint is an Artificial Intelligence (AI)-driven Master Data Management (MDM) platform. We build SaaS products that help companies in the Retail, Sports, Hospitality, Healthcare, and eCommerce industries build personal relationships with their customers.
Who we are:
SkyPoint’s mission is to bring people and data together.
We are the industry's first Modern Data Stack Platform with built-in data lakehouse, account 360, customer 360, entity resolution, data privacy vault, ELT / Reverse ETL, data integration, privacy compliance automation, data governance, analytics, and managed services for organizations in several industries including healthcare, life sciences, senior living, retail, hospitality, business services, and financial services.
We follow a flexible culture founded on awareness, trust, collaboration, ethics, commitment, and customer focus, which are the building pillars of SkyPoint Cloud.
We believe in practicing ideal behaviors at SkyPoint Cloud: treating people fairly, a fun work environment, the 4 E's (Embrace, Engage, Encourage & Empower), open communication, curiosity, and passion.
Who we want:
SkyPoint Cloud is looking for ambitious, independent engineers who want to have a significant impact at a fast-growing company. You will work on our core data pipeline and the integrations that bring in data from many sources we support. We are looking for people who can understand the key values that make our product great and implement those values in the many small decisions you make every day as a developer.
As a Data Engineer at SkyPoint:
- You will work with Python, PySpark, Azure Databricks, VS Code, REST APIs, Azure Durable Functions, Cosmos DB, serverless and Kubernetes container-based microservices, and interact with various Delta Lakehouse and NoSQL databases.
- You will process data into clean, unified profiles through incremental, automated updates using Azure Durable Functions, Azure Data Factory, Delta Lake, and Spark.
- You will analyze customer data from various connectors, generalize the PII attributes used by our product's state-of-the-art Stitch process, and create unified customer profiles.
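To give a flavor of the profile-unification work described above, here is a minimal, illustrative Python sketch of deterministic identity stitching: records from multiple connectors are grouped into one customer profile by a normalized match key. This is not SkyPoint's actual Stitch algorithm; the helper names, merge policy, and sample data are all hypothetical.

```python
import hashlib

def match_key(record):
    """Derive a deterministic match key from a normalized email address.
    (Hypothetical helper; real entity resolution uses many PII attributes.)"""
    email = record.get("email", "").strip().lower()
    if not email:
        return None
    return hashlib.sha256(email.encode("utf-8")).hexdigest()

def unify_profiles(records):
    """Group source records into unified customer profiles by match key."""
    profiles = {}
    for rec in records:
        key = match_key(rec)
        if key is None:
            continue  # records with no key would need fuzzy/ML resolution
        profile = profiles.setdefault(key, {"sources": []})
        profile["sources"].append(rec["source"])
        # Last-write-wins merge for simple attributes (one possible policy)
        for field in ("name", "email"):
            if rec.get(field):
                profile[field] = rec[field]
    return profiles

# Sample records from three hypothetical connectors
records = [
    {"source": "ecommerce", "email": "Ava@Example.com", "name": "Ava P."},
    {"source": "loyalty", "email": "ava@example.com ", "name": "Ava Patel"},
    {"source": "pos", "email": "noah@example.com", "name": "Noah L."},
]
unified = unify_profiles(records)
```

Here the two "Ava" records collapse into a single profile because their emails normalize to the same key; in production this deterministic pass is typically followed by probabilistic matching and runs incrementally over Delta Lake tables rather than in-memory dictionaries.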
Qualifications & Experience:
- Bachelor's degree, preferably in Data Engineering or Computer Science, with 6+ years of experience working on SaaS products.
- Experience working with languages such as Python and Java, and technologies such as serverless and containers.
- Strong technical and problem-solving skills, with recent hands-on experience with Databricks.
- Experience building reliable distributed systems, with an emphasis on high-volume data management within enterprise and/or web-scale products and platforms that operate under strict SLAs.
- Broad technical knowledge encompassing software development and automation, with experience applying a wide array of algorithms and data structures.
- Working knowledge of Azure Functions, Azure Data Lake, Azure Data Factory, Azure Databricks, Spark, Azure DevOps, and Delta Lake.
- Outstanding track record of liaising directly with global clients, and willingness to work flexible hours to connect with a distributed team, including daily stand-up calls and cross-regional collaboration.
Required Qualifications and Skills:
Skills: Python, SQL, Java, Azure Databricks, Azure SQL, PySpark, NoSQL databases, and Microsoft Azure
- BS / BE / MS in Computer Science & Engineering, plus professional work experience.
- Most recent work experience MUST include Python and Databricks.
- Good to have: knowledge of Azure Durable Functions, Azure SQL, Cosmos DB, Azure Data Factory, Delta Lakehouse, PySpark, NoSQL databases, serverless, and Kubernetes container-based microservices.
- Excellent verbal and written communication skills.
- Exceptional track record of working with global clients.