Hybrid (Onsite 3 days a week)

Description:
The client’s Server Automation team is currently seeking a highly skilled Data Warehouse Engineer to join our dynamic team. The ideal candidate will be instrumental in designing and implementing robust data pipelines, ensuring the integrity and accessibility of data across our organization.

RESPONSIBILITIES:
- Design, build, and maintain efficient and reliable data pipelines to move and transform data (both large and small volumes) within our Azure ecosystem.
- Work closely with Azure Data Lake Storage, Azure Databricks, and Azure Data Explorer to manage and optimize data processes.
- Develop and maintain scalable ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes.
- Ensure the seamless integration and compatibility of data solutions with the Databricks Unity Catalog data warehouse, and adhere to general data warehousing principles.
- Collaborate with data scientists, analysts, and other stakeholders to support data-centric needs.
- Implement data governance and quality processes, ensuring data integrity and compliance.
- Optimize data flow and collection for cross-functional teams.
- Provide technical leadership and mentorship to junior team members.
- Stay current with industry trends and developments in data architecture and processing.

REQUIREMENTS:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of experience in a Data Engineering role.
- Strong expertise in the Azure data ecosystem, including Azure Data Lake Storage, Azure Databricks, and Azure Data Explorer.
- Proficiency with the Databricks data warehouse and a solid understanding of data warehousing principles.
- Experience with ETL and ELT processes and tools.
- Strong programming skills in languages such as Python, SQL, Scala, or Java.
- Experience with data modeling, data access, and data storage techniques.
- Ability to work in a fast-paced environment and manage multiple projects simultaneously.
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork skills.

Top 3 skills:
- Databricks data warehouse experience and a solid understanding of data warehousing principles.
- Developing and maintaining scalable ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes.
- Programming skills in languages such as Python, SQL, Scala, or Java (Python and SQL are the most important).
- Power BI knowledge would be beneficial, but is not a requirement.

Notes:
- Hybrid (Onsite 3 days a week)

VIVA is an equal opportunity employer. All qualified applicants have an equal opportunity for placement, and all employees have an equal opportunity to develop on the job. This means that VIVA will not discriminate against any employee or qualified applicant on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or protected veteran status.