Main Purpose:
Collaborate with data scientists and business stakeholders to design, develop, and maintain efficient data pipelines feeding the organization's data lake. Ensure the data lake contains accurate, up-to-date, high-quality data, enabling data scientists to develop insightful analytics and business stakeholders to make well-informed decisions. Apply expertise in data engineering and cloud technologies to provide the necessary data infrastructure and to foster a data-driven culture. Demonstrate a strong architectural sense in defining data models, leveraging the PolyBase concept to optimize data storage and access. Facilitate seamless data integration and management across the organization, ensuring a robust and scalable data architecture. Define and design the data catalogue, effectively modelling all data within the organization to enable efficient data discovery, access, and management for stakeholders.

Knowledge, Skills and Abilities / Key Responsibilities:

KEY RESPONSIBILITIES:
- Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals.
- Develop complex queries and solutions using Scala, .NET, and Python/PySpark.
- Implement and maintain data solutions on Azure Data Factory, Azure Data Lake, and Databricks.
- Create data products for analytics and data science team members to improve their productivity.
- Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices.
- Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
- Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering in order to improve the team's productivity.
- Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
- Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives.
- Collaborate with other team members and effectively influence, direct, and monitor project work.
- Develop a strong understanding of the business and support decision making.

REQUIREMENTS:

Experience:
- 10 years of overall experience, with at least 5 years of relevant experience.
- 5 years of experience working with Azure Data Factory and Databricks in a retail environment.

Skills:
- Bachelor's degree required, in Computer Science or Engineering.
- 5+ years of experience in a data engineering or architecture role.
- Expertise in SQL and data analysis, and experience with at least one programming language (Scala and .NET preferred).
- Experience developing and maintaining data warehouses in big data solutions.
- Experience with Azure Data Lake, Azure Data Factory, and Databricks in the data and analytics space is a must.
- Database development experience using Hadoop or BigQuery, and experience with a variety of relational, NoSQL, and cloud data lake technologies.
- Experience with BI tools such as Tableau, Power BI, Looker, or Shiny.
- Conceptual knowledge of data and analytics, such as dimensional modelling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data.
- Big data development experience using Hive, Impala, and Spark, and familiarity with Kafka.
- Exposure to machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics.

Languages:
- Fluency in verbal and written English is mandatory.
- Fluency in Spanish and French is useful.

Key Relationships and Department Overview:
Internal: CEO & COO of Africa, managers across various departments, senior management, heads of departments in other regional hubs of Puma Energy
External: External consultants