14th April 2020
Cape Town, South Africa
5 to 20 years
R900,000 - R1,000,000
Cost to company
Senior
Full Time
We are proudly supporting the leading FMCG company in southern Africa on an exciting search for a Technical Lead within Data Engineering. This is an incredible opportunity to work at the forefront of the industry, alongside leading minds and top talent.
Working as a core member of an Agile team, you will hold responsibility for building and supporting data pipelines. You must be experienced in handling raw data from a broad range of sources and integrating it seamlessly.
You will enable solutions using emerging technologies across the big data and cloud spaces.
Key responsibilities:
- Take responsibility for the software and hardware required for data solutions.
- Think strategically, understanding commercial requirements and translating them into system requirements.
- Draw on your experience of building DA frameworks to architect the next-generation analytics framework, developed on the Group’s range of core technologies.
- Build strong working relationships with developers; you will be responsible for ensuring data solutions are integrated.
- Build automation tools, ensuring that all automated processes preserve the integrity of integrated data.
- Liaise with key stakeholders across the business, interpreting data results for business customers.
- Work with the business to maintain a fully adaptable technical solution.
- Communicate with third parties and product managers (technical and non-technical) on solution design.
Candidate Requirements:
- Bachelor's degree in computer science or computer engineering
- AWS certification, Associate level or above
- 5+ years of data engineering or software engineering experience
- 3+ years of experience leading a team of engineers
- Big Data, ETL and AWS Cloud
- Agile, Scrum or similar methodologies
- Experience with coding and testing patterns
- 2+ years of creating data channels from on-site systems to the AWS Cloud
- Talend or similar
- Data manipulation using Python and PySpark
- Hadoop paradigm for processing data
- Experienced in DevOps across Big Data and BI, including automated testing and deployment