Location: Remote, United Kingdom
Duration: 6 months
Rate: Up to £550 per day (DOE)
IR35 Status: Outside IR35
We are seeking an experienced AWS Data Engineer to join our client's team. The successful candidate will be responsible for designing, building, and optimising scalable data pipelines within the AWS ecosystem to support key decision-making processes across the business.
In this role, you will collaborate with cross-functional teams to gather data requirements and translate them into actionable solutions. You will champion best practices in data architecture, data modelling, and pipeline design while working with large volumes of structured and unstructured data.
You must possess a strong background in Python programming, particularly in data manipulation and transformation. Proficiency in PySpark is essential for building scalable data processing applications. Experience with Databricks is also required, as you will use the platform to orchestrate workflows, debug pipelines, and collaborate with other team members in a shared workspace.
Key Responsibilities:
- Design and develop robust data pipelines and ETL processes using AWS services.
- Optimise data workflows for performance and efficiency.
- Work closely with data scientists, analysts, and other engineers to deliver end-to-end data solutions.
- Ensure compliance with data governance and security standards.
- Maintain documentation and provide support for deployed data systems.
Key Requirements:
- Strong knowledge of AWS cloud services, including S3, Lambda, Glue, Redshift, and EMR.
- Proficient in Python with experience in data engineering libraries and frameworks.
- Hands-on experience with PySpark for distributed data processing.
- Extensive experience with Databricks and notebook-based development.
- Understanding of data modelling techniques and experience building data warehouses or data lakes.
- Experience with CI/CD practices for data workflows is desirable.
- Excellent problem-solving abilities and attention to detail.
If you are passionate about data engineering and thrive in a fast-paced, technology-driven environment, we encourage you to apply.