Senior AWS Data Engineer

Job Details

permanent
London, London, United Kingdom
Vitol
16.03.2024


Full Job Description

Company Description

Vitol is a leader in the energy sector with a presence across the spectrum: from oil through to power, renewables and carbon.  From 40 offices worldwide, we seek to add value across the energy supply chain, including deploying our scale and market understanding to help facilitate the energy transition. To date, we have committed over $2 billion of capital to renewable projects, and are identifying and developing low-carbon opportunities around the world.

Our people are our business. Talent is precious to us and we create an environment in which individuals can reach their full potential, unhindered by hierarchy. Our team comprises more than 65 nationalities and we are committed to developing and sustaining a diverse workforce.

Job Description

As a Senior Data Engineer, you will be responsible for designing, implementing and maintaining large-scale data processing systems on AWS, ensuring that they are scalable, reliable and efficient.

You will be highly technical, with extensive experience working in MPP platforms/Spark, “big data” (e.g., weather forecasts, vessel location, satellite imagery, …), and developing resilient and reliable data pipelines. You will be responsible for data pipelines end to end: acquisition, loading, transformation, implementing business rules/analytics, and delivery to the end user (business / data science / AI).  

You will collaborate directly with the business, other delivery teams and the Data Science team to understand their data requirements and deliver the data infrastructure needed to support their activities. You will also optimise the performance of data processing systems by tuning database queries, improving data access times and reducing latency.

This role requires strong coding skills in SQL and Python, and adherence to engineering best practices.

You must be a strong communicator, able to translate technical concepts for non-technical users and to translate business requirements into technical requirements.

Qualifications
  • 10+ years in the data engineering space
  • Proficient with MPP databases (Snowflake, Redshift, BigQuery, Azure DW) and/or Apache Spark
  • Proficient at building resilient data pipelines for large datasets
  • Deep understanding of AWS (or another cloud platform) across core and extended services
  • 8+ years' experience working with at least 3 of the following: ECS, EKS, Lambda, DynamoDB, Kinesis, AWS Batch, Elasticsearch/OpenSearch, EMR, Athena, Docker/Kubernetes
  • Proficient with Python and SQL, with good experience in data modelling
  • Experience with a modern orchestration tool (Airflow / Dagster / Prefect / similar) and/or dbt
  • Comfortable working in a dynamic environment with a degree of uncertainty

Additional Information

Desired: 

  • Infrastructure as Code (Terraform, CloudFormation, Ansible, Serverless)
  • CI/CD Pipelines (Jenkins / BitBucket Pipelines / similar) 
  • Database/SQL tuning skills 
  • Basic data science concepts 