205 Data Engineer jobs in London
Data Engineer
Posted 5 days ago
Job Description
Overview
GlaxoSmithKline (GSK) is a science-led global healthcare company with a special purpose: to help people do more, feel better, live longer. We are on an audacious journey to impact the health of 2.5 billion people over the next decade. Our R&D division is at the forefront of this mission, dedicated to the discovery and development of groundbreaking vaccines and medicines. We are transforming the landscape of medical research by integrating cutting-edge science and technology and harnessing the power of genetics and new data. By fostering a collaborative environment that unites the talents of our people, we are revolutionizing R&D to pre-empt and defeat diseases. Join us in our commitment to uniting science, technology, and talent to get ahead of disease together.
Position Summary
At GSK we see a world in which advanced applications of Machine Learning and AI will allow us to develop novel therapies to existing diseases and to quickly respond to emerging or changing diseases with personalized drugs, driving better outcomes at reduced cost with fewer side effects. It is an ambitious vision that will require the development of products and solutions at the cutting edge of Machine Learning and AI. We’re looking for a highly skilled Backend Engineer to help us make this vision a reality.
The ideal candidate will have a track record of shipping data products derived from complex sources, owning the process from conceptual data pipeline through to production scale. We are committed to quality, so the person in this role will use modern cloud tooling and techniques to deliver reliable data pipelines and continuously improve them.
This role requires a passion for solving challenging problems aligned to exciting Artificial Intelligence and Machine Learning applications. An educational or professional background in the biological sciences is a plus but is not necessary; a passion for helping develop therapies for new and existing diseases, and a pattern of continuous learning and development, are mandatory.
Key Responsibilities
- Build data pipelines using modern data engineering tools on Google Cloud: Python, Spark, SQL, BigQuery, Cloud Storage.
- Ensure data pipelines meet the specific scientific needs of data consuming applications.
- Deliver high-quality software implementations according to best practices, including automated test suites and documentation.
- Develop, measure, and monitor key metrics for all tools and services and consistently seek to iterate on and improve them.
- Participate in code reviews, continuously improving personal standards as well as the wider team and product.
- Liaise with other technical staff and data engineers in the team and across allied teams, to build an end-to-end pipeline consuming other data products.
Minimum Requirements
- 2+ years of data engineering experience with a Bachelor's degree in a relevant field (including computational, numerate or life sciences), or equivalent experience.
- Cloud experience (Google Cloud preferred).
- Strong skills with industry experience in Python and SQL.
- Unit testing experience (e.g. pytest).
- Knowledge of agile practices and the ability to perform in agile software development environments.
- Strong experience with modern software development tools / ways of working (e.g. git/GitHub, DevOps tools for deployment).
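To make the unit-testing requirement above concrete, a minimal pytest-style test of a small pipeline transformation might look like the sketch below. This is illustrative only: the record fields and function name are hypothetical, not taken from the posting.

```python
from datetime import date

def parse_sample_record(raw: dict) -> dict:
    """Normalise one raw record into a canonical shape.

    Hypothetical example: the field names are illustrative only.
    """
    return {
        "sample_id": raw["sample_id"].strip().upper(),
        "collected_on": date.fromisoformat(raw["collected_on"]),
        "value": float(raw["value"]),
    }

def test_parse_sample_record():
    # pytest discovers functions named test_*; plain asserts act as the checks
    raw = {"sample_id": " gsk-001 ", "collected_on": "2024-05-01", "value": "3.14"}
    out = parse_sample_record(raw)
    assert out["sample_id"] == "GSK-001"
    assert out["collected_on"] == date(2024, 5, 1)
    assert out["value"] == 3.14

test_parse_sample_record()  # would normally be run via `pytest`
```

Small, deterministic transforms like this are the easiest layer of a pipeline to cover with an automated test suite.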
Preferred Requirements
- Demonstrated experience with biological or scientific data (e.g. genomics, transcriptomics, proteomics), or pharmaceutical industry experience.
- Bioinformatics expertise, familiarity with large scale bioinformatics datasets.
- Experience using Nextflow pipelines.
- Knowledge of NLP techniques and experience of processing unstructured data, using vector stores, and approximate retrieval.
- Familiarity with orchestration tooling (e.g. Airflow or Google Workflows).
- Experience with AI/ML powered applications.
- Experience with Docker or containerized applications.
Why GSK?
Uniting science, technology and talent to get ahead of disease together.
GSK is a global biopharma company with a special purpose – to unite science, technology and talent to get ahead of disease together – so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns – as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology).
Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it’s also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves – feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition, join us at this exciting moment in our journey to get Ahead Together.
Inclusion at GSK
GSK is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class.
If you need any adjustments in the recruitment process, please get in touch with our Recruitment team ( ) to further discuss this today.
Important notice to employment businesses/agencies
GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/ agency and GSK. In the absence of such written authorization being obtained any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK. GSK shall therefore not be liable for any fees arising from such actions or any fees arising from any referrals by employment businesses/agencies in respect of the vacancies posted on this site.
Please note that if you are a US Licensed Healthcare Professional or Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs, on your behalf, in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK’s compliance with all federal and state US Transparency requirements. For more information, please visit GSK’s Transparency Reporting For the Record site.
Data Engineer
Posted today
Job Description
Data Engineer
Croydon (must be London-based and able to work on site in Croydon/London)
SC clearance required, or eligible for clearance
12 months
Umbrella only
The required skills are:
- Attention to detail and ability to follow defined processes
- Drive and commitment to learn new technical concepts quickly
- Familiarity with Agile & DevOps ways of working
- Familiarity with and experience of using UNIX
- Knowledge of CI toolsets
- Good client facing skills and problem solving aptitude
- DevOps knowledge of SQL, Oracle DB, Postgres, ActiveMQ, Zabbix, Ambari, Hadoop, Jira, Confluence, Bitbucket, ActiviBPM, Oracle SOA, Azure, SQL Server, IIS, AWS, Grafana, Oracle BPM, Jenkins, Puppet, CI, and other cloud technologies.
All profiles will be reviewed against the required skills and experience. Due to the high number of applications we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
Data Engineer
Posted 2 days ago
Job Description
Data Engineer
Location: Hammersmith - Hybrid after probation
Salary: 38,000 - 45,000
Join Our Team as a Data Engineer!
About Us: Join a forward-thinking organisation dedicated to revolutionising retail through innovative solutions and operations. We leverage physical spaces to test new concepts and pilot initiatives, helping brands and retailers thrive in a dynamic marketplace.
What You'll Do:
As our Data Engineer, you will play a pivotal role in the day-to-day management and maintenance of our data infrastructure, driving our large-scale programmes in the USA. Your primary responsibilities will include:
- Design & Development: Create, develop, and maintain scalable data pipelines and ETL processes across various systems.
- Data Integration: Build and manage seamless data integrations from diverse sources, including POS systems, e-commerce platforms, RFID sensors, IoT devices, and CRM systems.
- Data optimisation: Develop and optimise data models to support reporting, analytics, and machine learning initiatives.
- Data Quality Assurance: Ensure data accuracy, consistency, security, and compliance with U.S. data privacy standards (CCPA, HIPAA).
- Monitoring & Troubleshooting: Monitor, troubleshoot, and enhance data pipelines to maximise reliability and performance.
- Collaboration: Work closely with teams to deliver datasets that empower insight and decision-making.
- Strategic Contribution: Contribute to our data strategy by evaluating and implementing modern data tools and practices.
- Documentation: Document data flows, schemas, and processes to support knowledge sharing and scalability.
About You:
To thrive in this role, you will need:
- Strong knowledge of Python and APIs for effective data integration.
- Familiarity with cloud platforms, preferably Azure (AWS, GCP experience is also valuable).
- Experience managing large and complex datasets, with a strong command of SQL for cloud-based environments (Fabric, Snowflake, BigQuery, Redshift, etc.).
- A solid understanding of data modelling techniques (star schema, data vault, dimensional modelling).
- Proficiency in Excel-based data workflows for various Agile Retail projects.
- Hands-on experience with data pipeline orchestration tools (Airflow, dbt, Prefect, or similar).
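To make the data-modelling bullet above concrete, here is a minimal sketch of a star schema (one fact table joined to a dimension table) using Python's built-in sqlite3. The table and column names are hypothetical, chosen only for illustration:

```python
import sqlite3

# A toy star schema: one fact table keyed to a dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_store (store_key INTEGER PRIMARY KEY, city TEXT);
    CREATE TABLE fact_sales (store_key INTEGER, units INTEGER, revenue REAL);
""")
conn.executemany("INSERT INTO dim_store VALUES (?, ?)",
                 [(1, "London"), (2, "Manchester")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 10, 100.0), (1, 5, 55.0), (2, 7, 70.0)])

# A typical analytics query: aggregate the facts, described by the dimension.
rows = conn.execute("""
    SELECT d.city, SUM(f.units) AS units, SUM(f.revenue) AS revenue
    FROM fact_sales f JOIN dim_store d USING (store_key)
    GROUP BY d.city ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('London', 15, 155.0), ('Manchester', 7, 70.0)]
```

The same shape scales up directly in cloud warehouses such as Snowflake, BigQuery or Redshift: narrow fact tables of events, wide dimension tables of descriptions.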
Benefits:
- Unlimited holiday
- Annual Wellbeing Allowance
- Flexible work culture
- Monthly socials and events
- Complimentary snack bar
- Employer pension contribution
If you're a data enthusiast ready to innovate and drive impactful solutions in retail, we want to hear from you!
Office Angels is an employment agency and business. We are an equal-opportunities employer who puts expertise, energy and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, abilities and more. By showcasing talents, skills and unique experiences in an inclusive environment, we help individuals thrive. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
Office Angels acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. Office Angels UK is an Equal Opportunities Employer.
By applying for this role your details will be submitted to Office Angels. Our Candidate Privacy Information Statement explaining how we will use your information is available on our website.
Data Engineer
Posted 3 days ago
Job Description
Data Engineer
12 months
Remote
470 per day inside IR35 - Umbrella only
Candidates with active SC clearance, or who are eligible for clearance, will be considered
Project description
Our client is playing a crucial role in the ESN (Emergency Services Network) project in 2025, which aims to modernise and enhance communication for frontline emergency services in the UK. This initiative will greatly improve the capabilities of emergency services, allowing them to share vital data and information quickly and securely, improving response times and overall public safety.
Required Skills
- DataStage, Redshift, QuickSight, S3
- Data migration/ETL, both batch and real time
- Data warehouse development
- DevSecOps
- Java
- SQL
- Relational databases
- GitHub/GitLab experience
Nice to have:
- Data quality
- XML
- AWS Data Specialty certification
All profiles will be reviewed against the required skills and experience. Due to the high number of applications we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
Data Engineer
Posted 9 days ago
Job Description
My client within venture capital is looking for a Data Engineer to join their team.
The role will involve maintaining and modernising data pipelines.
Requirements
- Python
- Azure, CI/CD
- Airflow, dbt
- Docker
- GitHub, Azure DevOps
- LLM experience desirable
Contract: 12 Months
Rate: (Apply online only) Via Umbrella
Location: London - 3 days per week in the office.
If this role is of interest, please reply with your up-to-date CV and I will be in touch to discuss further.
Thanks,
Please click here to find out more about our Key Information Documents. Please note that the documents provided contain generic information. If we are successful in finding you an assignment, you will receive a Key Information Document which will be specific to the vendor set-up you have chosen and your placement.
To find out more about Huxley, please visit (url removed)
Huxley, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy | Registered office | 8 Bishopsgate, London, EC2N 4BQ, United Kingdom | Partnership Number | OC(phone number removed) England and Wales
Data Engineer
Posted 9 days ago
Job Description
If you are an experienced Data Engineer with excellent communication skills and a proven track record of working on large scale data enablement projects, we have a new contract we would like to discuss with you.
Please note this role requires onsite attendance once a week and has been deemed inside IR35.
Requirements:
- Experience in Azure Synapse, ETL, PySpark, SQL, data modelling and Databricks.
- Design and develop Azure Pipelines including data transformation and data cleansing
- Document source-to-target mappings
- Re-engineer manual data flows to enable scaling and repeatable use
- Build accessible datasets for distribution and analysis
- Development of Azure Pipelines for transforming data
- A scalable meta-data driven ingestion and transformation framework
- Aligning transformation pipelines and datasets with Purview
- Experience of working with Azure Data Lake, data warehousing, pipelines, and storage accounts
- Experience in building and managing pipelines, including building data interfaces to source systems and combining and transforming data into appropriate storage formats
- Experience identifying and resolving issues in databases, data processes, data products and services.
- Proficiency in T-SQL and Python to develop automation
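The "scalable meta-data driven ingestion and transformation framework" deliverable in the listing above can be sketched in miniature: metadata entries declare which columns to keep and which transforms to apply, and a generic function executes them in order. All names here are hypothetical, chosen purely for illustration:

```python
# Hypothetical metadata: each entry tells the framework what to ingest and
# which named transforms to apply, so new sources need config, not new code.
METADATA = [
    {"source": "sales.csv", "columns": ["id", "amount"], "transforms": ["strip", "upper_id"]},
]

TRANSFORMS = {
    # Trim whitespace from every string field in the record.
    "strip": lambda rec: {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()},
    # Upper-case the identifier field.
    "upper_id": lambda rec: {**rec, "id": rec["id"].upper()},
}

def ingest(record: dict, entry: dict) -> dict:
    """Project the configured columns, then apply the configured transforms in order."""
    out = {k: record[k] for k in entry["columns"]}
    for name in entry["transforms"]:
        out = TRANSFORMS[name](out)
    return out

row = ingest({"id": " ab1 ", "amount": 9.5, "extra": "drop me"}, METADATA[0])
print(row)  # {'id': 'AB1', 'amount': 9.5}
```

In a real Azure pipeline the metadata would typically live in a control table or Data Factory parameters rather than in code, but the principle is the same: behaviour is driven by data, so the framework scales across sources without per-source pipelines.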
Data Engineer
Posted 11 days ago
Job Description
Contract Role: Data Engineer - Azure Migration Project
Location: Hybrid (2 days/week onsite in London)
Contract Length: 6 months
Day Rate: 450/day (Outside IR35)
Start Date: ASAP
Sector: Consultancy
Overview
We are recruiting on behalf of a consultancy client for an experienced Data Engineer to support a strategic data migration project. The role involves migrating legacy data systems to a modern Azure-based architecture, with a focus on performance, scalability, and reliability.
Responsibilities
- Design and implement robust data migration pipelines using Azure Data Factory, Synapse Analytics, and Databricks
- Develop scalable ETL processes using PySpark and Python
- Collaborate with stakeholders to understand legacy data structures and ensure accurate mapping and transformation
- Ensure data quality, governance, and performance throughout the migration lifecycle
- Document technical processes and support knowledge transfer to internal teams
Required Skills
- Strong hands-on experience with Azure Data Factory, Synapse, Databricks, PySpark, Python, and SQL
- Proven track record in delivering data migration projects within Azure environments
- Ability to work independently and communicate effectively with technical and non-technical stakeholders
- Previous experience in consultancy or client-facing roles is advantageous
Data Engineer
Posted 11 days ago
Job Description
Data Engineer
I am working with a leading, AI-driven organisation that supports its clients' marketing campaigns. The business is expanding its data engineering capabilities and is looking for a Data Engineer to join the team. This is a fantastic opportunity to join an award-winning tech business where you will work at the forefront of data architecture, building distributed systems and leveraging Gen AI tools to accelerate development and improve quality.
You will be encouraged to explore new ways to utilise modern technologies and will be positioned to help mentor other members of the technical team whilst leading innovative technical projects.
As part of this role, you will be responsible for some of the following areas:
- Design and build distributed data pipelines using technologies such as Spark, Scala, and Java
- Collaborate with cross-functional teams to deliver user-centric solutions
- Lead on the design and development of relational and non-relational databases
- Apply Gen AI tools to boost development speed and code quality
- Maintain and optimise large-scale data collection processes
- Support the deployment of machine learning models into production
To be successful in the role you will have:
- Experience creating scalable ETL jobs using Scala and Spark
- Strong understanding of data structures, algorithms, and distributed systems
- Experience working with orchestration tools such as Airflow
- Familiarity with cloud technologies (AWS or GCP)
- Hands-on experience with Gen AI tools for coding and debugging
This is a remote-first role with flexibility to work from anywhere in the UK. Some of the benefits included with the role are listed below:
- Salary of up to 140,000
- Unlimited holiday allowance
- Fully covered private medical care
- Enhanced parental leave policies
- Bespoke training and career development plans
- Regular socials, volunteer days, and a birthday day off
This is just a brief overview of the role. For the full information, simply apply with your CV and I'll be in touch to discuss further. Interviews are already underway, so don't miss out - APPLY now! To do so, please email me at (url removed) or call directly on (phone number removed).