Senior Python Data Engineer
Daltix enables retailers and suppliers to make decisions based on data rather than gut feeling, and along the way it has built up significant experience in collecting data and transforming it to support decision-making. To this end we're looking for a data engineer/analyst who will help some of the biggest names in the industry (don't take our word for it, check our website) become truly data-driven.
We're looking for Data Engineers to join the data teams in charge of standardizing and extracting information from the data we collect, as well as making it accessible for analytics and reporting.
The position is called Analytics Engineer because your responsibilities will involve both data analysis and data engineering. It's the perfect position for people with strong software engineering expertise who'd like to deepen their data analysis and engineering skills.
We need people to aid us with:
- Adding new data processing modules to our pipeline so we can standardize data collected from the web.
- Managing the infrastructure (schedulers, computing frameworks) used for our large-scale data processing and reporting.
- Quality assurance of our data. We have some tooling in place already, but more may be needed, and you will likely want to automate some of these checks.
- Assisting with the existing ETL pipelines that make our data ready for use by our customers.
- Enabling our professional services team by providing Python-based toolkits that make their jobs easier.
We offer the opportunity for you to work for only 4 days a week if that's what you prefer!
- Our office is in central Lisbon, at São Sebastião (crossing of the red and blue metro lines). We're quite flexible regarding remote work: we are currently 100% remote and will remain so until the end of 2021, after which we expect to adopt a hybrid model.
- Your future colleagues will be Nelson Torres, data engineer, and Miguel Almeida, data scientist. Your team leader will be Manuel Garrido, data architect, or Simon Esprit, our CTO (depending on the team you end up in).
- We scrape around 3 TB of compressed data per month (20 TB uncompressed); if you'd like to learn how this is done and the challenges that come with it, here's your chance.
- You'll work with a modern tech stack, including Python, Docker, Terraform, AWS (S3, Batch), Grafana, Airflow, Snowflake & Looker.
- You'll follow software engineering best practices, including mandatory code reviews, unit tests and benchmarks running on every commit, and infrastructure-as-code, among others. We're not where we want to be yet, so there's room to add your touch here.
- We also offer squad rotations, allowing you to spend some time each week working with another team to learn more about the challenges they face.
- As a company we also offer private health insurance, a solid laptop (MacBook, Linux-friendly or Windows - it's up to you!) and a lot of flexibility.
The application process involves a challenge. That challenge, and the interviews, are all conducted remotely.
We are looking for people with around 5 years of relevant work experience. If you have more, or less, but fit the role, we encourage you to apply.
The following experience is required:
- Fluent technical English: we communicate exclusively in English, so this is mandatory.
- Programming experience (ideally in Python), SQL, git and bash.
- Working experience in Data Analysis and Data Engineering.
It's also a plus if you have experience with pandas/dask, Looker or Tableau, Jupyter, regex, AWS (or another cloud), Snowflake, or Docker. Most of our data is in Dutch, so knowledge of Dutch is a plus.
No Portuguese required.