We’re looking for a Data Engineer
MarketResponse is a renowned insights company focused on supporting companies on their data journey. We provide our customers with our SaaS solution for data science as a service, as well as tailored research projects. We are passionate about enabling our customers (such as NS, De Volksbank, Greenchoice and ACHMEA) to use data-driven insights to improve their customer experience. We use cutting-edge techniques to analyze different types of (big) data with our own methods, software and state-of-the-art tooling.
Our office is a one-minute walk from Utrecht Centraal train station. The MarketResponse Group consists of solutions such as Underlined’s ROCX’R, 4Orange – CDP Connect and CMNTY. We are an informal organization with an open and international culture. MarketResponse has a total of 100 employees divided between Research, Data Science and Product Development, including support services such as Marketing & Sales.
As a Data Engineer within MarketResponse you are responsible for advising on, developing and managing everything related to data. You design and implement data pipelines for collecting, transforming and loading structured and unstructured data. Your work will help make raw data useful for MarketResponse as well as for our clients. Continuously staying up to date with market and academic developments, documenting your work and contributing to MarketResponse modules and processes is part of your job. A Data Engineer is part of the Data Science team and works closely with customer and internal project teams.
Tasks and responsibilities
- Automate data pipelines and develop custom data solutions using tools and languages such as KNIME, Azure services (such as Azure Data Factory and Databricks), Python and SQL;
- Explore data requirements with clients and internal stakeholders and translate them into (standardized) data models;
- Design, build, troubleshoot and monitor complex data pipelines;
- Create reliable and scalable data collection processes;
- Share learnings and best practices within the organization.
Knowledge and experience
- A bachelor’s or master’s degree in Computer Science, Data Science, Information Technology or a related technical discipline, or equivalent experience;
- Experience in building automated data pipelines and setting up and maintaining data models for analysis purposes;
- Extensive knowledge of and experience with KNIME, Python and SQL in the context of ETL processes;
- At least 5 years of relevant experience;
- Experience with Agile, Scrum or DevOps is a plus;
- You enjoy working in multidisciplinary project teams.