Permanent employment
Apply before:
Hours p/wk:


We are looking for you:

You are passionate about IT, IT data, and the functional and operational management of clusters. In the field of data engineering you will work on projects to leverage ING's IT infrastructure data and help bring AIOps to IT Operations. Working in ING's Global IT Operations, we expect you to be passionate about IT infrastructure and all aspects of modern enterprise datacenters. Data and analytics is a relatively new field within the infrastructure domain, so there is lots of room to innovate and experiment. You will have a lot of freedom to use your own creativity; at the same time, you will be the specialist and go-to person for data onboarding, data integration, and Hadoop cluster management. A strong focus will be on changes to the setup, keeping it operational, and enabling the use of the analytics clusters.


Your responsibilities:

  • Develop, construct, test, and maintain data architectures, such as databases and large-scale processing systems.
  • Recommend and implement ways to improve data reliability, efficiency, and quality.
  • Apply a significant set of technical skills, including deep knowledge of SQL/NoSQL, database design, and multiple programming languages.
  • Communicate across departments to understand the possible gains from large datasets.
  • Provide easy access to raw data, with a clear understanding of company and client objectives.
  • Maintain a good understanding of enterprise infrastructure, data science, and analytics techniques.
  • Support IT design, analysis, and exploration of IT datasets using the available tools.
  • Bring ideas to the business when data and results become available.
  • Implement and maintain IT risk and security controls.
  • Promote data governance, while also helping users understand data science principles and advancing data literacy and everything-as-code.


Do you recognize yourself in this profile?

  • Hands-on experience (several years) managing and further developing distributed systems and clusters for both batch and streaming data (Hadoop/Spark and/or Kafka/Flink)
  • Working experience with the Cloudera Hadoop distribution and/or several products from the Apache Software Foundation
  • Knowledge of data manipulation and transformation, e.g. SQL
  • Hands-on experience building complex data pipelines, e.g. ETL
  • Programming and scripting languages, e.g. Python
  • Deployment and provisioning automation tools, e.g. Docker, Kubernetes, OpenShift, CI/CD
  • Strong understanding of Linux systems and strong scripting skills
  • Security, authentication, and authorization (LDAP/Kerberos)
  • Affinity with advanced analytics and data science
  • Good communication skills at both the technical and the business level
  • A learning attitude, not only in mastering new technologies but also on the interpersonal level
  • Experience building data solutions with your full-stack capabilities
  • You feel at home in a high-performing team and have the independence to speak up when needed
  • Mature problem-solving skills, with the creativity and tenacity to crack a problem and find a solution