Senior Big Data Engineer

Why you should join our team

At Aidoc we build A.I. that matters and saves people’s lives. We are a passionate group of people who love working together and building awesome software systems that provide real value on a daily basis.

Our products need to handle information at massive scale, integrate with complex healthcare systems under demanding resource constraints, and extend well beyond running A.I.


About this role

Aidoc’s Data Engineering team is looking for an experienced Big Data Engineer to join its ranks. As a Big Data Engineer at Aidoc, you’ll take part in designing and implementing a data architecture from scratch, all while dealing with unusual structured and semi-structured types of data. You’ll be expected to see the bigger picture, understand the business needs, and work closely with a variety of data teams, from AI Algorithm Developers to Data Scientists and Analysts, each with their own unique data needs. You’ll play a significant part in improving the research and development process of highly sensitive machine learning algorithms, all for the purpose of flagging acute anomalies and helping save lives.

Data is the core of our business, come and join the ride.

You are:

  • You love data: processing it, shaping it, and watching it grow. You are passionate about designing the right data architecture for the right data, and you believe that data plays a critical role in decision making.
  • You’re a self-proclaimed autodidact who never stops learning and always seeks the next challenge.
  • You’re always up to date with the latest technological trends that’ll help you reshape data architectures and take them to the next level (maybe you’re an OSS contributor?).
  • You have a “can-do” approach.
  • You are highly motivated by challenges and purpose.

We are looking for:

  • Experience with designing and benchmarking different data architectures
  • Experience with Apache Spark or a similar framework in a large scale production environment
  • 4+ years of experience as a software engineer, preferably in Scala/Python/Java
  • Experience working with SQL and NoSQL databases
  • A strong understanding of SQL
  • Experience working with public clouds (AWS, GCP, Azure)
  • Excellent communication skills

Nice to have:

  • Deep understanding of Apache Spark internals
  • Experience with managing data pipelines using tools such as Prefect/Airflow
  • Experience with AWS cloud
  • Experience working with Docker and Kubernetes (k8s)

Apply for this job
