Open Positions

We are looking for you!

Big Data DevOps

Scigility combines academic and practical know-how into a unique Big Data service offering for our customers. We define, build, and operate integrated Big Data platforms, build applications that create new insights from our customers' data, and support customers within the legal framework.

Your Tasks

  • Support the operation of Big Data clusters based on Hadoop
  • Design and build Big Data solutions based on Hadoop, Kafka, or other NoSQL systems, either in the cloud or on-premises
  • Provide business support to our customers on new technologies
  • Integrate new and existing IT systems

Your Profile

  • Experience with Linux system administration, particularly Red Hat-based distributions
  • Experience with security-related topics: SSL, Kerberos, LDAP, AD integration
  • Experience with cloud architectures and platforms (AWS, GCP, Azure)
  • Experience with DevOps tools like Ansible, Puppet, Jenkins, …
  • Knowledge of scalable systems like Hadoop, Hive, HBase, and Kafka, as well as a higher-level language like Java, Scala, or Python, is an asset
  • You are a proactive team player with the ability to work independently and accurately in interdisciplinary projects
  • Fluent in German and English; French would be an asset

Your Opportunities

Design your career with Scigility in a culture that promotes innovation and diversity. We offer you the opportunity to join a young, professional team in an environment that constantly opens new doors through knowledge sharing, flexibility, and recognition.

Your Application

We look forward to receiving your application by e-mail at jobs@scigility.com. For further information, please contact Natalya Sidler (jobs@scigility.com / +41 44 214 62 89).

Big Data Software Engineer

Scigility combines academic and practical know-how into a unique Big Data service offering for our customers. We define, build, and operate integrated Big Data platforms, build applications that create new insights from our customers' data, and support customers within the legal framework.

Your Tasks

  • Build, scale, and maintain solutions based on Hadoop and/or other NoSQL systems
  • Write and tune complex near-real-time and batch data pipelines with Spark or Java MapReduce
  • Explore available technologies to provide business support to our clients
  • Work closely with customers and other stakeholders in an agile environment

Your Profile

  • Bachelor's or Master's degree in Computer Science, Information Management, or a related field
  • Proficient in OOP (Java) and FP (Scala, Clojure), and willing to learn new programming languages
  • Experience with Big Data technologies (Hadoop, Spark, Kafka, HBase)
  • Knowledge of real-time processing systems like Storm, Spark Streaming, or others
  • Knowledge of and experience in agile software development
  • Superb problem-solving skills
  • You are a proactive team player with the ability to work independently and accurately in interdisciplinary projects
  • Knowledge of data structures and algorithms for Big Data and Machine Learning is a plus
  • Very good German and English skills; French would be an asset

Your Opportunities

Design your career with Scigility in a culture that promotes innovation and diversity. We offer you the opportunity to join a young, professional team in an environment that constantly opens new doors through knowledge sharing, flexibility, and recognition.

Your Application

We look forward to receiving your application by e-mail at jobs@scigility.com. For further information, please contact Natalya Sidler (jobs@scigility.com / +41 44 214 62 89).