Cloud Data Engineer (Barings)

This job is no longer active!


Employer: MassMutual Romania
  • Engineering
  • IT Hardware
  • IT Software
  • Job type: full-time
    Job level: 1 - 5 years of experience
  • Cluj Napoca
  • Oradea
  • nationwide
    Updated at: 06.02.2023
    Remote work: remote (from home)

    Department name: Enterprise Engineering, Enterprise Data Group

    Location: Bucharest, Ro / Cluj, Ro

    Key Goals:

    • Build out Barings’ future data landscape within a modern cloud architecture.

    • Create a data platform that is scalable and flexible in acquiring and analyzing data sets, providing users with reliable and accurate data.

    • Create centralized operational hubs around strategic data domains within a cloud-based platform.

    Job Description

    As a Senior Cloud Data Engineer, you will be responsible for designing and building data pipelines in a collaborative environment. The right candidate will be experienced in cloud computing best practices and in the tool sets of cloud service providers such as Microsoft Azure, GCP, or AWS. Joining the Data Engineering team positions you to contribute to an enterprise-wide hybrid cloud transformation.

    As such, you must be a self-starter who loves learning new concepts and has the ability to innovate.


    Responsibilities:

    •  Work with a technology stack including SQL, Python, Spark, ADF, and the Azure suite
    •  Collaborate with the team to design and build robust, highly automated batch and streaming data pipelines
    •  Discover, introduce, and implement modern cloud technologies in the enterprise domain
    •  Work in cross-functional agile teams to continuously experiment, iterate, and deliver on new product objectives


    Requirements:

    •  5+ years working with cloud platforms such as Microsoft Azure, AWS, or GCP
    •  5+ years of working experience with SQL and scripting languages such as Python or Scala
    •  3+ years of working experience with data pipeline tools such as Azure Data Factory (ADF), Databricks, Dataflow, or similar technologies
    •  Extensive experience working with Databricks
    •  Working experience with the Apache Spark engine
    •  Working experience with Kafka and streaming technologies is a plus
    •  Expert knowledge of developing efficient, robust, and scalable data pipelines
    •  Familiarity with modern data lake, data warehousing, and ETL/ELT concepts
    •  Knowledge of data modeling, data access, and data storage techniques