Junior Data Engineer

This job is no longer active!

Employer: Vauban
  • Categories: Acquisitions - Logistics - Supplies, IT Software, Production
  • Job type: full-time
  • Job level: 1 - 5 years of experience
  • Location: Pitesti
  • Updated at: 07.05.2020

    Short company description

    Vauban joined the GFI Group at the beginning of January 2019. With more than 450 consultants in Romania, Vauban is a leading provider of IT services and innovative applications. Created in 2007, the company has experienced strong growth and has established itself as a reference partner for key accounts in the Banking, Telecom, Industry and Energy sectors.


    Requirements

    - Basic knowledge and 1-2 years of experience with the Hadoop environment (Hive, Oozie, HDFS, Knox, etc.)
    - Specific training and support for the Data Lake Loader tool will be provided by the company
    - Knowledge and experience of Agile methods and JIRA
    - Knowledge and practice of Git, ideally GitLab
    - Data Engineer certification on GCP will be required
    - Fluency in English
    - Previous experience in Supply Chain is appreciated.


    Responsibilities

    Data Lake activities:
    - Ingest “raw” data from information systems into the Data Lake
    - Transform raw data into “gold” data within the Data Lake
    - Maintain ingestions and transformations in production
    - Provide a GUI application to manage some ingestion and transformation processes

    Support activities:
    - Support the business with framing and building solutions
    - Maintain the Supply Chain universes
    - Maintain 40 Excel/VBA tools
    - Monitor scheduled jobs and raise incidents when needed
    - Analyze the root cause of incidents
    - Fix incidents, or escalate to the Data Lake Platform team if the root cause is in their scope

    All activities include documentation, analysis, and testing.

    Other info

    The data engineer will join a team of six people and will manage most of the data-related activity for the company's Supply Chain department.
    Most of the ingestion and transformation activity is managed with an in-house framework, “DLL” (Data Lake Loader).
    Some transformations are developed in Scala/Spark or Python, embedded in the DLL structure to support CI/CD.
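    To give a feel for the “raw” to “gold” transformation work described above, here is a minimal, hypothetical sketch in plain Python. The record schema, field names, and cleaning rules are illustrative assumptions only; in practice such logic would run in Scala/Spark or Python inside the DLL framework.

    ```python
    # Hypothetical raw -> gold transformation step, as might be embedded in a
    # Data Lake Loader (DLL) job. Schema and rules are illustrative, not the
    # company's actual pipeline.

    def to_gold(raw_records):
        """Clean raw supply-chain records into a 'gold' shape: drop rows
        missing a part number, normalise text fields, cast quantities."""
        gold = []
        for rec in raw_records:
            part = (rec.get("part_no") or "").strip().upper()
            if not part:
                continue  # reject records without a part number
            gold.append({
                "part_no": part,
                "site": (rec.get("site") or "UNKNOWN").strip(),
                "qty": int(rec.get("qty", 0)),
            })
        return gold

    raw = [
        {"part_no": " ab-123 ", "site": "Pitesti", "qty": "5"},
        {"part_no": "", "site": "Pitesti", "qty": "2"},  # dropped: no part number
        {"part_no": "cd-456", "qty": 7},                 # no site: defaults to UNKNOWN
    ]
    print(to_gold(raw))
    ```

    The same pattern (validate, normalise, cast, emit) would typically be expressed as Spark DataFrame operations so the DLL framework can schedule and monitor it in production.
    
    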