Data Engineer

Employer: Groupe Renault Romania
Domain: IT Software
  • Job type: full-time
  • Job level: over 5 years of experience
  • Location: Bucharest
  • Updated at: 05.06.2020

    Short company description

    Joining Groupe Renault Romania means becoming part of a team of over 18,000 colleagues with whom you can leave your mark on any stage of a vehicle's development: design, engineering, testing, manufacturing, logistics, sales, after-sales, and financing. We look forward to welcoming you to our mobile, connected, and autonomous team, so that together we can set the world in motion and shape the future of the automobile.

    #MOVEOURWORLDFORWARD

    By applying for the available positions, the Renault Romania entities process the personal data you provide in your CV for recruitment purposes.
    For your full and transparent information, before applying, please review the Privacy Policy on the protection of personal data for recruitment and selection purposes at: https://www.gruprenault.ro/politica-privind-cadrul-general-de-protectie-datelor-cu-caracter-personal-groupe-renault-romania

    Requirements

    • Mastery of SQL
    • Mastery of Scala and/or Python
    • Knowledge of Elasticsearch
    • Knowledge of Hadoop, Kafka, NiFi, Flume, scikit-learn, Jupyter/Zeppelin, R
    • Knowledge of Spotfire, ggplot2, matplotlib, Bokeh
    • Knowledge of TensorFlow, Keras, Theano
    • Expertise in the implementation of end-to-end data processing chains
    • Mastery of distributed development
    • Basic knowledge and interest in the development of ML algorithms
    • Knowledge of data ingestion frameworks
    • Knowledge of Spark and its different modules
    • Knowledge of the AWS (Amazon Web Services) or GCP (Google Cloud Platform) ecosystem
    • Knowledge of the NoSQL database ecosystem
    • Knowledge of building APIs for data products
    • Knowledge of data visualization tools and libraries
    • Ability to debug Spark and distributed systems
    • Ability to explain complex systems in accessible terms
    • Knowledge of algorithm complexity
    • Mastery of data notebooks
    • Expertise in data testing strategies
    • Strong problem-solving skills, initiative, and the ability to work under pressure
    • Excellent interpersonal and communication skills (including the ability to go into detail)
    • Fluent in English (verbal and written)

    Responsibilities

    During project definition:
    • Design of data ingestion chains
    • Design of data preparation chains
    • Basic ML algorithm design
    • Data product design
    • Design of NoSQL data models
    • Design of data visualizations (Spotfire, Qlikview, Power BI)
    • Participation in the selection of services / solutions to be used according to the uses
    • Participation in the development of a data toolbox

    During the iterative realization phase:
    • Implementation of data ingestion chains
    • Implementation of data preparation chains
    • Implementation of basic ML algorithms
    • Implementation of data visualizations
    • Use of ML frameworks
    • Implementation of data products
    • Exposure of data products
    • Setting up NoSQL databases (Cassandra, Elasticsearch, MongoDB)
    • Distributed implementation of data processing
    • Use of functional languages
    • Debugging distributed processes and algorithms
    • Identification and cataloging of reusable elements
    • Contribution to the evolution of working standards
    • Contribution of opinions on data processing problems

    During integration and deployment:
    • Participation in problem solving