Big Data Engineer

Employer: ALTEN Romania
Field: IT Software
Job type: full-time
Experience level: 1-5 years
City: Bucuresti
Updated on: 24.08.2019

    Short company description

    ALTEN Romania, part of the international ALTEN Group - with a unique position as a leader in IT & Engineering Consulting - has supported its clients' development strategies in the fields of innovation, R&D and IT systems since 1996. The company comprises two divisions specialized in its core capabilities, ENGINEERING and IT: ALTEN TECHNO and ALTEN KEPLER.

    Requirements

    ALTEN Group, a worldwide leader in engineering & technology consulting, has a strong commitment to invest in Romania.

    Its employees have the skills and capabilities to cover the whole development cycle and to offer a wide choice of service levels, from technology consulting to complete project outsourcing.

    ALTEN Group has reached 1,000 consultants in Romania, working on ambitious projects from offices in Bucharest, Timisoara, Sibiu, Cluj-Napoca, Iasi, Craiova and Pitesti. ALTEN Romania comprises two divisions specialized in its core capabilities: ENGINEERING and IT.

    These two divisions are ALTEN TECHNO and ALTEN KEPLER, working as an agile organization.
    The ALTEN KEPLER division serves clients both locally and internationally in the fields of telecommunications, banking, automotive, aeronautics, insurance, the pharmaceutical industry and retail.

    Being part of ALTEN KEPLER division means:
    • Delivering high-quality projects for customers.
    • Working with new technologies.
    • Taking part in learning and development programs: international and local trainings and workshops.
    • Aligning your career evolution with the ALTEN Group's career management system.
    • Being part of a company that has worked with local and foreign companies for over 20 years, which brings well-deserved experience & diversity.
    • Flexibility as an everyday reality: this goes beyond classic work-schedule flexibility and includes the possibility of changing both projects and technologies.

    We have the following project that might interest you:
    ensuring the quality of data transformations for connection with a cluster data lake solution (on-premises) or cloud architectures,

    in order to design and implement "end-to-end" solutions: proper operation of data processing, data ingestion, exposure of data through APIs, and data visualization, within a DevOps culture. All the applications are new developments/products for different LOBs (lines of business) inside our Group.
    General skills: experience in the design and implementation of "end-to-end" data streams in Big Data architectures (Hadoop clusters, NoSQL databases, Elasticsearch), as well as in massive distributed data processing environments with frameworks such as Spark/Scala.
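
    To give a concrete flavour of what such an "end-to-end" data stream involves, here is a minimal sketch in plain Python (standard library only). The sample records and field names are invented for illustration, and the three stages stand in for what a Spark pipeline would do at scale: ingestion, preparation/cleaning, and a per-key aggregation.

```python
import json
from collections import defaultdict

# Hypothetical raw events as they might arrive from an ingestion layer.
RAW_LINES = [
    '{"user": "a", "amount": 10.0}',
    '{"user": "b", "amount": 5.5}',
    'not-json',                        # malformed record: dropped during ingestion
    '{"user": "a", "amount": 4.5}',
]

def ingest(lines):
    """Parse JSON lines, silently dropping malformed records."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def prepare(records):
    """Keep only records that carry the fields downstream steps need."""
    for r in records:
        if "user" in r and isinstance(r.get("amount"), (int, float)):
            yield r

def aggregate(records):
    """Total amount per user, analogous to a reduceByKey/groupBy step in Spark."""
    totals = defaultdict(float)
    for r in records:
        totals[r["user"]] += r["amount"]
    return dict(totals)

totals = aggregate(prepare(ingest(RAW_LINES)))
print(totals)  # {'a': 14.5, 'b': 5.5}
```

    In a real Big Data architecture each stage would be a distributed job rather than a generator, but the shape of the chain - ingest, prepare, aggregate - is the same.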

    A typical day might include the following:
    During project definition:
    • Design of data ingestion chains;
    • Design of data preparation chains;
    • Basic ML algorithm design;
    • Data product design;
    • Design of NoSQL data models;
    • Design of data visualizations;
    • Participation in the selection of services/solutions to be used according to the use cases;
    • Participation in the development of a data toolbox;
    During the iterative implementation phase:
    • Implementation of data ingestion chains;
    • Implementation of data preparation chains;
    • Implementation of basic ML algorithms;
    • Implementing data visualizations;
    • Using ML frameworks;
    • Implementation of data products;
    • Exposure of data products;
    • NoSQL database configuration/parameterization;
    • Use of functional languages;
    • Debugging of distributed processes and algorithms;
    • Identification and cataloging of reusable entities;
    • Contribution to the working development standards;
    • Contribution and solution proposals on data processing issues;
    During the integration and deployment phase:
    • Participation in problem solving.
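
    As a hedged illustration of the "basic ML algorithm" work listed above, here is a from-scratch sketch of ordinary least-squares line fitting in Python (standard library only; the data points are invented for the example):

```python
def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed-form solution)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Perfectly linear sample data, y = 2x + 1, so the fit recovers (2.0, 1.0).
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

    In practice such algorithms come from an ML framework rather than being hand-written, but implementing one once makes the debugging duties above far easier.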

    Responsibilities

    Technical requirements:

    • Expertise in the implementation of end-to-end data processing chains;
    • Experience in distributed architecture;
    • Basic knowledge and interest in the development of ML algorithms;
    • Knowledge of different ingestion mechanisms/frameworks;
    • Knowledge of Spark and its different modules;
    • Proficiency in Scala and/or Python;
    • Knowledge of the AWS or GCP environment;
    • Knowledge of NoSQL database environments;
    • Knowledge of building APIs for data products;
    • Knowledge of data visualization (dataviz) tools and libraries;
    • Experience in Spark debugging and distributed systems;
    • Experience extending complex systems;
    • Proficiency in the use of data notebooks;
    • Experience in data testing strategies;
    • Strong problem-solving skills, intelligence, initiative and ability to withstand pressure;
    • Strong interpersonal skills and great communication skills (ability to go into detail);
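
    As an illustration of building an API for a data product, here is a minimal, hypothetical sketch using Python's standard-library WSGI support. The endpoint path and the data behind it are invented for the example; a production version would sit behind a real server and an authentication layer.

```python
import json
from wsgiref.util import setup_testing_defaults

# Hypothetical "data product": a precomputed aggregate exposed over HTTP.
DATA_PRODUCT = {"daily_totals": {"2019-08-24": 1520.75}}

def app(environ, start_response):
    """Minimal WSGI endpoint returning the data product as JSON."""
    if environ.get("PATH_INFO") == "/daily-totals":
        body = json.dumps(DATA_PRODUCT["daily_totals"]).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Exercise the endpoint in-process: no real network server is needed for a smoke test.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/daily-totals"
captured = {}
def start_response(status, headers):
    captured["status"] = status
response = b"".join(app(environ, start_response))
print(captured["status"], response)
```

    The same app could be served with `wsgiref.simple_server` or any WSGI-compatible server; the point is that a data product becomes consumable the moment it sits behind a stable, well-defined endpoint.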

    You should also:
    • Have a very good command of the English language (both written and spoken).
    • Take a proactive approach towards your work and processes.
    • Adapt and adjust to change.
    • Bring an outgoing, "get things done", positive attitude.