Big Data Engineer

Employer: Brainspotting
  • IT Software
  • Job type: full-time
  • Job level: 1-5 years of experience
  • Updated: 29.06.2022

    Short company description

    We are the leading technology recruitment and selection consultancy in Romania, working on highly specialized technologies for permanent and interim positions. Since our inception in 2001 as the first specialized IT recruitment agency in Romania, we have supported over 400 national and global clients in acquiring strategic IT talent for their business.

    We are deeply passionate about technology and take our work seriously. We understand the pressure of the business and the fact that you need fast, quality results. Therefore we always go the extra mile to deliver the best IT&C talent for your business, no matter how challenging the project may seem.


    To be successful in this role you will:

    - Hold a degree in Computer Science, Software Engineering, Mathematics, or a related field, with a minimum of 4 years of experience in information technology
    - Have several years of experience in data warehouse/data lake technical architecture and in Big Data and related tools such as Redshift, S3, Glue, Athena, DynamoDB, Python, PySpark, etc.
    - Have experience with Kafka, Spark, and Hadoop (preferred)
    - Have experience working with cloud data platforms or tools
    - Have familiarity with batch processing and workflow tools such as Airflow
    - AWS certifications are a plus


    You will design, build, and implement production data pipelines from ingestion to consumption within a big data architecture, using AWS-native services or custom programming, and partner with data engineering, architecture, and cloud team members.

    As a Data Engineer, you will be responsible for:
    - Building and supporting reusable frameworks to ingest, integrate, and provision data
    - Automating end-to-end data pipelines with metadata, data quality checks, and auditing
    - Building and supporting a big data platform on the cloud
    - Building and supporting data pipelines for data extraction, transformation, and loading processes using cloud-native services or custom scripting in Python, PySpark, etc.
    - Developing and operationalizing large-scale enterprise data solutions and/or reporting platforms
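To illustrate the kind of pipeline work described above, here is a minimal, framework-free sketch in plain Python of the ingest, quality-check, transform, and load-with-audit pattern. In practice this would be built with PySpark or AWS Glue rather than the standard library, and all function and field names here are illustrative assumptions, not part of the role's actual stack.

```python
import csv
import io
import json

def extract(raw_csv):
    """Ingest raw CSV text into a list of row dicts (hypothetical source)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def quality_check(rows, required=("id", "amount")):
    """Split rows into (good, bad) based on required fields, for auditing."""
    good, bad = [], []
    for row in rows:
        (good if all(row.get(k) for k in required) else bad).append(row)
    return good, bad

def transform(rows):
    """Cast amounts to float; a stand-in for real business logic."""
    return [{**row, "amount": float(row["amount"])} for row in rows]

def run_pipeline(raw_csv):
    """End-to-end run: extract, validate, transform, and emit an audit record."""
    rows = extract(raw_csv)
    good, bad = quality_check(rows)
    loaded = transform(good)
    audit = {"ingested": len(rows), "rejected": len(bad), "loaded": len(loaded)}
    return loaded, audit

# Example run: one of three rows is missing its amount and gets rejected.
raw = "id,amount\n1,10.5\n2,\n3,7.0\n"
loaded, audit = run_pipeline(raw)
print(json.dumps(audit))  # → {"ingested": 3, "rejected": 1, "loaded": 2}
```

The audit record is the key piece: production pipelines of this kind emit per-run counts (and usually metadata such as timestamps and source identifiers) so that data quality can be monitored downstream.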