Evalueserve is a global professional services provider offering research, analytics, and data management
services. We’re powered by mind+machine – a unique combination of human expertise and best-in-class
technologies that use smart algorithms to simplify key tasks. This approach enables us to design and
manage processes that can generate and harness insights on a large scale, significantly cutting costs and
timescales and helping businesses that partner with us to overtake the competition.
We work with clients across a wide range of industries and business functions, helping them to make
better decisions faster; reach new levels of efficiency and effectiveness; and see a tangible impact on their
top and bottom line.
Good understanding of other Azure services such as Azure Data Lake Analytics & U-SQL and Azure SQL DW
Good understanding of the Azure Databricks platform and the ability to build data analytics solutions that deliver the required performance and scale
Demonstrated analytical and problem-solving skills, particularly those that apply to a big data environment
Good understanding of Modern Data Warehouse/Lambda architecture and data warehousing concepts
Proficient with a source code control system such as Git, and with CI tools such as Jenkins
Strong hands-on coding ability
Ability to use Microsoft Word to create required technical documentation
Excellent written and verbal communication skills in English
University graduate or post-graduate degree
Hands-on experience with Teradata and the Azure stack (Azure Data Lake, Azure Data Factory, Azure HDInsight, Azure Databricks) is mandatory
8 years of expertise with any ETL tool (e.g., SSIS, Informatica, DataStage)
8 years of expertise in implementing data warehousing solutions
3 to 10 years of experience as a Data Engineer in an Azure big data environment
Programming experience in Scala or Python, plus SQL
As a Data Engineer, you will work with multiple teams to deliver solutions on Teradata and the Azure cloud using core cloud data warehouse tools (Azure Data Factory, Azure HDInsight, Azure Databricks, Azure SQL DW, and other big data technologies), in addition to building the next generation of application data platforms (not infrastructure) and/or improving recent implementations. Note: this is an application-side data engineering role, not an infrastructure position. You must be able to analyze data and develop strategies for populating data lakes, and may be called upon to do complex coding in U-SQL, Scala or Python, and T-SQL.
Work as part of a team to develop cloud data and analytics solutions
Participate in the development of cloud data warehouses, cloud data migration, data as a service, and business intelligence solutions
Provide forward-thinking data and analytics solutions
Code complex U-SQL, Spark (Scala or Python), T-SQL, and stored procedures
Wrangle heterogeneous data
Develop modern data lake and data warehouse solutions using the Azure stack (Azure Data Lake, Azure Data Factory, Azure HDInsight, Azure Databricks, Azure SQL DW, etc.)
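To illustrate the kind of data wrangling this role involves, the sketch below merges two heterogeneous sources (a CSV feed and a JSON feed) into one normalized record set using plain Python, one of the role's stated languages. All file contents, field names, and the `wrangle` helper are hypothetical examples, not part of the job description; in practice this work would typically run at scale in Spark on Azure Databricks.

```python
import csv
import io
import json

# Hypothetical heterogeneous inputs: the same customers arriving as CSV and JSON.
CSV_SOURCE = """customer_id,name,country
1,Alice,CH
2,Bob,IN
"""

JSON_SOURCE = '[{"customer_id": "2", "spend": 120.5}, {"customer_id": "1", "spend": 80.0}]'


def wrangle(csv_text: str, json_text: str) -> list[dict]:
    """Merge CSV customer records with JSON spend records on customer_id."""
    # Index the CSV rows by their join key.
    customers = {row["customer_id"]: dict(row)
                 for row in csv.DictReader(io.StringIO(csv_text))}
    # Enrich each matching customer with its spend from the JSON feed.
    for record in json.loads(json_text):
        key = str(record["customer_id"])
        if key in customers:
            customers[key]["spend"] = float(record["spend"])
    # Sort by key so downstream loads are deterministic.
    return sorted(customers.values(), key=lambda r: r["customer_id"])
```

The same join-and-enrich pattern maps directly onto Spark DataFrames when the volumes demand it.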