Senior ETL Data Engineer to create ELT/ETL data pipelines and analytics for our large banking client - 35190

Toronto, ON
  • Number of positions available: 1
  • To be discussed
  • Contract job
  • Starting date: 1 position to fill as soon as possible

Our large banking client is looking for a Senior ETL Data Engineer to create ELT/ETL data pipelines and analytics.


Location Address: Toronto - Hybrid model (1 day a week in office - no specific days)

Contract Duration: 3 months, to be extended beyond January.

Possibility of extension & conversion to FTE: Possible, depending on performance and funding approval.

Schedule Hours: 9 am - 5 pm, Monday to Friday (no overtime; to be discussed with the hiring manager if needed)


Story Behind the Need

Business group: GWRT Data and Analytics Technology (OU20015033) - builds the data pipelines and analytics for the bank.


Project: International Banking Salesforce effectiveness - supporting the migration of IB Commercial Banking data to Google Cloud Platform (GCP) and to CB Commercial Banking's Salesforce Financial Services Cloud (FSC) instance to create a global commercial Salesforce org. A total of 10 people are working on this project, which is currently in pre-planning.


Candidate Requirements/Must Have Skills:

• 10+ years of experience with Data Warehouse / Data Platforms

• 5 years of experience creating ELT data pipelines from scratch, working with structured, semi-structured, and unstructured data and SQL.

• 2 years of experience configuring and using data ingestion tools such as Fivetran, Qlik, or Airbyte.

• 3+ years of experience with cloud platforms: AWS, Azure, GCP

• 3+ years of experience working as a data developer or data engineer, including programming and ETL/ELT processes for data integration.

• Good understanding of continuous integration and continuous deployment (CI/CD) pipelines and experience working with source control systems such as GitHub or Bitbucket.


Nice-To-Have Skills:

• Experience in data modelling, manipulating large data sets, writing raw SQL, and applying other data cleaning techniques.

• Python

• DBT


Typical Day in Role:

• Design, develop and maintain robust data pipelines for ingestion, transformation, and distribution of large datasets.

• Utilize services and tools to automate data workflows and streamline the data engineering process.

• Collaborate with stakeholders and product managers to analyze reporting needs and build data mappings and models.

• Monitor application and pipeline performance.

• Conduct data quality checks.


Soft Skills Required:

• Expert at problem solving.

• Experience collaborating with DevOps and Scrum teams.

• Demonstrated team player with strong communication skills and a track record of successful delivery of product development.

• Ability to collaborate across organizational boundaries, build relationships, and achieve broader organizational goals.

• Ability to adapt to multiple conflicting deadlines, grasp new concepts quickly, and work independently with minimal supervision.


Education & Certificates:

Bachelor's degree in a technical field such as computer science, computer engineering, or a related field, or sufficient equivalent experience, is required.


Best vs. Average Candidate:

The ideal candidate is someone who can quickly adapt to any change, solve problems effectively, and implement sustainable solutions.


1st round - Interview with the hiring manager and tech lead.
