Applicants must be a Citizen or a Permanent Resident.
Roles & Responsibilities
• 6 to 9 years of hands-on development experience
• Strong experience with big data platforms
• Proficient in Apache Spark and Python development
Job Description
We are seeking an experienced GCP Data Architect to lead the cloud data modernization initiative for enterprise data platforms, transitioning the legacy Teradata data warehouse, Informatica ETL pipelines, and Control-M scheduling into a scalable, secure, and cloud-native architecture on Google Cloud Platform (GCP).
Responsibilities:
• Architect and lead the modernization of legacy data warehouse and ETL systems into GCP-native services.
• Design scalable data pipelines using Teradata, Informatica, SQL, BTEQ, and Python to enable reliable enterprise data flows.
• Design scalable data warehouse architecture on BigQuery, including landing, staging, curated, and consumption layers.
• Establish standardized ingestion and orchestration frameworks using Cloud Composer, Dataflow, and Cloud Storage.
• Ensure data validation, reconciliation, and functional parity between legacy outputs and GCP-transformed datasets.
• Define data modeling standards optimized for BigQuery performance and cost efficiency.
• Optimize workloads for performance, scalability, and cost governance on GCP.
• Lead architecture reviews, assessments, risk mitigation planning, and technical governance throughout the migration lifecycle.
• Collaborate with business SMEs, data engineering teams, cloud infrastructure stakeholders, and end users to align data strategies with business needs and regulatory requirements.