Description
We are seeking a hands-on, highly skilled, and motivated individual to join our dynamic team as a Big Data Solutions Architect. As a key member of our data engineering team, you will lead the design and implementation of GAM’s hybrid Data Platform. Your primary focus will be designing, implementing, and maintaining robust cloud data infrastructure and solutions that support our organization’s data needs and advance our data engineering, BI, and data analytics capabilities.
Lead and drive the bank’s Data Strategy and its move toward a more hybrid architecture (the bank is currently mostly on-premises, but is looking to expand its cloud presence).
Requirements
- Bachelor’s or master’s degree in Computer Science, Information Systems, or a related field.
- 3-5 years of architecture experience on both the Databricks and Snowflake platforms, designing and implementing big data solutions, including migration toward a hybrid architecture.
- Professional certification in designing solutions for the Azure and/or AWS public clouds.
- Strong knowledge of ETL processes, OLAP/OLTP systems, and SQL and NoSQL databases.
- Experience building batch and real-time data pipelines leveraging big data technologies such as Spark, Airflow, NiFi, Kafka, Cassandra, and Elasticsearch.
- Expertise in DataOps/DevOps practices for deploying and monitoring automated data pipelines and for managing the data lifecycle.
- Proficiency in writing and optimizing SQL queries and in at least one programming language such as Java, Scala, and/or Python.
- A continuous-learning mindset and an enjoyment of working on open-ended problems.
Nice to have
- System administration experience, including with the Docker and Kubernetes platforms.
- Experience with OpenShift, S3, Trino, Ranger and Hive.
- Knowledge of machine learning and data science concepts and tools.
- Experience with BI tools.