Description
Competent data infrastructure development with limited coaching and guidance:
- Pipeline Design and Development: Architects and builds scalable data pipelines using modern ETL (Extract, Transform, Load) tools and frameworks such as dbt (Data Build Tool), Apache Airflow, or similar. Automates data ingestion from a variety of sources, including databases, APIs, and third-party services.
- Data Storage and Management: Designs and implements data warehousing solutions on platforms such as Snowflake, Redshift, or BigQuery.
Requirements
- PySpark - Intermediate
- Python - Intermediate
Software / Tool Skills
- Databricks - Intermediate (4-6 years)
- Snowflake - Intermediate (4-6 years)
- Data Warehouse / Data Engineering
Benefits
Competitive compensation and benefits package:
- Competitive salary and performance-based bonuses
- Comprehensive benefits package
- Career development and training opportunities
- Flexible work arrangements (remote and/or office-based)
- Dynamic and inclusive work culture within a globally renowned group
- Private Health Insurance
- Pension Plan
- Paid Time Off
- Training & Development
Note: Benefits vary by employee level.