# Staff Data Engineer

**Company**: Bayer
**Work arrangement**: remote
**Experience**: staff
**Job type**: full-time
**Salary**: $114,400 to $171,600
**Category**: Engineering
**Industry**: Healthcare

**Apply**: https://talent.bayer.com/careers/job/562949976928777
**Canonical**: https://yubhub.co/jobs/job_7275ef33-009

## Description

At Bayer, we're seeking a Staff Data Engineer to join our team. In this role, you will design and lead the implementation of data flows that connect operational systems with analytics and business intelligence (BI) systems. You will recognize opportunities to reuse existing data flows, lead the build of data streaming systems, optimize code so that processes perform efficiently, and lead work on database management.

### Communicating Between Technical and Non-Technical Colleagues

As a Staff Data Engineer, you will communicate effectively with technical and non-technical stakeholders, support and host discussions within a multidisciplinary team, and act as an advocate for the team externally.

### Data Analysis and Synthesis

You will undertake data profiling and source system analysis, and present clear insights to colleagues to support the end use of the data.

### Data Development Process

You will design, build and test data products that are complex or large in scale, and build teams to deliver data integration services.

### Data Innovation

You will understand the impact on the organization of emerging trends in data tools, analysis techniques and data usage.

### Data Integration Design

You will select and implement the appropriate technologies to deliver resilient, scalable and future-proofed data solutions and integration pipelines.

### Data Modeling

You will produce relevant data models across multiple subject areas, explain which models to use for which purpose, understand industry-recognized data modeling patterns and standards and when to apply them, and compare and align different data models.

### Metadata Management

You will design appropriate metadata repositories and present changes to existing ones, understand a range of tools for storing and working with metadata, and provide oversight and advice to less experienced members of the team.

### Problem Resolution

You will respond to problems in databases, data processes, data products and services as they occur; initiate actions, monitor services and identify trends to resolve problems; and determine the appropriate remedy, assisting with its implementation and with preventative measures.

### Programming and Build

You will use agreed standards and tools to design, code, test, correct and document moderate-to-complex programs and scripts from agreed specifications and subsequent iterations, and collaborate with others to review specifications where appropriate.

### Technical Understanding

You will understand the core technical concepts related to the role, and apply them with guidance.

### Testing

You will review requirements and specifications and define test conditions, identify issues and risks associated with the work, and analyze and report on test activities and results.

## Skills

### Required
- Proficiency in a programming language such as Python or Java
- Experience with Big Data technologies such as Hadoop, Spark, and Kafka
- Familiarity with ETL processes and tools
- Knowledge of SQL and NoSQL databases
- Strong understanding of relational databases
- Experience with data warehousing solutions
- Proficiency with cloud platforms
- Expertise in data modeling and design
- Experience in designing and building scalable data pipelines
- Experience with RESTful APIs and data integration

### Nice to have
- Relevant certifications (e.g., GCP Certified, AWS Certified, Azure Certified)
- Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field
- Strong analytical and communication skills
- Ability to work collaboratively in a team environment
- High level of accuracy and attention to detail
