Senior Data Engineer at Bayer in St. Louis, MO, US

Apply for the Senior Data Engineer position at Bayer in St. Louis, MO, US. Find the best jobs for you effortlessly with InJob.AI, your ultimate solution for job search. Discover top job opportunities and streamline your job search process.

Job Description

Data Engineer

The mission of Bayer Crop Science is centered on developing agricultural solutions for a sustainable future that will include a global population projected to eclipse 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions, from using biotechnology and plant breeding to produce the best possible seeds to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre.

To make this possible, Bayer collects terabytes of data across all aspects of its operations: genome sequencing, crop field trials, manufacturing, supply chain, financial transactions, and everything in between. There is an enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, accelerating the pace and quality of every crop system development decision.

What you will do is why you should join us:
- Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets
- Take pride in software craftsmanship; apply a deep knowledge of algorithms and data structures to continuously improve and innovate
- Work with other top-level talent solving a wide range of complex and unique challenges that have real-world impact
- Explore relevant technology stacks to find the best fit for each dataset
- Pursue opportunities to present our work at relevant technical conferences:
  - Google Cloud Next 2019: https://www.youtube.com/watch?v=fqvuyOID6v4
  - GraphConnect 2015: https://www.youtube.com/watch?v=6KEvLURBenM
  - Google Cloud Blog: https://cloud.google.com/blog/products/containers-kubernetes/google-kubernetes-engine-clusters-can-have-up-to-15000-nodes
- Project your talent into relevant projects; strength of ideas trumps position on an org chart

If you share our values, you should have:
- At least 7 years of experience in software engineering
- At least 2 years of experience with Go
- Proven experience (2 years) building and maintaining data-intensive APIs using a RESTful approach
- Experience with stream processing using Apache Kafka
- A level of comfort with unit testing and test-driven development methodologies
- Familiarity with creating and maintaining containerized application deployments with a platform like Docker
- A proven ability to build and maintain cloud-based infrastructure on a major cloud provider like AWS, Azure, or Google Cloud Platform
- Experience in data modeling for large-scale databases, either relational or NoSQL

Bonus points for:
- Experience with protocol buffers and gRPC
- Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes
- Experience working with scientific datasets, or a background in the application of quantitative science to business problems
- Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation

Location: Creve Coeur, MO, or remote from another location.

AI-Powered Job Insights

Senior Data Engineer Position Alert! Bayer, a leader in agricultural solutions, is seeking an experienced Data Engineer to join their team in St. Louis, MO or remotely. This role is vital for transforming complex scientific datasets into innovative software solutions.

📍 Location: St. Louis, MO (or Remote)
💼 Position: Senior Data Engineer
⏰ Type: Contract
📅 Date Posted: 2024-07-22

Role Summary:
- Serve as a senior member of a data engineering team focused on developing distributed analysis capabilities.
- Engage in software craftsmanship with a strong emphasis on algorithms and data structures.
- Collaborate with top-tier talent on complex, unique challenges that have significant real-world impact.

What You'll Do:
- Evaluate and explore relevant technology stacks for optimal dataset handling.
- Have opportunities to present work at esteemed technical conferences.
- Contribute ideas and work on projects that align with core values, valuing innovation over hierarchy.

What's Needed:
- Minimum of 7 years of experience in software engineering.
- At least 2 years of experience with the Go programming language.
- At least 2 years of proven experience building and maintaining data-intensive APIs using a RESTful approach.
- Experience with stream processing, particularly with Apache Kafka.
- Familiarity with unit testing and test-driven development.
- Skills in creating and maintaining containerized applications using Docker.
- Proficient in building cloud-based infrastructure on major platforms like AWS, Azure, or Google Cloud Platform.
- Experience in data modeling for large-scale databases, either relational or NoSQL.

Bonus Points:
- Experience with protocol buffers and gRPC.
- Knowledge of Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes.
- Background in working with scientific datasets or quantitative science related to business problems.
- Bioinformatics experience, especially with variant data storage and data mining.

Top Interview Questions

  • Q: Can you explain your experience with building and maintaining data-intensive APIs, particularly using a RESTful approach?

    A: In my previous role, I developed a RESTful API for a crop monitoring application that processed real-time data from IoT devices in the field. This involved implementing robust authentication, response caching for efficiency, and thorough documentation using Swagger. Additionally, I utilized Flask to structure the API, ensuring scalability and ease of maintenance, which allowed seamless integration with frontend applications.
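
    Since this role emphasizes Go, a minimal sketch of a comparable REST endpoint using only Go's standard library follows; the /readings resource and its payload are hypothetical, loosely modeled on the sensor-data scenario above:

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // Reading is a hypothetical sensor observation from a field device.
    type Reading struct {
        DeviceID string  `json:"device_id"`
        Metric   string  `json:"metric"`
        Value    float64 `json:"value"`
    }

    func readingsHandler(w http.ResponseWriter, r *http.Request) {
        switch r.Method {
        case http.MethodGet:
            // A real service would page through a datastore; this returns
            // a canned response to keep the sketch self-contained.
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode([]Reading{
                {DeviceID: "dev-1", Metric: "soil_moisture", Value: 0.31},
            })
        case http.MethodPost:
            var in Reading
            if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
                http.Error(w, "invalid JSON body", http.StatusBadRequest)
                return
            }
            w.WriteHeader(http.StatusCreated)
        default:
            http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        }
    }

    func main() {
        http.HandleFunc("/readings", readingsHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }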

  • Q: How have you applied stream processing in your past projects, particularly with Apache Kafka?

    A: In a project focused on agricultural data analysis, I used Apache Kafka to create a real-time data pipeline that ingested sensor data from fields. I configured producers to publish data streams, which were then processed by various consumers to generate analytics dashboards. This setup not only enabled immediate insights but also improved decision-making and operational efficiency across our teams.
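
    A minimal sketch of the consumer side of such a pipeline in Go, assuming the github.com/segmentio/kafka-go client (the client library, broker address, topic, and group names are all assumptions, not details from the posting):

    package main

    import (
        "context"
        "log"

        "github.com/segmentio/kafka-go"
    )

    func main() {
        r := kafka.NewReader(kafka.ReaderConfig{
            Brokers: []string{"localhost:9092"},  // hypothetical broker
            Topic:   "field-sensor-readings",     // hypothetical topic
            GroupID: "analytics-dashboard",       // hypothetical consumer group
        })
        defer r.Close()

        for {
            // ReadMessage blocks until a message arrives; with a GroupID set,
            // offsets are committed automatically after each read.
            msg, err := r.ReadMessage(context.Background())
            if err != nil {
                log.Fatal(err)
            }
            log.Printf("partition=%d offset=%d key=%s value=%s",
                msg.Partition, msg.Offset, msg.Key, msg.Value)
        }
    }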

  • Q: Describe a challenge you faced while working with cloud-based infrastructure and how you addressed it.

    A: I once encountered issues with provisioning cloud resources on AWS due to unforeseen spikes in user traffic during the harvest season. To address this, I implemented auto-scaling groups and optimized our EC2 instances for performance. By defining clear scaling policies based on metrics like CPU utilization and using CloudWatch for monitoring, we ensured our services remained responsive even during peak usage.
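
    For reference, a target-tracking scaling policy of the kind described can be attached with the AWS CLI roughly as follows (the group name, policy name, and 60% CPU target are hypothetical; the command and JSON shape follow the AWS Auto Scaling API):

    # Attach a target-tracking policy that scales on average CPU utilization.
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name harvest-api-asg \
        --policy-name cpu-target-60 \
        --policy-type TargetTrackingScaling \
        --target-tracking-configuration file://cpu-target.json

    # cpu-target.json
    {
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        }
    }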

  • Q: Can you walk us through your approach to containerizing applications using Docker?

    A: My approach begins with creating a Dockerfile that outlines the application environment, including dependencies and configurations. I ensure that the image is lightweight and optimized, usually by selecting a minimal base image. After building the Docker image, I implement both unit and integration tests within a CI/CD pipeline to automate deployments. This process greatly simplifies application deployment across different environments, ensuring consistency and reducing downtime.
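
    As a sketch of that workflow, a multi-stage Dockerfile for a Go service might look like the following (the module layout, binary name, and distroless base image are assumptions, illustrating the "minimal base image" point):

    # Build stage: compile a static Go binary.
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/server ./cmd/server

    # Runtime stage: a minimal, shell-free image keeps the deployment small.
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /bin/server /server
    EXPOSE 8080
    ENTRYPOINT ["/server"]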

  • Q: How do you ensure your data models are effective for large-scale databases, either relational or NoSQL?

    A: I typically start by thoroughly understanding the data requirements and relationships involved. For relational databases, I focus on normalization to avoid redundancy, and I leverage indexing for performance optimization. In the case of NoSQL, I determine the usage patterns and data access needs, choosing data structures that align with those patterns. I also employ techniques like sharding for scalability and regularly review query performance to refine our models accordingly.
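
    A brief sketch of the relational half of that approach, written as PostgreSQL-flavored DDL (the dialect, tables, and access pattern are assumptions for illustration):

    -- Normalized schema: device metadata lives in one place,
    -- so readings carry only a foreign key.
    CREATE TABLE device (
        device_id  BIGINT PRIMARY KEY,
        field_name TEXT NOT NULL
    );

    CREATE TABLE reading (
        reading_id  BIGSERIAL PRIMARY KEY,
        device_id   BIGINT NOT NULL REFERENCES device (device_id),
        metric      TEXT NOT NULL,
        observed_at TIMESTAMPTZ NOT NULL,
        value       DOUBLE PRECISION NOT NULL
    );

    -- Index chosen for a known access pattern:
    -- "recent readings for one device, newest first".
    CREATE INDEX reading_device_time_idx
        ON reading (device_id, observed_at DESC);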

200+ professionals have found their dream job with InJob.ai this week.

Salary Benefits

$142,338 - $180,231 per year

Want to apply directly?

Apply for the Senior Data Engineer position at Bayer in St. Louis, MO, US using https://www.indeed.com/viewjob?jk=bbe6bd408e2b305c


Similar Jobs found by InJob.AI


Senior Data Engineer

Grimco, Inc., St. Louis, MO, US

Data Engineer

Build-A-Bear Workshop, St. Louis, MO, US

Associate Data Engineer

Benson Hill, St. Louis, MO, US

Data Engineer II

Benson Hill, St. Louis, MO, US

Data Engineer (Azure)

Emergent Software, St. Louis, MO, US

Data Engineer Intern/Co-op

ARCO a Family of Construction Companies, St. Louis, MO, US


Frequently Asked Questions

Still have a question? Check out our FAQ section below.

InJob searches for the best jobs based on your profile and automatically generates customized cover letters for you, saving you hours of job-hunting time.

InJob creates your profile by having a conversation with you to learn about your skills and requirements. It also scans your resume to gather information about your experiences, skills, and achievements. This information is used to build your profile behind the scenes, which is then used to match jobs and to generate a personalized cover letter for each opportunity.

InJob searches for job opportunities across a wide range of sources, including LinkedIn, Indeed, and hundreds of other job boards, to find hidden gems. Its search is not limited to these, ensuring it covers as many potential listings as possible. It also searches the career pages of individual companies that suit your target industry and location, so you can apply there as well.

InJob is constantly active, scanning for fresh job opportunities every single minute. This ensures that you are the first person to apply to new job listings that align with your profile.

InJob plays matchmaker by comparing your profile and resume with job listings. Each job receives a score from 1-10, indicating how well you match with it.

Yes, in an upcoming update; this will be the main differentiator. InJob will apply for jobs on your behalf, targeting top matches and crafting a custom cover letter for each job so your application stands out. InJob will also handle the application process itself, including visiting company websites and filling out forms.

Yes, in an upcoming update. InJob will provide an interactive dashboard that serves as mission control for your job search. It will display all the jobs InJob has applied to on your behalf, along with their current status, and let you track which companies have shown interest in your profile and view the feedback they provided.

Yes, as an upcoming feature. InJob will collect all feedback, both positive and constructive, and present it to you, so you know exactly where you stand in the job market and gain insight into how to improve your skills.