Apply for the Senior Data Engineer position at Bayer in St. Louis, MO, US. Find the best jobs for you effortlessly with InJob.AI, your ultimate job-search solution. Discover top job opportunities and streamline your job search process.

Job Description
<h3>Data Engineer</h3><p>The mission of Bayer Crop Science is centered on developing agricultural solutions for a sustainable future that will include a global population projected to eclipse 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions, from using biotechnology and plant breeding to produce the best possible seeds to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre.</p><p>To make this possible, Bayer collects terabytes of data across all aspects of its operations, from genome sequencing, crop field trials, manufacturing, supply chain, and financial transactions to everything in between. There is an enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, accelerating the pace and quality of all crop system development decisions.</p><h3>What you will do is why you should join us:</h3><ul><li>Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets</li><li>Take pride in software craftsmanship; apply a deep knowledge of algorithms and data structures to continuously improve and innovate</li><li>Work with other top-level talent solving a wide range of complex and unique challenges that have real-world impact</li><li>Explore relevant technology stacks to find the best fit for each dataset</li><li>Pursue opportunities to present our work at relevant technical conferences:<ul><li>Google Cloud Next 2019: <a href='https://www.youtube.com/watch?v=fqvuyOID6v4'>https://www.youtube.com/watch?v=fqvuyOID6v4</a></li><li>GraphConnect 2015: <a href='https://www.youtube.com/watch?v=6KEvLURBenM'>https://www.youtube.com/watch?v=6KEvLURBenM</a></li><li>Google Cloud Blog: <a href='https://cloud.google.com/blog/products/containers-kubernetes/google-kubernetes-engine-clusters-can-have-up-to-15000-nodes'>https://cloud.google.com/blog/products/containers-kubernetes/google-kubernetes-engine-clusters-can-have-up-to-15000-nodes</a></li></ul></li><li>Project your talent into relevant projects. Strength of ideas trumps position on an org chart</li></ul><h3>If you share our values, you should have:</h3><ul><li>At least 7 years' experience in software engineering</li><li>At least 2 years' experience with Go</li><li>Proven experience (2 years) building and maintaining data-intensive APIs using a RESTful approach</li><li>Experience with stream processing using Apache Kafka</li><li>A level of comfort with unit testing and test-driven development methodologies</li><li>Familiarity with creating and maintaining containerized application deployments with a platform like Docker</li><li>A proven ability to build and maintain cloud-based infrastructure on a major cloud provider such as AWS, Azure, or Google Cloud Platform</li><li>Experience with data modeling for large-scale databases, either relational or NoSQL</li></ul><h3>Bonus points for:</h3><ul><li>Experience with protocol buffers and gRPC</li><li>Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, and Google Kubernetes Engine or Kubernetes</li><li>Experience working with scientific datasets, or a background in the application of quantitative science to business problems</li><li>Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation</li></ul><p><strong>Location:</strong> Creve Coeur, MO, or remote from another location.</p>
AI Powered Job Insights
Senior Data Engineer Position Alert! Bayer, a leader in agricultural solutions, is seeking an experienced Data Engineer to join its team in St. Louis, MO or remotely. This role is vital for transforming complex scientific datasets into innovative software solutions.

📍 Location: St. Louis, MO (or Remote)
💼 Position: Senior Data Engineer
⏰ Type: Contract
📅 Date Posted: 2024-07-22

Role Summary:
- Serve as a senior member of a data engineering team focused on developing distributed analysis capabilities.
- Practice software craftsmanship with a strong emphasis on algorithms and data structures.
- Work alongside top-tier talent on complex challenges with significant real-world impact.

What You'll Do:
- Evaluate and explore relevant technology stacks for optimal dataset handling.
- Present work at esteemed technical conferences.
- Contribute ideas and work on projects that align with core values, valuing innovation over hierarchy.

What's Needed:
- Minimum of 7 years of experience in software engineering.
- At least 2 years of experience with the Go programming language.
- At least 2 years building and maintaining data-intensive RESTful APIs.
- Experience with stream processing, particularly with Apache Kafka.
- Familiarity with unit testing and test-driven development.
- Skill in creating and maintaining containerized applications with Docker.
- Proficiency in building cloud-based infrastructure on major platforms such as AWS, Azure, or Google Cloud Platform.
- Experience in data modeling for large-scale databases, both relational and NoSQL.

Bonus Points:
- Experience with protocol buffers and gRPC.
- Knowledge of Google Cloud Platform, Apache Beam/Dataflow, and Kubernetes.
- Background in working with scientific datasets or applying quantitative science to business problems.
- Bioinformatics experience, especially with variant data storage and data mining.
Top Interview Questions
Q: Can you describe your experience building and maintaining data-intensive RESTful APIs?
A: In my previous role, I developed a RESTful API for a crop monitoring application that processed real-time data from IoT devices in the field. This involved implementing robust authentication, response caching for efficiency, and thorough documentation using Swagger. Additionally, I used Flask to structure the API, ensuring scalability and ease of maintenance, which allowed seamless integration with frontend applications.
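The answer above describes a Flask API, but since this role centers on Go, the same pattern (a JSON endpoint backed by a small response cache) can be sketched with Go's standard library alone. The `SensorReading` type, the device-keyed cache, and the route are illustrative assumptions, not details from the posting.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"sync"
)

// SensorReading is a hypothetical payload from a field IoT device.
type SensorReading struct {
	DeviceID string  `json:"device_id"`
	Moisture float64 `json:"moisture"`
}

// readingCache is a minimal concurrency-safe store, standing in for
// the "response caching for efficiency" mentioned in the answer.
type readingCache struct {
	mu   sync.RWMutex
	data map[string]SensorReading
}

func newReadingCache() *readingCache {
	return &readingCache{data: make(map[string]SensorReading)}
}

func (c *readingCache) put(r SensorReading) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[r.DeviceID] = r
}

func (c *readingCache) get(id string) (SensorReading, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	r, ok := c.data[id]
	return r, ok
}

// handler serves GET /readings?device=<id>, returning the cached
// reading as JSON or a 404 when the device is unknown.
func (c *readingCache) handler(w http.ResponseWriter, req *http.Request) {
	r, ok := c.get(req.URL.Query().Get("device"))
	if !ok {
		http.NotFound(w, req)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(r)
}

func main() {
	c := newReadingCache()
	c.put(SensorReading{DeviceID: "field-7", Moisture: 0.31})
	http.HandleFunc("/readings", c.handler)
	// http.ListenAndServe(":8080", nil) would serve this in a real app;
	// here we just exercise the cache directly.
	r, _ := c.get("field-7")
	fmt.Println(r.Moisture)
}
```

In a production service the cache would typically carry TTLs and the handler would sit behind authentication middleware, as the answer describes.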
Q: How have you used Apache Kafka for stream processing?
A: In a project focused on agricultural data analysis, I used Apache Kafka to create a real-time data pipeline that ingested sensor data from fields. I configured producers to publish data streams, which were then processed by various consumers to generate analytics dashboards. This setup not only enabled immediate insights but also improved decision-making and operational efficiency across our teams.
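A real pipeline would use a Kafka client library such as sarama or confluent-kafka-go against a running broker; to keep the sketch self-contained, the producer/consumer pattern from the answer is simulated below with a Go channel standing in for a topic. The `Reading` type and field names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// Reading is a hypothetical field-sensor event.
type Reading struct {
	Field string
	Value float64
}

// produce publishes readings to the stream and closes it when done,
// playing the role of a Kafka producer writing to a topic.
func produce(stream chan<- Reading, readings []Reading) {
	for _, r := range readings {
		stream <- r
	}
	close(stream)
}

// consume aggregates per-field averages from the stream, playing the
// role of a Kafka consumer feeding an analytics dashboard.
func consume(stream <-chan Reading) map[string]float64 {
	sums := map[string]float64{}
	counts := map[string]int{}
	for r := range stream {
		sums[r.Field] += r.Value
		counts[r.Field]++
	}
	avgs := make(map[string]float64, len(sums))
	for f, s := range sums {
		avgs[f] = s / float64(counts[f])
	}
	return avgs
}

func main() {
	stream := make(chan Reading, 8) // stands in for a Kafka topic
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		produce(stream, []Reading{{"north", 1.0}, {"north", 3.0}, {"south", 2.0}})
	}()
	avgs := consume(stream)
	wg.Wait()
	fmt.Println(avgs["north"], avgs["south"])
}
```

Swapping the channel for a real topic changes only the transport; the producer/consumer decoupling shown here is the core of the pattern the answer describes.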
Q: Tell us about a challenge you faced with cloud-based infrastructure and how you resolved it.
A: I once encountered issues with provisioning cloud resources on AWS due to unforeseen spikes in user traffic during the harvest season. To address this, I implemented auto-scaling groups and optimized our EC2 instances for performance. By defining clear scaling policies based on metrics like CPU utilization and using CloudWatch for monitoring, we ensured our services remained responsive even during peak usage.
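AWS target-tracking scaling policies of the kind the answer mentions essentially adjust capacity so that observed utilization converges toward a target. A simplified version of that decision rule (function name, bounds, and numbers are illustrative, not AWS's actual algorithm) can be sketched as:

```go
package main

import (
	"fmt"
	"math"
)

// desiredCapacity mirrors a target-tracking rule: scale the current
// instance count by observed/target CPU utilization, rounding up,
// then clamp the result to the fleet's min/max bounds.
func desiredCapacity(current int, observedCPU, targetCPU float64, min, max int) int {
	if current < 1 {
		current = 1
	}
	want := int(math.Ceil(float64(current) * observedCPU / targetCPU))
	if want < min {
		want = min
	}
	if want > max {
		want = max
	}
	return want
}

func main() {
	// Harvest-season spike: 4 instances running at 90% CPU against
	// a 60% target suggests growing the fleet.
	fmt.Println(desiredCapacity(4, 90, 60, 2, 20))
}
```

Rounding up biases toward over-provisioning during spikes, which matches the goal of staying responsive at peak; CloudWatch would supply the `observedCPU` input in the scenario described.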
Q: What is your approach to containerizing and deploying applications with Docker?
A: My approach begins with creating a Dockerfile that outlines the application environment, including dependencies and configurations. I ensure that the image is lightweight and optimized, usually by selecting a minimal base image. After building the Docker image, I run both unit and integration tests within a CI/CD pipeline to automate deployments. This process greatly simplifies application deployment across different environments, ensuring consistency and reducing downtime.
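The lightweight-image approach described above is commonly realized with a multi-stage build: compile in a full toolchain image, then copy only the binary into a minimal runtime image. This sketch assumes a Go service under `./cmd/app` (paths, tags, and the binary name are illustrative):

```dockerfile
# Build stage: compile a static Go binary with the full toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app ./cmd/app

# Runtime stage: minimal base image containing only the binary,
# keeping the final image small as the answer recommends.
FROM gcr.io/distroless/static-debian12
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
```

Copying `go.mod`/`go.sum` before the source lets Docker cache the dependency download layer, so routine code changes rebuild quickly in the CI/CD pipeline described.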
Q: How do you approach data modeling for large-scale relational or NoSQL databases?
A: I typically start by thoroughly understanding the data requirements and relationships involved. For relational databases, I focus on normalization to avoid redundancy, and I leverage indexing for performance optimization. In the case of NoSQL, I determine the usage patterns and data access needs, choosing data structures that align with those patterns. I also employ techniques like sharding for scalability and regularly review query performance to refine our models accordingly.
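The sharding technique mentioned above boils down to a deterministic key-to-shard mapping, so the same record key always routes to the same partition. A minimal sketch (the FNV hash choice, key format, and shard count are illustrative assumptions) is:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor maps a record key to one of n shards using FNV-1a hashing.
// Deterministic hashing guarantees stable routing: a given key always
// lands on the same shard, which reads and writes both rely on.
func shardFor(key string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key)) // hash.Hash32 writes never return an error
	return h.Sum32() % n
}

func main() {
	// Routing is stable across calls for the same key.
	fmt.Println(shardFor("device-42", 8) == shardFor("device-42", 8))
}
```

Note that plain modulo sharding reshuffles most keys when the shard count changes; systems that resize fleets frequently tend to use consistent hashing instead, which is a common refinement of this pattern.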
Want to get matched with your dream job?
Try InJob.ai for Free and Get Matched with 100s of Opportunities Like This!
200+ professionals have found their dream job with InJob.ai this week.

Salary Benefits
$142,338 - $180,231 per year

Want to apply directly?
Apply for the Senior Data Engineer position at Bayer in St. Louis, MO, US via https://www.indeed.com/viewjob?jk=bbe6bd408e2b305c


Grimco, Inc., St. Louis, MO, US
Mercy, Chesterfield, MO, US
Wells Fargo, St. Louis, MO, US
Build-A-Bear Workshop, St. Louis, MO, US
Benson Hill, St. Louis, MO, US
Emergent Software, St. Louis, MO, US
ARCO a Family of Construction Companies, St. Louis, MO, US
Still have a question? Check out our FAQ section below.
