Job Description
At Datadog, we’re on a mission to build the best monitoring platform in the world. We operate at high scale—trillions of data points per day—and high availability, providing always-on alerting, visualization, and tracing for our customers' infrastructure and applications around the globe.
If you’re excited to work on a fast-moving data engineering team with the best open-source data tools at high scale, we want to meet you.
What You Will Do
- Build distributed, high-volume data pipelines that power the core Datadog product
- Do it with Spark, Luigi, Kafka, and other open-source technologies
- Work across the stack, moving fluidly between programming languages: Scala, Java, Python, Go, and more
- Join a tightly knit team solving hard problems the right way
- Own meaningful parts of our service, have an impact, and grow with the company
What We're Looking For
- You have a BS/MS/PhD in a scientific field or equivalent experience
- You have built and operated data pipelines for real customers in production systems
- You are fluent in several programming languages (JVM and otherwise)
- You enjoy wrangling huge amounts of data and exploring new data sets
- You value code simplicity and performance
- You want to work in a fast-paced, high-growth startup environment that respects its engineers and customers
Bonus Points
- You are deeply familiar with Spark and/or Hadoop
- In addition to data pipelines, you’re also quite good with Chef or Puppet
- You’ve built applications that run on AWS
- You’ve built your own data pipelines from scratch, know what goes wrong, and have ideas for how to fix them
Is this you? Send us your resume and a link to your GitHub, if available.