Data Engineer - Metering


Posted: 08/17/2019

Job type: Full-time

Hiring from: US only

Category: All others

About Datadog:

We're on a mission to build the best platform in the world for engineers to understand and scale their systems, applications, and teams.  We operate at high scale—trillions of data points per day—providing always-on alerting, metrics visualization, logs, and application tracing for tens of thousands of companies. Our engineering culture values pragmatism, honesty, and simplicity to solve hard problems the right way.

The team:

The Revenue and Growth Team builds and runs the data pipelines, container-native services, and systems to quantify our customers’ usage across all Datadog products. This team is at the leading edge of any new product we release.

The opportunity:

As a Data Engineer on the Revenue & Growth Metering team, you will use Spark and other big-data tooling to build highly reliable, verifiably accurate data-processing pipelines for a high-scale, mission-critical process. This team ingests the full firehose of data we receive each day—literally trillions of data points and hundreds of terabytes.

You will:

  • Build distributed, high-volume data pipelines that power this core product

  • Do it with Spark, Luigi and other open-source technologies

  • Work all over the stack, moving fluidly between programming languages: Scala, Java, Python, Go, and more

  • Join a tightly knit team solving hard problems the right way

  • Own meaningful parts of our service, have an impact, grow with the company


Requirements:

  • You have a BS/MS/PhD in a scientific field or equivalent experience

  • You have built and operated data pipelines for real customers in production systems

  • You are fluent in several programming languages (JVM & otherwise)

  • You enjoy wrangling huge amounts of data and exploring new data sets

  • You value code simplicity and performance

  • You want to work in a fast, high growth startup environment that respects its engineers and customers

Bonus points:

  • You are deeply familiar with Spark and/or Hadoop

  • In addition to data pipelines, you’re also quite good with Chef or Puppet

  • You’ve built applications that run on AWS

  • You’ve built your own data pipelines from scratch, know what goes wrong, and have ideas for how to fix it


Similar jobs

  • Truveris
    2 weeks ago
    Truveris is a digital health company that partners with employers, brokers, and pharmaceutical companies to dramatically improve people’s ability to afford and access prescription drugs. With expertise and technology solutions that span across the prescription drug ecosystem, we deliver the outcomes people and businesses need to thrive.

    We are on a mission to transform the pharmaceutical industry and are backed by leading venture capital firms including Canaan Partners, First Round, New Atlantic Ventures, New Leaf Venture Partners, Tribeca Venture Partners and McKesson Ventures. In 2018, Truveris was ranked as one of the fastest growing technology companies in the U.S. by Deloitte, Crain's, and Inc.

    The Senior Data Architect will be primarily responsible for the design and governance of the Truveris data model. The Senior Data Architect will work with business leaders, analysts, and engineering to surface data model needs.  The ideal candidate will have both the business acumen and technical prowess required to complete the following: surface data model needs from business leaders, analytics, and engineering; create a data model that meets current needs and is extensible as new requirements are identified; execute on that model to help achieve the company’s overarching business objectives.

    • Develop and maintain a comprehensive data architecture for the Truveris Unified Data Platform (UDP)
    • Work with business stakeholders to identify critical data and data use cases across the organization
    • Develop and revise conceptual, logical, and physical data models including entities and attributes and their inter-relationships and dependencies
    • Integrate disparate data models across the company into one enterprise-wide Unified Data Platform design

    • Develop database design and architecture documentation
    • Work with the Director of Data Integration to create documentation and/or the requirements for a documentation system that covers data lineage, data definitions, and metadata for business-critical data domains
    • Create and maintain data flow diagrams
    • Surface inconsistencies in data definitions so that they can be reconciled

    • Work with the Director of Data Integration to develop and revise the data governance framework

    • Partner with engineering and security/compliance to ensure compliance with data security and privacy policies and procedures 

    • Design data model components to support data transformation and results of analytic algorithms
    • Build a deep understanding of the data and analytic landscape at Truveris including data sources, data/analytic assets, and products
    • Understand business rules that govern how data is transformed, integrated, and used by the organization’s products
    • Design data models to support the results of data transformations and analytic algorithms
    • Make recommendations on what should be supported in the UDP versus in the BI products to create performant products that encourage reuse of analytics
    • Drive culture of standardization in data transformation and analytic logic

    • Optimize data model for scale and performance

    • Bachelor's degree in computer science, computer information systems, or another data-related field
    • 5+ years of work experience as a Data Modeler/Data Architect; deep proficiency in data modeling and data management principles
    • 3+ years of work experience in healthcare (payer, care provider, data analytics, PBM) 
    • Experience with PBM data and the PBM industry strongly preferred
    • Experience with claims and eligibility data strongly preferred
    • Experience with health industry integration technologies such as Mirth, Orion Rhapsody, eGate a plus
    • Standards experience with C-CDA, XML V3, HL7, IHE profiles a plus
    • Understanding of SDLC and Agile methodologies, experience with Scaled Agile Framework is a plus

    • Ability to work with business, operations, and engineering stakeholders
    • Hands-on experience with data modeling tools
    • Experience with Jaspersoft/Talend ETL
    • Experience with AWS services such as S3, EMR, EC2, and Amazon RedShift strongly preferred
    • Experience with ETL development, including data migrations, integrations, and analytic transformations
    • Ability to manage multiple responsibilities simultaneously
    • Excellent communications and documentation skills; ability to work on a collaborative team
  • Beat

    About us

    Beat is one of the most exciting companies to ever come out of the ride-hailing space. One city at a time, all across the globe we make transportation affordable, convenient, and safe for everyone. We also help hundreds of thousands of people earn extra income as drivers. 

    Today we are the fastest-growing ride-hailing service in Latin America. But serving millions of rides every day pales in comparison to what lies ahead. Our plans for expansion are limitless. Our stellar engineering team operates across a number of European capitals where, right now, some of the world’s most ambitious and talented engineers are changing how cities will move in the future.

    Beat is currently available in Greece, Peru, Chile, Colombia, Mexico and Argentina. 

    About the role

    Our Big Data team is an essential ingredient in Beat's aggressive growth plan and vision for the future.

    As a Senior Big Data Software Engineer on one of our teams, you will tackle some of the hardest problems, and your work will impact the entire Beat experience—from making sure drivers are always available for all our passengers to helping our drivers make the most of their working hours. Our team moves very fast, so you'll have the opportunity to make an immediate difference.

    With the various tools and communication technologies we're using, you'll feel connected to your team from wherever you are in the world. Our remote workforce always has the option to travel to our headquarters for meetings, events, and team bonding—or they can join virtually. Whatever works best for you and your work style. 

    What you'll do day in day out:

    • Work with the data science and engineering teams in translating complex models and algorithms into production-grade software systems.

    • Develop components that analyse, process, and react to operational feeds in near real-time, optimizing driver allocation and service pricing and preventing fraudulent use of our services.

    • Be agile both within and across teams, bridging software engineering and data science.

    What you need to have:

    • At least one Master's degree in Math, Physics, Computer Science, or Engineering. Higher degrees are a significant bonus, as is considerable industry experience with Big Data analytics and statistical analysis.

    • At least 5 years of experience in developing production-grade software using either Data Warehousing or Big Data frameworks in order to solve real-world problems.

    • Experience in developing with Scala at an idiomatic, expert level is required. Knowledge of advanced Java or C++ is a bonus. We would favour candidates with an exceptionally strong engineering background.

    • At least 6 years of hands-on experience with SQL and NoSQL databases.

    • Proven hands-on experience with Apache Hadoop, Kafka, Spark or Flink.

    • Exposure to designing streaming and batch data pipelines.

    • Applied knowledge in Machine Learning algorithms and their application to vast datasets is considered as a plus.

    • A strong sense of ownership in your work.

    • Excellent numerical and analytical skills, with a keen eye for detail when working with qualitative and quantitative data.

    • The desire to build, launch and iterate on quality products on time with minimal technical compromises under a loosely-managed working environment.

    What's in it for you:

    • Competitive salary package

    • Flexible working hours

    • High-tech equipment and top-of-the-line tools

    • A great opportunity to grow and work with the most amazing people in the industry

    • Being part of an environment that gives engineers large goals, autonomy, mentoring and creates incredible opportunities both for you and the company

    • Please note that you will be working as a contractor.

    • As part of our dedication to the diversity of our workforce, Beat is committed to Equal Employment Opportunity without regard for race, color, national origin, ethnicity, gender, disability, sexual orientation, gender identity, or religion.

  • Auth0 (US or Argentina)
    2 months ago
    Auth0 is a pre-IPO unicorn. We are growing rapidly and looking for exceptional new team members who will help take us to the next level. One team, one score. 

    We never compromise on identity. You should never compromise yours either. We want you to bring your whole self to Auth0. If you’re passionate, practice radical transparency to build trust and respect, and thrive when you’re collaborating, experimenting and learning – this may be your ideal work environment.  We are looking for team members that want to help us build upon what we have accomplished so far and make it better every day.  N+1 > N.

    The Data Engineer will help build, scale, and maintain the enterprise data warehouse. The ideal candidate will have a deep understanding of technical and functional design for databases, data warehousing, and reporting. The candidate should feed on challenges and love being hands-on with recent technologies.

    This job plays a key role in data infrastructure, analytics projects, and systems design and development. You should be passionate about continuous learning, experimenting with, applying, and contributing to cutting-edge open-source data technologies and software paradigms.

    • Contributing at a senior-level to the data warehouse design and data preparation by implementing a solid, robust, extensible design that supports key business flows.
    • Performing all of the necessary data transformations to populate data into a warehouse table structure that is optimized for reporting.
    • Establishing efficient design and programming patterns for engineers as well as for non-technical people.
    • Designing, integrating and documenting technical components for seamless data extraction and analysis.
    • Ensuring best practices that can be adopted in our data systems and shared across teams.
    • Contributing to innovations and data insights that fuel Auth0’s mission.
    • Working in a team environment and interacting with multiple groups on a daily basis (requires very strong communication skills).

    Skills and Abilities:
    • BA/BS in Computer Science, related technical field or equivalent practical experience.
    • At least 4 years of relevant work experience
    • Ability to write, analyze, and debug SQL queries.
    • Exceptional Problem solving and analytical skills.
    • Experience with Data Warehouse design, ETL (Extraction, Transformation & Load), architecting efficient software designs for DW platform.
    • Knowledge of database modeling and design in a Data Warehousing context
    • Strong familiarity with data warehouse best practices.
    • Proficiency in Python and/or R.

    Preferred Locations:
    • #AR; #US;
    Auth0’s mission is to help developers innovate faster. Every company is becoming a software company and developers are at the center of this shift. They need better tools and building blocks so they can stay focused on innovating. One of these building blocks is identity: authentication and authorization. That’s what we do. Our platform handles 2.5B logins per month for thousands of customers around the world. From indie makers to Fortune 500 companies, we can handle any use case.

    We like to think that we are helping make the internet safer.  We have raised $210M to date and are growing quickly. Our team is spread across more than 35 countries and we are proud to continually be recognized as a great place to work. Culture is critical to us, and we are transparent about our vision and principles. 

    Join us on this journey to make developers more productive while making the internet safer!
