Sr. Infrastructure Engineer, Data Administrator



10/27/2019 10:21:18

Job type: Full-time

Hiring from: US only

Category: Software Dev

We at Redox understand that we are all patients at some point, so our mission is to make healthcare data useful and every patient experience better. Our values represent the basis of our culture of trust, transparency, and personal growth, and define how we want to interact with each other and the world.

Redox’s full-service integration platform accelerates the development and distribution of healthcare software solutions by securely and efficiently exchanging healthcare data. With just one connection, data can be transmitted across a growing network of more than 500 healthcare delivery organizations and more than 200 independent software vendors. Members of the Redox Network exchange more than seven million patient records per day, leveraging a single data standard compatible with more than 40 electronic health record systems. We are on a path to double our number of client connections over the next year and need to build infrastructure that scales accordingly.


Redox’s Infrastructure Engineering team builds and manages the AWS cloud platform that provides a safe, reliable, and flexible foundation for Redox’s products and our growing customer base. In production, we run nearly 1,000 Docker containers, store 20 TB of relational data, 8 TB of Elasticsearch data, and 6 TB of Kafka data, and handle about 400 client-controlled VPN connections. Expertise on the team includes:

  • Data technologies: RDS (Postgres, MySQL), Kafka, Redis, ElastiCache, Elasticsearch

  • Deployment automation for EC2, S3, RDS, VPCs, and IAM using Terraform, CloudFormation, Kubernetes, Rancher, and infrastructure as code in Python

  • Logging, metrics, monitoring, and alerting: Sumo Logic, Telegraf, StatsD, InfluxDB, Grafana, Icinga, PagerDuty

  • Networking/VPN: Libreswan, iptables, Banyan, IPsec

  • Linux system tuning and debugging 
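To give a flavor of what "infrastructure as code in Python" can mean in practice, here is a minimal, hypothetical sketch: generating a CloudFormation template programmatically so that environment-specific values flow through one reviewed code path instead of hand-edited JSON. The resource and tag names are illustrative, not Redox's actual stack.

```python
import json

def make_vpc_template(cidr_block: str, env: str) -> dict:
    """Build a minimal CloudFormation template for a VPC as a Python dict.

    Generating templates in code keeps CIDR ranges, tags, and naming
    conventions consistent across environments. All names here are
    hypothetical examples.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppVpc": {
                "Type": "AWS::EC2::VPC",
                "Properties": {
                    "CidrBlock": cidr_block,
                    "EnableDnsSupport": True,
                    "Tags": [{"Key": "Environment", "Value": env}],
                },
            }
        },
    }

# Serialize for use with e.g. `aws cloudformation deploy`.
print(json.dumps(make_vpc_template("10.20.0.0/16", "staging"), indent=2))
```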


As a senior member of the Infrastructure Engineering team, you’ll play a key role in ensuring that our product engineers are empowered and effective by providing design and debugging expertise, technical leadership and mentorship, and tools for deployment automation. You’ll work closely with members of our Product and Security teams to understand their needs and translate them into well scoped, actionable projects and tasks. In this role, you will:

  • Lead from a technical, architectural, and project perspective, shepherding technical decisions from inception to completion.

  • Evaluate our existing systems, inform stakeholders of emerging technologies and industry changes, and choose tools and develop software that follow industry best practices.

  • Design, implement, and debug robust, secure, well-abstracted systems that scale with the needs of our customers.

  • Ensure that our infrastructure deployments are automated, frequent, secure, and without noticeable user impact.

  • Provide mentoring and coaching to other team members in your areas of expertise.

  • Support the infrastructure for our existing test and production environments and participate in an on-call rotation, helping other engineers resolve production-related issues involving monitoring, databases, container orchestration, load balancing, networking, and more.

  • Identify and collect metrics that allow us to pinpoint areas for improvement and measure change.
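The metrics stack named above (Telegraf, StatsD, InfluxDB, Grafana) typically starts with something as small as a StatsD datagram. As a rough sketch, assuming a local agent listening on the conventional port 8125 and an invented metric name, emitting a counter looks like this:

```python
import socket

def emit_counter(metric: str, value: int = 1,
                 host: str = "127.0.0.1", port: int = 8125) -> bytes:
    """Send a StatsD counter datagram, e.g. to a local Telegraf agent.

    StatsD's wire format is plain text: "<name>:<value>|c" for counters.
    The host, port, and metric name here are placeholder assumptions.
    """
    payload = f"{metric}:{value}|c".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Fire-and-forget: a UDP send does not block or fail if no agent listens.
    sock.sendto(payload, (host, port))
    sock.close()
    return payload

print(emit_counter("deploys.completed"))  # b'deploys.completed:1|c'
```

Because the send is UDP, instrumented services stay decoupled from the collection pipeline: a down agent loses samples but never blocks a deploy.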


At Redox we hire based on our values as well as technical competency. How you accomplish your work is just as important as getting the work done. We’re looking for people who are:

  • Passionate about solving complex problems that improve the state of the world.

  • Enthusiastic about creating an elegant and delightful user experience.

  • Intellectually curious with a desire to learn.

  • Respectful and inclusive, soliciting and incorporating input from others.

  • Biased towards action and creating positive impact. 

The particular technical focus we’re searching for in this role is data technology administration. Your work experience should include:

  • Demonstrated understanding of the strengths and weaknesses of various types of data technologies and ability to suggest the correct tool for a specified workload.

  • Deploying, managing, and tuning SQL databases (Postgres, MySQL), Redis, Elasticsearch, and Kafka for high load.

  • Detailed Linux knowledge and debugging.

  • Deploying and managing containers in production commercial cloud environments.
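For illustration of the kind of tuning work described above, one of the first numbers a database administrator checks on a loaded Postgres instance is the buffer-cache hit ratio, derived from the `blks_hit` and `blks_read` counters in `pg_stat_database`. The computation itself is trivial; the threshold below is a common rule of thumb, not a Redox standard:

```python
def cache_hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Buffer-cache hit ratio from Postgres pg_stat_database counters.

    In Postgres the inputs come from:
      SELECT blks_hit, blks_read FROM pg_stat_database
      WHERE datname = current_database();
    A ratio well below ~0.99 on a hot OLTP database often suggests the
    working set no longer fits in shared_buffers. The threshold is a
    heuristic, not a hard rule.
    """
    total = blks_hit + blks_read
    return blks_hit / total if total else 1.0

print(round(cache_hit_ratio(9_900_000, 100_000), 2))  # 0.99
```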

Any other areas of overlap between your work experience and the Infrastructure team’s areas of expertise are added bonuses. Please apply even if you are not sure you meet all these criteria. If you are interested in the role and think you could be a fit, we'd like to hear from you.

About Redox:

What We Do

Healthcare organizations and technology vendors connect to Redox once, then authorize what data they send to and receive from partners through a centralized hub. Redox's cloud-based platform is vendor and standards agnostic and enables the secure and efficient exchange of healthcare data.

This approach eradicates the need for point-to-point integrations and accelerates the discovery, adoption, and distribution of patient- and provider-facing technology solutions. With hundreds of healthcare organizations and technology vendors exchanging data today, Redox represents the largest interoperable network in healthcare.

Other Stuff About Us

Redox is an EEO company. We fully support the diversity of our team! Here's a recent blog post about our stance on diversity and belonging: Diversity at Redox

We believe in holding ourselves to a high standard of conduct. Here's how we think about this: Redox Code of Conduct

Successful candidates must be eligible to be employed in the US, and must reside in the US.

Thank you for your interest in Redox!

