IANULY Talent Accelerators
1 month ago
Job type: Full-time
Hiring from: USA Only
** This role is 100% remote for candidates based in the United States, both pre- and post-COVID-19. Please note that this client is unable to sponsor or transfer visas at this time. **
Our client is a venture-backed startup based in NYC working within the High-Performance Computing vertical. Their goal is to enable all organizations to leverage the power of Big Compute; they believe their customers should focus on the science behind their applications, not on the compute environment.
Our client runs a High-Performance Computing (HPC) platform on AWS and other cloud providers, along with many additional open-source technologies and middleware. Their team uses a mix of Linux and some Windows. This means supporting and testing their full stack in a public cloud environment, including distributed schedulers, logging solutions, metrics, storage archiving, and optimization of HPC application cost/performance.
The Data Scientist works as a key member of the Engineering team, which leverages Data as a Service (DaaS) for acquiring, securing, cataloging, processing, and analyzing internal and external data sets. This role will have the opportunity to shape the product strategy, vision, and portfolio of the Data platform. Upon joining, you will become a member of a fast-paced engineering team focused on designing and implementing large-scale distributed data processing systems using cutting-edge cloud-based, open-source, and proprietary big data technologies. In this role, you will implement a variety of solutions to ingest data into, process data within, and expose data from the Data Fabric platform.
- 3+ years of directly applicable experience working cross-functionally with various areas of the business
- Hands-on experience with Big Data and data-at-scale platform services, Python, developing cloud-native applications, and deploying to a cloud environment
- API development (proper microservice separation, correct HTTP verb usage)
- Experience in infrastructure automation on AWS with Terraform or similar approaches
- Experience working with bash scripting and the AWS CLI in Linux-based systems
- Experience working with data in various columnar storage formats (including Parquet and ORC) and serialization formats (Avro)
- Solid knowledge of the following technologies: HTTP, SSL/TLS, REST, SQL, and JSON
- Hands-on expertise with multiple database technologies (Postgres, Mongo, Elastic Search, etc.) as well as SQL and related query languages
- Knowledge of Continuous Integration environments such as Jenkins, CruiseControl, Continuum, Travis CI, etc.
- Knowledge of Test-Driven Development processes and tooling such as JUnit, Mocha, Jasmine, or Protractor
- Experience with or understanding of container services including Docker and Kubernetes
Medical, Dental, Vision
Please mention that you come from Remotive when applying for this job.