Software Engineer, Machine Learning

Fathom


Posted: 11/13/2019 10:25:19

Job type: Full-time

Category: All others


Are you passionate about machine learning and looking for an opportunity to make an impact in healthcare?

Fathom is on a mission to understand and structure the world’s medical data, starting by making sense of the terabytes of clinician notes contained within the electronic health records of health systems.

We are seeking extraordinary Machine Learning Engineers to join our team, developers and scientists who can not only design machine-based systems, but also think creatively about the human interactions necessary to augment and train those systems.

Please note that this position requires a minimum of 3+ years of experience. Earlier-career candidates are encouraged to apply to our SF and/or Toronto locations.

As a Machine Learning Engineer, you will:

  • Develop NLP systems that help us structure and understand biomedical information and patient records

  • Work with a variety of structured and unstructured data sources

  • Imagine and implement creative data-acquisition and labeling systems, using tools & techniques like crowdsourcing and novel active learning approaches

  • Work with the latest NLP approaches (BERT, Transformer)

  • Train your models at scale (Horovod, NVIDIA V100s)

  • Use and iterate on scalable and novel machine learning pipelines (Airflow on Kubernetes)

  • Read about and integrate state-of-the-art techniques, such as mixed precision on Transformer networks, into Fathom’s ML infrastructure
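One of the bullets above mentions active learning for data acquisition. As a minimal illustration (not Fathom's actual system), a common flavor is uncertainty sampling: route the examples the current model is least confident about to human annotators first. All names here are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(unlabeled, predict_proba, budget):
    """Pick the `budget` most uncertain examples to send to annotators.

    unlabeled:     raw, unlabeled examples
    predict_proba: callable mapping an example to its class probabilities
    budget:        how many labels this round can afford
    """
    scored = [(entropy(predict_proba(x)), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # most uncertain first
    return [x for _, x in scored[:budget]]
```

After annotators label the selected batch, the model is retrained and the loop repeats; the scoring function and batch-selection strategy are where the "novel active learning approaches" the posting refers to would come in.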

We’re looking for teammates who bring:

  • 3+ years of development experience in a company/production setting

  • Experience with deep learning frameworks like TensorFlow or PyTorch

  • Industry or academic experience working on a range of ML problems, particularly NLP

  • Strong software development skills, with a focus on building sound and scalable ML systems

  • Excitement about taking ground-breaking technologies and techniques to one of the most important and most archaic industries

  • A real passion for finding, analyzing, and incorporating the latest research directly into a production environment

  • Strong intuition for what good research looks like, and for where we should focus effort to maximize outcomes

  Bonus points if you have experience with:

  • Developing and improving core NLP components—not just grabbing things off the shelf

  • Leading large-scale crowdsourcing data labeling and acquisition (Amazon Mechanical Turk, CrowdFlower, etc.)

Please mention that you come from Remotive when applying for this job.

Similar jobs

  • As a data scientist you will push the boundaries of deep learning & NLP by conducting research in cutting-edge AI and applying that research to production-scale data. Our data includes a large corpus of global patents in multiple languages and a wealth of metadata. The focus of your work will be to analyze, visualize, and interpret large text corpora as well as more traditional structured data. This includes recommending and implementing ML-based product features. We also expect to publish primary research and contribute to the FOSS that we use.

    This is an exciting opportunity to be part of a startup that is applying deep learning to a real-world problem at global scale. We’re always looking for leaders, and there is room for the right person to grow into increasing responsibility as part of a small and dynamic team.

    Location:

    Tokyo, Melbourne, San Francisco (willing to negotiate remote work as well)

    Responsibilities:

    • Architect and implement software libraries for batch processing, API-based predictions, and static analyses

    • Rapidly iterate on the design, implementation and evaluation of machine learning algorithms for document corpora

    • Report and present software developments including status and results clearly and efficiently, verbally and in writing

    • Participate in strategy discussions about technology roadmap, solution architecture, and product design

    • Adhere strictly to clean code paradigms

    Minimum Qualifications and Education Requirements:

    • BSc/BEng degree in computer science, mathematics, machine learning, computational linguistics or equivalent (MSc/MEng preferable)

    • Experience with implementing statistical methods and data visualization

    • Good knowledge of computer science principles underpinning the implementation of machine learning algorithms

    • Experience with deep learning approaches to NLP, particularly RNNs

    • Experience implementing deep learning models in TensorFlow or PyTorch 

    Preferred Qualifications:

    • Contributions to open source projects

    • Passion for new developments in AI

    • Experience with GCP or AWS

    • A track record of machine learning code that is:

    -Well documented

    -Well commented

    -Version controlled

    -Unit tested
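The checklist above is easy to make concrete. A minimal sketch (illustrative only, not this employer's codebase) of what a documented, unit-tested ML utility might look like:

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max logit before
    exponentiating so large inputs cannot overflow math.exp."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def test_softmax():
    """Unit test: output is a valid distribution and preserves ordering."""
    out = softmax([1.0, 2.0, 3.0])
    assert abs(sum(out) - 1.0) < 1e-9
    assert out[0] < out[1] < out[2]
```

Docstrings and small tests like this cost little to write, and the payoff is that reviewers and future maintainers can verify behavior without rerunning experiments.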

    To apply, please contact us at [email protected] with your CV.

  • Kalepa is looking for Data Scientists to lead efforts at the intersection of machine learning and big data engineering in order to solve some of the biggest problems in commercial insurance.

    Data scientists at Kalepa will be turning vast amounts of structured and unstructured data from many sources (web data, geolocation, satellite imaging, etc.) into novel insights about behavior and risk. You will be working closely with a small team in designing, building, and deploying machine learning models to tackle our customers’ questions.

    Kalepa is a New York based, VC backed startup building software to transform and disrupt commercial insurance. Nearly one trillion dollars ($1T) are spent globally each year on commercial insurance across small, medium, and large enterprises. However, the process for estimating the risk associated with a given business across various perils (e.g. fire, injury, malpractice) still relies on inefficient and inaccurate manual forms or on outdated and sparse databases. This information asymmetry leads to a broken set of economic incentives and a poor experience for businesses and insurers alike. By combining cutting-edge data science, enterprise software, and insurance expertise, Kalepa is delivering precision underwriting at scale – empowering every commercial insurance underwriter to be as effective and efficient as possible. Kalepa is turning real-world data into a complete understanding of risk.

    Kalepa is led by a strong team with experiences from Facebook, APT (acquired by Mastercard for $600M in 2015), the Israel Defense Forces, MIT, Berkeley, and UPenn.

    About you:

    ● You want to design a flexible analytics, data science, and AI framework to transform the insurance industry

    ● You have demonstrated success in delivering analytical projects, including structuring and conducting analyses to generate business insights and recommendations

    ● You have in-depth understanding of applied machine learning algorithms and statistics

    ● You are experienced in Python and its major data science libraries, and have deployed models and algorithms in production

    ● You have a good understanding of SQL and non-SQL databases

    ● You value open, frank, and respectful communication

    ● You are a proactive and collaborative problem solver with a “can do” attitude

    ● You have a sincere interest in working at a startup and scaling with the company as we grow

    As a plus:

    • You have experience in NLP and/or computer vision

    • You have familiarity with Spark, Hadoop, or Scala

    • You have experience working with AWS tools

    What you’ll get

    ● Work with an ambitious, smart, and fun team to transform a $1T global industry

    ● Ground floor opportunity – opportunity to build the foundations for the product, team, and culture alongside the founding team

    ● Wide-ranging intellectual challenges working with large and diverse data sets, as well as with a modern technology stack

    ● Competitive compensation package with a significant equity component

    ● Full benefits package, including excellent medical, dental, and vision insurance

    ● Unlimited vacation and flexible remote work policies

    ● Continuing education credits and a healthy living / gym monthly stipend

    [IMPORTANT NOTE]: Salary ranges are for New York based employees. Compensation for remote roles will be adjusted according to the cost of living and market in the specific geography.

  • Bungee empowers enterprises to drive great business decisions.

    Headquartered in Seattle and founded by Amazon veterans, Bungee enables enterprises to gain access to global data on-demand and drive critical business decisions across industries. We’re a small, fast-growing company with our service already in use by several Fortune 50 companies.

    The Data Scientist will shape Bungee’s product offerings and make our existing core data collection and analytics products more efficient and scalable. You will be a key member of the team, owning projects end to end and working directly with customers. You will have the opportunity to solve problems, think creatively, and try new things. You will be part of the team responsible for building the machine learning models that drive our platforms, products, marketing, and business analytics.

    We are looking for people who are willing to propose new and better ways to achieve our goals, and able to show us why. Willing to sprint to action to solve a problem, even when conditions are not ideal. Able to treat systems and models as a whole to decide what to improve. Always interested in learning and figuring out ways to make our products and teams better. Delivers early and often, and doesn’t get bogged down trying to get it perfect the first time. Wants to work cooperatively in a team that is welcoming and truly understands the value of data. Appreciates working in a phenomenal environment with high ethical standards and a culture of kindness.

    At Bungee Tech, we believe in a workplace where you can be your best, where you and the company can grow together. We attract a highly diverse group of talented people who are both thinkers and doers. We believe we are only scratching the surface on our opportunity, and we’re looking for incredible people like you to help us on that journey. Come join us!

    Responsibilities

    • Architect, develop, and train deep learning models for knowledge extraction, entity matching, entity resolution, reinforcement learning, and knowledge base extraction.

    • Architect, debug, and improve deep learning models.

    • Improve the accuracy of existing machine learning systems

    • Participate in team discussions and presentations.

    Requirements

    • Masters or PhD in Computer Science or equivalent

    • Experience in implementing deep learning methods and algorithms, in computer vision and/or NLP

    • Experience optimizing machine learning benchmarks to competitive levels of accuracy

    • 2+ years of industry experience

    • Experience with TensorFlow/PyTorch/Keras, AWS/GCP/Azure, and Python

    • Experience in writing algorithms for speed and scalability

    Preferred Skills

    • Experience with distributed training, containerization, etc.

    • Published research in NLP & CV areas
