Machine Learning & Data Science Engineer

GlanceHQ.AI


2 weeks ago

Job type: Full-time

Category: All others


Glance is building an intelligence layer for marketing technology. You will be joining a team of ambitious folks (ex-MIT, Citrix, Adnuance) who have extensive experience founding an agency and running enterprise-level products. We believe in giving marketers the power of data science without them having to spend a fortune on that expertise.

Glance is the marketing AI assistant that collates multi-channel marketing data and surfaces hidden insights, opportunities, and course corrections in plain language, so you can act on them right away.

If you’re the person who believes in leveraging AI/ML to democratize marketing, this is the right position for you. In this job you will:

  • Create an overall schema that supports insight generation across marketing channels (such as Google Analytics)

  • Create AI models to organize data by paid and non-paid marketing channels

  • Convert rules-based insights to model-generated insights

  • Incorporate causal models into campaigns and create remedial actions based on the causes identified

  • Identify anomalies in marketing data and turn them into insights where applicable (a minimal sketch follows this list)

  • Apply data analysis skills to automate insight creation from existing data
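
For illustration only, here is a minimal sketch of the kind of anomaly check the bullets above describe; the column names, window, and threshold are hypothetical, not part of Glance's actual stack:

```python
# Hypothetical sketch: flag days whose metric value deviates sharply from the
# recent trend using a rolling z-score. Column names are made up for
# illustration; Glance's real schema is not described in this posting.
import pandas as pd

def flag_anomalies(df: pd.DataFrame, metric: str = "sessions",
                   window: int = 28, threshold: float = 3.0) -> pd.DataFrame:
    rolling = df[metric].rolling(window, min_periods=window)
    zscore = (df[metric] - rolling.mean()) / rolling.std()
    return df.assign(zscore=zscore, anomaly=zscore.abs() > threshold)

# Example: a daily Google Analytics export with 'date' and 'sessions' columns.
# anomalies = flag_anomalies(pd.read_csv("ga_daily.csv", parse_dates=["date"]))
# print(anomalies.loc[anomalies["anomaly"], ["date", "sessions", "zscore"]])
```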

About You:

  • Undergraduate/graduate degree in CS-related field OR significant professional experience

  • Professional machine learning experience with a solid understanding of statistical methods

  • A can-do attitude in the face of constraints and the ability to come up with creative solutions; a passion for learning, building, and moving fast

  • Ability to understand the current platform and existing tech infrastructure to build ML and data layers in a modular fashion

  • Strong experience with high-level machine learning frameworks (TensorFlow, Caffe, Torch, etc.)

  • You are capable of quickly coding and prototyping data pipelines involving any combination of Python, Node, bash, and Linux command-line tools

  • Comfortable with running and interpreting common statistical tests, as well as with common data science techniques including dimensionality reduction and supervised and unsupervised learning (a short sketch follows this list)
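
For the last bullet, an illustrative-only sketch (synthetic data, hypothetical scenario) of a common statistical test plus dimensionality reduction, assuming SciPy and scikit-learn are available:

```python
# Illustrative only: Welch's t-test on a hypothetical A/B experiment, then PCA
# on stand-in channel-level features. All data here is synthetic.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
control = rng.normal(loc=2.0, scale=1.0, size=500)   # e.g. conversion value, variant A
variant = rng.normal(loc=2.2, scale=1.0, size=500)   # e.g. conversion value, variant B

t_stat, p_value = stats.ttest_ind(control, variant, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

features = rng.normal(size=(500, 20))                # stand-in feature matrix
pca = PCA(n_components=3).fit(features)
print("explained variance ratio:", pca.explained_variance_ratio_)
```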

Please mention that you come from Remotive when applying for this job.

Similar jobs

  • Are you passionate about machine learning and looking for an opportunity to make an impact in healthcare?

    Fathom is on a mission to understand and structure the world’s medical data, starting by making sense of the terabytes of clinician notes contained within the electronic health records of health systems.

    We are seeking extraordinary Machine Learning Engineers to join our team, developers and scientists who can not only design machine-based systems, but also think creatively about the human interactions necessary to augment and train those systems.

    Please note, this position has a minimum requirement of 3+ years of experience. For earlier-career candidates, we encourage you to apply to our SF and/or Toronto locations.

    As a Machine Learning Engineer you will:

    • Develop NLP systems that help us structure and understand biomedical information and patient records

    • Work with a variety of structured and unstructured data sources

    • Imagine and implement creative data-acquisition and labeling systems, using tools and techniques like crowdsourcing and novel active learning approaches

    • Work with the latest NLP approaches, such as BERT and Transformer models (a short sketch follows this list)

    • Train your models at scale (Horovod, NVIDIA V100s)

    • Use and iterate on scalable and novel machine learning pipelines (Airflow on Kubernetes)

    • Read about and integrate state-of-the-art techniques, such as mixed precision on Transformer networks, into Fathom’s ML infrastructure
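
    The posting does not show Fathom’s internal tooling; as a rough sketch of the BERT-style NLP work the list mentions, tagging entities in a clinical note might look like this with the Hugging Face transformers library (the model name is a public stand-in, not Fathom’s):

    ```python
    # Rough sketch only: named-entity tagging of a clinical note with a publicly
    # available BERT-style model via Hugging Face's `transformers`. The model is
    # a stand-in; Fathom's actual models and pipelines are not described here.
    from transformers import pipeline

    ner = pipeline("token-classification",
                   model="dslim/bert-base-NER",          # placeholder public model
                   aggregation_strategy="simple")

    note = "Patient reports chest pain; started metoprolol 25 mg twice daily."
    for entity in ner(note):
        print(entity["entity_group"], entity["word"], round(entity["score"], 3))
    ```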

    We’re looking for teammates who bring:

    • 3+ years of development experience in a company/production setting

    • Experience with deep learning frameworks like TensorFlow or PyTorch

    • Industry or academic experience working on a range of ML problems, particularly NLP

    • Strong software development skills, with a focus on building sound and scalable ML

    • Excitement about taking ground-breaking technologies and techniques to one of the most important and most archaic industries

    • A real passion for finding, analyzing, and incorporating the latest research directly into a production environment

    • Good intuition for understanding what good research looks like, and where we should focus effort to maximize outcomes

    • Bonus points if you have experience with:

    • Developing and improving core NLP components—not just grabbing things off the shelf

    • Leading large-scale crowd-sourcing data labeling and acquisition (Amazon Mechanical Turk, CrowdFlower, etc.)

  • VividCortex (US only)
    1 week ago

    Only candidates residing inside of the United States will be considered for this role

    About VividCortex

    VividCortex provides deep database performance monitoring to drive speed, efficiency and savings. Our cloud-based SaaS platform offers full visibility into major open source databases – MySQL, PostgreSQL, Amazon Aurora, MongoDB, and Redis – at any scale without overhead. By giving entire engineering teams the ability to monitor database workload and query behavior, VividCortex empowers them to improve application speed, efficiency, and up-time.

    Founded in 2012, and headquartered in the Washington, DC metro area with remote teams in the US and abroad, our company’s growth continues to accelerate (#673 Inc. 5000). Hundreds of industry leaders like DraftKings, Etsy, GitHub, SendGrid, Shopify, and Yelp rely on VividCortex. 

    We know our team is our greatest strength so we support our people with excellent benefits including 401k, professional development assistance, flexible paid leave (vacation, parental, sick, etc.), and a health/wellness benefit. We enjoy getting together and giving back to the community through volunteer services. We believe in offering every employee the tools and opportunity to impact the business in a positive way. We care about inclusiveness and working with people who help us learn and grow.

    About the Role

    VividCortex is looking for an experienced Data Engineer to architect and build our next-generation internal data platform for large-scale data processing. You will sit at the intersection of data, engineering, and product, and will run the strategy and tactics of how we store and process the massive amounts of performance metrics and other data we measure from our customers' database servers.

    Our platform is written in Go and hosted on the AWS cloud. It uses Kafka, Redis, and MySQL for data storage and analysis. We are a DevOps organization building a 12-factor microservices application; we practice small, fast cycles of rapid improvement and full exposure to the entire infrastructure, but we don't take anything to extremes.
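
    VividCortex’s services are written in Go; purely as a language-agnostic illustration of the pattern described above (12-factor configuration from the environment, metrics flowing from Kafka into Redis), here is a small Python sketch using the kafka-python and redis client libraries. The topic, key scheme, and environment variable names are assumptions, not VividCortex’s actual code:

    ```python
    # Illustration only: a 12-factor-style worker that reads configuration from
    # the environment and copies metric messages from Kafka into Redis. The key
    # scheme and variable names are hypothetical.
    import json
    import os

    import redis
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        os.environ.get("METRICS_TOPIC", "query-metrics"),   # hypothetical topic
        bootstrap_servers=os.environ.get("KAFKA_BROKERS", "localhost:9092").split(","),
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    cache = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

    for message in consumer:
        metric = message.value
        # Keep the latest observation per (host, query) pair.
        key = f"latest:{metric['host']}:{metric['query_id']}"
        cache.set(key, json.dumps(metric))
    ```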

    The position offers excellent benefits, a competitive base salary, and the opportunity for equity. Diversity is important to us, and we welcome and encourage applicants from all walks of life and all backgrounds.

    Responsibilities:

    • Work with others to define, and propose for approval, a modern data platform design strategy and matching architecture and technology choices to support it, with the goals of providing a highly scalable, economical, observable, and operable data platform for storing and processing very large amounts of data within tight performance tolerances.

    • Perform high-level strategy and hands-on infrastructure development for the VividCortex data platform, developing and deploying new data management services both in our existing data center infrastructure, and in AWS.

    • Collaborate with engineering management to drive data systems design, deployment strategies, scalability, infrastructure efficiency, monitoring, and security.

    • Discover, define, document, and design scalable backend storage and robust data pipelines for different types of data streams.

    • Write code, tests, and deployment manifests and artifacts, using CircleCI, Git and GitHub, pull requests, issues, etc. Collaborate with other engineers on code review and approval.

    • Measure and improve the code and system performance and availability as it runs in production.

    • Support product management in prioritizing and coordinating work on changes to our data platform, and serve as a lead on user-focused technical requirements and analysis of the platform.

    • Help provide customer support and pitch in with other departments, such as Sales, as needed.

    • Rotate through on-call duty.

    • Understand and enact our security posture and practices.

    • Continually seek to understand and improve performance, reliability, resilience, scalability, and automation. Our goal is that systems should scale linearly with our customer growth, and the effort of maintaining the systems should scale sub-linearly.

    • Contribute to a culture of blameless learning, responsibility, and accountability.

    • Manage your workload, collaborating and working independently as needed, keeping management appropriately informed of progress and issues.

    Preferred Qualifications:

    • Experience building systems for both structured and unstructured data.

    • AWS infrastructure development experience.

    • Mastery of relational database technologies such as MySQL.

    • You are collaborative, self-motivated, and experienced in the general development, deployment, and operation of modern API-powered web applications using continuous delivery and Git in a Unix/Linux environment.

    • Experience programming in Go (Golang) or Java

    • You have experience resolving highly complex data infrastructure design and maintenance issues, with at least 4 years of data-focused design and development experience.

    • You are hungry for more accountability and ownership, and for your work to matter to users.

    • You’re curious with a measured excitement about new technologies.

    • SaaS multitenant application experience.

    • Ability to understand and translate customer needs into leading-edge technology.

    • Experience with Linux system administration and enterprise security.

    • A Bachelor’s degree in computer science, another engineering discipline, or equivalent experience.

    Note to Agencies and Recruiters: VividCortex does not engage with unsolicited contact from agencies or recruiters. Unsolicited resumes and leads are property of VividCortex and VividCortex explicitly denies that any information sent to VividCortex can be construed as consideration.

  • Legalist (US & Europe)
    1 month ago

    Legalist is breaking new ground in FinTech and LegalTech. Data Science is one of the pillars of Legalist's continuous innovation, and we're looking for someone who can lead the charge on that front.

    You will get to..

    • Use Python, PyCharm, and Jupyter to build our products

    • Use AWS, GCP, Kubernetes, Docker, and Jenkins to scale our infrastructure

    • Learn at the bleeding edge of web and machine learning technologies

    • Work with a ton of legal data to build analytical tools that support the business team (a toy sketch follows this list)
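
    As a toy illustration of the last bullet (the file name and columns are hypothetical, not Legalist’s actual data model), a few lines of pandas could summarize scraped court records for the business team:

    ```python
    # Toy illustration: summarize hypothetical scraped court-record data.
    import pandas as pd

    cases = pd.read_csv("court_records.csv", parse_dates=["filed_date", "closed_date"])
    cases["days_to_resolution"] = (cases["closed_date"] - cases["filed_date"]).dt.days

    summary = (cases
               .groupby(["jurisdiction", "case_type"])["days_to_resolution"]
               .agg(["count", "median"])
               .sort_values("median"))
    print(summary.head(10))
    ```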

    Legalist is an investment firm that uses tech to invest in lawsuits. We graduated YCombinator as part of their Summer 2016 batch, and have since garnered international press for our pioneering work. You can read about us in NYT, WSJ, The Guardian, Le Monde, The Economist, and many others.

    If you're interested in the intersection of finance, technology, and law, then you'll find the problems we work on highly interesting. We scrape millions of court records and build technology that streamlines the process of investing in legal assets, while running investment funds that generate high returns for our investors.

    YOUR MISSION

    Ideally you will be interested in learning, be proactive, and enjoy using bleeding-edge technologies. Formal 'experience' is not necessary, but demonstrated capability is required. It is a well-paid role, with salary based on capability. You will be working alongside the CTO & co-founder and 4 talented engineers.

    Currently, our platform is based around a backend microservices architecture for different use cases.

    We're really looking for people who would love to join a fast growing startup with great financial projections, paying clients, strong investors, and an awesome team where you can always be learning. Our work is multi-disciplinary, and we're looking for engineers with an interest in business as well.

    RESPONSIBILITIES

    • Autonomy over a core product and data pipeline

    • Collaborate with engineers to develop and ship features

    • Write efficient, modular, and reusable libraries and abstractions

    • Identify key drivers & insights to improve our analytics engines

    • Participate in code reviews

    QUALIFICATIONS

    Applicants are not expected to show advanced understanding of all of the below, but must show willingness, ability, and interest in keeping up with cutting edge technologies and frameworks.

    • 4+ years of experience with machine learning and data science techniques

    • Degree in Computer Science, Statistics, Mathematics or equivalent field

    • Ability to implement best practices

    • Ability to identify key insights and technologies

    • Comfort with independently building MVPs that the engineering team can then extend and support

    • Experience working with modern data stores such as NoSQL/Postgres, S3, Cassandra or similar

    • Experience working with Cloud Computing technologies (e.g. AWS, Azure, GCP)

    • Ability to communicate technical specifications, both verbally and in writing
