
17 Remote Spark Jobs in March 2020

  • Software Development (13)

    • At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

      Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

      Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

      Job Description

      You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

      All candidates will have

      • a Bachelor's or higher degree in a technical field of study, or equivalent practical experience
      • experience with high volume production grade distributed systems
      • experience with micro-service based architecture
      • experience with software engineering best practices, coding standards, code reviews, testing and operations
      • hands-on experience with Spring Boot
      • professional experience in writing readable, testable and self-sustaining code
      • strong hands-on experience with Java (minimum 8 years)
      • knowledge of AWS, Kubernetes, and Docker
      • excellent troubleshooting and creative problem-solving abilities
      • excellent written and oral communication in English and interpersonal skills

      Ideally, candidates will also have

      • experience with Big Data technologies such as Kafka, Spark, and Cassandra
      • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
      • fluency with functional, imperative, and object-oriented languages
      • experience with Scala, C++, or Golang
      • knowledge of Machine Learning

      Location: residence in UK mandatory; home office

    • Medium (US only)
      4 days ago
      At Medium, words matter. We are building the best place for reading and writing on the internet—a place where today’s smartest writers, thinkers, experts, and storytellers can share big, interesting ideas; a place where ideas are judged on the value they provide to readers, not the fleeting attention they can attract for advertisers.

      We are looking for a Senior Data Engineer who will help build, maintain, and scale our business-critical Data Platform. In this role, you will help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time. You'll also lead development of both transactional and data warehouse designs, mentoring our team of cross-functional engineers and Data Scientists.

      At Medium, we are proud of our product, our team, and our culture. Medium’s website and mobile apps are accessed by millions of users every day. Our mission is to move thinking forward by providing a place where individuals, along with publishers, can share stories and their perspectives. Behind this beautifully crafted platform is our engineering team, who work seamlessly together. From frontend to API, from data collection to product science, Medium engineers work multi-functionally with open communication and feedback.

      What Will You Do!
      • Work on high impact projects that improve data availability and quality, and provide reliable access to data for the rest of the business.
      • Drive the evolution of Medium's data platform to support near real-time data processing and new event sources, and to scale with our fast-growing business.
      • Help define the team strategy and technical direction, advocate for best practices, investigate new technologies, and mentor other engineers.
      • Design, architect, and support new and existing ETL pipelines, and recommend improvements and modifications.
      • Be responsible for ingesting data into our data warehouse and providing frameworks and services for operating on that data including the use of Spark.
      • Analyze, debug and maintain critical data pipelines.
      • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark, and AWS technologies (a small sketch of such a pipeline follows this list).
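
      Purely as illustration of the Spark-based ETL step mentioned above, here is a minimal, hypothetical PySpark sketch; the bucket paths, column names, and cleaning rules are invented for this example, not Medium's actual pipeline.

          from pyspark.sql import SparkSession, functions as F

          spark = SparkSession.builder.appName("events-etl").getOrCreate()

          # Ingest raw JSON events (hypothetical S3 location).
          events = spark.read.json("s3://example-bucket/raw/events/2020-03-01/")

          # Basic cleaning: drop malformed rows and normalize the timestamp.
          cleaned = (events
                     .where(F.col("user_id").isNotNull())
                     .withColumn("event_ts", F.to_timestamp("event_time"))
                     .select("user_id", "event_type", "event_ts"))

          # Write partitioned Parquet for downstream SQL access.
          (cleaned.write
              .mode("overwrite")
              .partitionBy("event_type")
              .parquet("s3://example-bucket/warehouse/events/"))
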
      Who You Are!
      • You have 7+ years of software engineering experience.
      • You have 3+ years of experience writing and optimizing complex SQL and ETL processes, preferably in connection with Hadoop or Spark.
      • You have outstanding coding and design skills, particularly in Java/Scala and Python.
      • You have helped define the architecture, tooling, and strategy for a large-scale data processing system.
      • You have hands-on experience with AWS and services like EC2, SQS, SNS, RDS, ElastiCache, etc., or equivalent technologies.
      • You have a BS in Computer Science / Software Engineering or equivalent experience.
      • You have knowledge of Apache Spark, Spark streaming, Kafka, Scala, Python, and similar technology stacks.
      • You have a strong understanding & usage of algorithms and data structures.
      Nice To Have!
      • Snowflake knowledge and experience
      • Looker knowledge and experience
      • Dimensional modeling skills
      At Medium, we foster an inclusive, supportive, fun yet challenging team environment. We value having a team that is made up of a diverse set of backgrounds and respect the healthy expression of diverse opinions. We embrace experimentation and the examination of all kinds of ideas through reasoning and testing. Come join us as we continue to change the world of digital media. Medium is an equal opportunity employer.

      Interested? We'd love to hear from you.
    • We are Givelify®, where fintech meets philanthropy. We help people instantly find causes that inspire them to action so they can change the world—one simple, joyful gift at a time. 
       
      We are looking for a Principal Software Engineer with 10+ years of experience to join our team with a "lead the charge" attitude. Why Engineering at Givelify is different: moonshots are our norm. Our product impacts real people on the ground. We build with passion and maintain a high standard of engineering quality. We solve unique scalability challenges, and you will have the ability to help guide all aspects of Engineering and Product Development.

      Some of the meaningful work you will perform:

      • PHP/Python: You will need to have strong object-oriented design and development skills and advanced knowledge of PHP, Python or similar programming languages. Knowledge and experience with third party libraries, frameworks, and technologies is a plus. 
      • Database: You will need to have strong SQL composition skills. Knowledge of big data and NoSQL databases is a plus! We not only write software that collects and queries data, but we also compose queries for investigation and analysis. We collect a lot of data in real time from our applications, and being able to compose ad hoc queries is necessary to develop and support our products.
      • Guidance: Participate in and guide engineer teams on all things technical – Architecture definition & Design ownership that not only include technology but Data Security aspects, Deployment & Cloud strategy, CI/CD, as well as coding best practices.  
      • Analysis & Problem Solving: You will need to understand our codebase and systems and the business requirements they implement so you can effectively make changes to our applications and investigate issues. 
      • Communication: Whether via face-to-face discussion, phone, email, chat, white-boarding, or other collaboration platforms, you must be an effective communicator who can inform, explain, enable, teach, persuade, coordinate, etc. 
      • Team Collaboration: You must be able to effectively collaborate and share ownership of your team’s codebase and applications. You must be willing to fully engage in team efforts, speak up for what you think are the best solutions, and be able to converse respectfully and compromise when necessary. 
      • Knowledge and Experience: A well-rounded software engineer will have broad and/or deep knowledge of various topics, tools, frameworks, and methodologies related to software engineering. None are required, but the more you can bring, the better. Here are some examples:

      • Laravel, Yii and similar frameworks
      • Strong API knowledge and development
      • Git and GitHub
      • Big Data solutions such as Cassandra, Hadoop, Spark, Kafka and Elastic Search
      • Continuous integration and automated testing
      • Agile/Scrum
      • Open source projects
      • Server-side JavaScript, TypeScript and Node.js
      • Familiarity with DevOps configuration tools (Git, Jira, Jenkins, etc.)

      We welcome your talents and experience:

      • BS/MS degree in Computer Science, Computer Engineering, Mathematics, Physics or equivalent work experience
      • Technical leader with at least 10 years of work in Software Engineering
      • Webservices and API development experience within a startup and/or e-commerce environment
      • A distinguished member of the engineering community, through extracurricular activities, publications, or associations with organizations (e.g., IEEE)
      • Demonstrated history of living the values important to Givelify, such as integrity and ethics

      Our People 
      We are a virtual team of high-performing professionals who innovate & collaborate to fulfill our mission to help people instantly find causes that inspire them to action so they can change the world – one simple, joyful gift at a time. Our culture of integrity, heart, simplicity, & that "wow" factor fuel our aspiration to be among the tech industry's most inclusive & purpose-driven work environments. 
      We take great pride in providing competitive pay, full benefits, amazing perks, and most importantly, the opportunity to put passion & purpose to work. 
       
      Our Product 
      From places of worship to world-changing nonprofit groups, Givelify harnesses the power of technology to bridge the gap between people and the causes they care about. Tap. Give. Done. Givelify's payment solution is designed to make the experience of giving as beautiful as the act of giving. 
       
      Learn more about us at https://careers.givelify.com

    • Railroad19 (US only)
      1 week ago
      We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

      Responsibilities for Data Engineer
      • Create and maintain optimal data pipeline architecture.
      • Assemble large, complex data sets that meet functional / non-functional business requirements.
      • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
      • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
      • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
      • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
      • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
      • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
      • Work with data and analytics experts to strive for greater functionality in our data systems.
      Qualifications for Data Engineer
      • Understanding of concepts such as Change Data Capture, Event Sourcing, and CQRS patterns using event based systems
      • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
      • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
      • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
      • Strong analytic skills related to working with unstructured datasets.
      • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
      • A successful history of manipulating, processing and extracting value from large disconnected datasets.
      • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
      • Strong project management and organizational skills.
      • Experience supporting and working with cross-functional teams in a dynamic environment.
      • We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
      • Experience with stream-processing systems: Kafka, NiFi, Storm, Spark Streaming, etc. (a minimal streaming sketch follows this list)
      • Strong knowledge of object-oriented/functional programming with Java 8+ or other JVM languages (Scala, Clojure, Kotlin, Groovy)
      • Hands-on experience with ETL techniques and frameworks like Apache Spark or Apache Flume.
      • Strong understanding of data serialization formats like Apache Avro, Parquet, Protobuf, Apache Thrift.
      • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra, MongoDB, ElasticSearch.
      • Use of AWS cloud services: EC2, EMR, RDS, Redshift, S3, Lambda, Kinesis.
      • Experience with integration of data from multiple data sources.
      • Understanding of the importance of CI/CD, unit/integration testing, build tooling (maven, gradle, sbt), dependency management.
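
      As flagged in the stream-processing item above, here is a minimal, hypothetical Spark Structured Streaming sketch that reads from Kafka and lands Parquet with checkpointing; the broker, topic, and paths are invented for illustration.

          from pyspark.sql import SparkSession, functions as F

          spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

          # Subscribe to a Kafka topic (requires the spark-sql-kafka package).
          stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load())

          # Kafka values arrive as bytes; cast to string before parsing.
          orders = stream.select(F.col("value").cast("string").alias("payload"))

          # Stream to Parquet, tracking progress in a checkpoint directory.
          (orders.writeStream
              .format("parquet")
              .option("path", "s3://example-bucket/streams/orders/")
              .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
              .start()
              .awaitTermination())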


      About RR19
      • We develop customized software solutions and provide software development services. We’re a specialized team of developers and architects. As such, we only bring an “A” team to the table, through hard work and a desire to lead the industry — this is our company culture — this is what sets Railroad19 apart.
      • At Railroad19, Inc. you are part of a company that values your work and gives you the tools you need to succeed. Our executive headquarters is in Saratoga Springs, New York, but we are a distributed team of remote developers across the US, and this position is remote. Railroad19 provides competitive compensation and excellent benefits: Medical/Dental/Vision, vacation, and 401K.
      Working at Railroad19:
      • Competitive salaries
      • Excellent health care, dental, and vision benefits
      • 3 weeks vacation, 401K, work-life balance
      • No agencies, please
      • This is a non-management position

      We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity workplace.
    • Prezly (within 2 hours of CET)
      1 week ago

      Important: although this is a fully remote role, we only accept candidates who are within 2 hours of CET (Central European Time). Applying from outside that range is a waste of your (and our) time.

      I made a video about our company and the job

      Prezly is looking for a backend (PHP) developer to boost our capacity in creating a world with more meaningful communication between organisations and people. You will work closely with a product designer, the founders, and other builders to craft high-quality, impactful, and inclusive user experiences for communication experts all around the world.

      About Prezly

      Good stories, told well to the right people, can inspire and spark positive change in the world. That’s why at Prezly, we’re building state-of-the-art storytelling tools for stellar brands.

      Since our founding in 2010, we've grown to become a profitable, 100% globally distributed team of ~16 high-performing, happy people that are dedicated to building a product our customers love.

      About the technology

      Our services are built around a core of PHP (Symfony), Postgres, and React, with millions of people per day using some aspect of the system. On the backend we use a Symfony fork (https://github.com/e1himself/symfony1) and Propel (http://propelorm.org/), and interact with a Postgres (RDS) database.

      We're big believers in devops/CI - building, testing, and deploying to any of our environments are as simple as pushing a commit to a git branch. The infrastructure is containerised, built on top of AWS using Kubernetes.

      We're a technology company first: this means that in addition to product and business development plans, we put emphasis on continual improvement of our stack and infrastructure. Current projects include API'ing our full application suite and removing Redux.

      About the backend role

      We are looking for a backend engineer with deep understanding of PHP (Symfony) and best practices when it comes to application development. Bonus points for experience with our specific stack.

      You should have senior level experience (~5 years) building modern back-end systems, with at least 3 years of that experience using PHP.

      You will work on a variety of projects, mostly around the core Prezly product. Your work will ship continuously so you will have a direct impact in our customers’ experience and the overall trajectory of the business.

      As our new teammate, you’ll be self-driven and work hard to bring value to your new company in the most effective way possible. You’ll work hard to make those around you better, communicate clearly, and make Prezly a better company.

      • 5+ years of professional full stack development experience.
      • Expertise in PHP, React, CI, and JavaScript
      • Excellent problem solving, critical thinking, and communication skills.
      • Team-oriented person who loves to collaborate and communicate
      • Bonus points for:
        • Leadership role at a startup. Proven ability to mentor / manage engineers and drive a product forward.
        • Knowledge of containerisation, k8s, dev-ops, and AWS.

      You will get

      • Competitive salary

      • Great tools: What would Batman be without his utility belt? He’d still be badass. But you get the point. At Prezly you’ll get to choose your own gear.

      • Flexible hours: There’s a life outside of work. That’s why our distributed team works from where they want, when they want. And they get tons of work done.

      • Unlimited vacation time: We evaluate on value, not on time spent behind desks. Employees can take as many holidays as they need. This way they bring their A-game to the job.

      • Visits to Leuven: A few times per year the entire team gets together in the office in Leuven, the world’s capital of beer. We’ll fly you in so you can have fun with the team.

    • Qntfy is looking for a talented and highly motivated ML Engineer to join our team. ML Engineers are responsible for building systems at the crossroads of data science and distributed computing. You will do a little bit of everything: from tuning machine learning models, to profiling distributed applications, to writing highly scalable software. We use technologies like Kubernetes, Docker, Kafka, gRPC, and Spark. You aren’t a DevOps engineer, but an understanding of how the nuts and bolts of these systems fit together is helpful; you aren’t a data scientist, but understanding how models work and are applied is just as important.

      U.S. Citizenship Required

      Responsibilities

      • Collaborate with data scientists to get their models deployed into production systems (a sketch of one batch-scoring approach follows this list).
      • Develop and maintain systems for distributed model training and evaluation.
      • Design and implement APIs for model training, inference, and introspection.
      • Build tools for testing, benchmarking, and deploying analytics at scale.
      • Interface with the technical operations team to understand analytic performance and operational behavior.
      • Write and test code for highly available and high volume workloads.
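
      One common pattern for the model-deployment work above is batch scoring with Spark: broadcast a model trained offline and apply it with a pandas UDF. A hedged sketch, with a hypothetical model file and feature columns (requires pyarrow):

          import joblib
          import pandas as pd
          from pyspark.sql import SparkSession
          from pyspark.sql.functions import pandas_udf

          spark = SparkSession.builder.appName("batch-scoring").getOrCreate()

          # Ship a model trained offline by the data science team to executors.
          bc_model = spark.sparkContext.broadcast(joblib.load("model.joblib"))

          @pandas_udf("double")
          def score(feature_a, feature_b):
              # Assemble the feature frame the (hypothetical) model expects.
              X = pd.concat([feature_a, feature_b], axis=1)
              return pd.Series(bc_model.value.predict(X))

          df = spark.read.parquet("s3://example-bucket/features/")
          df.withColumn("prediction", score("feature_a", "feature_b")) \
            .write.parquet("s3://example-bucket/predictions/")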

      Qualifications

      • BS or Master’s degree in Computer Science or a related field, or equivalent experience.
      • 5+ years experience with software engineering, infrastructure design, and/or machine learning.
      • Familiarity with Python and machine learning frameworks, particularly scikit-learn, TensorFlow, and PyTorch.
      • Experience with distributed machine learning using tools like Dask, Tensorflow, Kubeflow, etc.
      • Write well-structured, maintainable, idiomatic code with good documentation.
      • Strong work ethic and passion for problem solving.

      Preferred Qualifications

      • Machine learning API development competencies.
      • Golang development experience.
      • Container orchestration and optimization knowledge.
      • Proficiency designing, implementing, and operating large-scale distributed systems.
      • Prior experience working in a distributed (fully remote) organization.

      Qntfy is committed to fostering and supporting a creative and diverse environment. Qntfy is an equal opportunity employer, and as such will consider all qualified applicants for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

    • Wikimedia Foundation, Inc.
      2 weeks ago

      The Wikimedia Foundation is hiring two Site Reliability Engineers to support and maintain (1) the data and statistics infrastructure that powers a big part of decision making in the Foundation and in the Wiki community, and (2) the search infrastructure that underpins all search on Wikipedia and its sister projects. This includes everything from eliminating boring things from your daily workflow by automating them, to upgrading a multi-petabyte Hadoop or multi-terabyte Search cluster to the next upstream version without impacting uptime and users.

      We're looking for an experienced candidate who's excited about working with big data systems. Ideally you will already have some experience working with software like Hadoop, Kafka, ElasticSearch, Spark and other members of the distributed computing world. Since you'll be joining an existing team of SREs, you'll have plenty of space and opportunities to get familiar with our tech (Analytics, Search, WDQS), so there's no need to immediately have the answer to every question.

      We are a full-time distributed team with no one working out of the actual Wikimedia office, so we are all together in the same remote boat. Part of the team is in Europe and part in the United States. We see each other in person two or three times a year, either during one of our off-sites (most recently in Europe), the Wikimedia All Hands (once a year), or Wikimania, the annual international conference for the Wiki community.

      Here are some examples of projects we've been tackling lately that you might be involved with:

      •  Integrating an open-source GPU software platform like AMD ROCm in Hadoop and in the Tensorflow-related ecosystem
      •  Improving the security of our data by adding Kerberos authentication to the analytics Hadoop cluster and its satellite systems
      •  Scaling the Wikidata query service, a semantic query endpoint for graph databases
      •  Building the Foundation's new event data platform infrastructure
      •  Implementing alarms that alert the team of possible data loss or data corruption
      •  Building a new and improved Jupyter notebooks ecosystem for the Foundation and the community to use
      •  Building and deploying services in Kubernetes with Helm
      •  Upgrading the cluster to Hadoop 3
      •  Replacing Oozie with Airflow as a workflow scheduler (a minimal Airflow DAG sketch follows this list)
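
      For the Oozie-to-Airflow item above, a workflow in Airflow is just a Python DAG. A minimal, hypothetical example (task commands, paths, and schedule are invented, not Wikimedia's actual jobs):

          from datetime import datetime, timedelta

          from airflow import DAG
          from airflow.operators.bash_operator import BashOperator

          with DAG(
              dag_id="hourly_pageview_rollup",
              start_date=datetime(2020, 3, 1),
              schedule_interval="@hourly",
              catchup=False,
              default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
          ) as dag:
              # Check that the raw data for the run date has landed (hypothetical path).
              check = BashOperator(
                  task_id="check_raw_data",
                  bash_command="hdfs dfs -test -e /wmf/data/raw/{{ ds }}",
              )
              # Aggregate with a Spark job once the check passes.
              aggregate = BashOperator(
                  task_id="aggregate",
                  bash_command="spark-submit rollup.py --date {{ ds }}",
              )
              check >> aggregate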

      And these are our more formal requirements:

      •    A couple of years' experience in an SRE/Operations/DevOps role as part of a team
      •    Experience in supporting complex web applications running highly available and high traffic infrastructure based on Linux
      •    Comfortable with configuration management and orchestration tools (Puppet, Ansible, Chef, SaltStack, etc.), and modern observability infrastructure (monitoring, metrics and logging)
      •    An appetite for the automation and streamlining of tasks
      •    Willingness to work with JVM-based systems  
      •    Comfortable with shell and scripting languages used in an SRE/Operations engineering context (e.g. Python, Go, Bash, Ruby, etc.)
      •    Good understanding of Linux/Unix fundamentals and debugging skills
      •    Strong English language skills and ability to work independently, as an effective part of a globally distributed team
      •    B.S. or M.S. in Computer Science, related field or equivalent in related work experience. Do not feel you need a degree to apply; we value hands-on experience most of all.

      The Wikimedia Foundation is... 

      ...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

      The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

      U.S. Benefits & Perks*

      • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
      • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, babysitting, continuing education and much more
      • The 401(k) retirement plan offers matched contributions at 4% of annual salary
      • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
      • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
      • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
      • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
      • Telecommuting and flexible work schedules available
      • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
      • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

      *Eligible international workers' benefits are specific to their location and dependent on their employer of record

    • SemanticBits (US only)
      2 weeks ago

      SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformations, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale; an expert in data management, data access (big data, data marts, etc.), programming, and data modeling; and familiar with analytic algorithms and applications (like machine learning).

      Requirements

      • Bachelor’s degree in computer science (or related) and eight years of professional experience
      • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
      • Demonstrable experience engineering scalable data processing pipelines.
      • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON (a short format-wrangling example follows this list).
      • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
      • Experience with Agile methodology, using test-driven development.
      • Excellent command of written and spoken English
      • Self-driven problem solver
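
      As a small illustration of the Python/Spark format wrangling called out above, this hypothetical snippet reads CSV with an explicit schema, extracts a field from an embedded JSON column, and writes Parquet; all paths and columns are invented:

          from pyspark.sql import SparkSession, functions as F
          from pyspark.sql.types import StructType, StructField, StringType, DoubleType

          spark = SparkSession.builder.appName("format-wrangling").getOrCreate()

          schema = StructType([
              StructField("claim_id", StringType(), False),
              StructField("amount", DoubleType(), True),
              StructField("metadata", StringType(), True),  # embedded JSON string
          ])

          claims = spark.read.csv("s3://example-bucket/raw/claims.csv",
                                  header=True, schema=schema)

          # Pull one field out of the embedded JSON metadata.
          claims = claims.withColumn(
              "provider", F.get_json_object("metadata", "$.provider"))

          # Columnar Parquet compresses well and is friendlier to analysts.
          claims.write.mode("overwrite").parquet("s3://example-bucket/clean/claims/")
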
    • 3 weeks ago
      Company Overview
      At Netlify, we're building a platform to empower digital designers and developers to build better, more elaborate web projects than ever before. We're aiming to change the landscape of modern web development.

      Currently, we are looking for a Data Engineer to help us with that mission. If you are someone who enjoys helping people succeed with technology, isn’t afraid to help improve our practices and our product, and is an effective and enthusiastic self-directed learner, read on.

      As the company scales at light speed, data science is critical for all sorts of decisions. We as a company value data science and have a centralized and growing data science team. Our data science team started in 2018, when we were about 20 people as a company! Now we are a company of 90 people and growing fast! The hire will be responsible for setting up and maintaining our data environment, management, and pipeline. Databricks is our primary data environment.

      We’re a venture-backed company, and so far we've raised about $45 million from Andreessen Horowitz, Kleiner Perkins, Bloomberg, and prominent founders and professionals in our space.

      Netlify is a diverse group of incredible talent from all over the world. We’re ~40% women or non-binary, and are composed of about half as many nationalities as we are team members.

      About the role:
      • Determine and construct data schema to support analytical and modeling needs
      • Work on creating and maintaining existing ETL to get data for business users and support our BI dashboard
      • Assist in developing a framework to automate ingestion and integration of structured data from a wide variety of data sources
      • Use tools, processes, and guidelines to ensure data is correct, standardized, and documented.
      • Assist in scaling pipelines to meet performance requirements
      • Collaborate and work alongside data engineers to set up and maintain the production environment to support data science workflow
      • Work on a Data retention strategy for different pipelines/sources and help with implementation
      • Build frameworks to validate data integrity and work with Infra data engineers to improve the testability of our data pipeline (a minimal validation sketch follows this list)
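
      As a sketch of the integrity-validation framework mentioned above (the table, columns, and rules are hypothetical; pandas is used for brevity):

          import pandas as pd

          def validate(df):
              """Return a list of integrity violations (empty means clean)."""
              problems = []
              if df["user_id"].isnull().any():
                  problems.append("user_id contains nulls")
              if df["user_id"].duplicated().any():
                  problems.append("user_id is not unique")
              if (df["signup_date"] > pd.Timestamp.today()).any():
                  problems.append("signup_date contains future dates")
              return problems

          users = pd.read_csv("users_export.csv", parse_dates=["signup_date"])
          issues = validate(users)
          if issues:
              raise ValueError("Integrity check failed: " + "; ".join(issues))
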
      Qualifications

      The data engineer will have an immediate connection to our values. This individual contributor will be extremely flexible and enjoy a “startup” mentality and environment that changes day-to-day.

      • 2+ years of professional engineering experience working with SQL and relational databases to build data pipelines
      • Experience in data extraction, cleaning, mining and maintaining data pipelines
      • Ability to analyze sources of data, build and maintain schemas
      • Experience building frameworks to validate data integrity and improve the testability of our data pipelines
      • Proven ability to keep proper documentation
      • BS in computer science or related background
      Nice to have:
      • Experience with building scalable and reliable data pipelines using big data engine technologies like Spark, AWS EMR, Redshift, etc.
      • Experience with cloud technologies such as AWS or Google Cloud
      • Experience working with BI
      • SaaS Experience
      • Some knowledge of web development
      About Netlify

      Of everything we've ever built at Netlify, we are most proud of our team.

      We believe that empowered, engaged colleagues do their best work. We’ll be giving you the tools you need to succeed and looking to you for suggestions to improve not just in your daily job, but every aspect of building a company. Whether you work from our main office in San Francisco or you are a remote employee, we’ll be working together a lot—pairing, collaborating, debating, and learning. We want you to succeed! About 60% of the company is remote across the globe; the rest are in our HQ in San Francisco.

      To learn a bit more about our team and who we are, make sure to visit our about page.

      Applying

      Not sure you meet 100% of our qualifications? Please apply anyway!

      With your application, please include: A thoughtful cover letter explaining why you would enjoy working in this role and why you’d like to work at Netlify. A resume or short listing of job history. (A link to a LinkedIn profile would be fine.)

      When we receive your complete application with the items above, we’ll get back to you about the next steps.

    • The shift to on-demand expectations is the biggest change in the workplace in decades. Employees want immediate, convenient, and personalized access to the knowledge and services they need to get their job done.

      Unlike traditional employee support tools, askSpoke was built specifically to power the on-demand workplace. askSpoke’s innovative design and AI give employees what they need, where they need it, and when they need it, resulting in happier, more productive workplaces. And IT, HR and CS leaders get time back to work on the things that matter, and credit for the effort they’re putting into bringing their companies into the on-demand future.

      We’re a Series B startup backed by Accel and Greylock, and have raised $28M in funding. Our HQ is located in South Park, San Francisco, and we have colleagues in New York, Nashville, Los Angeles, and other locations!

      We are looking for exceptional engineers to join our team and implement various ML and NLP technologies. The work spans many disciplines: Information Retrieval, NLP, ML, and deep learning.

      Responsibilities:

      • Lead design, implementation, evaluation, and productionization of ML/NLP/IR projects that are tightly coupled with the product.

      • Implement new features and algorithms for our search and conversation engine (a toy retrieval sketch follows this list).

      • Handle data collection and data annotation tasks.

      • Write high-quality code, with emphasis on maintainability, readability, and testing.

      • Generate intellectual property -- write patents and publish papers.

      • Review technical designs and code. Mentor and onboard new engineers.

      • Willingness to learn and work on other parts of our platform stack.
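
      To make the search-engine item above concrete, here is a toy retrieval core using TF-IDF and cosine similarity with scikit-learn (one of the packages listed below); the documents and query are invented:

          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.metrics.pairwise import cosine_similarity

          documents = [
              "How do I reset my VPN password?",
              "Requesting a new laptop for a new hire",
              "Office wifi keeps disconnecting",
          ]

          vectorizer = TfidfVectorizer(stop_words="english")
          index = vectorizer.fit_transform(documents)  # document-term matrix

          def search(query, top_k=2):
              # Rank documents by cosine similarity to the query vector.
              scores = cosine_similarity(vectorizer.transform([query]), index)[0]
              ranked = scores.argsort()[::-1][:top_k]
              return [(documents[i], float(scores[i])) for i in ranked]

          print(search("vpn password reset"))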

      Requirements:

      • Track record (3+ years professionally or 5+ years academically) of building ML and/or data pipeline systems.

      • Solid understanding of practical and theoretical aspects of standard ML and NLP concepts.

      • Experience with ML and NLP software packages and tools (scikit-learn, TensorFlow, Torch, spaCy, NLTK).

      • Proficiency in one or more programming languages, including but not limited to Python, JavaScript, and C++.

      • Familiarity with modern data storage, messaging, and processing tools (MongoDB, Cassandra, Redis, Spark, Elastic).

      • Experience with high-pace test-driven development and continuous integration environments.

      • Ability to balance priorities while working alone or within a team.

      Benefits:

      • Competitive salary and meaningful equity in a fast-growing start-up.

      • Catered lunches every day (dinner is with family!). Fully stocked kitchen with snacks and drinks.

      • Comprehensive health, vision and dental insurance for you and your dependents.

      • 401(k) program.

      • Gym membership of your choosing.

      • Flexible vacation policy and paid parental leaves.

      • Commuting benefits include transport allowance or parking in SF.

      • If there’s something important to you that’s missing, we'll add it!

      We're building a talented team where everyone has the opportunity to have immediate impact. In addition to improving how companies of all sizes handle their everyday work, you will also help form the cultural foundation of the askSpoke team. If this sounds like your kind of challenge, we want to hear from you!

    • TileDB (US or Greece)
      1 month ago

      We are looking for a Python-focused software engineer to build and enhance our existing APIs and integrations with the Scientific Python ecosystem. TileDB’s Python API (https://github.com/TileDB-Inc/TileDB-Py) wraps the TileDB core C API, and integrates closely with NumPy to provide zero-copy data access. You will build and enhance the Python API through interfacing with the core library; build new integrations with data science, scientific, and machine learning libraries; and engage with the community and customers to create value through the use of TileDB.
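
      For a taste of that NumPy integration, a minimal round trip with TileDB-Py might look like the following; this is a sketch assuming the tiledb.from_numpy and tiledb.open helpers, so check the TileDB-Py docs for current signatures:

          import numpy as np
          import tiledb

          data = np.arange(12, dtype=np.float64).reshape(3, 4)

          # Create a dense TileDB array on disk from a NumPy array.
          tiledb.from_numpy("my_dense_array", data)

          # Read a slice back as a NumPy array.
          with tiledb.open("my_dense_array") as A:
              print(A[1:3, 0:2])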

      Location

      Our headquarters are in Cambridge, MA, USA and we have a subsidiary in Athens, Greece. However, you will have the flexibility to work remotely as long as your residence is in the USA or Greece. US candidates must be US citizens, whereas Greek candidates must be Greek or EU citizens.

      Expectations

      In your first 30 days, you will familiarize yourself with TileDB, the TileDB-Py API and the TileDB-Dask integration. After 30 days, you will be fully integrated in our team. You’ll be an active contributor and maintainer of the TileDB-Py project, and ready to start designing and implementing new features, as well as engaging with the Python and Data Science community.

      Requirements

      • 5+ years of experience as a software engineer
      • Expertise in Python and experience with NumPy
      • Experience interfacing with the CPython API, and Cython or pybind11
      • Experience with Python packaging, including binary distribution
      • Experience with C, C++, Rust, or a similar systems-level language
      • Experience with distributed computation using Dask, Spark, or a similar system
      • Experience with a machine learning library (e.g. scikit-learn, TensorFlow, Keras, PyTorch, Theano)
      • Experience with Amazon Web Services or a similar cloud platform
      • Experience with dataframe-focused systems (e.g. Arrow, Pandas, data.frame, Vaex)
      • Experience with technical data formats (e.g. Parquet, HDF5, VCF, DICOM, GeoTIFF)
      • Experience with other technical computing systems (e.g. R, MATLAB, Julia)

      Benefits

      • Competitive salary and stock options
      • 100% medical and dental insurance coverage (for you and your dependents!)
      • Paid parental leave
      • Paid time off (vacation, sick & public holidays)
      • Flexible time off & flexible hours
      • Flexibility to work remotely (anywhere in the US or Greece)

      TileDB, Inc. is proud to be an Equal Opportunity Employer building a diverse and inclusive team.

    • Doximity is transforming the healthcare industry. Our mission is to help doctors be more productive, informed, and connected. As a software engineer, you'll work within cross-functional delivery teams alongside other engineers, designers, and product managers in building software to help improve healthcare.  

      Our team brings a diverse set of technical and cultural backgrounds and we like to think pragmatically in choosing the tools most appropriate for the job at hand.

      About Us

      Here's How You Will Make an Impact

      • Improve the performance and scalability of services, optimize our REST and GraphQL APIs
      • Address security concerns and proficiently maintain our application stack
      • Troubleshoot issues across the whole stack, such as high load, full memory, or network problems, and come up with temporary or long-term solutions based on the root cause
      • Hands-on maintenance on our Ruby on Rails and Go (Golang) applications
      • Increase our automated test coverage and deployment infrastructure robustness 
      • Manage infrastructure using Chef and Terraform
      • Active involvement in design, implementation, and maintenance of the development, staging, and production infrastructure and services your team is responsible for
      • Create concise postmortems in the event of an outage
      • Write and maintain run-books for other engineers to leverage
      • Ensure proper security, monitoring, alerting, and reporting for the applications your team is responsible for
      • Collaborate with other engineers to make sound infrastructure decisions, improve workflow, and deploy applications ready for production
      • Monitor capacity, cost and plan for upgrades
      • Participate in an on-call rotation

      About you

      • You are a Ruby engineer at heart, very familiar and passionate about the Rails ecosystem
      • You are knowledgeable of memory and CPU profiling tools to help adjust Ruby jobs and processes to use resources effectively
      • You have experience working with Terraform and Chef (or similar tooling) either in a DevOps or product support capacity
      • You have experience deploying, configuring, and maintaining NGINX
      • You are proficient with Unix, AWS, and Git
      • You are self-motivated and able to manage yourself and your own queue
      • You are a problem solver with a passion for simple, clean, and maintainable solutions
      • You agree that concise and effective written and verbal communication is a must for a successful team
      • You are able to maintain a minimum of 5 hours overlap with 9:30 AM to 5:30 PM Pacific time
      • You can dedicate about two weeks per year for travel to company events

      Benefits & Perks

      • Generous time off policy
      • Comprehensive benefits including medical, vision, dental, Life/ADD, 401k, flex spending accounts, commuter benefits, equipment budget, and continuous education budget
      • Pre-IPO stock incentives
      • .. and much more! For a full list, see our career page

      More info on Doximity

      We’re thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company’s Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We’re driven by the goal of reducing inefficiencies in our $3.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people’s lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We’re growing steadily, and there’s plenty of opportunity for you to make an impact.

      Doximity is proud to be an equal opportunity employer, and committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.

  • Marketing / Sales (1)

    • As the Director of Acquisition Marketing at Skillshare, you’ll own our digital paid marketing efforts and be a leader within the marketing team. Reporting to the Head of Growth Marketing and overseeing a multi-million dollar budget, you’ll be charged with scaling and optimizing programs that grow, engage, and retain our vibrant global user base.

      What you'll do:
      • Manage acquisition and retention efforts (inclusive of strategy, execution and testing) with a multi-million dollar budget across paid search, paid social, affiliate marketing, referrals, influencer marketing, and podcasts
      • Expand into new digital channels, platforms, audiences and content through rapid testing
      • Build upon current channel analytics to track and report performance, including budget tracking and forecasting across paid channels
      • Establish CAC goals, as well as manage traffic, conversion and LTV
      • Work effectively with Engineering, Product and Finance teams to execute paid channel priorities
      • Work with the influencer team to optimize influencer / agency partnerships and align on strategy, budgets, goals, and testing plans
      • Manage and develop a team of three direct reports
      • Manage a growing network of contractors and freelancers
      • Effectively communicate results and analysis to relevant stakeholders
      Why we're excited about you:
      • You have a proven track record of driving user acquisition through strategy, budgeting, forecasting, and measuring performance across a variety of paid channels
      • You have strong business acumen with the ability to think strategically and creatively, evidenced by having effectively managed multi-million dollar budgets
      • You have deep knowledge of attribution and ROI analyses, especially for complex funnels
      • You’re highly analytical and proficient with business intelligence tools such as Chartio, Looker, etc.
      • You have a pulse on digital marketing trends – and know how to deliver compelling recommendations and strategies for testing and measuring new paid channels
      • You have experience managing agency and third-party relationships
      • You’re a manager with team-building experience, and a track record of attracting and retaining great talent
      • You’re proactive, collaborative, organized, and a natural problem solver – you’re excited by growth stage environments and not happy waiting for someone to tell you exactly what to do
      • You’re a persuasive presenter with excellent verbal and written communication skills
      Why you're excited about us:
      • Impact: You’ll be directly responsible for creating scalable, robust acquisition strategies that have a major impact on the bottom line of our business.
      • Growth: You’ll get to develop and grow a team, and work with them to drive the next phase of paid marketing for Skillshare.
      • Our team: We have a passionate, talented team that is a lot of fun to work with.
      • Our mission: We’re doing work that matters – connecting lifelong learners around the world and empowering them to pursue their creativity.
      • Flexibility: We believe that doing your best work means living a full life. That means different things for everyone, so we optimize for trust, invest to support remote teams, have an unlimited vacation policy (with a required minimum!), and encourage work-life balance.
      About Skillshare:

      Skillshare is an online learning community for creatives. We have thousands of inspiring classes for creative and curious people, and millions of members who come together to find inspiration and take the next step in their creative journey. We are backed by Union Square Ventures, Spark Capital, Amasia, Spero Ventures, and Burda Principal Investments.

      Skillshare is committed to building a diverse team that reflects a variety of backgrounds, perspectives, and skills. We’re proud to be recognized as a top place to work by BuiltinNYC and Crain’s, one of the five best places to work for women by Bpeace, and a top-rated workplace for dads by Fatherly. We work to ensure a consistent interview process, fair compensation, and inclusive work environment for all.
  • All others (3)

    • Auth0 (North America)
      1 week ago
      Auth0 is a pre-IPO unicorn. We are growing rapidly and looking for exceptional new team members who will help take us to the next level. One team, one score. 

      We never compromise on identity. You should never compromise yours either. We want you to bring your whole self to Auth0. If you’re passionate, practice radical transparency to build trust and respect, and thrive when you’re collaborating, experimenting and learning – this may be your ideal work environment.  We are looking for team members that want to help us build upon what we have accomplished so far and make it better every day.  N+1 > N.

      Auth0 is looking for a Data Scientist to join our Growth Marketing team. This role will blend research and application to develop and apply data-centric models that support all aspects of marketing at Auth0. Additionally, this role will collaborate directly with analysts and business owners in these functional areas to understand their challenges and develop practical solutions. If you love to work with big data, scaled optimization, probabilistic inference, and machine learning, this is the right role for you! 

      The Growth team is an innovative and forward-thinking team of analysts and engineers working to impact Auth0’s marketing funnel and user engagement. This is an individual contributor role, but you will act with substantial independence in both technical model-building and stakeholder collaboration.
      This role can be based from our Bellevue, WA office or from a remote home office anywhere in North America.

      What You Will Do
      • Work with Growth Marketers to understand best practices, propose strategic ideas, and turn them into initiatives for driving growth
      • Define and regularly monitor KPIs, success metrics, and other analytics to maximize our conversion rate across our digital channels
      • Build models, such as LTV, and help drive strategic and operational changes across all marketing initiatives
      • Collaborate with internal marketing and other cross-company teams to understand challenges and create data-focused solutions
      • Stay abreast of technology trends and best practices in data modeling 
      • Apply creative analytical problem-solving skills to a wide variety of marketing questions to deepen our understanding of campaign effectiveness, customer journeys, and go-to-market performance
      • Collaborate with leadership in marketing and cross-functionally to provide a data-driven viewpoint on strategic decisions
      What You Bring
      • 5+ years of relevant analytics/data science experience in a technology company with at least 3 years leading innovative projects dealing with applications of analytics/data science
      • Deep understanding of multi-channel digital marketing
      • Strong communication ability across multi-functional teams for technical and non-technical audiences
      • Expertise in emerging and existing Data Science technologies and techniques that can enhance Auth0’s marketing efficiency
      • Experience designing data and computational infrastructure that supports near-real-time model execution, machine learning, and large-scale batch processing, including data pipelines, training environments, and reduced production runtime
      • Understanding of computer science fundamentals, data structures, and algorithms. In-depth knowledge of one or more programming languages (including Java, C/C++, Python)
      • Experience in specialized areas such as Optimization, NLP, Probabilistic Inference, Machine Learning, Recommendation Systems
      • Proven experience with large data sets and related technologies, e.g., Hadoop/Spark. Knowledge of SQL is needed
      • Experience with LTV/CAC models, marketing ROI and performance management, attribution models, and digital and paid media analysis (a minimal Spark sketch of this kind of channel-level analysis follows this list)
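
      As a hedged illustration of the Spark work mentioned above, here is a minimal PySpark sketch of channel-level spend-versus-signup analysis; the S3 paths and the channel/cost schema are hypothetical.

        # Minimal PySpark sketch: per-channel acquisition cost (hypothetical schema).
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("channel_cac").getOrCreate()

        spend = spark.read.parquet("s3://example-bucket/marketing/spend/")
        conversions = spark.read.parquet("s3://example-bucket/marketing/conversions/")

        cac = (
            spend.groupBy("channel")
            .agg(F.sum("cost").alias("total_cost"))
            .join(conversions.groupBy("channel").agg(F.count("*").alias("signups")),
                  on="channel")
            .withColumn("cac", F.col("total_cost") / F.col("signups"))
        )
        cac.orderBy("cac").show()
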
      Auth0’s mission is to help developers innovate faster. Every company is becoming a software company and developers are at the center of this shift. They need better tools and building blocks so they can stay focused on innovating. One of these building blocks is identity: authentication and authorization. That’s what we do. Our platform handles 2.5B logins per month for thousands of customers around the world. From indie makers to Fortune 500 companies, we can handle any use case.

      We like to think that we are helping make the internet safer. We have raised $210M to date and are growing quickly. Our team is spread across more than 35 countries and we are proud to continually be recognized as a great place to work. Culture is critical to us, and we are transparent about our vision and principles.

      Join us on this journey to make developers more productive while making the internet safer!
    • Auth0 is a pre-IPO unicorn. We are growing rapidly and looking for exceptional new team members who will help take us to the next level. One team, one score. 

      We never compromise on identity. You should never compromise yours either. We want you to bring your whole self to Auth0. If you’re passionate, practice radical transparency to build trust and respect, and thrive when you’re collaborating, experimenting and learning – this may be your ideal work environment.  We are looking for team members that want to help us build upon what we have accomplished so far and make it better every day.  N+1 > N.

      The Data Scientist will help build, scale and maintain the entire data science platform. The ideal candidate will have deep technical understanding, hands-on experience building Machine Learning models that yield valuable insights, and a record of promoting a data-driven culture across the organization. They will not hesitate to wrangle data when necessary, will understand the business objectives, and will have a good understanding of the entire data stack.

      This position plays a key role in data initiatives and analytics projects, and in influencing key stakeholders with critical business insights. You should be passionate about continuous learning, and about experimenting with, applying, and contributing to cutting-edge open source Data Science technologies.

      Responsibilities

        • Use Python and the vast array of AI/ML libraries to analyze data and build statistical models to solve specific business problems.

        • Improve upon existing methodologies by developing new data sources, testing model enhancements, and fine-tuning model parameters.

        • Collaborate with researchers, software developers, and business leaders to define product requirements and provide analytical support.

        • Directly contribute to the design and development of automated selection systems.

        • Build customer-facing reporting tools to provide insights and metrics which track system performance.

        • Communicate verbally and in writing with business customers and the leadership team at various levels of technical knowledge, educating them about our systems and sharing insights and recommendations

      Basic Qualifications

        • Bachelor's degree in Statistics, Applied Math, Operations Research, Engineering, Computer Science, or a related quantitative field

        • 2 years of working experience as a Data Scientist

        • Proficient with data analysis and modeling software such as Spark, R, Python, etc.

        • Proficient with scripting languages such as Python and data manipulation/analysis libraries such as Scikit-learn and Pandas for analyzing and modeling data (see the short sketch after this list).

        • Experienced in using multiple data science methodologies to solve complex business problems.

        • Experienced in handling large data sets using SQL and databases in a business environment.

        • Excellent verbal and written communication.

        • Strong troubleshooting and problem solving skills.

        • Thrive in a fast-paced, innovative environment.
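
      For the Scikit-learn/Pandas point above, here is a minimal sketch of fitting and evaluating a simple classifier; the dataset is a scikit-learn sample, not company data.

        # Minimal scikit-learn sketch: scale features, fit logistic regression, report AUC.
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # Pandas DataFrame/Series
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
        print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))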

      Preferred Qualifications

        • Master's degree in Statistics, Applied Math, Operations Research, Engineering, Computer Science, or a related quantitative field.

        • 2+ years’ experience as a Data Scientist.

        • Fluency in a scripting or computing language (e.g., Python, Scala, C++, Java)

        • Superior verbal and written communication skills with the ability to effectively advocate technical solutions to scientists, engineering teams and business audiences.

        • Experienced in writing academic-style papers presenting both the methodologies used and the results of data science projects.

        • Demonstrable track record of dealing well with ambiguity, self-motivating, prioritizing needs, and delivering results in a dynamic environment.

        • Combination of deep technical skills and business savvy to interface with all levels and disciplines within our organization and our customers’ organizations.

      Skills and Abilities

        • BA/BS in Computer Science, a related technical field, or equivalent practical experience.

        • At least 3 years of relevant work experience

        • Ability to write, analyze, and debug SQL queries.

        • Exceptional problem-solving and analytical skills.

        • Fluent in implementing logistic regression, random forest, XGBoost, Bayesian, and ARIMA models in Python/R (a short ARIMA sketch follows this list)

        • Experience in user path navigation with Markov Chains and STAN Bayesian analysis for A/B testing

        • Familiarity with Sentiment Analysis (NLP) and LSTM AI models

        • Experience in the full AI/ML life cycle: model development, training, deployment, testing, refining, and iterating.

        • Experience in Tableau, Apache Superset, Looker, or similar BI tools.

        • Knowledge of Amazon Redshift, Snowflake, or similar databases.
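
      For the ARIMA point above, here is a minimal statsmodels sketch on synthetic data; the (1, 1, 1) order is arbitrary here, and real work would include diagnostics and order selection.

        # Minimal statsmodels sketch: fit ARIMA(1,1,1) to a toy series and forecast.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        series = np.cumsum(rng.normal(size=200))  # random-walk-like synthetic data

        result = ARIMA(series, order=(1, 1, 1)).fit()
        print(result.forecast(steps=5))  # next five forecast values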

      Preferred Locations:

        • #US; #AR;
      Auth0’s mission is to help developers innovate faster. Every company is becoming a software company and developers are at the center of this shift. They need better tools and building blocks so they can stay focused on innovating. One of these building blocks is identity: authentication and authorization. That’s what we do. Our platform handles 2.5B logins per month for thousands of customers around the world. From indie makers to Fortune 500 companies, we can handle any use case.

      We like to think that we are helping make the internet safer. We have raised $210M to date and are growing quickly. Our team is spread across more than 35 countries and we are proud to continually be recognized as a great place to work. Culture is critical to us, and we are transparent about our vision and principles.

      Join us on this journey to make developers more productive while making the internet safer!
    • Kalepa is looking for Data Scientists to lead efforts at the intersection of machine learning and big data engineering to solve some of the biggest problems in commercial insurance.

      Data scientists at Kalepa will be turning vast amounts of structured and unstructured data from many sources (web data, geolocation, satellite imaging, etc.) into novel insights about behavior and risk. You will be working closely with a small team in designing, building, and deploying machine learning models to tackle our customers’ questions.

      Kalepa is a New York-based, VC-backed startup building software to transform and disrupt commercial insurance. Nearly one trillion dollars ($1T) are spent globally each year on commercial insurance across small, medium, and large enterprises. However, the process for estimating the risk associated with a given business across various perils (e.g. fire, injury, malpractice) still relies on inefficient and inaccurate manual forms or on outdated and sparse databases. This information asymmetry leads to a broken set of economic incentives and a poor experience for businesses and insurers alike. By combining cutting-edge data science, enterprise software, and insurance expertise, Kalepa is delivering precision underwriting at scale, empowering every commercial insurance underwriter to be as effective and efficient as possible. Kalepa is turning real-world data into a complete understanding of risk.

      Kalepa is led by a strong team with experiences from Facebook, APT (acquired by Mastercard for $600M in 2015), the Israel Defense Forces, MIT, Berkeley, and UPenn.

      About you:

      ● You want to design a flexible analytics, data science, and AI framework to transform the insurance industry

      ● You have demonstrated success in delivering analytical projects, including structuring and conducting analyses to generate business insights and recommendations

      ● You have in-depth understanding of applied machine learning algorithms and statistics

      ● You are experienced in Python and its major data science libraries, and have deployed models and algorithms in production

      ● You have a good understanding of SQL and NoSQL databases

      ● You value open, frank, and respectful communication

      ● You are a proactive and collaborative problem solver with a “can do” attitude

      ● You have a sincere interest in working at a startup and scaling with the company as we grow

      As a plus:

      • You have experience in NLP and/or computer vision

      • You have familiarity with Spark, Hadoop, or Scala

      • You have experience working with AWS tools (a small example follows this list)
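
      As a small example of the AWS tooling point above: listing raw input files staged in S3 with boto3. The bucket and prefix are hypothetical, and credentials are assumed to come from the environment.

        # Minimal boto3 sketch: enumerate raw documents staged in S3.
        import boto3

        s3 = boto3.client("s3")
        response = s3.list_objects_v2(Bucket="example-risk-data", Prefix="web_crawl/")
        for obj in response.get("Contents", []):
            print(obj["Key"], obj["Size"])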

      What you’ll get

      ● Work with an ambitious, smart, and fun team to transform a $1T global industry

      ● Ground-floor opportunity: build the foundations for the product, team, and culture alongside the founding team

      ● Wide-ranging intellectual challenges working with large and diverse data sets, as well as with a modern technology stack

      ● Competitive compensation package with a significant equity component

      ● Full benefits package, including excellent medical, dental, and vision insurance

      ● Unlimited vacation and flexible remote work policies

      ● Continuing education credits and a healthy living / gym monthly stipend

      [IMPORTANT NOTE]: Salary ranges are for New York-based employees. Compensation for remote roles will be adjusted according to the cost of living and market in the specific geography.