Director of Data and Analytics

GitLab


Posted: 08/03/2019 10:21:23

Job type: Full-time

Hiring from: Americas

Category: All others


The Director of Data and Analytics is an organizational leader with a passion for data and analysis and a clear vision for how data can transform company strategy. The role requires the hands-on leadership needed to grow a team from the startup phase into a mature organization. The director is responsible for scaling the data function in an environment built primarily on cloud-based SaaS systems, must account for global operations, and must be able to manage a remote, geographically dispersed team. The director ensures that the organization always approaches its work with the goal of continuous improvement.

Responsibilities

  • Drive the scope and effectiveness of the data and analysis function at GitLab.

  • Ensure the Company’s cloud and on-premise data is centralized into a single data lake and modeled to support data analysis requirements from all functional groups of the Company.

  • Create a common data framework so that all company data can be analyzed in a unified manner.

  • Work with the product, operations, and executive management teams to create a data-enabled user journey.

  • Create and execute a plan to develop and mature our ability to measure and optimize usage growth, mapped to our user journey.

  • Ensure that all transactional systems can communicate with the data warehouse and that production data adheres to a unified data model.

  • Develop a roadmap for the data and analytics function that clearly defines ownership and responsibility between the central data function and the functional groups.

  • Collaborate with all functions of the company to ensure data needs are addressed and the required data is modeled and available to analysts and end-users.

  • Build a multi-modal service model that meets the non-homogeneous needs of our functional groups, from full-service to self-service, and across our data stack.

  • Work with product, operations, and executive management to maintain a holistic vision of the future of data at GitLab, and help leadership plan for any changes in our data strategy or needs; in-product analytics is one example.

  • This position reports directly to the CFO and works closely with the executive team to develop an organization plan that addresses company-wide analytics resources in either a direct-report or matrix model.

Requirements

  • Postgraduate work or equivalent experience (Master's or PhD) in a quantitative field such as math, physics, computer science, or statistics.

  • Minimum of 7 years' experience in a senior leadership position managing an analytics team.

  • Experience with a high growth company using on-premise tools and on-demand (SaaS) transactional systems.

  • Hands-on experience with Python, SQL, and relational databases. Experience with Snowflake is a plus.

  • Have previously led a corporate data platform project.

  • Experience with open source data & analytics tools.

  • Experience working with multiple executive-level business stakeholders.

  • Must have experience with analytics and data-visualization tools such as Periscope.

  • Share and work in accordance with our values.

  • Must be able to work in alignment with Americas time zones.

  • Successful completion of a background check.

Hiring Process 

  • Candidates for this position can expect the hiring process to follow the order below. Please keep in mind that candidates can be declined from the position at any stage of the process. To learn more about someone who may be conducting the interview, find their job title on our team page.

  • Selected candidates will be invited to schedule a screening call with our Global Recruiters.

  • Next, candidates will be invited to schedule a first interview with our CFO.

  • Candidates will then be invited to schedule a second round of interviews with members of the e-group, finance, and data teams. Additional details about our process can be found on our hiring page.

Compensation

To view the full job description and its compensation calculator, view our handbook. The compensation calculator can be found towards the bottom of the page.


Similar Jobs

  • 🇫🇷 - This listing is for a French-speaking candidate (original posting in French)

    Mission 🇫🇷

    The main mission is to drive and implement customer acquisition across all channels. This includes:

    - Developing the overall customer-acquisition strategy and its rollout across all channels.

    - Setting up the analytics tools to track the performance of each acquisition channel.

    - Managing current partnerships, seeking out new partners, and defining partnership models, in France and internationally.

    - Implementing various direct customer-acquisition methods (LinkedIn, emailing, public tenders, etc.).

    - Driving customer acquisition from our website (via SEO).

    Profile 🇫🇷

    - You have initial experience in the software or data world.

    - You have basic technical knowledge, without necessarily practicing it hands-on (but you can clearly explain SQL, REST APIs, the Python language, AWS).

    - You are strongly oriented toward execution and fast feedback/analysis/iteration loops, always seeking to improve and to dig deeper into each topic.

    - You are comfortable with a wide range of tools (e.g. Hubspot, Canva, Mailchimp, Zapier, Asana, etc.).

    - You pay close attention to your productivity (e.g. no meetings without clear objectives, quick pros/cons analysis of a topic, researching and testing new tools).

    You have a good written level in French and English.

    In practice 🇫🇷

    - We are a fully remote company (i.e. no offices). Communication happens via email / Google Drive / Slack / Zoom. This fits our philosophy of respecting work-life balance as much as possible (this setup is ideal for life with children, or for pursuing an activity such as sports or music).

    - So there is no fixed location: you work from wherever you like.

    - Making this model work relies on trust, transparency, and a high degree of autonomy.

    - Part-time work is possible.

    - Package: base salary + sales commissions.

    🇫🇷 Apply at [email protected] - thank you!


  • Kalepa is looking for Data Scientists to lead efforts at the intersection of machine learning and big data engineering in order to solve some of the biggest problems in commercial insurance.

    Data scientists at Kalepa will be turning vast amounts of structured and unstructured data from many sources (web data, geolocation, satellite imaging, etc.) into novel insights about behavior and risk. You will be working closely with a small team in designing, building, and deploying machine learning models to tackle our customers’ questions.

    Kalepa is a New York based, VC backed, startup building software to transform and disrupt commercial insurance. Nearly one trillion ($1T) dollars are spent globally each year on commercial insurance across small, medium, and large enterprises. However, the process for estimating the risk associated with a given business across various perils (e.g. fire, injury, malpractice) is still reliant on inefficient and inaccurate manual forms or outdated and sparse databases. This information asymmetry leads to a broken set of economic incentives and a poor experience for both businesses and insurers alike. By combining cutting edge data science, enterprise software, and insurance expertise, Kalepa is delivering precision underwriting at scale – empowering every commercial insurance underwriter to be as effective and efficient as possible. Kalepa is turning real-world data into a complete understanding of risk.

    Kalepa is led by a strong team with experiences from Facebook, APT (acquired by Mastercard for $600M in 2015), the Israel Defense Forces, MIT, Berkeley, and UPenn.

    About you:

    ● You want to design a flexible analytics, data science, and AI framework to transform the insurance industry

    ● You have demonstrated success in delivering analytical projects, including structuring and conducting analyses to generate business insights and recommendations

    ● You have in-depth understanding of applied machine learning algorithms and statistics

    ● You are experienced in Python and its major data science libraries, and have deployed models and algorithms in production

    ● You have a good understanding of SQL and NoSQL databases

    ● You value open, frank, and respectful communication

    ● You are a proactive and collaborative problem solver with a “can do” attitude

    ● You have a sincere interest in working at a startup and scaling with the company as we grow

    As a plus:

    • You have experience in NLP and/or computer vision

    • You have familiarity with Spark, Hadoop, or Scala

    • You have experience working with AWS tools

    What you’ll get

    ● Work with an ambitious, smart, and fun team to transform a $1T global industry

    ● Ground floor opportunity – opportunity to build the foundations for the product, team, and culture alongside the founding team

    ● Wide-ranging intellectual challenges working with large and diverse data sets, as well as with a modern technology stack

    ● Competitive compensation package with a significant equity component

    ● Full benefits package, including excellent medical, dental, and vision insurance

    ● Unlimited vacation and flexible remote work policies

    ● Continuing education credits and a healthy living / gym monthly stipend

    [IMPORTANT NOTE]: Salary ranges are for New York based employees. Compensation for remote roles will be adjusted according to the cost of living and market in the specific geography.

  • Description
    Students enroll in Thinkful courses to gain the valuable technical and professional skills needed to take them from curious learners to employed software developers. As an Immersive Course Instructor, you will deliver high-quality live workshop content based on existing curriculum, preparing students to successfully transition careers. As an instructor for the specializations part of the program, you will deliver high-quality live workshops and help students on topics such as Time Series Analysis, NLP, Deep Learning (using TensorFlow and Keras), or Big Data (using Spark, AWS, and Hadoop).

    In addition to working directly with students, Instructors are expected to maintain an environment of regular, candid feedback with the Educator Experience team, and to stay on top of important updates via meetings, email, and Slack. Ideal candidates for this team are highly coachable, display genuine student advocacy, and are comfortable working in a complex, rapidly changing environment. 

    Responsibilities:

    • Delivers high-quality workshops based on the curriculum materials, and provides live coding demos when appropriate, supplementing written materials and content to give students the skills and knowledge to get their first developer job

    • Maintains and updates the daily and weekly student syllabus which outlines the required homework and assignments, and deadlines for assessments and projects 

    • Provides up to 2 hours daily of on-demand video and chat support for students as they move through the program assignments 

    • Spends up to 4 hours a day prepping for workshops and updating course materials

    • Works with the other Format Leads for engagement formats (Mentor Sessions, Group Sessions, Grading, Technical Coaching, Mock Interviews/Assessments) to ensure a consistent experience for students in immersive courses

    • Provides constructive feedback to the Instructional Design team on improvements to the course materials and curriculum based on student experience with the materials

    Requirements:

    • Strong expertise with Time Series Analysis, NLP, Deep Learning (using TensorFlow and Keras), or Big Data (using Spark, AWS, and Hadoop).

    • Strong expertise with Python, SQL, statistics, supervised and unsupervised learning

    • Expertise with Data Structures and Algorithms, and comfort explaining these topics 

    • Ability to run capstone projects and help students during these weeks from idea to implementation

    • Ability to explain complicated topics clearly and without jargon

    • Strong written and verbal communication skills

    • High level of detail orientation and an exceptional work ethic

    • Enjoy working with people, not just putting your head down and working

    • Must have a reliable, high-speed Internet connection

    • Minimum 3-4 years of professional data science experience

    • Teaching experience, especially in a remote or online class, is a plus
