Data Engineer

Brave Software


2 weeks ago


Job type: Full-time

Hiring from: US or Canada

Category: All others


We are looking for an experienced data engineer to be responsible for the design, development and maintenance of large data collection and processing systems at Brave. This position will focus on scaling our backend systems to handle the increased load of data collection, processing and presentation.

Responsibilities:

  • Work primarily with the statistics and engineering teams, with day-to-day communication with the product and marketing teams.

  • Design and develop AWS systems to collect and manage business events at scale.

  • Respond to requests from the product, engineering, and marketing teams for timely data analysis.

  • Participate in multi-team planning for future data collection needs.

Requirements:

  • Strong AWS skills with extensive knowledge of the data collection, storage, and processing systems that AWS provides. Experience building ETL pipelines at scale (a toy event-collection sketch follows this list).

  • Familiarity with Redshift, Kinesis, Athena, Glue, Data Pipeline, MongoDB, PostgreSQL.

  • Strong JavaScript and SQL skills.

  • Extensive knowledge of modern open-source software development practices and tools, e.g. Git, GitHub, unit and integration testing, pull requests, and code review.
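
For illustration only, here is a minimal Python sketch of the kind of event collection this role describes; the stream name, region, and event shape are invented for the example and are not Brave's actual pipeline:

    # Minimal sketch: push a business event into a Kinesis stream.
    # Stream name, region, and event schema are hypothetical.
    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-west-2")

    def put_event(event_type, payload):
        # Serialize the event and write it to the (hypothetical) stream.
        record = {"type": event_type, "payload": payload}
        kinesis.put_record(
            StreamName="business-events",            # hypothetical name
            Data=json.dumps(record).encode("utf-8"),
            PartitionKey=event_type,                 # shard by event type
        )

    put_event("download", {"platform": "macOS", "channel": "release"})

A consumer (Kinesis Data Firehose, a Lambda, or a custom application) would then land such records in S3 or Redshift for the analysis work mentioned above.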

Additional qualities:

  • Strives to deliver clean, maintainable and testable code

  • Has a passion for user privacy

  • Is open to learning new languages and frameworks

  • Has a soft spot for JavaScript

Benefits:

  • Competitive salary

  • 4 weeks (20 days) of paid vacation per year

  • Excellent medical coverage

  • Generous 401k plan

  • Stock option grant

  • Travel and conference budgets

  • Commuter benefits (on-site only)

  • Hip office in the SoMa neighborhood of SF

Candidates must be legally authorized to work in the United States or Canada.

Please mention that you come from Remotive when applying for this job.


Similar jobs

  • Snowplow Analytics (US only)
    Yesterday

    Ideally located in Eastern or Central Time

    It’s a hugely exciting time here at Snowplow. Over the last 7 years, we’ve grown to a brilliant 50-person team that is spread out over 14 countries, with nearly 150 customers and many, many more open source users, and we’re heading for even bigger and better things, fast.

    At Snowplow, we want to empower people to do transformative things using data. We work with companies from around the world to help them better understand their customers and products, and develop a truly data-driven culture. 

    With an ever-expanding US customer base, we are now looking for a US-based Data Consultant to join our Data Strategy team. You’ll work remotely with the opportunity to travel onsite to our customers, as well as to Europe to meet with the rest of the team & company.

    We would love to hear from you if you are excited about working at the interface of technology and people, translating business requirements into technical specifications and then implementing these. 

    The Opportunity

    As a Data Consultant, you’ll be working with companies of all sizes from across different industries on building out their data capability. Through workshops and co-development sessions you’ll understand their use case and business model, design a Snowplow instrumentation that fits it and then work with their data team to implement it.

    The role combines a mixture of tactical tasks, such as writing data models and implementing tracking, with more strategic responsibilities, such as helping companies understand what data they need and how they should consume it. As you gain experience and progress within the role, your focus as part of the Data Strategy team broadens and you’ll provide value to our customers throughout their entire Snowplow journey.

    The environment you’ll be working in

    Our company values are Transparency, Honesty, Ownership, Inclusivity, Empowerment, Customer-centricity, Growth and Technical Excellence. These aren’t just words we plucked out of thin air; we came up with them together as a company, and we are continually looking for new ways to weave them into our day-to-day operations. From flexible hours and working locations to the way we give feedback, we’re passionate about building a company that supports both company and individual development.

    What you’ll be doing

    Helping companies with their data strategy. You’ll help companies architect their cloud data stack and make the organisational changes needed to become a data-driven business. You’ll also work with them to discover new data use cases, and plan out how to action them.

    Mapping business requirements to data solutions. You’ll work with companies on determining what data they need to collect, and how it will be used. Through holding workshops (both onsite and remote), you’ll gain a deep understanding of our customers’ industries and business models, and will be able to best advise them on how to utilise our technology.

    Delivering excellent technical implementations. From designing tracking to developing data models, you’ll work with our customers to ensure they are getting the most value out of our technology. 

    Becoming a Snowplow expert. Through working with the Snowplow team and our customers, as well as attending conferences and events, you’ll become a true Snowplow expert, and develop a thorough understanding of the wider data landscape.

    What you bring to the team

    • Analysis is your thing. Ideally, you will possess a degree in an analytical field such as Maths, Economics, Engineering, Computer Science or similar, or have relevant commercial experience.

    • You’re not right at the start of your journey. You have experience within consulting, data analysis, data engineering, or similar. 

    • Learning is COOL. You love working with data and are excited to learn new tools and technologies.

    • Time IS your friend. You are highly organised and able to manage your time effectively. Working on implementations for multiple customers at any given time, you are able to coordinate customer interaction and technical delivery. 

    • You love to explain ideas. You are a clear and confident communicator and are able to explain complex technical principles to people with various technical abilities. 

    • There is no ‘i’ in team. You have some experience in managing complex projects with multiple stakeholders and dependencies on various other teams.

    • You have strong technical skills. You are proficient at performing data analysis in spreadsheets or relational databases (a toy analysis sketch follows this list). Prior experience with SQL is valuable but not necessary.

    • Process, process, process. You have a mature attitude to security, documentation and process. Our clients trust us with their data. This is a huge responsibility and informs everything we do.

    • You’re a bit of a jet setter. You are willing and able to travel internationally (including to Europe) for work (up to 25% of the time). 
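
    For flavour, the tactical end of this analysis work might look something like the minimal pandas sketch below; the file name and columns are invented for the example and are not a Snowplow schema:

        # Toy rollup of tracked events: daily unique users per event type.
        # File name and column names are hypothetical.
        import pandas as pd

        events = pd.read_csv("events.csv")  # one row per tracked event

        summary = (
            events
            .assign(day=pd.to_datetime(events["timestamp"]).dt.date)
            .groupby(["day", "event_type"])["user_id"]
            .nunique()
            .reset_index(name="unique_users")
        )
        print(summary.head())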

    What you’ll get in return

    • A competitive package including share options

    • 23 days of holiday a year (plus bank holidays)

    • 401(k) and health insurance 

    • MacBook or Dell XPS 13/15

    • Two fantastic company Away Weeks in a different European city each year (the next one will be in November 2019!)

    • Work alongside a supportive and talented team with the opportunity to work on cutting edge technology and challenging problems

    • Grow and develop in a fast-moving, collaborative organisation

    Snowplow is dedicated to building and supporting a brilliant, diverse and hugely inclusive team. We don't discriminate on the basis of gender, race, religion or belief, disability, age, marital status or sexual orientation. Whatever your background may be, we welcome anyone with talent, drive and emotional intelligence.

  • 1 week ago

    Here's to the crazy ones. The hackers. The doers. The passionate geeks in a world of corporate drones.

    A cool, fully-remote startup is looking for a Senior Data Engineer… preferably one that does NOT suck! You must speak Python better than your mother tongue, and be an expert when it comes to data pipelines.

    First, let's get one thing out of the way. We know that our salaries are low, at least for the exceptional talent we're looking for. We plan to dramatically increase salaries by the end of the year when we raise a seed round. We have already attracted serious interest from top investors, but we're intentionally bootstrapping on our own till our public launch in time for Black Friday.

    On top of the salary, you'll get generous stock options, performance-based bonuses, and annual profit share, as well as extensive training and mentoring, BUT…

    You must be a perfectionist — you're simply too passionate about your work to call something "done" when it's not near perfect yet!

    Do you remember how "Monica" from F.R.I.E.N.D.S was obsessed with the little details? Now, imagine if she became a software engineer somehow… Do you think this is you?

    Okay, we want to hire you if you...

    • have 3+ years of hands-on data engineering experience with exposure to a wide array of big-data tools like task queues, message brokers, stream-processing frameworks, and AWS' data stack

    • have experience managing large-scale data pipelines in production

    • are proficient with Python 2.7 and Python 3 alike

    • have expert-level proficiency with ETL and data modeling

    • know your way around AWS, for real. [SQS, S3, Lambda, EC2, etc.]

    • are familiar with AWS Lambda and Serverless architecture

    • are skillful with web scraping using tools like Selenium and Scrapy (a toy spider follows this list)

    • have ninja-level skills with SQL (PostgreSQL/MySQL)

    • are proficient with UNIX and Shell Scripting

    • are comfortable working with petabyte-scale, billion-row datasets
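
    As a toy illustration of the scraping skills above, a minimal Scrapy spider might look like the sketch below; the target site, selectors, and item fields are all hypothetical:

        # Toy Scrapy spider; site, selectors, and fields are hypothetical.
        import scrapy

        class DealSpider(scrapy.Spider):
            name = "deals"
            start_urls = ["https://example.com/deals"]

            def parse(self, response):
                # Emit one item per deal listed on the page.
                for deal in response.css("div.deal"):
                    yield {
                        "title": deal.css("h2::text").get(),
                        "price": deal.css("span.price::text").get(),
                    }
                # Follow pagination if present.
                next_page = response.css("a.next::attr(href)").get()
                if next_page:
                    yield response.follow(next_page, callback=self.parse)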

    It would be "nice" if you...

    • are skillful with backend development using Python, Django, and DRF (HUGE PLUS)

    • have experience with applied Machine Learning and recommender systems

    • have experience with any of these technologies: ElasticSearch, Golang, SQLAlchemy, Apache Kafka, or React Native

    • have DevOps experience and are familiar with either Terraform, CloudFormation, or AWS CDK

    • are not afraid of frontend work: React.js, Next.js, JavaScript, HTML, or CSS

    On top of that, you...

    • are passionate about making an impact in an early-stage startup with a kickass product

    • are productive, attentive, and self-driven

    • are familiar with Agile methodologies and CI/CD

    • have strong communication skills and fluency in English

    • can work in a fully remote environment

    • document and test your code

    • have Sherlock Holmes-like detective skills; you know how to dive deep into data investigations to identify unknown problems and debug data anomalies.

    As a senior data engineer, you will...

    • architect, build, and maintain batch/real-time data processing pipelines

    • integrate a wide variety of data sources: third-party APIs, data feeds, etc.

    • build advanced web scrapers and crawlbots on top of AWS Lambda (sketched after this list)

    • take part in managing our Big Data infrastructure on AWS

    • develop creative solutions to our data problems with robust, production-ready code

    • build tools to automate data cleaning and ingestion

    • architect and implement a robust infrastructure for optimal ETL and distributed storage
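
    A Lambda-based scraper of the kind mentioned above might, very roughly, look like the following sketch; the bucket name, event shape, and key scheme are placeholders, not this company's actual code:

        # Sketch of a scraping Lambda: fetch one page per invocation and
        # stash the raw HTML in S3. Bucket and event shape are hypothetical.
        import hashlib
        import json
        import urllib.request

        import boto3

        s3 = boto3.client("s3")

        def handler(event, context):
            url = event["url"]  # e.g. injected by a scheduler or an SQS queue
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read()
            key = "raw/" + hashlib.sha1(url.encode()).hexdigest() + ".html"
            s3.put_object(Bucket="scraper-raw-pages", Key=key, Body=html)
            return {"statusCode": 200, "body": json.dumps({"stored": key})}

    Fan-out (one invocation per URL) would typically come from SQS or a scheduled trigger, which is what makes this pattern cheap to scale.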

    Are you the real deal? Let's talk!

    There are plenty of cool things to do: data processing pipelines, lots of integrations and APIs, infrastructure, and even the possibility of working with an ML-powered deal recommendation engine.

    If any of these things fall into your area of expertise and you're up for a challenge building a 10x product alongside a team of A-player hackers, now is the time to apply. You'll be joining us at the perfect time.

    Need proof that we're building a kickass product? Check out our AngelList profile for a quick overview and some screenshots.

    We'll hire the best engineer for the job regardless of your location. Timezones are not a problem as long as you're able to overlap with us for 3 hours a day. Our process is super fast, and you'll know our final decision within ten days max, so let's talk!

  • Legalist (US & Europe)
    2 weeks ago

    Legalist is breaking new ground in FinTech and LegalTech. Data Science is one of the pillars of Legalist's continuous innovation, and we're looking for someone who can lead the charge on that front.

    You will get to...

    • Use Python, PyCharm, Jupyter to build our products

    • Use AWS, GCP, Kubernetes, Docker, Jenkins to scale our infrastructure

    • Learn at the bleeding edge of web and machine learning technologies

    • Work with a ton of legal data to build analytical tools that support the business team

    Legalist is an investment firm that uses tech to invest in lawsuits. We graduated from Y Combinator as part of its Summer 2016 batch, and have since garnered international press for our pioneering work. You can read about us in NYT, WSJ, The Guardian, Le Monde, The Economist, and many others.

    If you're interested in the intersection of finance, technology, and law, then you'll find the problems we work on highly interesting. We scrape millions of court records and build technology that streamlines the process of investing in legal assets, while running investment funds that generate high returns for our investors.

    YOUR MISSION

    Ideally, you will be interested in learning, be proactive, and enjoy using bleeding-edge technologies. Formal 'experience' is not necessary, but demonstration of capability is required. It is a well-paid role, with salary based on capability. You will be working alongside the CTO & co-founder and 4 talented engineers.

    Currently, our platform is based around a backend microservices architecture for different use cases.

    We're really looking for people who would love to join a fast growing startup with great financial projections, paying clients, strong investors, and an awesome team where you can always be learning. Our work is multi-disciplinary, and we're looking for engineers with an interest in business as well.

    RESPONSIBILITIES

    • Autonomy over a core product and data pipeline

    • Collaborate with engineers to develop and ship features

    • Write efficient, modular, and reusable libraries and abstractions

    • Identify key drivers & insights to improve our analytics engines

    • Participate in code reviews

    QUALIFICATIONS

    Applicants are not expected to show advanced understanding of all of the below, but must show willingness, ability, and interest in keeping up with cutting edge technologies and frameworks.

    • 4+ years of experience with machine learning and data science techniques

    • Degree in Computer Science, Statistics, Mathematics or equivalent field

    • Ability to implement best practices

    • Ability to identify key insights and technologies

    • Comfort with independently building MVPs which can then be extended and supported by the engineering team

    • Experience working with modern data stores such as NoSQL/Postgres, S3, Cassandra or similar

    • Experience working with Cloud Computing technologies (e.g. AWS, Azure, GCP)

    • Ability to communicate technical specifications, both verbally and in writing
