15 Remote Kafka Jobs in February 2020


    • NS8 is a fraud prevention hub with industry-leading time to value that empowers eCommerce merchants to reduce their total cost of fraud through data orchestration and early-stage detection tools that filter out malicious activity before it starts.

      Why Join Us?

      • We're blowing up and need talented engineers and leaders to keep up with our explosive business growth.

      • We've got funding and our revenue is killing it too. Our numbers are outperforming the projections in our last pitch deck - and we all know how lofty those can be.

      • Our CEO has been a developer for over 20 years and has additional founder and CEO experience at fast-paced startups, so he understands the engineering side; the business under-commits so that development can over-deliver.

      • Our CTO has decades of technical expertise, running large development organizations with resources in every corner of the globe, deploying products that generate hundreds of millions of dollars annually across diverse and highly regulated markets.

      • Our CSA has over 20 years of development experience, both with Fortune 20 companies and as a founder of many startups in the platform space, including several large exits.

      Our Engineers:

      • Excel in a flat hierarchy and spend almost 100% of their time writing code.

      • Love working in our agile, continuous integration and deployment environment.

      • Conceive, design, develop, deploy and operate the code they write.

      • Deploy maintainable, instrumented, predictable and reliable distributed systems on a steady cadence.

      • Write tooling for automation, diagnostics, debugging.

      • Participate in on-call rotation for their services.

      • Build with a security mindset and are up to date on industry best practices.

      • Design from the start with multi-tenancy and high availability as requirements.

      • Have developed their remote engineering muscles and are highly engaged via Slack.

      Our Stack:

      • TypeScript, React, Node.js

      • AWS technologies

      • Kubernetes

        • Concourse + Helm3 for CI/CD

        • Prometheus

        • Grafana

      • Kafka

      • ProtoBuf3

      • Mongo

      • MySQL

      Your Role:

      The Director of Cloud Infrastructure is an experienced infrastructure technologist and leader who is passionate about DevOps: leading, mentoring, and scaling teams responsible for NS8’s software development delivery pipeline, cloud infrastructure, and production services.

      Responsibilities:

      • As Director of Cloud Infrastructure, you will collaborate with the CTO, Chief Architect and entire Engineering organization to roll out and maintain DevOps best practices to enable rapid software development through a robust and secure infrastructure.

      • Manage a plan for moving toward best practices, and communicate progress to relevant stakeholders

      • The Director of Cloud Infrastructure is responsible for NS8’s development, test, and production infrastructures.

      • Support the engineering teams with infrastructure and tools to automatically build, deploy and run applications maximizing the use of automation and observability

      • The ideal candidate will have considerable knowledge of cloud computing and AWS with experience building environments that meet high availability, scalability, and reliability criteria.

      • Experience with continuous integration, continuous delivery and continuous deployment.

      • Experience with container architecture and container orchestration tools (Kubernetes)

      • Experience managing and maintaining Kafka

      • Experience deploying, managing and monitoring production services, as well as the supporting infrastructure such as CI/CD pipelines and container orchestration (Concourse, Istio)

      • Responsible for configuring and managing alerting tools such as New Relic, Honeycomb, and PagerDuty, and for orchestrating response with them. Streamline the incident management and escalation process to provide 24/7 support for production services

      • Managing technical people and engineering leads, including performance management, career management, and conflict resolution

      • An ability to build teams while keeping engineers and leads engaged

      Qualifications:

      • Bachelor’s or Master’s degree in Computer Science or similar.

      • 5+ years developing software in a professional environment

      • 5+ years in DevOps

      Our Benefits:

      • Work from home or on-site in Las Vegas

      • Competitive salaries

      • Equity

      • Medical

      • Dental

      • Vision

      • FSA

      • Fully stocked kitchen for on-site employees

      Our Culture:


      • Vibrant is an understatement: company events are always first class and exciting – axe-throwing, luchador wrestling, fancy dinners, charity events, game shows.

      • We value diversity and transparency, and encourage everyone to be their authentic self.

      • A supportive learning culture, where engineers are encouraged to present Lunch and Learns on any topic they are passionate about.



      Physical Demands:


      While performing the duties of this job, the employee routinely is required to sit; walk; talk and hear; use hands to keyboard, finger, handle, and feel; stoop, kneel, crouch, twist, crawl, reach, and stretch. The employee is occasionally required to move around the office.



      NS8 Inc provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.


      This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

    • Medium (US only)
      Yesterday
      At Medium, words matter. We are building the best place for reading and writing on the internet—a place where today’s smartest writers, thinkers, experts, and storytellers can share big, interesting ideas; a place where ideas are judged on the value they provide to readers, not the fleeting attention they can attract for advertisers.

      We are looking for a Senior Data Engineer who will help build, maintain, and scale our business-critical Data Platform. In this role, you will help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time. You'll also lead development of both transactional and data warehouse designs, mentoring our team of cross-functional engineers and Data Scientists.

      At Medium, we are proud of our product, our team, and our culture. Medium’s website and mobile apps are accessed by millions of users every day. Our mission is to move thinking forward by providing a place where individuals, along with publishers, can share stories and their perspectives. Behind this beautifully crafted platform is our engineering team, which works seamlessly together. From frontend to API, from data collection to product science, Medium engineers work multi-functionally with open communication and feedback.

      What Will You Do!
      • Work on high impact projects that improve data availability and quality, and provide reliable access to data for the rest of the business.
      • Drive the evolution of Medium's data platform to support near real-time data processing and new event sources, and to scale with our fast-growing business.
      • Help define the team strategy and technical direction, advocate for best practices, investigate new technologies, and mentor other engineers.
      • Design, architect, and support new and existing ETL pipelines, and recommend improvements and modifications.
      • Be responsible for ingesting data into our data warehouse and providing frameworks and services for operating on that data including the use of Spark.
      • Analyze, debug and maintain critical data pipelines.
      • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark and AWS technologies.
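
The ETL responsibilities above can be sketched in miniature. This is a toy illustration only: the table name, field names, and transform rules are invented, and an in-memory SQLite database stands in for the real warehouse (which at Medium would involve Spark and AWS services).

```python
# Toy extract-transform-load pipeline. SQLite stands in for a data
# warehouse; all names and schema here are invented for illustration.
import sqlite3

def extract(source_rows):
    """Yield raw event dicts from an upstream source (here, an in-memory list)."""
    yield from source_rows

def transform(rows):
    """Normalize events: lowercase the user id, drop rows with no duration."""
    for row in rows:
        if row.get("read_seconds") is None:
            continue
        yield (row["user_id"].lower(), row["post_id"], int(row["read_seconds"]))

def load(conn, rows):
    """Insert transformed rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS post_reads "
        "(user_id TEXT, post_id TEXT, read_seconds INTEGER)"
    )
    conn.executemany("INSERT INTO post_reads VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    events = [
        {"user_id": "Alice", "post_id": "p1", "read_seconds": 42},
        {"user_id": "Bob", "post_id": "p2", "read_seconds": None},  # dropped
    ]
    conn = sqlite3.connect(":memory:")
    load(conn, transform(extract(events)))
    print(conn.execute("SELECT COUNT(*) FROM post_reads").fetchone()[0])  # 1
```

The same extract/transform/load stages generalize to the distributed case; the stage boundaries are what matter, not the storage engine.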
      Who You Are!
      • You have 7+ years of software engineering experience.
      • You have 3+ years of experience writing and optimizing complex SQL and ETL processes, preferably in connection with Hadoop or Spark.
      • You have outstanding coding and design skills, particularly in Java/Scala and Python.
      • You have helped define the architecture, tooling, and strategy for a large-scale data processing system.
      • You have hands-on experience with AWS and services like EC2, SQS, SNS, RDS, ElastiCache, etc., or equivalent technologies.
      • You have a BS in Computer Science / Software Engineering or equivalent experience.
      • You have knowledge of Apache Spark, Spark streaming, Kafka, Scala, Python, and similar technology stacks.
      • You have a strong understanding & usage of algorithms and data structures.


      Nice To Have!
      • Snowflake knowledge and experience
      • Looker knowledge and experience
      • Dimensional modeling skills
      At Medium, we foster an inclusive, supportive, fun yet challenging team environment. We value having a team that is made up of a diverse set of backgrounds and respect the healthy expression of diverse opinions. We embrace experimentation and the examination of all kinds of ideas through reasoning and testing. Come join us as we continue to change the world of digital media. Medium is an equal opportunity employer.

      Interested? We'd love to hear from you.
    • 3 days ago
      Job Description

      About Pluralsight Flow, powered by GitPrime 

      GitPrime is now a Pluralsight company, operating as a functional department of our new parent company. We are pioneering data-driven engineering: we report on work patterns and the people side of software development so engineering leaders can advocate for resources and demonstrate, with objective data, that they're driving business value. We have strong product-market fit with hundreds of happy customers, and we are growing rapidly.

      Working at Pluralsight 

      Founded in 2004 and trusted by Fortune 500 companies, Pluralsight is the technology learning platform organizations and individuals in 150+ countries count on to innovate faster and create progress for the world.  

      At Pluralsight, we believe everyone should have the opportunity to create progress through technology. That everyone should have access to the skills of tomorrow. That technology can make the world a better place. Through the work we do every day, we empower the people who power our world.

      And we don’t let fear, egos or drama distract us from our mission. We’re adults, and we treat each other that way. We have the autonomy to do our jobs, transparency to eliminate office politics and trust each other to do the right thing. We thrive in an environment with creativity around every corner, challenges that keep us on our toes, and peers who inspire us to be the best we can be. We bring different viewpoints, backgrounds and experiences, and united by our mission, we are one.  

      The Opportunity  

      As a DevOps Engineer at Pluralsight, you will partner with the DevOps Manager to curate Developer self-service tools and systems to empower our continuous deployment environment. You will keep Pluralsight’s finger on the pulse of the DevOps community by continually researching, testing, and developing solutions to better enable our Software Engineers through automation and self-service. As an embedded member of remote development teams, you will be the subject matter expert on how and when to utilize the tools built and deployed by DevOps, as well as an influential partner in delivering incredible end user experiences.

      Pluralsight is a leader in the tech education space, and as such, our engineers are a driving force in developing and promoting industry best practices while continually synthesizing new ideas. You will help set the bar for DevOps teams across the industry while building a product that creates the innovators of tomorrow through technical education.

      Who you are: 

      • You are an experienced DevOps professional who enjoys being in the middle of the development lifecycle
      • You love exploring new technologies and keeping your own technical skills sharp while exhibiting responsibility and caution
      • You have a passion for innovation, learning, and excellence
      • You elevate the technical abilities of those around you
      • You are an amazing communicator and effective influencer within the remote teams you are on
      • You have a track record of being analytical, methodical, and quality-driven

      What you’ll own: 

      As a DevOps Engineer with a knack for automation, troubleshooting, and problem-solving, you will be responsible for monitoring our environments, servers, and applications for health, performance, and security. You will work with our talented team of Software Engineers to decide how to best create meaningful outcomes for our end users.

      Infrastructure:

      • Develop a flexible infrastructure that promotes Developer self-service while maintaining continuity across our overall environment.
      • Development of tools and systems to support Developer self-service
      • Continuous environment monitoring for application health, performance, and security
      • Maintaining a pulse on emerging technologies and discovering hidden opportunities in our environment
      • Use technical expertise and experience to evaluate industry technologies and assess practice relevance
      • Collaborate with Software Developers to research and address technical needs and to roadmap and develop new solutions
      • Maintain and improve standards of Operational Excellence
      • Ensure redundancy and resilience of infrastructure and services

      Reliability and Performance:

      • Championing of continual improvement in the areas of reliability and performance
      • Help design and implement secure environments and servers
      • Forecast and assess reliability risks
      • Ensure all infrastructure is configuration managed

      Development Support:

      • Support the DevOps Manager
      • Collaborate with the Ops and DevOps teams, as well as Security, IT, and Software Engineers

      Experience you’ll need: 

      • A successful candidate will be well experienced in key areas such as AWS, Saltstack, and Terraform (or similar)
      • Experience with Kubernetes and containerization to be able to support existing teams
      • Ability to quickly analyze and comprehend new or unfamiliar technologies or ideas
      • Track record of progressive DevOps engineering experience including the following:
      • Strong systems administration skills in both Linux and Windows
      • Experience in automation and the development of automation tools
      • Strong background in continuous integration and deployment methodologies/pipelines
      • Strong administration of HAproxy, RabbitMQ, Redis
      • Strong knowledge of network security and performance
      • Knowledge of compliance frameworks (PCI, SOX, SOC 2, ISO 27001)
      • Powershell, Bash, and Python scripting
      • Database administration background in Postgres or similar
      • Experience with Kafka is a plus
      • Strong understanding of DevOps mentality and tools

      Technologies and tools you’ll use and interact with here:

      • Linux - Ubuntu LTS, RHEL, CentOS 7, Fedora Core
      • Tools - Github, New Relic, TeamCity, Octopus Deploy, Saltstack, OpsGenie, ELK, Terraform
      • Services - Haproxy, Nginx, IIS, RabbitMQ, Kafka, Zookeeper
      • AWS - EC2, RDS, ECS, VPC, Route53, ELB, ALB, Lambda, Elasticache, Cloudfront, Service Catalog, Cloudwatch, CloudFormation, IAM, Certificate Manager, Directory Service, WAF & Shield, SQS, SNS
      • Data Stores - Cassandra, Postgres, MySQL, MSSQL, Redis, BigQuery, Hadoop, Elasticsearch
      • Other - Cloudflare, Salesforce.com, wpengine.com, Zuora, Adobe AEM, Adobe Search and Promote
      • Languages in use here that you may help support: Python, Node.js, Ruby, Java

      Additional Information

      Be Yourself. Pluralsight is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

    • 6 days ago
      Who are we?

      Flyt is a technology platform designed to connect the world’s top food delivery companies with the world’s largest restaurant brands. If you have ordered food on your phone, you have touched our technology.

      Today, we’re a global company, with our technology being deployed across Europe, North America and Australia, and with team members in six countries. Yet we’re still human-scale: everyone can get to know everyone, and we are structured to ensure every team has a strong sense of community and autonomy on how to hit their goals.

      Flyt is organised in small, cross-functional, autonomous teams we call squads. Each one of our squads owns an area of the product end-to-end and is responsible for meeting a business goal. Same principles as the Spotify model, but customised to what works for us.

      The Role

      We’re looking for a Senior Go Developer to join one of our teams. You'll be helping big restaurant brands like McDonald’s, Burger King or KFC connect their systems to our platform and enable new mobile and web experiences.

      Requirements

      What you'll be doing

      Here’s what your day-to-day looks like:

      • Building new microservices on an event-based architecture. Golang, Kafka, AWS.
      • Engaging directly with customers and partners to connect their services to our platform
      • Sharing new standards and best practices with the wider engineering team
      • Helping decide the best next thing to do so you meet your quarterly team goals
      • Taking full ownership of your microservices: from coding and testing them to running them once they’re live
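
The event-based architecture mentioned above follows a familiar shape: each microservice subscribes to a topic and registers handlers for the event types it owns. Flyt's actual stack is Go and Kafka, so treat this Python sketch as pseudocode; the event names, payload fields, and handler-registry pattern are all invented for illustration.

```python
# Minimal handler-registry sketch of an event-consuming microservice.
# Event names and payloads are invented; a real service would receive
# these events from a Kafka topic rather than a function call.

HANDLERS = {}

def handles(event_type):
    """Register the decorated function as the handler for one event type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@handles("order.created")
def on_order_created(payload):
    # In a real service this would call out to the restaurant's POS system.
    return f"forwarding order {payload['order_id']} to the restaurant POS"

def dispatch(event):
    """Route an event to its handler; ignore event types this service doesn't own."""
    handler = HANDLERS.get(event["type"])
    if handler is None:
        return None  # another squad's service owns this event
    return handler(event["payload"])

if __name__ == "__main__":
    print(dispatch({"type": "order.created", "payload": {"order_id": "A42"}}))
```

The appeal of this shape is that each squad can own its handlers end-to-end while sharing only the event contract with other teams.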
      Working with us

      You’ll love working here if:

      • You can’t wait to roll up your sleeves and build a great product with a dedicated team
      • You love having a goal, and having the autonomy to decide the best way to go for it
      • You obsess over personal growth. Feedback, Coaching, Learning, Teaching.
      • You like to communicate transparently (all our #slack channels are public!), and are willing to listen to your peers, earn trust and show up curious

      Benefits

      • Salary: £75,000 to £95,000 pa
      • Flexibility to truly work remotely
      • Access to coaches on leadership, product, tech and sales
      • Top spec Macbook Pro to enable you to do your job well
      • The training budget you need to help you level-up
      • A quarterly £££ bonus pool
      • 25 days holiday pa
      • Quarterly meets at different locations around the world where we all get together, plan for the next quarter and have some fun!
      • The opportunity to work in a fast-growing company with global expansion plans and operations spanning Europe, North America and Australia.
    • We at Redox understand that we are all patients at some point, so our mission is to make healthcare data useful and every patient experience better. Our values represent the basis of our culture of trust, transparency, and personal growth, and define how we want to interact with each other and the world.

      Redox’s full-service integration platform accelerates the development and distribution of healthcare software solutions by securely and efficiently exchanging healthcare data. With just one connection, data can be transmitted across a growing network of more than 500 healthcare delivery organizations and more than 200 independent software vendors. Members of the Redox Network exchange more than seven million patient records per day, leveraging a single data standard compatible with more than 40 electronic health record systems. We are on a path to double our number of client connections over the next year and need to build infrastructure that scales accordingly.

      Your Team

      Redox’s Infrastructure Engineering team builds and manages the AWS cloud platform that provides a safe, reliable, and flexible foundation for Redox’s products and our growing customer base. In production, we run nearly 1,000 Docker containers and have 20 TB of relational data, 8 TB of Elasticsearch data, and 6 TB of Kafka data, while handling about 400 client-controlled VPN connections. Expertise on the team includes:

      • Data technologies: RDS (Postgres, MySQL), Kafka, Redis, ElastiCache, Elasticsearch

      • Deployment automation for EC2, S3, RDS, VPCs, IAM, Terraform, CloudFormation, Kubernetes, Rancher, infrastructure as code in Python 

      • Logging, metrics, monitoring, and alerting: Sumologic, Telegraf, Statsd, InfluxDB, Grafana, Icinga, Pagerduty

      • Networking/VPN: LibreSwan, IPtables, IPSec 

      • Linux system tuning and debugging 

      Your Opportunities and Impact

      As a senior member of the Infrastructure Engineering team, you’ll play a key role in ensuring that our product engineers are empowered and effective by providing design and debugging expertise, technical leadership and mentorship, and tools for deployment automation. You’ll work closely with members of our Product and Security teams to understand their needs and translate them into well-scoped, actionable projects and tasks. In this role, you will:


      • Lead from a technical, architectural, and project perspective, shepherding technical decisions from inception to completion.

      • Evaluate our existing systems, inform stakeholders of emerging technologies and industry changes, and choose tools and develop software that follows industry best practices.

      • Design, implement, and debug robust, secure, scalable, abstracted systems that allow us to rapidly iterate on and scale our systems with the needs of our customers.

      • Ensure that our infrastructure deployments are automated, frequent, secure, and without noticeable user impact.

      • Provide mentoring and coaching to other team members in your areas of expertise.

      • Support the infrastructure for our existing test and production environments and participate in an on-call rotation to assist other engineers in resolving production related issues regarding monitoring, databases, container orchestration services, load balancing, networking, etc. 

      • Identify and collect metrics that allow us to pinpoint areas for improvement and measure change.

      About You

      At Redox we hire based on our values as well as technical competency. How you accomplish your work is just as important as getting the work done. We’re looking for people who are:


      • Passionate about solving complex problems that improve the state of the world.

      • Enthusiastic about creating an elegant and delightful user experience.

      • Intellectually curious with a desire to learn.

      • Respectful and inclusive, soliciting and incorporating input from others.

      • Biased towards action and creating positive impact.

      The particular technical focus we’re searching for in this role is around deployment automation, monitoring, and Linux debugging skills. Your work experience should include:


      • Deploying and managing large-scale production containers using Terraform and Kubernetes.

      • Expertise in securing AWS services in regulated environments.

      • Detailed Linux debugging knowledge.

      • Familiarity configuring and using metrics and monitoring tooling.

      • Familiarity with a high-level scripting language like Python.

      Any other areas of overlap between your work experience and the Infrastructure team’s areas of expertise are added bonuses. Please apply even if you are not sure you meet all these criteria. If you are interested in the role and think you could be a fit, we'd like to hear from you.

    • The data volume for our largest customers has increased more than 4x in the last year, and we’re scaling Heap to meet the demand. Our backend supports an expressive set of queries that need to come back with sub-second latencies and reflect up-to-the-minute data. To make this possible, we’re working on a novel distributed infrastructure.
       
      We’re looking to bring on an Infrastructure Engineer to lead the DevOps side of this challenge. Help us de-risk our stack, add more 9s to our availability, and incorporate open source tooling.
       
      You’ll own the design and development of our DevOps toolchain. You’ll build deploy pipelines and manage configurations. You’ll set up performance metrics and act on them to scale a complex distributed system.
       
      We’re looking for stronger software engineering skills than a typical DevOps role requires. This is a building role, which happens to focus on stability, operability, and tooling, not an ops role. You’ll need to be able to understand our codebase and debug issues as they come up. Some example projects:
      • Determine how we'll do service discovery and incorporate it into a codebase that assumes static locations for services.
      • Figure out how we’ll package our code and deploy it from test → stage → production, and eventually eliminate manual deploys on production machines.
      • Where will new instances come from? Pre-baked AMIs? Terraform with vanilla images and a bunch of salt? Containers? Something else?
      • Determine how we’ll monitor, backup, and otherwise operate our Kafka cluster.
      • Much, much more.
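
The Kafka-operations project above hints at the kind of tooling involved. As a toy illustration, consumer lag can be computed from per-partition log-end offsets and committed consumer offsets; all numbers below are invented, and a real version would fetch the offsets from the cluster (e.g. via an admin client) rather than hard-coding them.

```python
# Toy consumer-lag check: lag per partition is the log-end offset minus
# the consumer group's committed offset. Offsets here are made up.

def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag; a partition with no committed offset counts from 0."""
    return {p: end_offsets[p] - committed_offsets.get(p, 0) for p in end_offsets}

def partitions_over_threshold(lag, threshold):
    """Partitions whose lag exceeds the alert threshold, in sorted order."""
    return sorted(p for p, n in lag.items() if n > threshold)

if __name__ == "__main__":
    end = {0: 1500, 1: 980, 2: 2100}        # log-end offsets (invented)
    committed = {0: 1495, 1: 980, 2: 1200}  # committed offsets (invented)
    lag = consumer_lag(end, committed)
    print(lag)                                  # {0: 5, 1: 0, 2: 900}
    print(partitions_over_threshold(lag, 100))  # [2]
```

A check like this, wired to a metrics pipeline and an alert threshold, is one small example of the monitoring work the role describes.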
       
      We’d like to get to know you if:
      • You have experience with modern DevOps tooling - infrastructure as code, configuration management, CI pipelines, package managers - we need someone who has deep knowledge of the full DevOps ecosystem.
      • You communicate with clarity and precision. We care about this almost as much as your technical ability.
      • You’re self-directed with a strong sense for relative priority. There is low-hanging fruit everywhere. We need someone who has a good sense for which projects will have the most impact.
      • You have experience with AWS, especially EC2, VPC, and ELB.
      • (Bonus) You have experience with our stack. Heap runs on Kafka, ZooKeeper, PostgreSQL, Redis, and Node.js, with Terraform, SaltStack, and CircleCI for orchestration, configuration, and continuous integration.

       

      Our HQ is in SF, and we have an office in New York, but a large part of our engineering team is remote. We cover relocation costs, and can sponsor visas. We'd love to hear from you, no matter where you are!
       
      Heap has raised $95M in funding from NEA, Y Combinator, Menlo Ventures, SVAngel, Sam Altman, Garry Tan, Alexis Ohanian, Harj Taggar, Ram Shriram, and others.
       

      People are what make Heap awesome. Regardless of age, education, ethnicity, gender, sexual orientation, or any personal characteristics, we want everyone to feel welcome. We are committed to building a diverse and inclusive equal opportunity workplace everyone can call home.
       

       

    • Summary


      Wikimedia’s Site Reliability Engineering team is principally responsible for ensuring that our global top-10 website, our public-facing services, and the underlying infrastructure are healthy and continue to develop in support of Wikimedia’s mission. The SRE team comprises over 30 creative and talented staff members who are globally distributed and organized into six teams, each with its own scope and focus area. We are strengthening the team and looking for several Engineering Managers to help our staff and teams achieve our goals.


      As an Engineering Manager, you will support engineers developing services and infrastructure, deploying and building new features, products, and services used by hundreds of millions of people around the world. This is an opportunity to do good while improving one of the best known sites in the world. 


      Your Responsibilities:


      • Manage one to two globally distributed teams within Site Reliability Engineering

      • Recruit, hire, and help onboard new team members

      • Work with team members to set individual performance goals, and support them in meeting and evolving their goals and career path

      • Triage incoming workload, maintain focus on priorities, and set realistic expectations for both peers and team members

      • Coordinate and communicate with other members of the Wikimedia engineering teams on relevant projects, and contribute to the organizational strategy

      • Continuously develop the roadmap of the team in alignment with other SRE and Technology teams, and help draft and execute the team’s annual and quarterly plans

      • Project manage new and existing initiatives

      • Lead the definition, refinement, and execution of the processes through which the team manages and performs work.

      • Lead incident response, diagnosis, and follow-up on system alerts and outages across Wikimedia’s production infrastructure

      • Facilitate the definition and establishment of Service Level Indicators and Objectives with service owners and stakeholders

      • Share our values and work in accordance with them

      Skills & Experience:


      • Prior experience managing teams

      • Strong technical background, including 5+ years experience as part of an SRE, TechOps or software engineering team

      • Experience working with or applying one or more project management methodologies to site reliability engineering work

      • Aptitude for automation and streamlining of tasks

      • Ability to communicate effectively in both spoken and written English

      • Ability to work independently, as an effective part of a globally distributed team

      • Willing and able to travel several times a year for occasional in-person meetings

      • B.S. or M.S. in Computer Science or the equivalent in related work experience

      Additionally, we would love it if you have:


      • Experience working in a distributed, largely remote environment

      • Experience contributing to open source projects

      Teams


      • Service Operations: Build and improve our new Kubernetes-based deployment pipeline and help our teams, service owners, and developers across the organization test and deploy our existing application platform as well as new applications/features.

      • Data Persistence: Store, query and protect the sum of all human knowledge! Work together with our engineers to ensure existing and new data needs are met in an efficient and reliable manner, using the most appropriate boring and exciting open source technologies: MySQL, Cassandra, OpenStack Swift, Ceph.

      • Observability: Work across SRE and Technology to provide teams with tools, platforms, and insights into how systems and services are performing. Leverage exciting technologies such as Prometheus, AlertManager, Grafana, Logstash, Kibana, Kafka and more. Research emerging tools, trends and methodologies and work with the open source community to contribute back that knowledge to the commons.

      The Wikimedia Foundation is... 


      ...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive.

      The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.


      The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.


      U.S. Benefits & Perks*


      • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)

      • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, babysitting, continuing education and much more

      • The 401(k) retirement plan offers matched contributions at 4% of annual salary

      • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.

      • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.

      • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program

      • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses

      • Telecommuting and flexible work schedules available

      • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax

      • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

      *Eligible international workers' benefits are specific to their location and dependent on their employer of record



    • 2 months ago

      About the role

      In this role, you will help lead the growth of our analytics and monitoring platform as a lead Django engineer, working closely with our team.

      What we are looking for:

      • Experience writing web applications using Python, databases, and message queues (e.g. Kafka, AMQP, SQS).

      • Experience with popular third-party libraries and frameworks (e.g. Django, SQLAlchemy, Flask).

      • Skilled in automation work (CI/CD) and infrastructure management (Google Cloud, Kubernetes).

      • Familiarity with current standards and technologies (RESTful, Python 3.6+, PEP8).

      • Experience working with a distributed team.

      What you will do:

      • Design future-proof software architectures aligned with cloud platforms.

      • Design and implement back-end systems and APIs using state-of-the-art technologies and practices.

      • Develop back-end components of user-facing web applications.

      • Communicate with teammates on a daily basis.

      • Learn new skills and technologies as you go.

      • Be responsible for your code. Ensure your code is testable and tested.

      Bonus:

      • have worked with Google Analytics / other Google APIs

      • have worked in data science

      • familiar with machine learning

      Benefits:

      • flexible work schedule

      • Company provided computer and equipment

      • Working with a group of talented, passionate, and motivated team members across all disciplines

      Interested in applying?

      Send us an email at [email protected] with more details about yourself.

    • ReifyHealth (US only)
      2 months ago
      At Reify Health, we are building a more creative healthcare system. We envision a world where every potential therapy, if safe and effective, is available to the patients who can benefit.

      Our healthcare system relies on clinical trials to develop new, potentially life-saving treatments for patients. But clinical trials continue to be slow, unpredictable, and expensive. Reify Health’s product helps both the research leaders driving forward clinical trials and the doctors and nurses who care for the patient participants. As we continue scaling the adoption of our product, we accelerate world-class clinical research and unlock innovation.

      DevOps at Reify aims to be an engineering team with a focus on building out the process, tooling, and infrastructure as a platform that enables product engineering to release, monitor, and manage our applications with high velocity and efficiency. We value automation, self-service, and empowerment of product engineering to manage their code from development to production. By joining our team, you will play a significant role in supporting our growing architecture and our culture of impact and empathy.


      Your Responsibilities
      • Architect and design AWS solutions to meet product/company needs
      • Collaborate with team leads to develop infrastructure requirements
      • Develop tools and processes to streamline the automated deployment of our code
      • Enhance and maintain continuous integration tools that support the product engineering team
      • Ensure the product is operational and provide support in case of emergency


      Your Skills & Qualifications
      • Strong core knowledge of Linux operating system and computer networking
      • Experience managing AWS resources, specifically CloudFront, IAM, Route 53, S3, RDS, and DynamoDB
      • Experience working with container technology, such as Docker
      • Experience building, running, and maintaining a service orchestration framework such as Kubernetes, Mesos, or Triton
      • Experience with infrastructure as code tooling such as Terraform or CloudFormation
      • Experience monitoring data architectures (e.g. Kafka, Spark, etc.)
      • Experience with deploying and configuring monitoring services, such as New Relic and Datadog
      • Experience managing multiple AWS accounts across multiple AWS regions
      • Embody infrastructure-as-code philosophy
      • 5+ years of DevOps experience


      What Will Make You Stand Out
      • Experience in managing and deploying a cloud-based infrastructure compliant with regulatory regimes such as HIPAA and GDPR
      • Experience implementing security controls for AWS environments, including setting up a VPN and secrets management system
      • Experience working in Aptible/Heroku environment
      • Experience with ELK stack or similar solutions to intelligently manage system logs
      • Experience configuring error tracking systems, such as Sentry
      • Previous experience with functional programming languages/philosophy (or existing Clojure chops!)
      • Experience in a startup environment (as a remote employee using video/chat collaboration tools, if you’d like to work remotely)
      • Relevant experience in a healthcare/health-tech company


      Compensation & Benefits
      • Competitive Salary and Stock Options: Compensation varies from mid-level to very senior and is commensurate with your experience.
      • Comprehensive Health and Wellness Coverage: 100% premium coverage for you (and >50% for your dependents) for a top-tier health plan covering you in all 50 states (with the option of an HSA for medical expenses and as an investment vehicle), dental, vision, disability (short-term and long-term), and basic term life insurance (for your entire tenure at Reify). We enable 24/7 access to a doctor by phone or online via telemedicine coverage.
      • Retirement Plan: 401(k) with company match
      • Company-provided Workstation: You will receive a brand new MacBook Pro laptop
      • Location Flexibility & Transportation: For those working out of Boston, we provide: a free monthly public transportation pass (and are located 2-3 minutes from Downtown Crossing); unlimited coffee, infused water, and more (provided by WeWork); flexibility to work from home as needed. For those working remotely: you can work from anywhere in the U.S. compatible with an EST work schedule. Additionally, we’ll fly remoters in for our quarterly “remoters’ week”, filled with fun activities, good food, and many opportunities to get to know your colleagues better.
      • Vacation and Holiday Flexibility: Generous paid-time-off policy that accrues with your tenure at Reify which includes holiday flexibility and parental leave
      We value diversity and believe the unique contributions each of us brings drives our success. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

      We also completely eschew the “bro culture” sometimes found at startups.

      Note: We are currently only considering US citizens or Green Card holders. Thanks!
    • 2 months ago
      To our future Senior Embedded Engineer
      At Density, we build one of the most advanced people sensing systems in the world. The product and infrastructure is nuanced and one-of-a-kind. Building this product for scale has been an exercise in patience, creativity, remarkable engineering, laser physics, global logistics, and grit. The team is thoughtful, driven, and world-class.


      Why this is an important role
      Last week our DPUs detected a million humans walking through doors. A number that increases every week.

      As engineers, we think it's pretty cool to be capturing events at this volume. Especially when it's done anonymously, accurately, and in real-time. Although counting people is a DPU's top priority, it needs to do so much more.

      Our system must efficiently and reliably:
      - Receive and process improvements through command and control functionality
      - Seamlessly interact with our network of partner products (building automation and security integrations)
      - Push diagnostic data to allow remote monitoring and troubleshooting.

      Our DPU has a growing set of responsibilities. We need an experienced hand to help us imagine, build, and maintain these mission critical systems and functionalities. Are you up for the job?

      This role reports to our Director of Software Engineering.


      Requirements:
      • Deep understanding of modern C++
      • Exceptional comfort with networking, specifically in enterprise environments (this is big for us)
      • Strong experience with Linux system-level APIs, with an emphasis on designing async I/O, event-loop-based embedded user-space daemons.
      • Experience integrating with custom hardware via standard Linux interfaces.
      • Strong understanding of TLS based communication (ideally using OpenSSL).
      • Experience interfacing with large scale cloud based backends.
      • Experience with an embedded Linux build system (Yocto, Buildroot, Linux from Scratch)


      Bonus Points for:
      • Computer Vision
      • Machine learning and machine learning hardware
      • Experience with building automation, specifically the BACnet protocol
      • Python 3 AsyncIO
      • AWS lambda
      • Nomad/Terraform
      • ZMQ
      • Kafka
      • BLE
      • 802.11 
      • CDP or LLDP
      While we have offices in Syracuse (NY), San Francisco, and NYC, we embrace and have built a culture around remote work.

  • Product (2)

    • D2iQ is looking for an experienced Product Manager that can lead some of the strategic initiatives around Kubernetes and data services. You will collaborate with customers, the open-source community, partners, engineering, marketing and other functions to build a great product and make it successful in the market. If you’re passionate about product, can identify patterns from customer needs, and create well-defined requirements and user stories to help engineers deliver fantastic software solutions, come join us!

      Our headquarters is in San Francisco, CA but we're open to remote candidates in the United States or Germany.

      Job Responsibilities
      • Define strategy and drive execution of cloud operations capabilities for D2iQ's strategic Kubernetes initiatives and existing product.
      • Own and prioritize the backlog; participate with engineering in sprint planning to represent customer requirements to ensure we build the right solution
      • Work closely with customers to understand their needs and validate product direction
      • Define features, user stories, requirements and acceptance criteria
      • Deploy, use and test your own product to accept and provide early feedback within the development lifecycle
      • Work with all other functions to enable, market and advocate your product.
      Skills & Requirements
      • Experience working with two or more of the following open-source technologies: Kafka, Cassandra, Spark, HDFS, Elasticsearch, Tensorflow, Jupyter, Kubernetes
      • Knowledge of the datacenter infrastructure market and current trends
      • Strong understanding of Distributed Systems: Install, Upgrade, Backup / Restore, Compatibility Matrix, OS Support, Logging, Metrics, UI, CLI, Telemetry, etc...
      • Strong understanding of the Cloud Service Provider and marketplace offering integration: AWS, Azure, GCP
      • Technical understanding in one or more of: containerization, virtualization, networking, storage, security, operating systems
      • Proven track record of shipping successful enterprise software products is a must
      • Mastery of lean product development methods and prior experience with Jira
      • Data-driven decision maker
      • Detail oriented and passionate about a great user experience combined with the ability to back proposed decisions by data
      • Superb communication and presentation skills
      • Minimum 3-5 years of experience as a Product Manager
      • Preference for candidates based in the San Francisco Bay Area but remote applications in the US will be considered
      About D2iQ - Your Partner in the Cloud Native Journey

      On your journey to the cloud, you need to make numerous choices—from the technologies you select, to the frameworks you decide on, to the management tools you’ll use. What you need is a trusted guide that’s been down this path before. That’s where D2iQ can help.

      D2iQ eases these decisions and operational efforts. Rather than inhibiting your choices, we guide you with opinionated technologies, services, training, and support, so you can work smarter, not harder. No matter where you are in your journey, we’ll make sure you’re well equipped for the road ahead.

      Backed by T. Rowe Price, Andreessen Horowitz, Khosla Ventures, Microsoft, HPE, Data Collective, and Fuel Capital, D2iQ is headquartered in San Francisco with offices in Hamburg, London, New York, and Beijing.

    • 2 months ago

      D2iQ (formerly Mesosphere) is looking for an experienced Product Manager that can lead some of the strategic initiatives around Kubernetes and data services, as well as own the core of D2iQ's DC/OS platform. You will collaborate with customers, the open-source community, partners, engineering, marketing and other functions to build a great product and make it successful in the market. If you’re passionate about product, can identify patterns from customer needs, and create well-defined requirements and user stories to help engineers deliver fantastic software solutions, come join us!

      Our headquarters is in San Francisco, CA but we're open to remote candidates in the United States or Germany.

      Responsibilities
      • Define strategy and drive execution of cloud operations capabilities for D2iQ's strategic Kubernetes initiatives and existing product.
      • Own and prioritize the backlog; participate with engineering in sprint planning to represent customer requirements to ensure we build the right solution
      • Work closely with customers to understand their needs and validate product direction
      • Define features, user stories, requirements and acceptance criteria
      • Deploy, use and test your own product to accept and provide early feedback within the development lifecycle
      • Work with all other functions to enable, market and advocate your product.
      Requirements
      • Experience working with two or more of the following open-source technologies: Kafka, Cassandra, Spark, HDFS, Elasticsearch, Tensorflow, Jupyter, Kubernetes
      • Knowledge of the datacenter infrastructure market and current trends
      • Strong understanding of Distributed Systems: Install, Upgrade, Backup / Restore, Compatibility Matrix, OS Support, Logging, Metrics, UI, CLI, Telemetry, etc...
      • Strong understanding of the Cloud Service Provider and marketplace offering integration: AWS, Azure, GCP
      • Technical understanding in one or more of: containerization, virtualization, networking, storage, security, operating systems
      • Proven track record of shipping successful enterprise software products is a must
      • Mastery of lean product development methods and prior experience with Jira
      • Data-driven decision maker
      • Detail oriented and passionate about a great user experience combined with the ability to back proposed decisions by data
      • Superb communication and presentation skills
      • Minimum 3-5 years of experience as a Product Manager
      • Preference for candidates based in the San Francisco Bay Area but remote applications in the US will be considered

  • All others (3)

    • 1 week ago

      BALANCE FOR BETTER  At Xapo, we embrace our differences and actively foster an inclusive environment where we all can thrive. We’re a flexible, family-friendly environment, and we recognize that everyone has commitments outside of work. We have a goal of reaching gender parity and strongly encourage women to apply to our open positions. Diversity is not a tagline at Xapo; it is our foundation.

      RESPONSIBILITIES

      • Design and build data structures on MPP platforms such as AWS Redshift or Druid.

      • Design and build highly scalable data pipelines using AWS tools like Glue (Spark based), Data Pipeline, Lambda.

      • Translate complex business requirements into scalable technical solutions.

      • Strong understanding of analytics needs.

      • Collaborate with the team on building dashboards, using Self-Service tools like Apache Superset or Tableau, and data analysis to support business.

      • Collaborate with multiple cross-functional teams and work on solutions which have a larger impact on Xapo business.

      REQUIREMENTS

      • In-depth understanding of data structures and algorithms.

      • Experience in designing and building dimensional data models to improve accessibility, efficiency, and quality of data.

      • Experience in designing and developing ETL data pipelines.

      • Proficiency in writing advanced SQL and expertise in SQL performance tuning.

      • Programming experience in building high-quality software. Skills with Python or Scala preferred.

      • Strong analytical and communication skills.

      NICE TO HAVE SKILLS

      • Work/project experience with big data and advanced programming languages.

      • Experience using Java, Spark, Hive, Oozie, Kafka, and Map Reduce.

      • Work experience with AWS tools to process data (Glue, Pipeline, Kinesis, Lambda, etc).

      • Experience with or advanced courses on data science and machine learning.

      OTHER REQUIREMENTS

        A dedicated workspace. A reliable internet connection with the fastest speed possible in your area. Devices and other essential equipment that meet minimal technical specifications. Alignment with Our Values.

      WHY WORK FOR XAPO? 

      Shape the Future:  Improve lives through cutting-edge technology, work remotely from anywhere in the world

      Own Your Success: Receive attractive remuneration, enjoy an autonomous work culture and flexible hours, apply your expertise to meaningful work every day

      Expect Excellence: Collaborate, learn, and grow with a high-performance team.

    • About us 

      Beat is one of the most exciting companies to ever come out of the ride-hailing space. One city at a time, all across the globe we make transportation affordable, convenient, and safe for everyone. We also help hundreds of thousands of people earn extra income as drivers. 

      Today we are the fastest-growing ride-hailing service in Latin America. But serving millions of rides every day pales in comparison to what lies ahead. Our plans for expansion are limitless. Our stellar engineering team operates across a number of European capitals where, right now, some of the world’s most ambitious and talented engineers are changing how cities will move in the future.

      Beat is currently available in Greece, Peru, Chile, Colombia, Mexico and Argentina. 

      About the role

      Our Big Data team is an essential ingredient in Beat's aggressive growth plan and vision for the future.

      As a Senior Big Data Software Engineer in our teams, you will tackle some of the hardest problems and your work will impact the entire Beat experience, from making sure drivers are always available for all our passengers, to helping our drivers utilise their working hours. Our team moves very fast, so you'll have the opportunity to make an immediate difference.

      With the various tools and communication technologies we're using, you'll feel connected to your team from wherever you are in the world. Our remote workforce always has the option to travel to our headquarters for meetings, events, and team bonding—or they can join virtually. Whatever works best for you and your work style. 

      What you'll do day in day out:

      • Work with the data science and engineering teams in translating complex models and algorithms into production-grade software systems.

      • Develop components that analyse, process, and react to operational feeds in near real-time, optimizing driver allocation and service pricing and preventing fraudulent use of our services.

      • Be agile both within and across teams, bridging software engineering and data science.

      What you need to have:

      • A Master's degree in Math, Physics, Computer Science or Engineering. Higher degrees are a significant bonus, as is considerable industry experience with Big Data analytics and statistical analysis.

      • At least 5 years of experience in developing production-grade software using either Data Warehousing or Big Data frameworks in order to solve real-world problems.

      • Experience in developing with Scala at an idiomatic, expert level is required. Knowledge of advanced Java or C++ is a bonus. We would favour candidates with an exceptionally strong engineering background.

      • At least 6 years of hands-on experience with SQL and NoSQL databases.

      • Proven hands-on experience with Apache Hadoop, Kafka, Spark or Flink.

      • Exposure to designing streaming and batch data pipelines.

      • Applied knowledge in Machine Learning algorithms and their application to vast datasets is considered as a plus.

      • A strong sense of ownership in your work.

      • Excellent numerical and analytical skills with an excellent eye for detail working with qualitative and quantitative data.

      • The desire to build, launch and iterate on quality products on time with minimal technical compromises under a loosely-managed working environment.

      What's in it for you:

      • Competitive salary package

      • Flexible working hours

      • High tech equipment and top line tools

      • A great opportunity to grow and work with the most amazing people in the industry

      • Being part of an environment that gives engineers large goals, autonomy, mentoring and creates incredible opportunities both for you and the company

      • Please note that you will be working as a contractor.

      • As part of our dedication to the diversity of our workforce, Beat is committed to Equal Employment Opportunity without regard for race, color, national origin, ethnicity, gender, disability, sexual orientation, gender identity, or religion.

    • 2 months ago

      Summary 

      Wikipedia is where the world turns to understand almost any topic. The Wikimedia Foundation is the nonprofit that operates Wikipedia with a small staff. We are looking for a great data architect who wants to modernize the infrastructure underlying Wikipedia with distributed storage, services and REST interfaces. If this excites you, we welcome you to join us.

      Description

      • Collaborate with Product Owners, Engineers and stakeholders on product discovery and improvements of our existing systems
      • Design and implement effective data storage solutions and models
      • Articulate the flow of data across our diverse range of systems
      • Ensure reusable, clear service design and documentation
      • Define and align the forms and sources of data to facilitate WMF initiatives
      • Monitor system performance and identify, define, and implement internal process improvements and SLOs
      • Work with Site Reliability and Operations Engineers to analyse and determine service discoverability, capacity plans and high availability
      • Recommend solutions to improve new and existing data storage and delivery systems
      • Change the world for more than half a billion people every month ;) 

      Skills and Experience

      • 3+ years experience in a Data Architect role as part of a team
      • You have a track record of leading data architecture initiatives to completion
      • You have experience analysing, reasoning about, optimising and implementing complex data systems
      • You have expertise in data handling approaches and technologies, with a good understanding of system development lifecycles and modern data architectures (data lakes, data warehouses)
      • You are comfortable modeling complex systems using approaches such as Domain Driven Design, eventual consistency, stream processing
      • You have experience with a diverse set of data storage and persistence frameworks and have a strong understanding of core data modelling concepts:
        • Relational & distributed databases (e.g. MySQL, Cassandra, Neo4j, Riak, HBase, DynamoDB, Elasticsearch)
        • Consistency trade-offs and transactional algorithms in distributed systems
        • Principles of fault tolerance and robustness
      • Use the best available tools & languages for each task. Currently we work a lot with Node.js but also use other tools and languages like Go, Python, Java, C, C++ and PHP where it makes sense. 
      • You have experience working with data streaming and pipelining systems (Hadoop, Kafka, Druid)
      • You have experience working with an engineering team and communicating effectively with other stakeholders.
      • You have a track record of combining a solid long-term architectural strategy with short-term progress.
      • With freedom comes responsibility. You direct your own work and are pro-active in asking for input.
      • You have a scientific mindset and empirically test your hypotheses.
      • BS, MS, or PhD in Computer Science or equivalent work experience

      Pluses

      • Experience working with microservice architectures
      • Experience with open source technology and free culture, and have contributed to open source projects
      • Experience working remotely
      • You know what it means to be a volunteer or to coordinate the work of volunteers
      • Big ups if you are a contributor to Wikipedia
      • Please provide us with information you feel would be useful to us in gaining a better understanding of your technical background and accomplishments

      Show us your stuff! If you have any existing open source software that you've developed (these could be your own software or patches to other packages), please share the URLs for the source. Links to GitHub, etc. are exceptionally useful. 

