Sr. Data Engineer

Eezy



08/19/2019 10:21:23

Job type: Full-time

Category: All others


Eezy is looking for a Senior Data Engineer to become the foundation of our data team and build world-class data solutions and applications. We are looking for an open-minded, structured thinker who is passionate about building systems at scale.

As an early member of the data team, you will lead the development process, drive architectural decisions, and shape business and technology strategy for our data. You will earn the trust of the other developers on the team, then coach and influence them toward the right behaviors to build the ultimate analytical data model and pipelines. In this role, you will be the go-to person for understanding our data and knowing how to find it. You will manage the process of making data useful for analytics and develop creative business solutions based on the data collected.

Who We Are:

Eezy, Inc. is a 10-year-old, rapidly growing graphic resources company whose mission is to make beautiful graphic resources available to everyone, everywhere. We currently have millions of monthly active users from 175+ countries and support 7 languages across our portfolio of web properties: vecteezy.com, videezy.com, and brusheezy.com. We are seeking individuals who incorporate our G.A.R.D.E.N. core values into their daily lives and work: Genuine, Ambitious, Reliable, Detailed, Enthusiastic, and Nimble.

Responsibilities:

  • Lead the architecture of our data collection infrastructure.

  • Build the data models and ETL processes to provide this data for business use.

  • Build reporting platforms and data visualization; as the data domain expert, you will be partnering with our engineering teams and data scientists across various initiatives.

  • Be the authoritative source for all things data in the organization. When people have questions, you have answers, or at least know where to look.

  • Build and maintain the data warehouse infrastructure.

Desired Experiences and Qualifications:

  • At least 3 years of professional experience as a data engineer or similar role.

  • Strong programming skills (some combination of Python, Ruby, Java, and Scala preferred).

  • Experienced in data warehousing (Redshift or Snowflake), building data pipelines (using Spark/Hadoop), writing SQL (MySQL/PostgreSQL), and working with real-time streaming applications.

  • Working knowledge of the following: AWS (specifically EMR and Kinesis), Tableau or other visualization tools, and version control systems such as Git or Subversion.

  • Strongly preferred: experience not only in collecting and modeling data but also in analyzing and interpreting it.

Please mention that you come from Remotive when applying for this job.


Similar jobs

  • Zignal Labs (US only)
    6 days ago

    About Zignal Labs

    Zignal Labs is the world’s leading media analytics company, helping companies build and protect their most valuable asset: their brand. With unparalleled data veracity, speed to surface insights, and a holistic view of the traditional and new media landscape, Zignal empowers the most innovative communications and marketing teams across the Fortune 1000 to measure the conversation around their brands in real time, rapidly identify and mitigate reputational risks, and inform strategic decision-making to achieve mission-critical business outcomes. Headquartered in San Francisco with offices in New York City and Washington, D.C., Zignal serves customers around the world, including Expedia, GoPro, DaVita, Under Armour, Synchrony, Prudential, DTE Energy, The Public Goods Project, and Uber. To learn more, visit: www.zignallabs.com.

    About the Role

    As a Data Scientist on our Labs and Data Science team, you will work on data analytics and machine learning projects and be involved in solutions from ideation through research and prototyping up to feature delivery, including data quality measurements and improvements. You will rely on your Scala and Python coding skills to analyze large amounts of media data and build machine learning models. You will use Spark, SageMaker, S3, and Elasticsearch along with your machine learning and NLP skills, applying them to social media, news, blogs, broadcast, and other media sources to empower our users with key insights based on real-time analysis.

    In this role, you will have the opportunity to:

    • Mine and analyze media data from various data sources to create and improve analytical insights and product and application features

    • Measure and improve the effectiveness and accuracy of new and existing features

    • Develop custom data models and algorithms to apply to data sets

    • Extend company quality measurement frameworks and test and improve model quality

    • Coordinate with different functional teams to implement models and monitor outcomes

    • Develop processes and tools to monitor and analyze model performance and data accuracy

    Tech Stack:

    • Scala, Python

    • Spark / Databricks

    • Nice to have: S3, Elasticsearch, Amazon SageMaker, Amazon Mechanical Turk

    In order to be successful in this role, you will need:

    • Master's degree in Computer Science, Mathematics, or an equivalent field with 5+ years of work experience, or

    • Bachelor's degree with 8+ years of relevant experience

    • Strong problem-solving skills with an emphasis on product development

    • Excellent written and verbal communication skills for cross-team collaboration

    • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks

    • A passion to learn and master new technologies and techniques

    • Coding knowledge and experience with Scala and/or Python

    • Experience with one or more of the following distributed data/computing tools: Spark, Map/Reduce, Hadoop, Hive

    • Familiarity with data quality measurement techniques and processes

    Pluses:

    • Experience with various natural language processing techniques: part-of-speech tagging, shallow parsing, constituency and dependency parsing, named entity recognition, and emotion and sentiment analysis

    • Experience building and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.

    • Intellectual curiosity about our business and tech challenges

    Why Join Zignal?

    • Competitive salary based on the work you do

    • 100% employer-paid Medical, Dental, and Vision insurance

    • Flexible time off – work with your manager to take the time you need

    • Subsidized commuter benefits

    • Up to 16 hours of paid time off to volunteer in your local community

    • Learning environment where we value professional and personal development

    • Catered lunches 3 times a week and fully stocked kitchen

    • Our office is located in the Financial District just blocks away from BART

    Applicants must be authorized to work in the United States for any employer. No sponsorship or visa transfers are available for this position, now or in the future. Remote option available within the U.S.

  • 5CA

    Job description

    Lead the evolution of our next-gen Data Warehouse to enable our Data Science & Analytics teams in developing insights that genuinely drive our business, and that of our international clients. What is waiting for you here is a company that moves fast and where data is a top priority. No red tape to cut through. If you have a vision and you want to use your business intelligence skills to apply the newest technologies, this is your chance! 

    Your job

    You will deal with plenty of complexity, since we work with a multitude of clients for whom we need to integrate many data sources in very distinct ways. Along with two colleagues, you will make sure that all these data are effectively fed into our data warehouse to enable state-of-the-art reporting & analytics. As an architect, you will be further challenged as we bring everything we do to the cloud. You will also ensure that our data warehouse can keep up with the rapid growth of our business, and you will strategize on the inclusion of non-operational data sources that will enable exciting insights for our clients. Imagine being able to combine our data with game usage data to predict in-game spending, for example.

    Our current platform is modern and mostly built in the cloud (Azure). It is a large, microservices-based distributed system. We believe in continuously evaluating our stack, and we have a smooth process for suggesting and adopting new technologies. Our set of technologies:

    • SQL Server, Analysis Services Tabular, Power BI

    • R, Python 

    • Azure Functions, Azure ML, Azure VMs

    What you bring!

    Requirements

    To do this job, you should be able to: 

    • Architect and own the technological innovation;

    • Advise on and implement new technologies and future improvements;

    • Conceive, design, develop and deploy the data architecture and data models from scratch;

    • Collaborate across business units (Operations, Reporting and Analysis, Data Science) to craft a vision, strategy, and roadmap;

    • Train and mentor the team in data modeling and data quality related processes and standards;

    • Communicate fluently in English, allowing you to operate comfortably in a highly international organization;

    • Adapt easily within an environment where things move fast.

    How to apply

    Feel eager to apply? Let's get to know each other! Please help us understand how you see yourself matching up. A list of technologies is great, yet what we are especially excited to learn about is your ability to lead with vision and fulfil the role of an actual architect. So don't hold back: tell us more about that! For most vacancies, an online assessment is part of the application process.

    About us

    We are 5CA. For the past 20 years, we've used our expertise to help our clients build their CX & support strategy. Focused on three industries (video games, consumer electronics, and eCommerce), we provide omnichannel support in a wide variety of languages, always using the latest technological innovations.

    We’re headquartered in Utrecht, The Netherlands, with offices in Los Angeles, Buenos Aires, and Hong Kong. For our contact services, we use a mix of onsite and remote support specialists: a highly flexible and dynamic model by which we help our clients deal with challenging situations.

    5CA offers a fast-paced, dynamic workplace where every day is different, and developments take place in days, not months. Our culture is shaped by a spirited workforce hailing from all corners of the globe. We all share a thirst for new and exciting technology, with gaming as the binding factor. 5CA has a flat hierarchy, where you are encouraged to think big, dream big, and live up to your full potential.

  • Mammoth Growth (US or Canada)
    3 weeks ago

    Mammoth Growth is seeking a Data Engineer with extensive experience in building data pipelines, ETL scripts, and data warehouses in a modern cloud environment. We are a fast-paced, rapidly growing growth-analytics consultancy helping businesses build cutting-edge analytics environments.

    As a data engineer in a rapidly growing team, you will work with a variety of exciting high growth businesses building their future data environment. This is an excellent opportunity to sharpen and broaden your skills in a fast-paced, challenging environment.

    This is a remote position.

    Responsibilities

    • Build custom integrations with 3rd-party APIs

    • Build ETLs to move and transform data

    • Put together end-to-end data pipelines using cutting-edge tools and techniques

    • Design data warehouses and data lakes

    • Use your knowledge and experience to help shape new processes

    Skills

    • Python

    • AWS Lambda

    • SQL

    • Spark / Databricks / AWS Glue

    • Database experience (Redshift, Snowflake or Data Lake a plus)

    Qualities

    • Independently organized; self-starter; ability to work with minimal direction

    • Enjoy learning new tools to better solve new challenges

    • Attention to detail and ability to ask the right questions

    • Good communication / client facing skills

    • Can switch between simultaneous projects easily

    If you think you are a good fit for the role send us a quick note on why and include the sum of 18 and 22 (bonus points for creativity).
