Big Data Architect
2 weeks ago
Job type: Full-time
Hiring from: USA Only
Big Data Architect - Kafka
We are seeking a Big Data Architect with Kafka (primary focus) and Hadoop skill sets to work on an exciting Streaming / Data Engineering team. Requires 7+ years of total experience.
- Responsible for technical design and implementation in big data engineering, primarily Kafka
- Develop scalable and reliable data solutions that move data across systems from multiple sources, in real-time as well as batch modes, using Kafka
- Build producer and consumer applications on Kafka, and define appropriate Kafka configurations
- Design, write, and operationalize new Kafka connectors using the Kafka Connect framework
- Accelerate adoption of the Kafka ecosystem by creating a framework for leveraging technologies such as Kafka Connect, KStreams/KSQL, Schema Registry, and other streaming-oriented technology
- Implement Stream processing using Kafka Streams / KSQL / Spark Jobs along with Kafka
- Develop both deployment architecture and scripts for automated system deployment in on-premise as well as cloud (AWS) environments
- Bring forward ideas to experiment and work in teams to transform ideas to reality
- Architect data structures that meet the reporting timelines
- Work directly with engineering teams to design and build their development requirements
- Maintain high standards of software quality by establishing good practices and habits within the development team while delivering solutions on time and on budget.
- Facilitate the agile development process through daily scrum, sprint planning, sprint demo, and retrospective meetings.
- Participate in peer-reviews of solution designs and related code
- Analyze and resolve technical and application problems
- Proven communication skills, both written and oral
- Demonstrated ability to quickly learn new tools and paradigms to deploy cutting edge solutions
- Create large scale deployments using newly conceptualized methodologies
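To illustrate the Kafka Connect work described above, a typical source-connector configuration looks like the following. This is a sketch only: the connector name, database coordinates, column, and topic prefix are placeholder values, and it assumes the Confluent JDBC source connector is installed on the Connect cluster.

```json
{
  "name": "orders-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:postgresql://db-host:5432/orders_db",
    "connection.user": "connect_user",
    "connection.password": "********",
    "mode": "incrementing",
    "incrementing.column.name": "order_id",
    "topic.prefix": "pg-"
  }
}
```

A config like this would be submitted to the Kafka Connect REST API, which then streams new rows from the table into Kafka topics without custom producer code.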
- Proven hands-on experience with Kafka is a must.
- Proven hands-on experience with Hadoop stack (HDFS, Map Reduce, Spark).
- Core development experience in one or more of these languages: Java, Python/PySpark, Scala, etc.
- Good experience in developing producers and consumers for Kafka, as well as custom connectors for Kafka
- 3 plus years of developing applications using Kafka (Architecture), Kafka Producer and Consumer APIs, Real-time Data pipelines/Streaming
- 2 plus years of experience performing Configuration and fine-tuning of Kafka for optimal production performance
- Experience in using Kafka APIs to build producer and consumer applications, along with expertise in implementing KStreams components. Have developed KStreams pipelines, as well as deployed KStreams clusters
- Strong knowledge of the Kafka Connect framework, with experience using several connector types: HTTP REST proxy, JMS, File, SFTP, JDBC, Splunk, Salesforce, and how to support wire-format translations. Knowledge of connectors available from Confluent and the community
- Experience with developing KSQL queries and best practices of using KSQL vs KStreams will be an added advantage
- Deep understanding of different messaging paradigms (pub/sub, queuing), as well as delivery models, quality-of-service, and fault-tolerance architectures
- Expertise with the Hadoop ecosystem, primarily Spark, Kafka, NiFi, etc.
- Experience with integration of data from multiple data sources
- Experience with stream-processing systems (Storm, Spark Streaming, etc.) will be an advantage
- Experience with relational SQL and NoSQL databases, one or more of Postgres, Cassandra, HBase, MongoDB, etc.
- Experience with AWS cloud services like S3, EC2, EMR, RDS, Redshift will be an added advantage
- Excellent grasp of data structures & algorithms and strong analytical skills
- Strong communication skills
- Ability to work with and collaborate across the team
- A good "can do" attitude.
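The "messaging paradigms" requirement above (pub/sub vs. queuing) can be made concrete with a broker-free sketch in plain Python: pub/sub fans each message out to every subscriber, while a queue delivers each message to exactly one consumer. This is an illustrative toy, not Kafka's actual API; class and method names are invented for the example.

```python
from collections import defaultdict, deque


class PubSubBroker:
    """Pub/sub: every subscriber of a topic receives every message."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of subscriber inboxes

    def subscribe(self, topic):
        inbox = deque()
        self.subscribers[topic].append(inbox)
        return inbox

    def publish(self, topic, message):
        # Fan-out: each subscriber gets its own copy of the message.
        for inbox in self.subscribers[topic]:
            inbox.append(message)


class QueueBroker:
    """Queuing: each message is delivered to exactly one consumer (round-robin)."""

    def __init__(self):
        self.consumers = []
        self._next = 0

    def register(self):
        inbox = deque()
        self.consumers.append(inbox)
        return inbox

    def send(self, message):
        # Round-robin dispatch: only one consumer sees each message.
        inbox = self.consumers[self._next % len(self.consumers)]
        self._next += 1
        inbox.append(message)


# Pub/sub: both subscribers receive the same event.
ps = PubSubBroker()
a = ps.subscribe("orders")
b = ps.subscribe("orders")
ps.publish("orders", "o1")

# Queue: the two messages are split across the two consumers.
q = QueueBroker()
c = q.register()
d = q.register()
q.send("m1")
q.send("m2")
```

Kafka consumer groups blend the two models: topics are pub/sub across groups, but queue-like within a group, where each partition is consumed by one member.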
Before you apply, please check if any restrictions apply in terms of time zone or country.
This job has a geo-restriction in place: USA Only.
Please mention that you come from Remotive when applying for this job.