6 Remote deep learning Jobs in February 2020
In this position you will be:
Developing GStreamer pipelines written in C to connect to and stream real-time video
Implementing and managing pre-processing of video to optimize performance of deep learning models
Working with state of the art deep learning frameworks to deploy object detection models for inference
Testing and optimizing GStreamer pipelines to maximize hardware performance
Helping to build, automate and customize deep learning training and inference environments
Working with Nvidia’s DeepStream SDK, TensorRT, Nvidia Docker, Kubernetes and Helm Charts
Writing and optimizing GStreamer plugins to manipulate video and image data
Requirements: Skills, Experience, Certifications
C, C++, Python, CMake
RTSP streaming, OpenCV, GStreamer, DeepStream SDK, TensorRT, Nvidia Docker, Kubernetes and Helm Charts
Ubuntu; RedHat/CentOS and Windows Server a plus
Experience with AWS, GCP, Azure, etc. a plus
Experience with Scrum/Agile development methodologies
Experience with PuTTY/Termius, PowerShell, Linux/Windows command-line tools
Installing Nvidia drivers, CUDA, cuDNN
Knowledge of GPU hardware and performance testing
Experience with VMS and DVR/NVR
C#, Go, Postgres DB, ASP.NET Core a plus
Minimum 5 years of related experience or equivalent blend of education and experience.
Up to 25% travel
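To give a flavor of the pipeline work this listing describes, here is a minimal sketch of an RTSP-to-inference pipeline. The element names are standard GStreamer/DeepStream plugins, but the exact chain, the RTSP URL, and the model config path are illustrative placeholders, not details from the posting:

```
# Hypothetical gst-launch-1.0 pipeline: decode an RTSP stream on the GPU,
# batch it, run object detection via nvinfer, and render with overlays.
gst-launch-1.0 rtspsrc location=rtsp://<camera-url> ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=<model-config.txt> ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

A custom plugin, as mentioned above, would typically replace or extend one of these stages (e.g. a pre-processing element inserted before nvinfer).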
At Writefull, we use AI to help people write academic texts. We have a stand-alone writing editor for researchers, and also offer our feedback services to clients like publishers and educational institutions. Most of the feedback that Writefull gives is based on Deep Learning models.
We are looking for the best front-end developers. If you are interested in building tools used by researchers worldwide, email us at [email protected], and please include a link to your GitHub account or portfolio.
NOTE: we will ignore emails from agencies.
At Writefull, we use the latest NLP techniques to help people with their academic writing. We target researchers, publishers, and institutions - and develop writing tools for each of their needs.
We launched the first version of our product in 2014. In 2018, we got investment from Digital Science, a big player in research-focused startups, and we've been one of their portfolio companies ever since.
Over the last few years, we've advanced our main product to an editor that uses AI to give automatic feedback on texts. We're an absolute frontrunner in how we use the latest NLP techniques to give feedback on academic writing.
We recently started developing tools besides the editor, so that users can get our language feedback in different ways. We're integrating Writefull into existing tools, as well as developing platforms for specific customers.
For these challenges, we're looking for developers to join our team!
Working with us means diving into the areas of science, language, and AI. It also means being part of a passionate and highly skilled team, giving you the opportunity to learn and grow. Our work culture is flexible in terms of time and location, but we do take our deadlines seriously. Our current team members work remotely from The Netherlands, Spain, and England.
Flexible working hours
Budget for travel and training
Equipment costs covered
We are looking for a talented Data Scientist to join our team at Prominent Edge. We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suit the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, and they often have a geospatial aspect to them. We also take good care of our employees, as demonstrated by our exceptional benefits package. Check out our website at http://prominentedge.com for more information and apply through http://prominentedge.com/careers.

Ideal candidates are those who can extract value from data. Such a person proactively fetches information from various sources and analyzes it for a better understanding of the problem, and may even build AI/ML tools to generate insights. The ideal candidate is adept at using large datasets to find the right needle in a pile of needles and uses models to test the effectiveness of different courses of action. Candidates must have strong experience using a variety of data mining/data analysis methods and data tools, building and implementing models, using and creating algorithms, and creating and running simulations. They must have a proven ability to drive results with their data-based insights, and must be comfortable working with a wide range of stakeholders and functional teams.
The right candidate will have a passion for discovering solutions hidden in large datasets and for working with stakeholders to improve mission outcomes. A successful candidate will have experience in many (if not all) of the following technical competencies: statistics and machine learning, coding languages, databases, and reporting technologies.

Required Skills
- Bachelor's Degree in Computer Science, Information Systems, Engineering or other related scientific or technical discipline.
- Proficient in data preparation, exploration, and statistical analysis
- Experience with batch scripting and data processing
- Experience with Machine Learning libraries and frameworks such as TensorFlow/PyTorch, or Bayesian analysis using SAS/RStudio.
- Experience with databases such as Postgres, Elasticsearch, MongoDB, or Redis
- Master's degree in Computer Science or related technical discipline.
- Experience with natural language processing, computer vision, or deep learning
- Experience working with geospatial data
- Experience with statistical techniques
- Experience as either back-end or front-end/visualization developer
- Experience with visualization and reporting technologies such as Kibana or Tableau
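As a small illustration of the "data preparation, exploration, and statistical analysis" skills listed above, here is a hypothetical sketch using only the Python standard library; the dataset and field values are invented for the example:

```python
# Toy "sensor" dataset: raw readings with some missing values (None).
import statistics

raw = [12.1, None, 11.8, 12.4, None, 13.0, 11.9, 12.2]

# Preparation: drop missing values.
clean = [x for x in raw if x is not None]

# Exploration: basic descriptive statistics.
mean = statistics.mean(clean)
median = statistics.median(clean)
stdev = statistics.stdev(clean)

print(f"n={len(clean)} mean={mean:.2f} median={median:.2f} stdev={stdev:.2f}")
```

In practice this kind of exploration would be done with pandas or similar tooling, but the workflow (clean, then summarize, then visualize) is the same.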
- Not only do you get to join our team of awesome, playful ninjas; we also have great benefits:
- Six weeks paid time off per year (PTO+Holidays).
- Six percent 401k matching, vested immediately.
- Free PPO/POS healthcare for the entire family.
- We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
- Want to take time off without using vacation time? Shuffle your hours around in any pay period.
- Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
- Want some training or to travel to a conference that is relevant to your job? We offer that too!
- This organization participates in E-Verify.
At phData, we believe that we are witnessing a new age of digitization and decision automation. To compete, even traditional companies must deliver machine learning and analytics-enabled data platforms and data products. Examples include a medical device manufacturer building machine learning models to guide therapies or a global industrial manufacturer using deep learning models to identify product defects. This means that the data problems that used to apply only to Google and Facebook now apply to a filter manufacturer, a medical device manufacturer, or a healthcare provider.
phData matches machine learning and analytics to the business goals of the world's largest and most successful companies. phData is one of the fastest-growing companies in the U.S. and demand for our services and software has skyrocketed. This has resulted in quality growth and an expanded presence at our company headquarters located in Downtown Minneapolis, as well as in Bangalore, India, and across the U.S. We’ve been recognized as one of the Best Places to Work in Minneapolis for three consecutive years and we were listed as the 48th fastest growing private company in the U.S. on the Inc. 5000 list.
We pride ourselves on offering employees phenomenal growth and learning opportunities in addition to competitive compensation, health insurance, generous PTO and excellent perks including extensive training and paid certifications.
As a Data Scientist, your responsibilities include:
- Design, implement, and train algorithms and models that improve and streamline business processes
- Design, analyze, and interpret the results of experiments
- Author reports, visualizations, and presentations to communicate your process and results
- Work closely with the development teams including data engineers and machine learning engineers to build lasting solutions
- Individually engage with customers and take responsibility for implementing data science best practice in their work
- Act as a thought leader; give talks, contribute to open source projects, and advance data science practices on a global scale
- Advanced degree or evidence of exceptional ability in engineering, computer science, mathematics, physics, chemistry, or operations research
- 3-5+ years hands-on experience in building models and developing algorithms for real-world applications and data-driven optimizations
- Depth of knowledge in advanced mathematics, machine learning, and statistics
- Strong computer science fundamentals: data structures, algorithms, distributed systems
- Mastery of one or more data analysis languages such as R, Python, SQL, MATLAB, and/or SAS.
- Experience with data science tools, including Python scripting, NumPy, SciPy, matplotlib, scikit-learn, Jupyter notebooks, bash scripting, and the Linux environment
- Experience with at least one mainstream deep learning framework, e.g. TensorFlow, PyTorch, Caffe(2), or MXNet.
- Exemplary verbal and written communication skills
- Able to work under pressure while managing competing demands and tight deadlines
- Well organized with meticulous attention to detail
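The "building models and developing algorithms for data-driven optimizations" experience above can be sketched in miniature: fitting a line y = w·x + b by gradient descent on mean squared error, standard library only. The toy data and hyperparameters are invented for the example:

```python
# Toy data generated from y = 2x + 1 (no noise, for clarity).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

w, b, lr = 0.0, 0.0, 0.02
n = len(xs)
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w={w:.3f} b={b:.3f}")
```

Real-world work swaps the hand-rolled loop for a framework like TensorFlow or PyTorch, but the underlying optimization idea is the same.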
As a data scientist you will push the boundaries of deep learning and NLP by conducting research in cutting-edge AI and applying that research to production-scale data. Our data includes a large corpus of global patents in multiple languages and a wealth of metadata. The focus of your work will be to analyze, visualize, and interpret large text corpora as well as more traditional structured data. This includes recommending and implementing ML-based product features. We also expect to publish primary research and contribute to FOSS that we use.
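A minimal sketch of the kind of corpus analysis described above, using TF-IDF over a toy corpus with only the standard library; the example "patent abstracts" are invented:

```python
import math
from collections import Counter

docs = [
    "a method for encoding video streams",
    "a method for training neural networks",
    "neural networks for video compression",
]
tokenized = [d.split() for d in docs]

# Document frequency: in how many documents does each term appear?
df = Counter(t for doc in tokenized for t in set(doc))
n_docs = len(docs)

def tfidf(doc):
    tf = Counter(doc)
    return {t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf}

scores = tfidf(tokenized[0])
# Words in every document (like "for") score 0; distinctive terms score higher.
top = max(scores, key=scores.get)
print(top)
```

Production systems would use learned representations (e.g. RNN or transformer embeddings) rather than raw TF-IDF, but the goal of surfacing what makes a document distinctive is the same.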
This is an exciting opportunity to be part of a startup that is applying deep learning to a real-world problem at global scale. We're always looking for leaders, and there is room for the right person to grow into increasing responsibility as part of a small and dynamic team.
Tokyo, Melbourne, San Francisco (willing to negotiate remote work as well)
Architect and implement software libraries for batch processing, API-based predictions, and static analyses
Rapidly iterate on the design, implementation and evaluation of machine learning algorithms for document corpora
Report and present software developments including status and results clearly and efficiently, verbally and in writing
Participate in strategy discussions about technology roadmap, solution architecture, and product design
Strict adherence to clean code paradigms
Minimum Qualifications and Education Requirements:
BSc/BEng degree in computer science, mathematics, machine learning, computational linguistics or equivalent (MSc/MEng preferable)
Experience with implementing statistical methods and data visualization
Good knowledge of computer science principles underpinning the implementation of machine learning algorithms
Experience with deep learning approaches to NLP, particularly RNNs
Experience implementing deep learning models in TensorFlow or PyTorch
Contributions to open source projects
Passion for new developments in AI
Experience with GCP or AWS
A track record of machine learning code that is:
To apply, please contact us at [email protected] with your CV.