ABOUT SPARKLINE:
At Sparkline, we’re not interested in being ‘the smartest guy in the room’! We want to be a trusted partner, strategic asset and integral part of our clients’ businesses. We are a small team and at our strongest when we leverage each other’s strengths. We roll up our sleeves and move at lightning pace to get the job done!
OUR DNA:
Passion for digital and data, and the value it brings to both consumers and businesses
Dedication to becoming a value-based partner, delivering above and beyond client expectations
Humility to work with integrity, good intent and transparency
Curiosity to explore the possibilities and the opportunities from the information you are given
Creativity to see beyond the answer and use insights to effectively reach potential customers
Fail Fast, Learn Faster - it’s about progress, not perfection
Entrepreneurial in attitude and action, effortlessly innovating and adapting
Embrace Change - ‘It’s not the strongest or the most intelligent who will survive, but those that can best manage change’…that’s Charles Darwin, by the way!
THE ROLE:
You will work with our valued client base and peers across the regional team to help design, implement, and manage data pipelines end-to-end.
This will include:
Gathering and processing raw data at scale (including writing ETL scripts, scraping webpages, calling APIs, and writing SQL queries).
Explaining in layman’s terms how data flows between processes and systems.
Working closely with clients’ engineering teams to integrate your innovations and algorithms into their production systems.
Taking ownership of systems’ uptime, including resolving application, performance, and systems incidents and errors as quickly as possible.
Supporting business decisions with ad hoc analysis as needed.
THE PERSON:
You should be a creative problem solver, resourceful in getting things done, and productive whether working independently or collaboratively. You should also be able to take on new initiatives to deliver your best work. You will be hands-on and technically savvy, with the passion to support varied points of contact, both internally and externally, to achieve success through effective delivery.
YOU WILL BRING:
2-5 years of experience in data application development, including building machine learning projects from start to finish.
Experience with Google Cloud, especially Kubernetes, BigQuery and its other data analytics services.
Python: 3 years of experience, especially with libraries such as NumPy, scikit-learn, Flask and pandas.
Development tools and frameworks: Docker, Node.js + Gulp, D3.js.
Required: Please provide publicly accessible links to sample code from your data analytics and machine learning applications.