Data / ML engineer
Job Location: hybrid in Bucharest
Role description:
The key result area of this position is the team's ability to deliver high-quality code that is testable, maintainable, and meets all business requirements.
Preferred: experience with the Hadoop ecosystem, performance optimization, and developing, deploying, and maintaining code in production.
We'll trust you to:
As part of a development team, collaborate with team members to understand requirements, analyze and refine user stories, design solutions, implement and test them, and support them in production
Develop and deploy, within a team, big data fraud detection solutions and models using Python and PySpark; ability/willingness to learn how to improve them with machine learning algorithms and models
Write code and write it well: use test-driven development, write clean code, and refactor constantly
Develop, or be willing to learn to develop, big data machine learning solutions
Collaborate closely with product owners, analysts, developers, and testers. Make sure we are building the right thing
Ensure that the software you build is reliable and easy to maintain in production
Help your team build, test and release software with short lead times and a minimum of waste.
Work to develop and maintain a highly automated Continuous Delivery pipeline
Help create a culture of learning and continuous improvement within your team and beyond
Actively support the business strategy, plans, and values, contributing to the achievement of a high-performance culture
Take ownership of your own career management, seeking opportunities for continuous development of personal capability and improved performance contribution
We'd love you to bring:
Good development skills using Python and PySpark for processing large data volumes
Relevant experience with Pandas, NumPy, Scikit-learn, TensorFlow, and other relevant Python and ML libraries
Good knowledge of working with databases (e.g. SQL, Oracle, Hive, Impala)
Understanding of distributed computing principles and working with different data file types
Good knowledge of Unix/Linux environments
Familiarity with Big Data and the Hadoop ecosystem: Spark (Spark SQL, DataFrames, PySpark), HDFS, Hive, YARN, Kerberos
Awareness of DevOps practices such as CI/CD
Experience using version control systems such as Git
Hands-on knowledge of shell scripting
Bachelor's degree from an accredited college or university with a concentration in Computer Science or an IT-related discipline (or equivalent work experience/diploma/certification)
Nice-to-Have Skills Description:
Experience working in an Agile setup, practicing Scrum or Kanban
Familiarity with Atlassian stack: Jira, Confluence, Bitbucket
Experience using Control-M Workload Automation tool
An aptitude for data mining, an analytical approach to tasks, and a business focus
An understanding of statistical modelling and the ability to apply modelling techniques to analyze data
Experience with Google Cloud Platform