This vacancy has already been closed
A position for data engineers working on a variety of large-scale data projects in the cloud and on-premises. You will join a team of specialists and demonstrate your talent and technical skills in designing processes for data preparation, processing, validation, and migration across diverse sources.
MAIN GOALS AND RESPONSIBILITIES:
- collaborate with product owners and team leads to identify, design, and implement new features to support the growing data needs;
- monitor and anticipate trends in data engineering, and propose changes in alignment with organizational goals and needs;
- share knowledge with other teams on various data engineering or project-related topics;
- develop and maintain code and documentation for data integration projects and procedures;
- solve challenging problems in a fast-paced and evolving environment while maintaining uncompromising quality;
- contribute to the core design of data architecture and implementation plan, define risks;
- design, build, and maintain optimal data pipeline architecture for the extraction, transformation, and loading of data from a wide variety of sources, including external APIs, data streams, and data lakes;
- refactor existing code to improve its quality and performance, and fix potential errors;
- identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.;
- collaborate with the team to decide which tools and strategies to use for specific data integration scenarios.
REQUIRED SKILLS:
- 1+ years of experience with software development or data engineering tasks;
- software engineering skills (Python, Scala preferred) in massively parallel processing implementations;
- solid SQL knowledge;
- experience with Python libraries for data manipulation and processing (pandas, NumPy) is a plus;
- proficient with relational and non-relational data modeling;
- proficient with stream processing using the current industry standards;
- knowledge of ETL and ELT principles;
- good understanding of data warehousing principles and modeling concepts (data model types and terminology including OLTP/OLAP, MPP, (de)normalization, "star"/"snowflake" schemas, graph/NoSQL);
- capable of building conceptual, logical, and physical data models;
- ability to tune and optimize queries;
- a team player with excellent collaboration skills;
- English level Intermediate+.
NICE TO HAVE SKILLS:
- BSc or Master's degree in CS/Mathematics/Statistics/Physics/EE, or equivalent experience;
- experience with cloud services (GCP, Azure, AWS);
- experience with MS SQL Server tools: SSAS, SSRS, SSIS;
- experience with data streaming frameworks (Kafka);
- experience with orchestration of data flows (Apache Airflow);
- experience with NoSQL storages (Cassandra, MongoDB, Elasticsearch);
- experience with containerized (Docker, ECS, Kubernetes) or serverless (Lambda) deployment;
- experience with requirements management tools, in particular Jira and Confluence.
WHAT WE OFFER:
- Large-scale projects;
- Professional, friendly, and supportive team;
- Prospects for career development in a team with a 27-year history;
- Opportunities for professional growth that include participation in thematic events, English courses, certifications, paid participation in the largest industry conferences in the world;
- Comfortable space in the city center, regular team buildings, holidays, and legendary corporate parties;
- Flexible compensation review system;
- Medical, sports, and accounting support programs.
If you would like to join our team, please fill out our online CV form. We look forward to meeting you!
Kseniya
NIX is a software development and system integration service provider.
more than 500 employees
on the market since 1994
- Office in the city center
- Corporate events
- Corporate training