Consultant – Analytics Data Engineer

Job ID: 14749
Location: Mumbai, Maharashtra

Overview & Responsibilities

General Mills is seeking Analytics Data Engineers to join the Data Science and Analytic Engineering team (DSAS) in the Enterprise Data Capabilities function. This team builds enterprise-level, scalable, and sustainable data pipelines that serve analytic needs. These data pipelining solutions are further leveraged to provide advanced analytical solutions around demand forecasting, text classification, operational analytics, and machine learning, to name just a few. The team is also responsible for curating a community of practice to determine the best standards and practices for analytics data solutions at General Mills.

• Analyze raw data and identify trends
• Develop automated processes for large scale data analyses, model development, validation, and implementation
• Develop, test, optimize, and maintain data architecture and pipelines that adhere to technical principles and business goals
• Participate in evaluation, implementation and deployment of emerging tools & processes to improve team productivity
• Accurately apply technical job knowledge and skills to complete all work in a timely manner in accordance with policies, procedures and regulatory requirements
• Continually invest in your own knowledge and skillset through formal training, reading, and attending conferences and meetups
Supervision
• Works on 1-2 projects at a time with concrete definition and clear direction
• Demonstrates teamwork skills
• Works in compliance with data standards and policies when provided with clear direction
• Strong interpersonal skills and ability to interact with teammates to successfully understand and solve problems
• Demonstrates problem-solving and analytical skills
• Able to implement architecture designs with clear direction from senior team members
• Participates in resolving operational issues
Consultation
• Experienced in proposing sustainable, scalable data pipelining solutions
• Collaborates with technical teams such as data scientists, data developers, development, and platform teams
• Able to research and devise best practices for new technologies, platforms, and components

Qualifications

Education

Minimum Degree Requirements: Bachelor's
Preferred Major Area of Study: Computer Information / Computer Science
Preferred Professional Certifications: Data Engineer / Data Warehouse / Platforms / Cloud

Experience

Minimum years of related experience required: 3 years
Preferred years of experience: 5 years

Specific Job Experience or Skills Needed
Skills needed in Data Engineering include Data Analysis, Data Platforms, Data Security, Data Quality, Version Control, and BI. Understanding of or experience in the CPG industry is desirable.
A Data Engineering background is required.
• Expertise in SQL
• Proficiency in:
Hadoop, Spark, Google BigQuery, and other database platforms (Snowflake, MongoDB, etc.)
Logging, alerting, scheduling, and monitoring
Scripting languages: Python / Scala
Cloud exposure: Google Cloud Platform (preferred), Azure, or AWS

• Familiarity with:
Semantic layer: AtScale
Catalog tools: Alation
BI tools (Tableau, BOBJ, Looker)
Version Control (Git)
Data Integration Tools

• Agile Process:
Understanding of Agile norms, principles, and key concepts
Active in defining and maintaining stories and tasks

• Innovation Mindset:
Demonstrates learning agility and the ability to apply it to work
Leverages inner source or open source innovation
