Data Engineer

General Mills

Software Engineering, Data Science
Powai, Mumbai, Maharashtra, India
Posted on May 14, 2022

Job Description

India is among the top ten priority markets for General Mills and hosts our Global Shared Services Centre, the shared-services arm of General Mills Inc. that supports its operations worldwide. With over 1,300 employees in Mumbai, the centre has capabilities in Supply Chain, Finance, HR, Digital and Technology, Sales Capabilities, Consumer Insights, ITQ (R&D & Quality), and Enterprise Business Services. Learning and capacity-building are key ingredients of our success.

Job Overview

The Enterprise Data Development team designs and architects solutions that integrate and transform business data into the Data Lake, delivering the data layer for the enterprise with big data technologies such as Hadoop. We design solutions to meet the expanding need to integrate internal and external information with existing sources, and we research, implement, and leverage new technologies to deliver more actionable insights to the enterprise. We build solutions that combine processes, technology landscapes, and business information from the core enterprise data sources that form our corporate information factory, providing end-to-end solutions for the business.
This position develops solutions for the Enterprise Data Lake and Data Warehouse; you will be responsible for building data lake solutions that support business intelligence and data mining.


Job Responsibilities

70% of time: Create, code, and support a variety of Hadoop, ETL & SQL solutions
- Apply agile techniques and methods
- Work effectively in a distributed, global team environment
- Build pipelines of moderate scope and complexity
- Communicate effectively on technical and business topics, with good influencing skills
- Analyze existing processes and user development requirements to ensure maximum efficiency
- Participate in the implementation and deployment of emerging tools and processes in the big data space
- Turn information into insight by consulting with architects, solution managers, and analysts to understand business needs and deliver solutions

20% of time: Support existing data warehouses and related jobs
- Job scheduling experience (Tidal, Airflow, Linux)

10% of time: Proactively research current technologies and techniques for development
- Embrace an automation mindset and a continuous-improvement mentality to streamline and eliminate waste in all processes


Desired Profile

Education:
Minimum Degree Requirement: Bachelor's
Preferred Degree Requirement: Bachelor's
Preferred Major Area of Study: Engineering


Experience:
Minimum Hadoop experience required: 2 years
Preferred Data Lake/Data Warehouse experience: 2-4+ years

Total experience required: 4-5 years


Specific Job Experience or Skills Needed 


Skill levels (scale: Beginner / Intermediate / Expert / Advanced):

- HDFS, MapReduce: Beginner
- Hive, Impala & Kudu: Beginner
- Python: Beginner
- SQL, PL/SQL: Proficient
- Data Warehousing Concepts: Beginner


Other Competencies:
- Demonstrates learning agility and inquisitiveness towards the latest technologies
- Seeks to learn new skills via experienced team members, documented processes, and formal training
- Delivers projects with minimal supervision
- Delivers assigned work within given parameters of time and quality
- Self-motivated team player with the ability to overcome challenges and achieve desired results