Data Engineer II – Supply Chain Analytics | Amazon AWS (Bangalore)
Company: Amazon Web Services (AWS)
New Hiring
Job Description
Amazon Web Services (AWS) is hiring Data Engineer II professionals for its Supply Chain Analytics team in Bangalore. The role involves designing scalable data pipelines, building automation solutions, enabling business intelligence, and supporting AWS infrastructure supply chain operations through data-driven insights.
Key Responsibilities
- Design, develop, and maintain scalable analytical tools and automated data pipelines
- Build ETL/ELT workflows using SQL, Python, and AWS technologies
- Analyze business requirements and identify automation opportunities
- Collaborate with global stakeholders, data scientists, and business intelligence teams
- Develop predictive and prescriptive analytics solutions where applicable
- Optimize production operations, deployments, and infrastructure scalability
- Create documentation and maintain best practices for data integration solutions
- Handle multiple projects while ensuring high delivery standards
Eligibility Criteria
- 5+ years of experience in Data Engineering
- Strong SQL expertise with large-scale data handling experience
- Experience with ETL/ELT processes and analytical systems
- Knowledge of Python, Java, Scala, or Node.js
- Hands-on experience with AWS technologies like Redshift, S3, Glue, EMR, Lambda, and Kinesis preferred
- Strong analytical, automation, and problem-solving skills required
⭐ Key Skills
- Data Engineering
- AWS Cloud Technologies
- SQL & Python
- ETL / ELT Pipelines
- Supply Chain Analytics
Benefits
- Opportunity to work with AWS global infrastructure teams
- Exposure to large-scale cloud and supply chain analytics systems
- Career growth in cloud computing and data engineering
- Inclusive work culture with mentorship and learning programs
⭐ Why Apply?
- Work on high-impact AWS infrastructure and analytics projects
- Gain hands-on experience with modern AWS big data technologies
- Collaborate with global engineering and analytics teams
- Strong career growth opportunities in the cloud and data engineering domains
Core Skills Required
- Data Engineering: Ability to design, build, and maintain scalable data pipelines and analytics systems.
- AWS Technologies: Strong understanding of AWS services such as Redshift, S3, Glue, Lambda, EMR, and Kinesis.
- SQL: Expertise in querying, transforming, and managing large-scale structured datasets efficiently.
- Python Programming: Skills to develop automation scripts, ETL processes, and backend data workflows.
- ETL / ELT Processes: Ability to extract, transform, and load data from multiple data sources reliably.
- Business Intelligence: Understanding of analytical reporting, KPI tracking, and data-driven decision-making.
- Problem-Solving: Strong analytical thinking and troubleshooting ability for complex business and technical challenges.
How to Prepare for the Interview
- Revise SQL concepts including joins, window functions, optimization, and query performance tuning.
- Practice Python coding for data processing, automation, and ETL pipeline development.
- Review AWS services such as S3, Redshift, Glue, Lambda, EMR, and Kinesis thoroughly.
- Prepare examples of data engineering projects, pipeline automation, or cloud migration work you have previously delivered.
- Practice system design and scenario-based interview questions related to scalable data architecture.
- Understand Amazon Leadership Principles and be ready for behavioral interview questions.
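The SQL window-function and Python practice points above can be rehearsed locally. The sketch below is one minimal, hypothetical exercise (the `shipments` table and its data are invented for illustration, not taken from the role): it uses Python's built-in `sqlite3` module to load sample rows and compute a running total per warehouse with a `SUM(...) OVER` window function.

```python
import sqlite3

# Hypothetical sample data for practice only: (ship_date, warehouse, units).
rows = [
    ("2024-01-01", "BLR", 120),
    ("2024-01-02", "BLR", 80),
    ("2024-01-01", "HYD", 60),
    ("2024-01-02", "HYD", 90),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE shipments (ship_date TEXT, warehouse TEXT, units INTEGER)"
)
conn.executemany("INSERT INTO shipments VALUES (?, ?, ?)", rows)

# Window function: running total of units per warehouse, ordered by date.
query = """
SELECT warehouse, ship_date, units,
       SUM(units) OVER (
           PARTITION BY warehouse ORDER BY ship_date
       ) AS running_units
FROM shipments
ORDER BY warehouse, ship_date
"""
result = conn.execute(query).fetchall()
for row in result:
    print(row)
```

Extending this kind of exercise (e.g. `RANK()`, `LAG()`, or moving averages over the same table) is a quick way to cover the joins and query-tuning topics listed above without any cloud setup.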
How to Apply
Interested candidates can apply using the official Amazon career link below.