Work With Us
Interview Rounds: 4
As an AWS Engineer, you will be a key contributor to our team, responsible for hands-on implementation and management of AWS services, especially focusing on big data and data lake solutions. Your role will involve not only technical expertise but also effective communication with business stakeholders.
Responsibilities:
- Utilize real-world, hands-on experience with AWS services, including API Gateway, Lambda, Redshift, EMR, CloudWatch, EC2, S3, and Aurora PostgreSQL.
- Demonstrate strong Python skills for scripting and automation tasks.
- Apply expertise in big data and data lake technologies.
- Act as the primary contributor to a significant big data lake implementation project.
- Serve as the face of the big data lake implementation, interacting with business stakeholders.
- Work comfortably with diverse business teams and demonstrate a self-motivated approach.
- Take a hands-on role in data loading and ingestion using our data lake framework.
- Potentially work with Apache Airflow for orchestrating data workflows.
- Apply solid data modeling experience to design and optimize data structures.
- Demonstrate proficiency in Git for version control, ensuring effective collaboration and code management.
Requirements:
- Proven real-world experience with AWS services, especially in the context of big data solutions.
- Strong Python skills for scripting and automation tasks.
- Hands-on experience with big data and data lake technologies.
- Ability to effectively communicate and engage with business stakeholders.
- Proficiency in data modeling to optimize data structures.
- Familiarity with Apache Airflow for orchestrating data workflows is a plus.
- Solid understanding of Git for version control.
- Self-motivated with the ability to take ownership of projects.
- Bachelor’s degree in Computer Science, Engineering, or a related field.