Work With Us
Azure Data Engineer
Interview Rounds: 4
- Extract the data from various source systems and land it in ADLS (landing layer).
- Convert the raw data to a unified file format in the staging layer.
- Read the data from the staging layer using Databricks notebooks.
- Apply business logic transformations using PySpark.
- Store the data in the optimized Delta file format.
- Apply various optimizations while implementing the transformation logic.
- Write the transformed data back to the processed layer (ADLS).
- Orchestrate all PySpark jobs using ADF.
- Enable proper alerting mechanisms and data quality checks in ADF.
- Handle exceptions when dealing with large volumes of data.
- Enable the logging mechanism.
- Prepare Unit test cases.
- Participate in the CI/CD pipeline process.
- Participate in sprint planning and deliver on sprint goals.
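The data quality, exception handling, and logging duties above can be sketched as follows. This is a minimal illustration only, written in plain Python rather than PySpark for brevity; the function, column names, and rules are hypothetical, not part of this role's codebase.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("quality_checks")

def check_quality(rows, required_columns):
    """Split rows into passing and failing sets: a row fails if any
    required column is missing or null. Failures are logged, not raised,
    so one bad record does not abort a large batch."""
    good, bad = [], []
    for row in rows:
        try:
            if any(row.get(col) is None for col in required_columns):
                raise ValueError(f"null in required column(s): {row}")
            good.append(row)
        except ValueError as exc:
            # Log and quarantine the bad record instead of failing the job.
            logger.warning("quality check failed: %s", exc)
            bad.append(row)
    return good, bad
```

In a real pipeline the same pattern would run as a PySpark filter over DataFrames, with the quarantined records written to a separate ADLS path and an ADF alert raised when the failure count crosses a threshold.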
Bachelor’s degree in computer science, computer information systems, information technology, or advanced educational background equating to the U.S. equivalent of a Bachelor’s degree in one of the aforementioned subjects.