- Hands-on experience installing, configuring, and using MS Azure Databricks and Hadoop ecosystem components such as DBFS, Parquet, Delta Tables, HDFS, MapReduce programming, Kafka, Spark, and Event Hubs.
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib.
- Hands-on experience with scripting languages such as Scala and Python.
- Hands-on experience in the Analysis, Design, Coding, and Testing phases of the SDLC, following best practices.
- Expertise in using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs.
- Experience creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala.
- Experience migrating code from traditional DW environments to Apache Spark and Scala using Spark SQL and RDDs.
- Experience transferring data from RDBMS/BLOB/ADLS to Databricks using ADF.
- Experience with Azure SQL Database (PaaS) or Azure SQL Data Warehouse.
- Experience in orchestrating
Job Types: Full-time, Contract
- Spark: 2 years (Preferred)
- Azure: 3 years (Preferred)