This job is restricted to tax residents of , but your IP address appears to be outside the country. Please apply only if you are a tax resident.
Job details
Job Type
Contract
Number of hires for this role
2 to 4
Full Job Description
Build and maintain the infrastructure and data pipelines required for optimal extraction, transformation, and loading of data from a wide variety of data sources, automating high-volume ETL processes on AWS.
Profile data sources, create dimensional models, implement ETL, and load data into the data warehouse and data lake.
Assemble large, complex data sets that meet business requirements.
Ensure data quality and establish best practices across the data infrastructure.
Continuously integrate and ship code into cloud production environments; work directly with Product Owners and customers to deliver data products in a collaborative, agile environment.
Work with stakeholders, including the Engineering and Analytics teams, to assist with data-related technical issues and support their data infrastructure needs.
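To illustrate the extract-transform-load flow the responsibilities above describe, here is a minimal sketch in plain Python. The source records, field names (order_id, amount), and the data-quality rule are hypothetical; a production pipeline would read from sources such as S3 and write to a warehouse (for example via AWS Glue) rather than in-memory lists.

```python
# Illustrative ETL sketch only; field names and the cleaning rule are
# hypothetical, not from the job posting.

def extract(raw_rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(raw_rows)

def transform(rows):
    """Transform: drop incomplete records and normalize the amount field."""
    cleaned = []
    for row in rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # data-quality rule: skip incomplete records
        cleaned.append({"order_id": row["order_id"],
                        "amount": round(float(row["amount"]), 2)})
    return cleaned

def load(rows, warehouse):
    """Load: append the cleaned rows to the target store."""
    warehouse.extend(rows)
    return len(rows)

raw = [
    {"order_id": 1, "amount": "19.991"},
    {"order_id": None, "amount": "5.00"},  # incomplete record: dropped
    {"order_id": 2, "amount": 42},
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

The same extract/transform/load separation carries over when each stage is backed by a real source and sink.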
Skills:
6 to 8 years of IT experience, including 3 years building data services on AWS cloud architecture.
3 years coding in Python for ETL and data pipelines.
2 years of experience with AWS Glue, Athena, Kinesis, and S3 (including knowledge of bucket tagging, etc.).
Experience building and optimizing AWS data pipelines, architectures, and data sets.
Experience automating and provisioning services on AWS.
Must be a team player with strong attention to detail who is also able to work independently.
Proven track record of delivering timely and accurate information in a fast-paced environment.
Excellent critical-thinking, problem-solving, and mathematical skills, and sound judgment.
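As a concrete illustration of the S3 bucket-tagging knowledge mentioned in the skills list, here is a small sketch that builds the TagSet payload the S3 API expects. The bucket name and tag values are hypothetical; the boto3 call itself is shown only in a comment, since applying tags requires AWS credentials.

```python
# Illustrative sketch: constructing an S3 bucket-tagging payload.
# Tag keys/values and the bucket name below are hypothetical examples.

def build_tag_set(tags):
    """Convert a plain dict into the TagSet structure the S3 API expects."""
    return {"TagSet": [{"Key": k, "Value": v} for k, v in sorted(tags.items())]}

tagging = build_tag_set({"team": "data-eng", "env": "prod"})

# With boto3 installed and credentials configured, the tags would be
# applied to a bucket like this:
#   s3 = boto3.client("s3")
#   s3.put_bucket_tagging(Bucket="example-data-lake", Tagging=tagging)
```

Cost-allocation and ownership tags like these are a common way teams attribute S3 storage costs and enforce governance rules.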