This job is restricted to tax residents of , but we detected that your IP address is outside the country. Please apply only if you are a tax resident.
Job Title: ETL Developer (remote until COVID restrictions lift) # DE1JP00000839
Location: Richmond, VA
Duration: 11-month contract
Job Description
As a core member of a team of Data Engineers/ETL Developers, responsibilities include, but are not limited to:
Design, develop, and maintain secure, consistent, and reliable ETL solutions supporting critical business processes across the various business units.
Ensure data solutions are compliant with enterprise security standards
Work in complex multi-platform environments on multiple project assignments.
Develop and perform tests, validate data flows, and prepare ETL processes to meet complex business requirements, including designing and implementing complex end-to-end solutions using BI platforms.
Coordinate with analysts and developers to ensure that jobs are designed and developed to meet minimum support standards and best practices before migration into the production environment.
Partner with various infrastructure teams, application teams, and architects to produce process designs and complex transformations of various data elements, providing the business with insight into its processes.
Use strategies such as indexing and partitioning to fine-tune the data warehouse and big data environments, improving query response time and scalability.
Define and capture metadata and rules associated with ETL processes.
Assist the production support team in resolving production job failures, data issues, and performance problems.
ETL development and process support may require weekend/off-business-hours work.
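The indexing responsibility above can be illustrated with a minimal, hypothetical sketch (the `sales` table, `idx_sales_region` index, and data are invented for illustration; an in-memory SQLite database stands in for a real warehouse engine):

```python
import sqlite3

# Hypothetical warehouse table with a column that is frequently filtered on.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("east" if i % 2 else "west", float(i)) for i in range(10_000)],
)

# Without an index, filtering on region forces a full table scan.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = 'east'"
).fetchall()

# Adding an index lets the query planner seek directly to matching rows,
# which is the kind of tuning the responsibility above describes.
cur.execute("CREATE INDEX idx_sales_region ON sales (region)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = 'east'"
).fetchall()

print(plan_before)
print(plan_after)
conn.close()
```

Comparing the two `EXPLAIN QUERY PLAN` outputs shows the planner switching from a scan of the whole table to a search using the new index; partitioning achieves a similar effect by physically limiting how much data each query touches.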
Required Skills
Strong understanding of Data warehousing (Dimensional Modeling, ETL etc.) and RDBMS concepts
Minimum 5 years of working experience with ETL tools such as Talend, Informatica, DataStage, etc.
Minimum 5 years of working experience with SQL, stored procedures, and table design
Minimum 5 years of working experience in SQL query optimization and ETL data-loading performance
Experience as a Data Engineer on Hadoop platforms with components such as Hive, Kafka, NiFi, Spark, etc. is a big plus.
Minimum 2 years of working experience in shell scripting
Experience in real time streaming technologies is preferred
Experience deploying machine learning models and automating processes in production is a plus
Experience with cloud technologies (AWS, Azure, GCP) is a big plus