DAtec

BIG DATA ENGINEER

Reston, VA (On-site)


Job Title: Big Data Developer

Location: Dallas, TX

Years of experience: 5-10

We are looking for someone who is strong in Kafka; pre-processing using Hive, Impala, Spark, and Pig; Kerberos and LDAP; SQL; shell scripting (bash, Korn); Python; and big data on HDFS.

Hands-on experience with Hadoop, Hive, Pig, Impala, and Spark

H-1B candidates are welcome, and we work on C2C as well.

Responsibilities include:
  • Translate complex functional and technical requirements into detailed designs.
  • Design for current needs and future success.
  • Hadoop technical development and implementation.
  • Load data from disparate data sets by leveraging various big data technologies, e.g. Kafka.
  • Pre-process data using Hive, Impala, Spark, and Pig.
  • Design and implement data models.
  • Maintain security and data privacy in an environment secured using Kerberos and LDAP.
  • Perform high-speed querying using in-memory technologies such as Spark.
  • Follow and contribute to best engineering practices for source control, release management, deployment, etc.
  • Provide production support, job scheduling/monitoring, ETL data quality, and data freshness reporting.

Skills Required:
  • Strong SQL scripting background required; 5-8 years of Python or Java/J2EE development experience is a plus
  • 3+ years of demonstrated technical proficiency with Hadoop and big data projects
  • 5-8 years of demonstrated experience and success in data modeling
  • Fluent in writing shell scripts (bash, Korn)
  • Writing high-performance, reliable, and maintainable code
  • Ability to write MapReduce jobs
  • Ability to set up, maintain, and implement Kafka topics and processes
  • Understanding and implementation of Flume processes
  • Good knowledge of database structures, theories, principles, and practices
  • Understanding of how to develop code in an environment secured using a local KDC and OpenLDAP
  • Familiarity with and implementation knowledge of loading data using Sqoop
  • Knowledge of and ability to implement workflows/schedulers within Oozie
  • Experience working with AWS components (EC2, S3, SNS, SQS)
  • Analytical and problem-solving skills applied to the big data domain
  • Proven understanding of and hands-on experience with Hadoop, Hive, Pig, Impala, and Spark
  • Good aptitude for multi-threading and concurrency concepts
  • B.S. or M.S. in Computer Science or Engineering

Job Type: Contract

Experience:

  • relevant: 1 year (Preferred)
