

1 year ago
Reston, VA · DAtec


· Create and maintain optimal data pipelines.
· Assemble large, complex data sets that meet
functional/non-functional business requirements.
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing code and infrastructure for greater scalability.
· Build re-usable data pipelines to ingest, standardize, and shape data from various zones in the Hadoop data lake.
· Build analytic tools that utilize the data pipeline to provide actionable insights into customer acquisition, revenue management, and digital and marketing areas for operational efficiency and KPIs.
· Design and build BI APIs on established enterprise architecture patterns for data sharing from various sources.
· Design and integrate data using big data tools such as Spark, Scala, and Hive.
· Help manage the library of all deployed Application Programming Interfaces (APIs).
· Support API documentation of classes, methods, scenarios, code, design rationales, and contracts.
· Design, build, and maintain a small set of highly flexible and scalable models linked to the client's specific business needs.

Required Qualifications:

· 5+ years of experience in a data engineering/data integration role.
· 5+ years of advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
· Experience building and optimizing 'big data' pipelines, architectures, and data sets.
· Experience building programs/processes supporting data transformation, data structures, metadata, dependency management, and workload management.
· Experience with data warehouse and data mart ETL implementations using big data technologies.
· Working knowledge of message queuing, stream processing, and scalable data stores.
· Experience with relational SQL and NoSQL databases; graph databases preferred.
· Strong experience with object-oriented programming: Java, C++, or Scala (preferred).
· Experience with AWS cloud services and streaming frameworks: Storm, Spark Streaming.
· Experience with API and web services design and development.

Preferred Qualifications:

· Functional experience in the hospitality industry.
· End-to-end experience building data flow processes (from ingestion to the consumption layer).
· Solid working experience with surrounding and supporting disciplines (data quality, data operations, BI/data warehouse/data lake).
· Effective communicator, collaborator, influencer, and solution seeker across a variety of opinions.
· Self-starter: well organized, extremely detail-oriented, and an assertive team player willing to take ownership of responsibilities, with a high level of positive energy.
· Excellent time management and organizational skills
· Ability to manage multiple priorities, work well under pressure, and effectively prioritize concurrent demands.

Job Type: Contract
