Hadoop contract jobs near you / remote

Hadoop

Enterprise Solutions Architect F33294

4 hours ago
$65 - $75/hour (Estimated) · Reston, VA 20190 · SyApps LLC

SyApps is a technology and management consulting services firm focused on providing solutions in strategy, processes, technology and management. As a diverse end-to-end IT and management solution provider, SyApps offers a range of technical expertise to help customers compete successfully in an ever-changing marketplace.

Please visit us at www.syapps.com.

Tax Terms: Contract


Specialized Knowledge & Skills

  • Ability to interact and build working relationships with stakeholders to develop an architecture strategy and processes, and to deliver IT assets that meet business objectives
  • 6-8 years of experience with solution architecture and solution experience across multiple architecture domains such as business, data, integration, and application
  • 6-8 years of experience with data integration patterns and the integration of data and business architectures
  • 5+ years of experience leading end-to-end business process design, business event derivation, and associated data object and entity modeling
  • 5+ years of experience driving enterprise architecture strategy, methodology, and standards that deliver an enterprise data model, associated meta-model, common business vocabulary, and taxonomies
  • Hands-on experience applying advanced enterprise architecture/design concepts such as cloud computing, data lakes, and big data
  • Exposure to large-scale/enterprise-level data-centric projects is a must
  • Good understanding of NoSQL data stores, Hadoop, MDM, and data modeling across relational, star/dimensional, or other emerging approaches
  • In-depth knowledge of advanced analytics concepts and data science – exposure to applying machine learning/statistical models on large/enterprise-grade data
  • Working knowledge/conceptual understanding of AWS/cloud-based managed services such as S3, EMR, Redshift, etc. is considered essential
  • Experience reviewing designs for robustness and quality.
  • Strong understanding of SOA concepts, design patterns, tools, techniques and best practices.
  • TOGAF or an equivalent architecture framework certification is preferred
  • AWS or Cloud based architecture/equivalent certification is preferred

Desired Skills

  • 4+ years of design and development experience (enterprise-grade applications) using ETL tools and relational data stores
  • Excellent verbal and written communication skills
  • Ability to communicate across diverse audience of business analysts, architects, business users and technical project staff
  • Demonstrated ability to influence others and advocate a point of view when appropriate
  • Ability to work in a teaming environment; strong organization and interpersonal skills

Location: Reston, VA

SyApps LLC is a winner of the SmartCEO/Grant Thornton Future 50 Award, recognized as one of the 50 fastest-growing companies in the Greater Washington Area.

We are proud of our diverse environment and are an Equal Opportunity Employer. SyApps is committed to a policy of equal employment opportunity. SyApps participates in E-Verify.

This Week

Big Data Developer

4 days ago
$60 - $65/hour · Ashburn, VA · RPD Systems

Responsibilities and Duties

1. Understand business requirements and help assess them with the development teams.
2. Create high-quality documentation supporting the design and coding tasks.
3. Participate in architecture/design discussions and develop ETL/ELT using PySpark and Spark SQL.
4. Conduct code and design reviews and provide review feedback.
5. Identify areas of improvement in frameworks and processes and strive to make them better.

Key Skills

Desired skills:

1. Airflow
2. Understanding of object-oriented programming
3. DevOps implementation knowledge
4. Git commands
5. Python modules such as Sphinx, Pandas, SQLAlchemy, mccabe, unittest, etc.

Required Experience and Qualifications

Qualifications:

1. At least 3 years of working experience in a big data environment
2. Knowledge of design and development best practices in data warehouse environments
3. Experience developing large-scale distributed computing systems
4. Knowledge of the Hadoop ecosystem and its components – HBase, Pig, Hive, Sqoop, Flume, Oozie, etc.
5. Experience with PySpark and Spark SQL
6. Experience with integration of data from multiple data sources
7. Experience implementing ETL processes in Hadoop – developing big data ETL jobs that ingest, integrate, and export data, including converting Teradata SQL to PySpark SQL (see the sketch after this list)
8. Experience with Presto, Kafka, and NiFi
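
For illustration, here is a minimal PySpark sketch of the ETL and Teradata-SQL-to-Spark-SQL conversion work this posting describes. It is a sketch only: the lake paths, table, and column names are hypothetical, and it assumes Teradata-specific constructs (e.g. QUALIFY) have already been rewritten into ANSI SQL that Spark accepts.

    # Minimal PySpark ETL sketch; paths, tables, and columns are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("teradata-to-spark-etl").getOrCreate()

    # Ingest: read a raw extract from the data lake.
    orders = spark.read.parquet("/datalake/raw/orders")
    orders.createOrReplaceTempView("orders")

    # Integrate: the Teradata query is re-expressed in ANSI Spark SQL
    # (Teradata-only constructs such as QUALIFY become window subqueries).
    daily_totals = spark.sql("""
        SELECT customer_id, order_date, SUM(amount) AS total_amount
        FROM orders
        GROUP BY customer_id, order_date
    """)

    # Export: write the curated result, partitioned for downstream consumers.
    (daily_totals.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("/datalake/curated/daily_totals"))

    spark.stop()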

Job Type: Contract

Salary: $60.00 to $65.00 /hour

Experience:

  • Data Developer: 4 years (Preferred)

Last 90 Days

Hadoop Developer

12 days ago
Reston, VA · DAtec Solutions

Requirements:

5+ years with Java, Python/Scala programming languages

Advanced-level experience (3+ years) in Hadoop, YARN, HDFS, MapReduce, Sqoop, Oozie, Hive, Spark, and other related big data technologies. Experience tuning Hadoop/Spark parameters for optimal performance (a minimal sketch follows below). Experience with big data querying tools, including Impala. Advanced experience with SQL and at least one major RDBMS (Oracle, DB2).
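
As a rough illustration of the parameter tuning mentioned above, a minimal PySpark configuration sketch; the values are placeholders to be sized against a real YARN cluster, not recommendations.

    # Sketch of common Spark-on-YARN tuning knobs; values are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("tuned-batch-job")
        .config("spark.executor.instances", "10")       # executors across the cluster
        .config("spark.executor.cores", "4")            # concurrent tasks per executor
        .config("spark.executor.memory", "8g")          # heap per executor
        .config("spark.sql.shuffle.partitions", "400")  # fan-out for joins/aggregations
        .getOrCreate()
    )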

Job Type: Contract

Scala Developer

12 days ago
$65 - $75/hour (Estimated) · Reston, VA · DAtec Solutions

Looking for a Scala Engineer.

8+ years of IT experience

Experience with Scala and Spark

Experience with / knowledge of the Hadoop ecosystem

Design and develop ETL for their reporting.

Responsibilities:

  • Building ETL for the Data & Analytics team.
  • Leading development capabilities and managing the team.

Job Type: Contract

DATA MODELLING / ARCHITECT

16 days ago
Reston, VA · DAtec

Responsibilities

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional/non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing code and infrastructure for greater scalability.
  • Build re-usable data pipelines to ingest, standardize, and shape data from various zones in the Hadoop data lake (see the sketch after this list).
  • Build analytic tools that utilize the data pipeline to provide actionable insights into customer acquisition, revenue management, and digital and marketing areas for operational efficiency and KPIs.
  • Design and build BI APIs on established enterprise architecture patterns for data sharing from various sources.
  • Help manage the library of all deployed Application Programming Interfaces (APIs).
  • Support API documentation of classes, methods, scenarios, code, design rationales, and contracts.
  • Design, build, and maintain a small set of highly flexible and scalable models linked to the client's specific business needs.
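
To make the re-usable pipelines bullet concrete, a minimal PySpark sketch of a parameterized zone-to-zone step; the zone layout, dataset name, and standardization rules are hypothetical.

    # Sketch of a re-usable ingest/standardize step between lake zones.
    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("zone-pipeline").getOrCreate()

    def standardize(df: DataFrame) -> DataFrame:
        """Apply shared standardization rules: trimmed keys, typed dates."""
        return (df
                .withColumn("customer_id", F.trim(F.col("customer_id")))
                .withColumn("event_date", F.to_date("event_date", "yyyy-MM-dd")))

    def promote(source_zone: str, target_zone: str, dataset: str) -> None:
        """Read a dataset from one zone, standardize it, write it to the next."""
        df = spark.read.parquet(f"/datalake/{source_zone}/{dataset}")
        standardize(df).write.mode("overwrite").parquet(f"/datalake/{target_zone}/{dataset}")

    promote("raw", "standardized", "bookings")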

Required Qualifications:

  • 5+ years of experience in a data engineering / data integration role.
  • 5+ years of advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
  • Experience building programs/processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Experience with data warehouse and data mart ETL implementations using big data technologies.
  • Working knowledge of message queuing, stream processing, and scalable data stores.
  • Experience with relational SQL and NoSQL databases; graph databases preferred.
  • Strong experience with object-oriented programming – Java, C++, Scala (preferred).
  • Experience with AWS cloud services; Storm, Spark Streaming, etc.
  • Experience with API and web services design and development.

Preferred Qualifications:

  • Functional experience in hospitality.
  • End-to-end experience building data flow processes (from ingestion to the consumption layer).
  • Solid working experience with surrounding and supporting disciplines (data quality, data operations, BI / data warehouse / data lake).
  • Effective communicator, collaborator, influencer, and solution seeker across a variety of opinions.
  • Self-starter: well organized, extremely detail-oriented, and an assertive team player, willing to take ownership of responsibilities, with a high level of positive energy and drive.
  • Excellent time management and organizational skills.
  • Ability to manage multiple priorities, work well under pressure, and effectively handle concurrent demands to prioritize responsibilities.

Job Type: Contract

Big Data Engineer

17 days ago
Chantilly, VA · General Dynamics Mission Systems
Basic Qualifications
Bachelor's degree in software engineering or a related technical field is required (or equivalent experience), plus a minimum of 5 years of relevant experience; or Master's degree plus a minimum of 3 years of relevant experience. Agile experience preferred.

KEY SKILLS
  • Minimum of three (3) years' experience designing, developing, building, and implementing big data solutions, or developing automated solutions to solve complex problems; a thoughtful ability to solve problems could outweigh years of experience.
  • Ability to identify and implement a data solution strategy
  • Demonstrates intellectual curiosity in exploring new technologies and finding creative ways to solve data management problems
  • Experience developing solutions with Python/Javascript/PERL
  • Experience/knowledge of Spark, Impala, Hadoop, Streamsets, Kafka, Rest APIs
  • Experience in SQL-based and NoSQL-based technologies
  • Experience in Linux administration/troubleshooting
A TS/SCI security clearance with the ability to obtain a Polygraph is required at time of hire. Candidate must be able to obtain the Polygraph within a reasonable amount of time from date of hire. Applicants selected will be subject to a U.S. Government security investigation and must meet eligibility requirements for access to classified information. Due to the nature of work performed within our facilities, U.S. citizenship is required.
Responsibilities for this Position
General Dynamics Mission Systems (GDMS) is seeking motivated candidates to join our insider threat detection systems integration team. Our mission-oriented team is responsible for the design, testing, deployment, maintenance, operation, and evolution of the systems directly supporting the insider threat detection program of a large government customer in the United States Intelligence Community (USIC). GDMS has an immediate opening on the team for a motivated Big Data Engineer with a self-starter mindset who is up to date with the latest tools and techniques. The position will focus on the integration of new data management technologies and on software performance tuning and troubleshooting. This is a challenging yet rewarding position that provides an opportunity to leverage cutting-edge technologies in pursuit of a vital mission that protects people, sensitive information/technologies, and the national security posture of the USIC.

The majority of work will be performed in Chantilly, Virginia, which is located approximately 25 miles west of Washington D.C., near the Dulles International Airport. The selected Big Data Engineer will support a 6+ year contract that General Dynamics recently secured.

CORE RESPONSIBILITIES:
  • Assist in the development and delivery of large-scale data pipelines
  • Develop and implement automated tests for data transformations and data migrations (see the sketch after this list)
  • Research and apply big data solution technologies to complex datasets; make recommendations to the data science team on new technologies
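
As an illustration of the automated-testing responsibility, a minimal pytest sketch against a hypothetical transformation; the function and fixture data are invented for the example.

    # Sketch: pytest test for a data transformation (hypothetical dedupe rule).
    import pandas as pd

    def dedupe_latest(df: pd.DataFrame) -> pd.DataFrame:
        """Keep only the most recent record per id (transformation under test)."""
        return (df.sort_values("updated_at")
                  .drop_duplicates("id", keep="last")
                  .reset_index(drop=True))

    def test_dedupe_latest_keeps_newest_record():
        raw = pd.DataFrame({
            "id": [1, 1, 2],
            "updated_at": ["2024-01-01", "2024-02-01", "2024-01-15"],
            "value": ["old", "new", "only"],
        })
        result = dedupe_latest(raw).sort_values("id")
        assert list(result["value"]) == ["new", "only"]
        assert result["id"].is_unique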
Company Overview
General Dynamics Mission Systems (GDMS) engineers a diverse portfolio of high technology solutions, products and services that enable customers to successfully execute missions across all domains of operation. With a global team of 13,000+ top professionals, we partner with the best in industry to expand the bounds of innovation in the defense and scientific arenas. Given the nature of our work and who we are, we value trust, honesty, alignment and transparency. We offer highly competitive benefits and pride ourselves in being a great place to work with a shared sense of purpose. You will also enjoy a flexible work environment where contributions are recognized and rewarded. If who we are and what we do resonates with you, we invite you to join our high performance team!

Oracle Database Administrator III

18 days ago
$50 - $65/hour (Estimated) · Reston, VA · Wisetek Providers, Inc.

ONLY US CITIZENS AND GREEN CARD HOLDERS, AS PER THE CLIENT'S REQUEST.

Job Description:

The client is seeking a Database Administrator III to provide database support on a variety of hardware and software platforms and peripherals.

  • Minimum of 10+ years of overall IT experience; 5+ years of end-to-end database administration with Oracle (system DBA as well as application DBA)
  • Expert-level SQL skills, including query tuning
  • Experience managing 30+ TB of databases
  • Strong shell scripting skills – Korn shell, Python
  • Minimum of 1 year of extensive hands-on experience with AWS

Preferred but not required:

  • Automation tools such as Puppet, Flyway, or Ansible
  • EMR (Elastic MapReduce, a.k.a. Hadoop), RDS (Relational Database Service), Lambda, Elastic Beanstalk, Redshift (see the sketch after this list)
  • Migration from on-premise databases to the AWS Cloud
  • 2+ years of experience with Postgres
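
For context on the AWS services named above, a minimal boto3 sketch that inventories RDS instances and active EMR clusters; it assumes credentials are already configured, and the region is a placeholder.

    # Sketch: inventory RDS databases and EMR (Hadoop) clusters with boto3.
    import boto3

    session = boto3.Session(region_name="us-east-1")  # placeholder region

    # List RDS instances and their engines (e.g. to find Postgres candidates).
    rds = session.client("rds")
    for db in rds.describe_db_instances()["DBInstances"]:
        print(db["DBInstanceIdentifier"], db["Engine"], db["DBInstanceStatus"])

    # List active EMR clusters.
    emr = session.client("emr")
    for cluster in emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])["Clusters"]:
        print(cluster["Id"], cluster["Name"])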

Location: Reston, VA

Duration: 6 months (with possible extension)

Job Type: Contract

Experience:

  • Postgres: 2 years (Preferred)

Azure Big Data Developer

20 days ago
Reston, VA · DAtec Solutions

Essential Job Functions:

  • Production experience in large-scale SQL and NoSQL data infrastructures such as Cosmos DB, Cassandra, MongoDB, HBase, CouchDB, Apache Spark, etc.
  • Application experience with SQL databases such as Azure SQL Data Warehouse, MS SQL, Oracle, PostgreSQL, etc.
  • Proficient understanding of code versioning tools (such as Git, CVS, or SVN)
  • Strong debugging skills with the ability to reach out and work with peers to solve complex problems
  • Ability to quickly learn, adapt, and implement Open Source technologies.
  • Familiarity with continuous integration (DevOps)
  • Proven ability to design, implement and document high-quality code in a timely manner.
  • Excellent interpersonal and communication skills, both written and oral.

Educational Qualifications and Experience:

  • Role Specific Experience: 2+ years of experience in Big Data platform development.

Certification Requirements (desired):

Azure Designing and Implementing Big Data Analytics Solutions

Required Skills/Abilities:

  • Experience with NoSQL databases, such as HBase, Cassandra or MongoDB.
  • Proficient in designing efficient and robust ETL/ELT using Data Factory, workflows, schedulers, and event-based triggers.
  • 1+ years of experience with SQL databases (Oracle, MS SQL, PostgreSQL, etc.).
  • 1+ years of hands-on experience with data lake implementations, core modernization, and data ingestion.
  • 3+ years of Visual Studio C# or core Java.
  • Experience with at least one of the following programming languages: R, Scala, Python, Clojure, F#.
  • 1+ years of experience with Spark systems.
  • Good understanding of multi-temperature data management solutions.
  • Practical knowledge of design patterns.
  • In-depth knowledge of developing large distributed systems.
  • Good understanding of DevOps tools and automation frameworks.

Desired Skills/Abilities (not required but a plus):

  • Experience in designing and implementing scalable, distributed systems leveraging cloud computing technologies.
  • Experience with Data Integration on traditional and Hadoop environments.
  • Experience with Azure Time Series Insights.
  • Some knowledge of machine learning tools and libraries such as TensorFlow, Turi, H2O, Spark MLlib, and caret (R).
  • Understanding of AWS data storage and integration with Azure.
  • Some knowledge of graph databases.

Job Type: Contract

Java Fullstack Developer

20 days ago
$55 - $70/hour (Estimated) · Reston, VA · Pricesenz

Location: Bentonville, AR and Reston, VA

Essential skills: Java, Microservices, Kafka, Cassandra, React.JS, Angular.JS, Node.JS

Desired skills: Python, Hadoop

Job Type: Contract

Big Data Engineer

22 days ago
£600 - £650/day · Remote · Harnham

Big Data Engineer
£600-£650 per day
Initial 3 month contract
London/ Sweden

As a Big Data Engineer you will be working heavily with Kafka for streaming purposes, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Engineer you will be assisting a Scandinavian client in helping them introduce Kafka, and therefore your time will be split 50/50 between being based in Sweden and London - with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Engineer, you will be heavily involved in introducing Kafka as a technology. Therefore, it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Big Data Engineer you will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Big Data Engineer, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable. (A minimal sketch of the produce/consume flow follows below.)
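
For a sense of the streaming flow, a minimal produce/consume sketch. The role itself calls for Scala and Java, but the confluent-kafka Python client shows the same shape; broker address, topic, and payload are placeholders.

    # Sketch: produce one event and read it back with confluent-kafka.
    from confluent_kafka import Producer, Consumer

    conf = {"bootstrap.servers": "localhost:9092"}  # placeholder broker

    producer = Producer(conf)
    producer.produce("events", key="user-42", value=b'{"action": "login"}')
    producer.flush()  # block until delivery is confirmed

    consumer = Consumer({**conf, "group.id": "demo",
                         "auto.offset.reset": "earliest"})
    consumer.subscribe(["events"])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()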

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £650 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Architect

25 days ago
£600 - £700/day · Remote · Harnham

Big Data Architect
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Big Data Architect you will be introducing Kafka as a streaming technology for a financial client - from roadmapping to deployment.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Architect you will be assisting a Scandinavian client in helping them introduce Kafka and therefore your time will be split 50/50 between being based in Sweden and London - with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Architect, you will be heavily involved in introducing Kafka as a technology. Therefore, it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Big Data Architect you will be working on the Kafka roadmap, in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Big Data Architect, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable. Though this will be a client-facing role, you must also be prepared to be hands-on when necessary, and therefore experience with Scala and Java is desirable.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Architect will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Engineer

25 days ago
£600 - £700/day · Remote · Harnham

Big Data Engineer
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Big Data Engineer you will be working heavily with Kafka for streaming purposes, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Engineer you will be assisting a Scandinavian client in helping them introduce Kafka, and therefore your time will be split 50/50 between being based in Sweden and London - with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Engineer, you will be heavily involved in introducing Kafka as a technology. Therefore, it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Big Data Engineer you will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Big Data Engineer, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Sr. Informatica Administrator

26 days ago
Reston, VAAlchemy Software Solutions

Any visa is fine for this role except OPT

Primary Skills:

  • Must have hands-on experience in development, engineering, and operational product platform support with Informatica tools.
  • 8+ years of strong hands-on Informatica PowerCenter and Data Quality administration (versions 9.6 / 10.x). Solid experience in development, administration, architecting, engineering, and setting up Informatica environments.
  • 7+ years of hands-on experience in configuration, development, performance benchmarks, monitoring, best practices, design patterns, and use cases of PowerCenter, Data Quality, the Informatica Developer component, and Data Virtualization.
  • 7+ years of hands-on experience in planning and upgrading large-scale ETL (Informatica PowerCenter / Data Quality) environments and enterprise data warehouse applications.
  • 7+ years of hands-on Informatica process automation experience using Perl and Unix shell scripting.
  • Experience with Informatica Intelligent Cloud Services
  • 7+ years of hands-on Oracle and SQL experience.
  • 5+ years of experience with XML, CSS, and HTML.
  • 5+ years of experience with web service development.
  • Strong analytical skills and the ability to troubleshoot and resolve problems.
  • Must have strong communication skills, both oral and written, including the ability to work with all management levels.
  • Must have a proven ability to manage priorities and timelines.
  • Must have high integrity and accountability, willing to do what it takes to make the team and project successful for the enterprise.
  • Must be extremely responsive, able to work under pressure and with a strong sense of urgency.
  • Ability to share technical knowledge and clearly communicate technical concepts.
  • Strong leadership skills with the ability to articulate ideas and issues in a team environment.
  • Should have basic AWS awareness; experience with cloud-offered ETL services is a plus.
  • Hadoop, Big Data, Python, Ab Initio, and Netezza are a plus.

Job Type: Contract

Data Scientist

1 month ago
$55 - $70/hour (Estimated) · Remote · Stage 19 Management

MOST IMPORTANT PART ABOUT THE GIG: We believe in making yourself rich, not just your company. That's why for this position you and your team (3-6 people) will be given 33.3% of all distributed profits, in addition to an employee stock option program. There is no salary or hourly rate for this position. You can work remotely and part time, when you have the time.

Responsibilities:

  • Design and implement data science systems (ML, AI, optimization, and statistics) – see the sketch after this list
  • Improve the scale and efficiency of data science approaches
  • Pair with data engineers to design deployments and pipelines
  • Elevate team technical craft with peer reviews, paper reading, and tech prototyping
  • Identify new product opportunities and collaboratively develop them with partners
  • Represent data products to non-technical partners and collaborators
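
As a sketch of the pipeline-design responsibilities above, a minimal Spark ML pipeline in Python; the feature columns, label, and paths are hypothetical.

    # Sketch: a small Spark ML training pipeline handed off as a saved model.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler, StandardScaler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("ds-pipeline").getOrCreate()
    df = spark.read.parquet("/data/training")  # expects f1..f3 plus a 'label' column

    pipeline = Pipeline(stages=[
        VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="raw_features"),
        StandardScaler(inputCol="raw_features", outputCol="features"),
        LogisticRegression(featuresCol="features", labelCol="label"),
    ])

    model = pipeline.fit(df)
    model.write().overwrite().save("/models/example-lr")  # deployable artifact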

Desired Qualifications:

  • 5+ years of data science industry experience
  • 3+ years of "productionalizing" code into live systems
  • 2+ years of hands-on experience with petabyte-processing-capable technologies such as Spark or Hadoop
  • Experience setting up and interacting with cloud infrastructure (e.g., AWS)
  • Professional proficiency in Python and SQL
  • An MS or PhD in Engineering, Computer Science, Physics, Statistics, or related field

Applicable Technologies/Skills:

  • Understand trade-offs between data infrastructure and database systems
  • Familiarity with iterative development methodologies like Scrum, Agile, etc.
  • Familiarity with Java, Scala, C++
  • Familiarity with git

Job Type: Contract

Experience:

  • Data Scientist: 1 year (Required)

Education:

  • Master's (Preferred)

Kafka Data Engineer

1 month ago
£600 - £700/day · Remote · Harnham

Kafka Data Engineer
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Kafka Data Engineer you will be working heavily with Kafka for streaming purposes, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Data Engineer you will be assisting a Scandinavian client in helping them introduce Kafka, and therefore your time will be split 50/50 between being based in Sweden and London - with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Kafka Data Engineer, you will be heavily involved in introducing Kafka as a technology. Therefore, it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Kafka Data Engineer you will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Kafka Data Engineer, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Kafka Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Architect

1 month ago
£650 - £700/day · Remote · Harnham

Big Data Architect
£650-£700 per day
Initial 3 month contract
London/ Sweden

As a Big Data Architect you will be helping to create the Kafka architecture and outline the strategy for migration to the cloud!

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Architect you will be assisting a Scandinavian client in helping them introduce Kafka, and therefore your time will be split 50/50 between being based in Sweden and London - with the option to work remotely. You will be working in an agile environment alongside Data Engineers.

THE ROLE:

As a Big Data Architect, your main responsibility will be creating the Kafka architecture, from design to implementation. Therefore, it is imperative that you have extensive experience with Kafka for large implementations, ideally Confluent Kafka. As a Big Data Architect it is essential that you have a good understanding of technologies such as Spark and Hadoop, as you will be helping to implement these. You will be working in both on-premise and cloud environments, so it is valuable if you have worked on either AWS or Azure as a platform. Though you will be heavily involved in the planning and writing of roadmaps, you must be prepared to be hands-on, and therefore previous experience programming in Scala/Java is valuable. As you will be working for a consultancy, it is essential that you are confident speaking with non-technical people, as this role will be very client-facing.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Architect will have the following skills and experience:

  • Extensive experience implementing strategies using Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • Experience speaking to stakeholders
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Data Engineer

1 month ago
£600 - £700/day · Remote · Harnham

Data Engineer
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Data Engineer you will be working heavily with Kafka for streaming purposes, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Data Engineer you will be assisting a Scandinavian client in helping them introduce Kafka, and therefore your time will be split 50/50 between being based in Sweden and London - with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Data Engineer, you will be heavily involved in introducing Kafka as a technology. Therefore, it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Data Engineer you will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Data Engineer, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.
