HBase Contract Jobs

This Week

Big Data Developer

15 hours ago
Outsource UK - Bristol, UK

The successful Big Data Developer will work within a team of highly skilled technical experts who support key business requirements for the bank, using cutting-edge technologies from the Big Data stack.

Key skills required for the Big Data Developer - Banking

  • Expertise in coding in Java or Scala
  • Experience with multiple open-source tool sets in the Big Data space.
  • Experience with both traditional waterfall and agile release methodologies.
  • Experience in the maintenance, optimisation and issue resolution of Hadoop clusters, supporting business users and batch/streaming processes.
  • Experience configuring and setting up Hadoop clusters and providing support for aggregation, lookup and fact table creation criteria, MapReduce tuning, Spark job tuning (see the sketch after this list), data node setup, NameNode recovery, HA, Sentry security, etc.
  • Experience in Linux/Unix OS services, administration, and shell and awk scripting.
  • Experience building scalable applications for the Hadoop ecosystem.
  • Experience in Core Java, CLI tools, Mesos or YARN, Spark, and the Hadoop ecosystem (MapReduce, Hive, Pig, HDFS, HCatalog, Beeline, ZooKeeper, Oozie, HBase, Flume and Kafka).
  • Hands-on experience in SQL (Oracle PL/SQL) and NoSQL databases (HBase/Cassandra/MongoDB).
  • Experience building large-scale real-world backend and middle-tier systems in Java and Hadoop ecosystems.
  • Experience in tool integration, automation and configuration management on Git, Nexus and Jira platforms.
  • Excellent oral and written communication and presentation skills, plus analytical and problem-solving skills.
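As the tuning bullet above suggests, much of this work comes down to job-level Spark settings. Here is a minimal, illustrative PySpark sketch of how such settings are applied when building a session; the values are placeholders, not recommendations from the advertiser.

    from pyspark.sql import SparkSession

    # Illustrative values only -- real tuning depends on cluster size and workload.
    spark = (
        SparkSession.builder
        .appName("batch-aggregation-example")
        .config("spark.executor.memory", "8g")          # memory per executor
        .config("spark.executor.cores", "4")            # cores per executor
        .config("spark.sql.shuffle.partitions", "400")  # shuffle parallelism for joins/aggregations
        .getOrCreate()
    )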

If you are a Big Data Developer looking for a new contract, either apply online or, if you would like to find out about other IT/Financial Services opportunities, contact Jamie Rogers on [email protected] or 01793 430021.

Outsource UK is one of the country's largest and most successful independent recruitment consultancies, specialising in the IT, Digital, Financial Services and Engineering sectors. Do you know anyone who might be looking for a new role? You could benefit from our candidate referral scheme - contact me on the above details for more information. Outsource. Our People. Your Success. We recruit talented people for contract and permanent opportunities, offer a consultative approach and have a reputation for providing a superior service to both clients and candidates.

Hadoop Administrator

23 hours ago
Harnham - London, UK

Hadoop Administrator
Central London
6-month Contract
£550 per day

As a Hadoop Administrator you will be expected to look after big data projects on a global bank's Hortonworks platform. You will be setting up and configuring five ecosystems across several internal departments.

THE COMPANY:
You will be representing the Advanced Analytics team of a world-leading investment bank. The team provides support to all major areas of the bank and is currently working internally with the Risk and Compliance team. Since they have a relationship with Hortonworks, they are expanding the platform to allow other teams to store data more easily. You will then be implementing Hadoop ecosystems and ensuring they run efficiently once set up.

THE ROLE:
As a Hadoop Administrator, you will be providing L1/2/3 support to production users. You will be setting up and upgrading current Hadoop ecosystems, so it is vital that you are familiar with the main components such as HDFS, HBase, YARN, Pig, etc. These run on the bank's Google Cloud platform, so a good understanding of that service would help. You will be dealing with five separate ecosystems of up to 160 nodes.

YOUR SKILLS AND EXPERIENCE:
The ideal Hadoop Administrator will have:

  • Experience working on cloud-based platforms such as Google Cloud
  • Strong knowledge of the Hadoop ecosystem
  • An understanding of L1/2/3 support services
  • Knowledge of big data technologies such as Spark

HOW TO APPLY:
Please register your interest by sending your CV via the Apply link on this page.

Big Data Engineer HBase Phoenix

1 day ago
Client Server - London, UK

Big Data Engineer (HBase Phoenix Hadoop) sought by a successful software house to carry out a full audit of their HBase environment, which holds 4PB of data. You'll assess HBase/Phoenix performance, recommend areas for improvement, provide monitoring and metrics, carry out hotspotting analysis and table design, and use Spark jobs for cluster utilisation.

You'll provide subject matter expertise, sharing knowledge in a collaborative environment. Offices based in a vibrant area of Central London.
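For context on the hotspotting and table-design work mentioned above: one standard mitigation is salting row keys so that sequential writes spread across regions instead of hammering a single hot region. A minimal sketch using the third-party happybase client, assuming an HBase Thrift server is reachable; the host, table and column names are invented:

    import zlib
    import happybase

    SALT_BUCKETS = 16  # illustrative; often aligned with the number of regions

    def salted_key(key: bytes) -> bytes:
        # Prefix with a deterministic salt so sequential keys (e.g. timestamps)
        # are distributed across regions rather than written to one region.
        bucket = zlib.crc32(key) % SALT_BUCKETS
        return b"%02d-%s" % (bucket, key)

    conn = happybase.Connection("hbase-thrift.example.com")  # hypothetical host
    table = conn.table("events")                             # hypothetical table
    table.put(salted_key(b"2019-02-14T09:00:00"), {b"d:payload": b"..."})

Phoenix automates the same idea via its SALT_BUCKETS table option, which is presumably why it features in the requirements below.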

Requirements:
  • In-depth knowledge of HBase, ideally including Phoenix
  • Good knowledge of Spark
  • Good knowledge of the Linux OS
  • Scripting with Python and/or Bash
  • Collaborative team member with good communication skills

Apply now or call to find out more about this Big Data Engineer (HBase Phoenix Hadoop) contract opportunity.

Job Role: Big Data Engineer (HBase Phoenix Hadoop); Location: London; Rate: £600 to £700 p/day; Term: 3 months; Start: ASAP

AWS Glue & PySpark Developer 500-700pd DPE

2 days ago
Premier Group - London, UK

This is an opportunity to join a large IT & Digital transformation consultancy and be the main AWS Glue & PySpark Developer for one of their leading insurance clients.

The successful Glue & PySpark Developer will need to be very hands-on with AWS Glue and be able to code effectively with PySpark too.
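As a rough indication of what hands-on Glue plus PySpark work looks like, here is a minimal Glue job skeleton; the Data Catalog database, table and S3 bucket names are invented for illustration.

    import sys
    from awsglue.context import GlueContext
    from awsglue.dynamicframe import DynamicFrame
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a (made-up) Data Catalog table, filter with plain PySpark,
    # and write the result back to S3 as Parquet.
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="insurance_db", table_name="policies")
    active = dyf.toDF().filter("status = 'active'")

    glue_context.write_dynamic_frame.from_options(
        frame=DynamicFrame.fromDF(active, glue_context, "active_policies"),
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/active-policies/"},
        format="parquet")

    job.commit()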

This project is looking to begin immediately, so you will need to be able to start work within 2 weeks.

  • Location: Fenchurch Street, London
  • Duration: 6 months
  • Daily Rate: 500-700pd DPE
  • Start Date: Immediate

Full skill set below:

Necessary Qualifications

  • Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline
  • 4+ years of industry experience in software development, data engineering, business intelligence, data science, or related field with a track record of manipulating, processing, and extracting value from large datasets
  • Demonstrated strength in data modelling, ETL development, and data warehousing
  • Hands-on experience using big data technologies (Hadoop, Hive, HBase, Spark, etc.); Apache Spark is mandatory (Python + pandas preferred; if Python is not a current skill, you should be willing to learn it quickly)
  • Hands-on Experience with AWS Glue – Mandatory
  • Experience using business intelligence reporting tools (Tableau, Business Objects, Cognos etc.)
  • Knowledge of data management fundamentals and data storage principles
  • Knowledge of distributed systems as it pertains to data storage and computing

Preferred Qualifications

  • Experience working with AWS big data technologies (Redshift, S3, EMR)
  • Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modeling decisions and data engineering strategy
  • Experience providing technical leadership and mentoring other engineers for best practices on data engineering
  • Familiarity with statistical models and data mining algorithms
  • Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
  • Master's degree in computer science, mathematics, statistics, economics, or another quantitative field.

If you are hands-on with AWS Glue and PySpark then please apply now.


mdglue123

AWS Big Data Engineer - greenfield digital transformation

3 days ago
Asset Resourcing Limited - London, UK
AWS Big Data Engineer - an exciting, involved role working closely with the Senior Solution Architect on the delivery of a greenfield digital transformation.

Essential:
Significant AWS Big Data Engineer experience, with solid experience of the AWS stack (S3, EMR, Redshift, Glue, Kinesis & Athena) and the Big Data stack (Hadoop, HBase, Hive, Spark, Kafka, NiFi & Airflow) - minimum 3 years
Programming and scripting languages: Java, Python & Scala - minimum 5 years

Beneficial:
1. Experience of working in a DevOps environment using e.g. GitHub, Jenkins, Nexus and Ansible for CI/CD pipelines
2. Knowledge of KMS, Data Masking, Anonymization, Tokenization & Access Management.

Additional responsibilities include:
1. Participate in development planning
2. Document low-level designs (LLD)
3. Work closely with the Senior Solution Architect

Big Data Developer

3 days ago
Salt Search - Germany

A global leading client is currently recruiting for a Big Data Consultant with experience in Big Data, SQL and Scala/Spark. Based in Germany. This is a 6-month rolling contract paying up to €700 per day.

Essential skills and experience for the Big Data Consultant / Big Data Engineer (SQL, Scala, Spark):

  • 6+ years of Big Data experience
  • Understanding of the Agile/Scrum development cycle
  • Thorough understanding of SQL databases
  • Excellent coding skills in Scala
  • Understanding of Spark
  • Hadoop, HBase, and/or MapReduce experience
  • Proficiency in a Linux environment
  • Experience working closely with Bitbucket
  • Excellent communication skills, able to work independently or as part of a team
  • Experience in test automation and test-driven development
  • Experience with AWS, Jenkins, CI/CD
  • 2+ years of Scala/Spark development experience
  • 5+ years of Big Data experience
  • 5+ years of SQL experience

Package for the Big Data Consultant / Big Data Engineer:

  • 6-month rolling contract paying up to €700 per day

This job ad was posted by Salt. To find out more about Salt's Privacy Policy and how your application is processed, please visit our website https://privacy-policy/.

Last 90 Days

Big Data Engineer

7 days ago
Investigo - London, UK

VACANCY: Big Data Engineer

LOCATION: London

START DATE: ASAP

DURATION: 6 Months Initial

SKILLS

  • Extensive experience with AWS (S3, EMR, Redshift, Glue, Kinesis & Athena)
  • Experience with Scala, Java or Python
  • Experience with the Big Data stack (Hadoop, HBase, Hive, Spark, Kafka, NiFi & Airflow)
  • Experience as a Big Data Engineer working on greenfield projects
  • Experience with a DevOps stack (GitHub, Jenkins, Nexus, Ansible) - preferred

TASKS

  • Lead the development of a greenfield Big Data platform
  • End-to-end ownership of the platform

RATE

All-inclusive daily rate £ (GBP)

BBBH82675_155014559265881

Scala Developer

9 days ago
Amsource Technology - Leeds, UK
My client is currently looking for a Scala Developer in Leeds.

The role is working for one of the biggest software houses in the North.

They provide software solutions for some of the largest companies in Europe.

This role is working on a greenfield critical business system.

The key skills required are:

  • Strong Scala developer with experience of working on Linux with the Hadoop big data platform

  • Experience of developing in Java/Scala, ideally using the IntelliJ or Eclipse IDEs with Maven, Git and appropriate unit-testing frameworks such as JUnit and ScalaTest

  • Experience with the Spark framework for developing distributed stream-processing applications; the ability to produce efficient Spark Streaming applications, and to advise on strategies for optimising them, is especially advantageous (see the sketch after this list)

  • Experience with HBase and Phoenix, with the ability to advise on strategies for maintaining optimal performance, and experience using auxiliary indexes to enable rapid extraction of data

  • Experience using the Kafka streaming platform, with exposure to the Confluent.io framework advantageous

This role is for the duration of 12 months and they are paying between £400-465pd DoE.

Big Data Developer

9 days ago
Haybrook - London, UK
Big Data Developer, Hadoop, AWS, Docker, Kubernetes, Agile, Platform Engineer G1/1056

A growing engineering client is currently looking for a Big Data Developer to join their team for an initial 6-month contract. Working with an Agile team, you will automate and build a platform with Hadoop in AWS. The successful Big Data Developer will have a strong technical background and will enjoy keeping up with new technological advances.

This role requires Valid SC Clearance.

Roles and Responsibilities of the Big Data Developer:
  • Design and development of the Big Data store
  • Hands-on system development and support
  • Contribute to the design, technical governance and product choices for the data store
  • Represent the programme at the technical design authority as required
  • Ensure the designs address all hosting, security and NFR requirements
  • Ensure the system is able to gain security accreditation
  • Act as a technical point of authority
  • Advocate for high-quality, maintainable software and systems
  • Liaise with senior stakeholders

Skills and Experience of the Big Data Developer:
  • Hadoop / HDFS
  • AWS
  • HBase, Kafka, Kerberos
  • Ansible, Terraform
  • Solr
  • Agile
  • Postgres
  • Apache Camel, Knox, Ranger
  • Drools
  • Kubernetes
  • Docker

Big Data Developer, Hadoop, AWS, Docker, Kubernetes, Agile, Platform Engineer G1/1056

Referral Scheme: If this role isn’t for you then perhaps you could recommend a friend or colleague to Haybrook IT. If we go on to place that person in a permanent or temporary capacity then you could be rewarded with £500!! Please see our website for terms and conditions.

Haybrook IT Resourcing is Oxford’s leading IT recruitment agency. With exclusive access to some of the region’s most successful companies, send in your CV today to secure your next IT position.

Haybrook IT Resourcing Ltd acts as an employment agency and an employment business.

We value diversity and always appoint on merit.

Big Data Engineer / Developer (Hadoop / Spark)

10 days ago
Modis - Manchester, UK

A Big Data Engineer / Developer is required by a global energy company.

You will have the following skills / experience:

  • Hadoop ecosystem: HBase, MapReduce, Spark, Pig, Hive, Oozie and Flume
  • Experience of technologies such as Java, Scala, Python, SQL and build systems (Maven/Gradle)
  • Exposure to 'Agile' methodologies
  • Exposure to CI/CD tools: Jenkins, Gradle etc.

  • Any exposure to NoSQL technologies is a big advantage: Cassandra, MongoDB, or similar
  • Good working knowledge of Linux and shell scripting
  • Knowledge and working experience of SCM tools such as SVN or Git

Please call or send your CV now for more details - this is an URGENT role!

By applying for this role your details will be submitted to Modis. Our Candidate Privacy Information Statement explains how we will use your information - please copy and paste the following link into your browser: www.modis.co.uk/candidate-privacy-information-statement

Rates by region

UK: £533/day
London: £521/day
Leeds: £492/day