Data Science contract jobs near you / remote

Data Science

Head of Data Science (6 Month Contract)

4 days ago
Remote · ADLIB
Tech for good start-up aiming to disrupt the market.
Scope for flexible & remote working.
Flat structure, tight-knit and driven team environment.
I’m helping an up-and-coming green technology start-up in Bath to find a Head of Data Science to join their growing team. The company is relatively new to the scene: the UK’s first app-only energy supplier, dealing exclusively with sustainable energy and aiming to disrupt the market.

What you’ll be doing
Liaising with the current Chief Data Scientist, you’ll be tasked with breaking ground on the company’s next step in its Data Science roadmap. You’ll work both hands-on and hands-off, managing a team of 3 Data Scientists looking into business functions such as customer journeys, retention, attribution and propensity models. You will also coach and mentor junior members of the team to ensure their workload is managed and expectations are met.

This is a varied role with the opportunity to get involved with a range of tasks and tools.

What experience you’ll need to apply
  • Over 3 years’ experience working in a Data Science/Machine Learning role (preferably in an Agile environment)
  • Experience managing and mentoring junior data scientists/analysts.
  • An MSc or PhD in a technology-related field (e.g. Computer Science, Engineering or another STEM discipline)
  • Experience using Neural Networks and Deep Learning (using libraries like Keras, Tensorflow or PyTorch).
  • Programming skills in Python or R preferably in a Cloud environment.
  • Communication skills are key here, as is the ability to work well both in a team and alone.
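For readers wondering what the propensity modelling mentioned above looks like in practice, it is typically a simple supervised classifier. The sketch below is illustrative only (the toy data, feature choices and function names are invented; a real team would reach for scikit-learn, Keras or similar): a churn-propensity model as logistic regression trained with plain gradient descent.

```python
import math

def train_propensity_model(features, labels, lr=0.01, epochs=5000):
    """Fit a logistic-regression propensity model with plain per-sample gradient descent."""
    weights = [0.0] * len(features[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(w * xi for w, xi in zip(weights, x)) + bias
            p = 1.0 / (1.0 + math.exp(-z))          # predicted churn propensity
            error = p - y                            # gradient of log-loss w.r.t. z
            weights = [w - lr * error * xi for w, xi in zip(weights, x)]
            bias -= lr * error
    return weights, bias

def propensity(weights, bias, x):
    """Score a customer: probability of the positive class (here, churn)."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy data: [months_as_customer, app_logins_per_week] -> churned (1) or stayed (0)
X = [[1, 0], [2, 1], [3, 1], [12, 5], [18, 6], [24, 7]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_propensity_model(X, y)
# A short-tenure, low-engagement customer should score a higher churn risk
print(propensity(w, b, [2, 0]) > propensity(w, b, [20, 6]))  # True
```

In production the same idea would be fitted with a library, validated on held-out data, and fed far richer features, but the scoring interface stays the same: features in, probability out.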
What you’ll get in return for your experience
Alongside a generous salary, you can expect a contributory pension scheme, season ticket travel loans, private medical insurance and a healthcare cash plan, to name a few benefits. They also offer really good scope for flexible & remote working, which is testament to their very relaxed, flat structure.

What's next?
If this sounds like the role for you, get in touch with Adam with your updated CV and we’ll handle the rest.

Big Data Engineer

10 days ago
Chantilly, VA · General Dynamics Mission Systems
Basic Qualifications
Bachelor's degree in software engineering or a related technical field is required (or equivalent experience), plus a minimum of 5 years of relevant experience; or Master's degree plus a minimum of 3 years of relevant experience. Agile experience preferred.

KEY SKILLS
  • Minimum three (3) years’ experience designing, developing, building, and implementing Big Data solutions, or developing automated solutions to solve complex problems; a demonstrated ability to solve problems may outweigh years of experience.
  • Ability to identify and implement a data solution strategy
  • Demonstrates intellectual curiosity in exploring new technologies and finding creative ways to solve data management problems
  • Experience developing solutions with Python/JavaScript/Perl
  • Experience/knowledge of Spark, Impala, Hadoop, StreamSets, Kafka, REST APIs
  • Experience in SQL-based and NoSQL-based technologies
  • Experience in Linux administration/troubleshooting
A TS/SCI security clearance with the ability to obtain a Polygraph is required at time of hire. Candidate must be able to obtain the Polygraph within a reasonable amount of time from date of hire. Applicants selected will be subject to a U.S. Government security investigation and must meet eligibility requirements for access to classified information. Due to the nature of work performed within our facilities, U.S. citizenship is required.
Responsibilities for this Position
General Dynamics Mission Systems (GDMS) is seeking motivated candidates to join our insider threat detection systems integration team. Our mission-oriented team is responsible for the design, testing, deployment, maintenance, operation, and evolution of the systems directly supporting the insider threat detection program of a large government customer in the United States Intelligence Community (USIC). GDMS has an immediate opening on the team for a motivated Big Data Engineer with a self-starter mindset who is up to date with the latest tools and techniques. The position will focus on the integration of new data management technologies and on software performance tuning and troubleshooting. This is a challenging yet rewarding position that provides an opportunity to leverage cutting-edge technologies in pursuit of a vital mission that protects people, sensitive information and technologies, and the national security posture of the USIC.

The majority of work will be performed in Chantilly, Virginia, which is located approximately 25 miles west of Washington D.C., near the Dulles International Airport. The selected Big Data Engineer will support a 6+ year contract that General Dynamics recently secured.

CORE RESPONSIBILITIES:
  • Assist in the development and delivery of large-scale data pipelines
  • Develop and implement automated tests for data transformations and data migrations
  • Research and apply big data solution technologies to complex datasets; make recommendations to data science team on new technologies
Company Overview
General Dynamics Mission Systems (GDMS) engineers a diverse portfolio of high technology solutions, products and services that enable customers to successfully execute missions across all domains of operation. With a global team of 13,000+ top professionals, we partner with the best in industry to expand the bounds of innovation in the defense and scientific arenas. Given the nature of our work and who we are, we value trust, honesty, alignment and transparency. We offer highly competitive benefits and pride ourselves in being a great place to work with a shared sense of purpose. You will also enjoy a flexible work environment where contributions are recognized and rewarded. If who we are and what we do resonates with you, we invite you to join our high performance team!

Sr. Devops Engineer

11 days ago
$60 - $70/hour (Estimated) · Reston, VA · KMM Technologies
Overview
“US citizens and those authorized to work in the US are encouraged to apply. We are unable to sponsor H1b candidates at this time.”
Senior DevOps Engineer @ Reston, VA
Contract to Hire
  • Working within the Information Technology division and key business units, the Senior DevOps Engineer will support internal development teams by building self-service Continuous Delivery systems using Ruby, Rails/Sinatra, Docker, and React on the AWS cloud.
  • The ideal candidate will have designed, developed, and automated solutions that support business functionality, and has a passion for automation and learning new technologies. The candidate must possess the technical skills and experience using Ruby or React to build applications and services. We operate using Scrum and Kanban at the client, so experience using Agile methodologies to deliver software is a plus.
  • Our Senior DevOps Engineers must have excellent written and oral communication skills and be adaptive to the changing needs of the department and the organization.
Essential Functions and Responsibilities:
  • Passion for learning new things (voice interfaces, mobile development, Data Science)
  • Experience writing full stack web applications using Rails/Sinatra/AWS Lambda and React
  • Enjoys writing clean, concise code, and automated tests
  • Create CI/CD infrastructure, infrastructure as code, and integration tests for the software you create
  • Provide personalized support to the internal customers of the tools you build and maintain.
Education/Years of Experience:
  • A bachelor’s degree in Computer Science, Engineering or MIS.
  • At least 5 years of experience using Ruby and/or Rails
  • 1 – 3 years of experience using React, Docker, and AWS
Related Skills and Other Requirements:
  • Ruby developer who values code simplicity and readability.
  • Experience with Git, Docker, AWS Ruby SDK
  • Experience with RESTful web services design, development and automated testing
  • Experience with automated testing tools (e.g. RSpec or cypress.io)
  • Knowledge of continuous integration systems like AWS CodeSuite or Jenkins a plus
  • Ability to self-manage assigned tasks and projects
  • Strong interpersonal skills, written and verbal communication

Cloud Software Engineer

12 days ago
$55 - $70/hour (Estimated) · Reston, VA · KMM Technologies
Overview
Role : Cloud Software Engineer
Location: Reston, VA
Mode: Contract to Hire (US citizens or Green Card holders)
JOB DESCRIPTION
Team: Item Cloud Digital Transformation, Data Science (a small, successful team of 7 that handles innovative projects/prototypes; many projects are built from scratch, so ideas are welcomed and there is real opportunity to make an impact)

The Developer will be responsible for building software products and data pipelines, using an array of diverse technologies, including Node.js, React.js, and AWS. This position will work on a small team solving high-priority challenges relating to educational content development.

What you'll do
  • Build database-driven web apps from scratch to deployment
  • Play supporting role in understanding business needs in context, from user perspective
  • Support user-driven design of data models, APIs and interfaces
  • Support and maintain apps in production on the AWS cloud
  • Follow best practices around version control, testing, and automated build processes
About you
  • BA/BS required (major in an analytical field desired)
  • Minimum 5 years of experience with full stack development
  • Minimum 3 years of experience with JavaScript, which ideally includes React.js and Node.js
  • Minimum 2 years of experience with AWS, particularly Lambda, CloudFormation, ECS, EC2, IAM, RDS, and Cognito
  • Fluency with CI/CD toolsets such as AWS Pipeline, AWS CodeBuild, CodeCommit, CodeDeploy, CloudFormation, BitBucket, or Artifactory
  • Working knowledge of AWS Serverless architecture
  • Ability and enthusiasm to learn new technologies as required
  • Excellent communicator with practical decision-making skills
  • Believer in consistent and thorough documentation
More about you
  • Strong analytical thinking and structured problem-solving ability
  • Experience using libraries and frameworks where it makes sense, while still understanding how each line in your code base works
  • Ability to handle multiple projects and assignments simultaneously and effectively in a cross-functional team environment
  • Ethos of continuous improvement and interest in learning new things
  • Strong ability to understand and internalize the big-picture and broader implications
  • Excellent interpersonal and collaboration skills, with the ability to work with a diverse set of colleagues across functions, organizations, and disciplines
  • Self-starter, ability to set priorities, work independently and attain goals
Thanks & Regards
-
Laxman Kumar
Talent Acquisition
KMM Technologies, Inc.
CMMI Level 2 and ISO 9001:2008 Certified
WOSB, SBA 8(A), NMSDC & VA SWaM Certified
Contract Vehicles: GSA Schedule 70 & SeaPort-e Prime
Tel: 240-800-0039 | Fax: (866) 856 3684
E-MAIL: laxman@kmmtechnologies.com
LinkedIn: linkedin.com/in/laxman-m-841970aa
www.kmmtechnologies.com

Big Data Engineer

15 days ago
£600 - £650/day · Remote · Harnham

Big Data Engineer
£600-£650 per day
Initial 3 month contract
London/ Sweden

As a Big Data Engineer you will be working heavily with Kafka for streaming, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Engineer you will be assisting a Scandinavian client in introducing Kafka, so your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Engineer, you will be heavily involved in introducing Kafka as a technology, so it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. You will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments; good exposure to either AWS or Azure as a platform is most valuable. Though Kafka will be the main technology you focus on introducing, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka
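For context on what these Kafka-heavy roles revolve around: Kafka organises records into partitioned, append-only topic logs, with producers appending keyed records and consumers tracking their own read offsets. The toy in-memory sketch below illustrates only that model (class names are invented here; real work would use the Kafka client libraries and a running broker):

```python
from collections import defaultdict

class ToyTopic:
    """In-memory stand-in for a Kafka topic: one append-only log per partition."""
    def __init__(self, partitions=2):
        self.partitions = partitions
        self.log = defaultdict(list)              # partition index -> list of records

    def produce(self, key, value):
        # Kafka-style keyed partitioning: the same key always lands in the same partition,
        # which is what preserves per-key ordering.
        partition = hash(key) % self.partitions
        self.log[partition].append((key, value))
        return partition

class ToyConsumer:
    """Each consumer keeps its own read offset per partition, as Kafka consumers do."""
    def __init__(self, topic):
        self.topic = topic
        self.offsets = defaultdict(int)

    def poll(self, partition):
        records = self.topic.log[partition][self.offsets[partition]:]
        self.offsets[partition] += len(records)   # advance ("commit") the offset
        return records

topic = ToyTopic()
p = topic.produce("meter-42", {"kwh": 3.2})
topic.produce("meter-42", {"kwh": 4.1})
consumer = ToyConsumer(topic)
print(len(consumer.poll(p)))   # 2: both records for the key sit in one partition
print(len(consumer.poll(p)))   # 0: the offset has advanced, so nothing is re-delivered
```

Because the broker keeps the log and each consumer only keeps an offset, independent consumers can replay or resume from any point, which is the property these migration projects are built on.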

THE BENEFITS:

  • A competitive day rate of up to £650 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Architect

19 days ago
£600 - £700/day · Remote · Harnham

Big Data Architect
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Big Data Architect you will be introducing Kafka as a streaming technology for a financial client, from roadmapping to deployment.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Architect you will be assisting a Scandinavian client in introducing Kafka, so your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Architect, you will be heavily involved in introducing Kafka as a technology, so it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. You will be working on the Kafka roadmap in both on-premise and cloud environments; good exposure to either AWS or Azure as a platform is most valuable. Though Kafka will be the main technology you focus on introducing, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable. Though this will be a client-facing role, you must also be prepared to be hands-on when necessary, so experience with Scala and Java is desirable.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Architect will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Engineer

19 days ago
£600 - £700/day · Remote · Harnham

Big Data Engineer
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Big Data Engineer you will be working heavily with Kafka for streaming, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Engineer you will be assisting a Scandinavian client in introducing Kafka, so your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Engineer, you will be heavily involved in introducing Kafka as a technology, so it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. You will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments; good exposure to either AWS or Azure as a platform is most valuable. Though Kafka will be the main technology you focus on introducing, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Data Scientist

24 days ago
$55 - $70/hour (Estimated) · Remote · Stage 19 Management

MOST IMPORTANT PART ABOUT THE GIG: We believe in making yourself rich, not just your company. That's why, for this position, you and your team (3-6 people) will be given 33.3% of all distributed profits, in addition to an employee stock option program. There is no salary or hourly rate for this position. You can work remotely, and part-time when you have the time.

Responsibilities:

  • Design and implement data science systems (ML, AI, optimization, and statistics)
  • Improve the scale and efficiency of data science approaches
  • Pair with data engineers to design deployments and pipelines
  • Elevate team technical craft with peer reviews, paper reading, and tech prototyping
  • Identify new product opportunities and collaboratively develop them with partners
  • Represent data products to non-technical partners and collaborators

Desired Qualifications:

  • 5+ years data science industry experience
  • 3+ years "productionalizing" code into live systems
  • 2+ years hands on experience with petabyte-processing-capable technologies such as Spark or Hadoop
  • Experience setting up and interacting with cloud infrastructure (e.g., AWS)
  • Professional proficiency in Python and SQL
  • An MS or PhD in Engineering, Computer Science, Physics, Statistics, or related field

Applicable Technologies/Skills:

  • Understand trade-offs between data infrastructure and database systems
  • Familiarity with iterative development methodologies like Scrum, Agile, etc.
  • Familiarity with Java, Scala, C++
  • Familiarity with git

Job Type: Contract

Experience:

  • Data Scientist: 1 year (Required)

Education:

  • Master's (Preferred)

Data Scientist - Data Engineer

25 days ago
$55 - $70/hour (Estimated) · Remote · CBS
Data Scientist - Data Engineer

REF#: 32836

CBS BUSINESS UNIT: Simon & Schuster

JOB TYPE: Temporary / Per Diem / Freelance

JOB SCHEDULE: Full-Time

JOB LOCATION: New York, NY

ABOUT US:

Simon & Schuster, a part of CBS Corporation, is a global leader in the field of general interest publishing, dedicated to providing the best in fiction and nonfiction for consumers of all ages, across all printed, electronic, and audio formats. Its divisions include Simon & Schuster Adult Publishing, Simon & Schuster Children’s Publishing, Simon & Schuster Audio, Simon & Schuster Digital, and international companies in Australia, Canada, India and the United Kingdom.

DESCRIPTION:

Simon & Schuster has an exciting role for a data engineer to join a fast-paced, leading-edge team working to help advance its publishing business. In this freelance or independent contractor role, you will work with a small team of data scientists and analysts to rapidly prototype data applications and analytics tools to serve our fast-growing imprint.

The full-time, contract position will be responsible for developing, testing and maintaining data flows and information architecture. This includes aggregating data from multiple databases in our data warehouse as well as external sources, establishing data pipelines, designing data models, deploying machine learning models to production and building robust data visualizations and reporting tools that align with the business needs of our team.

This is an exciting opportunity for the right candidate to build a robust data science and analytics environment from the ground up. You’ll be working with a small team of professionals based remotely. Hours are flexible. Two scheduled check-ins/code reviews are required weekly.

Responsibilities

  • Collaborate with team to align architecture with business requirements
  • Stand up, test and maintain databases and information architecture
  • Build tools to automate data acquisition and aggregation
  • Develop data models and table schemas
  • Identify ways to improve data reliability, efficiency and quality
  • Deploy sophisticated analytics programs, machine learning and statistical methods
  • Prepare data for predictive and prescriptive modeling
  • Use data to discover tasks that can be automated
  • Develop data visualizations, dashboards, reports and notification tools that respond to business needs
  • Design analytical models that support analysis and inference.

QUALIFICATIONS:

Skills and experience:

  • Computer science degree or equivalent with 2+ years data engineering and/or full stack developer experience
  • Background in data science or equivalent continuing education
  • Strong knowledge of Python 3
  • Experience in developing and maintaining SQL databases and writing SQL queries; NoSQL (MongoDB) experience preferred but not required
  • Experience establishing and maintaining AWS, Google Cloud, Heroku, Tableau or similar cloud-based environments
  • Knowledge of HTML, CSS and JS and lightweight web frameworks such as Django, Flask
  • Familiarity with database administration tools and best practices
  • Background in extracting and storing data from APIs preferred but not required
  • Background in deploying machine learning models to production preferred but not required
  • Knowledge of Git/GitHub and software development workflows
  • Strong written and verbal communication skills.
  • Ability to summarize key findings into clear and actionable recommendations, and to interact with individuals at all levels.

Strong project management skills and the ability to plan and prioritize work in a fast-paced environment.

Please send a link to your work samples on GitHub.

EEO STATEMENT:

Equal Opportunity Employer Minorities/Women/Veterans/Disabled

Java AWS Developer with Sagemaker

26 days ago
$55 - $70/hour (Estimated) · Herndon, VA · KGS Technology Group Inc

Role: typical Java/AWS developer, with a focus on SageMaker/Machine Learning

Location: Herndon, VA

Duration: Long Term

Interview preference: Phone then F2F

Onsite Position:

Call notes:

  • Top Skills (Machine learning)
  • Sagemaker or Spark
  • Python
  • Java/REST
  • AWS

Top Skills: (Developer)

  • AWS
  • Java/REST
  • Python

Major difference between the two positions:

We are looking for a strong AWS Developer who has worked on real implementations, with a major focus on AWS SageMaker and machine learning model development.

  • 8+ years of programming experience with Java, J2EE, XML, and Web Services
  • 8+ years of experience in full life cycle application/system development
  • Minimum of 7 years of experience developing in distributed application environments (Database, Transaction Management)
  • 6 years of experience writing conceptual and detailed design documents, and developing large scale enterprise applications following an Agile execution model
  • Strong development/programming experience in Java or Python is a must
  • Strong AWS DevOps experience (2-3 years) including AWS CodeBuild, AWS CodeCommit, AWS CodePipeline, AWS Lambda, API Gateway, AWS CLI/YAML/CloudFormation, and serverless deployment
  • 2-3 years of experience/knowledge of AWS development on Lambda and API Gateway services
  • 2-3 years of Experience working on data science projects and using AWS Sagemaker/Spark ML (Highly preferred) or Python
  • Hands-on experience with NLP, mining of structured, semi-structured, and unstructured data
  • Intuitive understanding of machine learning algorithms, supervised and unsupervised modeling techniques
  • At least 2 years of experience with Application integrations (SOAP/REST Web Services, ESB, JMS, File/Data transfers, etc.)
  • Experience with machine learning tools and libraries such as TensorFlow
  • Ability to write well designed, testable and efficient code
  • Good communication and collaboration skills, team player
  • Takes accountability/ownership for the assigned tasks/deliverable
  • Familiar with Git repositories (Bitbucket) and Agile methodology/Scrum/Kanban

For Machine Learning role:

  • Most other experience is less important as long as they’re strong with Sagemaker or Spark
  • Spark experience would work in place of Sagemaker
  • Someone who has developed models and used them (likely in Python; Java works, too)
  • Frameworks like TensorFlow

Nice to have

  • AWS Advanced Certifications
  • Deep learning, computer vision, topic modeling, graph algorithms are pluses
  • Experience with any ESB technology such as TIBCO BW/BE, Mulesoft etc.
  • Experience with Angular 4/5/6 development, Java script
  • Experience in developing APIs using Spring Boot framework, ideally with Docker container or OpenShift Cloud Platform (OCP)

Job Type: Contract

Experience:

  • AWS: 5 years (Required)
  • software development: 5 years (Required)
  • machine learning: 1 year (Required)
  • Java: 9 years (Required)
  • Sagemaker: 5 years (Required)

Contract Length:

  • 1 year

Contract Renewal:

  • Possible

Kafka Data Engineer

27 days ago
£600 - £700/day · Remote · Harnham

Kafka Data Engineer
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Kafka Data Engineer you will be working heavily with Kafka for streaming, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Data Engineer you will be assisting a Scandinavian client in introducing Kafka, so your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Kafka Data Engineer, you will be heavily involved in introducing Kafka as a technology, so it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. You will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments; good exposure to either AWS or Azure as a platform is most valuable. Though Kafka will be the main technology you focus on introducing, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Kafka Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Architect

28 days ago
£650 - £700/day · Remote · Harnham

Big Data Architect
£650-£700 per day
Initial 3 month contract
London/ Sweden

As a Big Data Architect you will be helping to create the Kafka architecture and outline the strategy for migration to the cloud!

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Architect you will be assisting a Scandinavian client in introducing Kafka, so your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside Data Engineers.

THE ROLE:

As a Big Data Architect, your main responsibility will be creating the Kafka architecture from design to implementation, so it is imperative that you have extensive experience with Kafka for large implementations, ideally Confluent Kafka. It is essential that you have a good understanding of technologies such as Spark and Hadoop, as you will be helping to implement these. You will be working in both on-premise and cloud environments, so it is valuable if you have worked with either AWS or Azure as a platform. Though you will be heavily involved in planning and writing roadmaps, you must be prepared to be hands-on, so previous experience programming in Scala/Java is valuable. As you will be working for a consultancy, it is essential that you are confident speaking with non-technical people, as this role will be very client-facing.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Architect will have the following skills and experience:

  • Extensive experience implementing strategies using Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • Experience speaking to stakeholders
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Data Engineer

28 days ago
£600 - £700/day · Remote · Harnham

Data Engineer
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Data Engineer you will be working heavily with Kafka for streaming, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Data Engineer you will be assisting a Scandinavian client in introducing Kafka, so your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Data Engineer, you will be heavily involved in introducing Kafka as a technology, so it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. You will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments; good exposure to either AWS or Azure as a platform is most valuable. Though Kafka will be the main technology you focus on introducing, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Architect

29 days ago
£650 - £700/day · Remote · Harnham

Big Data Architect
£650-£700 per day
Initial 3 month contract
London/ Sweden

As a Big Data Architect you will be helping to create the Kafka architecture and outline the strategy for migration to the cloud!

THE COMPANY:

You will be working for a leading consultancy that specialises in Data Engineering, DevOps and Data Science. As a Big Data Architect you will be assisting a Scandinavian client in introducing Kafka, and therefore your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside Data Engineers.

THE ROLE:

As a Big Data Architect, your main responsibility will be creating the Kafka architecture from design to implementation. Therefore it is imperative that you have extensive experience with Kafka for large implementations, ideally Confluent Kafka. It is essential you have a good understanding of technologies such as Spark and Hadoop, as you will be helping to implement these. You will be working in both on-premise and cloud environments, so it is valuable if you have worked with either AWS or Azure as a platform. Though you will be heavily involved in the planning and writing of roadmaps, you must be prepared to be hands-on, and therefore previous experience programming in Scala/Java is valuable. As you will be working for a consultancy, it is essential that you are confident speaking with non-technical people, as this role will be very client facing.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Architect will have the following skills and experience:

  • Extensive experience implementing strategies using Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • Experience speaking to stakeholders
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.
