Remote Data Science contract jobs

7 remote Data Science contracts

Python Developer Geospatial

14 days ago
£400 - £490/day (Estimated) | Remote | Triad Resourcing

Python Developer (Geospatial)

As part of Triad Group's ongoing collaboration with a market-leading geospatial intelligence company, we are looking to recruit a Python Developer with a strong command of both modern Python and Linux (Ubuntu).

As a Python Developer you will work within the software and analytics team, with the primary role of enhancing our client's geospatial data processing and delivery pipelines. You will be contributing to products at the cutting edge of geospatial processing.

This is a fully remote opportunity!

Principal Responsibilities

  • Develop and support Python-related products, using professional coding standards and version management.
  • Support the administration of Linux infrastructure and servers.
  • Provide technical support as required.
  • If technically able, contribute to data-science-related activities.
  • Where necessary, support merge requests from other developers.

Skills & Experience Required

Essential

  • Expert knowledge of Python (including Asyncio); a brief sketch follows this list.
  • Expert knowledge of Linux.
  • Git.
  • PostgreSQL and SQLAlchemy.
  • Docker.
  • Google Cloud Platform or other cloud service providers.
  • Geospatial / Satellite Imagery sector experience.
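
As a rough illustration of the Asyncio point above, here is a minimal, hypothetical sketch of concurrent I/O-bound processing in modern Python; the tile identifiers and the sleep call stand in for real pipeline work such as fetching imagery or writing to PostgreSQL:

```python
import asyncio

async def process_tile(tile_id: str) -> str:
    # Placeholder for I/O-bound work (e.g. downloading imagery, writing to PostgreSQL).
    await asyncio.sleep(0.1)
    return f"{tile_id}: processed"

async def main() -> None:
    tile_ids = ["tile-001", "tile-002", "tile-003"]  # hypothetical identifiers
    results = await asyncio.gather(*(process_tile(t) for t in tile_ids))
    for line in results:
        print(line)

asyncio.run(main())
```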

Desirable

  • Experience and knowledge of data science.
  • NodeJS.
  • Good understanding of HTTP.
  • Understanding of machine learning principles, with emphasis on CNNs.
  • Experience of handling geospatial datasets with some of the following (a short sketch follows this list):

- QGIS
- GDAL
- Geopandas
- Shapely
- Rasterio
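
For orientation only, a minimal sketch of the kind of geospatial handling these libraries enable, assuming hypothetical input files (boundaries.geojson, scene.tif) and that the raster is in UTM zone 30N:

```python
import geopandas as gpd
import rasterio
from rasterio.mask import mask

# Hypothetical inputs: a vector boundary file and a satellite scene.
gdf = gpd.read_file("boundaries.geojson")
gdf = gdf.to_crs(epsg=32630)  # reproject to the raster's CRS (assumed UTM 30N)

with rasterio.open("scene.tif") as src:
    # Clip the raster to the boundary geometries.
    clipped, transform = mask(src, gdf.geometry, crop=True)
    print(clipped.shape, transform)
```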

Other Information:

If this role is of interest to you, or you would like more information, please contact David Sparkes, or submit your application now.

Triad is an equal opportunities employer and welcomes applications from all suitably qualified people regardless of sex, race, disability, age, sexual orientation, gender reassignment, religion, or belief.

Triad Group Plc acts as an Employment Business for this contract position.

Fullstack Developer / Full Stack Developer (Python/Django/React/Node)

15 days ago
£400 - £600/day | Remote | Inside IR35 | Green Park

Fullstack Developer / Full Stack Developer (Python/Django/React/Node)

Remote Working/Central London

INSIDE IR35

£500-600/day

6-month contract initially

This is a chance to take a leading role in an exciting field. We’re looking for four Full Stack Developers with top-class technical and soft skills to help not only write excellent code for our products, but also contribute to the development of junior developers and a positive work culture for everybody. You’ll also get to work with committed and talented developers, designers, product specialists and researchers who share a passion for making government better.

This is a contract post for 6 months (inside IR35) with a view to extension, based in Central London, although the role will be remote working initially.

The successful Full Stack Developer / Fullstack Developer will need strong knowledge of both Django/Django Rest Framework AND Node/React.

What we offer:

A flexible working environment to match your life. Want to work from home? Okay. Want to start early or finish late? Okay. You know how you work best. We want to promote a healthy work-life balance where you can get your head down, feel good about putting in a good day's work and then switch off to enjoy life.

What you’ll work on:

This role is based in the Data team. Here you’ll get the chance to work on a number of exciting projects: using the latest Data Science techniques to find companies with the highest potential to export and invest, bringing together data sets from across government and beyond, and building tools on top of all this that help our staff around the globe work more efficiently. There’s also scope to work in one of our other teams if priorities shift.

What you will do:

  • Full stack web application development using a variety of technologies, including but not limited to JavaScript (Node & React) and/or Python (Django and Django Rest Framework); a minimal Django Rest Framework sketch follows this list
  • Work with our developer communities to share experience and learn from other teams, departments and the wider industry.
  • Coach and mentor junior and civil servant developers, sharing your knowledge and expertise
  • Write automated tests to support our continuous integration environment
  • Support the day-to-day operation of our live services, investigating and fixing bugs and performance issues
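
Purely as a hedged illustration of the Django Rest Framework side mentioned above (not the team's actual codebase), a minimal serializer/viewset pair might look like this:

```python
from django.contrib.auth.models import User
from rest_framework import routers, serializers, viewsets

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ["id", "username", "email"]

class UserViewSet(viewsets.ReadOnlyModelViewSet):
    # Read-only API endpoint exposing users as JSON.
    queryset = User.objects.all()
    serializer_class = UserSerializer

router = routers.DefaultRouter()
router.register(r"users", UserViewSet)
# urlpatterns = router.urls  # wired into a Django project's urls.py
```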

What the Full Stack Developer / Fullstack Developer will need:

  • Expert knowledge of at least one of:
    - JavaScript, Node, React
    - Python, Django, Django Rest Framework
  • Expert knowledge of testing, including unit testing, functional testing and end-to-end testing
  • Knowledge of agile delivery techniques
  • Experience with version control, Docker, deployment pipelines, CI/CD and the web ops space in general
  • A strong focus on users and iterative development. Knowledge of Government Digital Service standards and methodologies would be a plus.
  • Great soft skills. We work in teams, and we’ll need you to thrive in a collaborative, communicative environment

Please note, this role is inside IR35 and you must be eligible to work in the UK.

If this role is of interest, please apply for immediate consideration.

Senior Data Engineer - Consulting

14 days ago
£84.09 - £92.5/hour | Remote | Harnham

Senior Data Engineer - Consulting
1000 SEK per hour
6 months
Stockholm, Sweden

As a Senior Data Engineer, you will be building out a greenfield implementation of an AWS platform to host big data technologies as well as Machine Learning pipelines. It is a very exciting new project for this insurance company, which wants to build an analytics centre of excellence.

THE COMPANY:
This company is one of the leading insurance companies across the Nordics. In order to compete with their competitors, they are looking to migrate services to the cloud and build a Data Science function that can provide insights to help them launch products that their customers want. They have the capacity to ingest big data, so they are building out their Engineering team and require a contractor to come in as the most senior person to lead the greenfield implementation.

THE ROLE:
In this role, you will be reporting to the Head of Engineering and working with architects to build out AWS components that will host Spark applications. You need to have expert coding skills in Python, as you will be building a Machine Learning pipeline for Data Scientists to eventually build their models on. The team are looking for someone who can advise them on the architecture, mentor junior developers and bring new ideas/techs to the table.
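
As a rough, hypothetical sketch of the kind of Spark-on-AWS work described above (the bucket layout and column names are invented for illustration):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-features").getOrCreate()

# Hypothetical S3 layout; the real paths would come from the client's platform.
claims = spark.read.parquet("s3://example-bucket/raw/claims/")

features = (
    claims
    .withColumn("claim_month", F.date_trunc("month", F.col("claim_date")))
    .groupBy("customer_id", "claim_month")
    .agg(F.count("*").alias("claim_count"), F.sum("amount").alias("total_amount"))
)

features.write.mode("overwrite").partitionBy("claim_month").parquet(
    "s3://example-bucket/features/claims_monthly/"
)
```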

YOUR SKILLS AND EXPERIENCE:
The ideal Senior Data Engineer will have:

  • An expert understanding of AWS (Lambda, Kinesis, Glue, Athena)
  • Good hands on coding skills with Python
  • An understanding of Machine Learning
  • Strong exposure to Spark

HOW TO APPLY:
Please submit your CV to Henry Rodrigues at Harnham via the Apply Now button.
Please note that our client is currently running a fully remote interview process, and is able to onboard and hire remotely as well. This role is intended to be home-working for the duration of the COVID-19 isolation period.

Principal Data Engineer

1 month ago
£600 - £650/day | Remote | Harnham

Principal Data Engineer
£650 per day
6 months
South West London/Remote

As a Principal Data Engineer, you will be building out a greenfield implementation of an AWS platform to host big data technologies as well as Machine Learning pipelines. It is a very exciting new project for this insurance company, which wants to build an analytics centre of excellence.

THE COMPANY:
This company is one of the leading insurance companies in the UK. In order to compete with their competitors, they are looking to migrate services to the cloud and build a Data Science function that can provide insights to help them launch products that their customers want. They have the capacity to ingest big data, so they are building out their Engineering team and require a contractor to come in as the most senior person to lead the greenfield implementation.

THE ROLE:
In this role, you will be reporting to the Head of Engineering and working with architects to build out AWS components that will host Spark applications. You need to have expert coding skills in Python, as you will be building a Machine Learning pipeline for Data Scientists to eventually build their models on. The team are looking for someone who can advise them on the architecture, mentor junior developers and bring new ideas/techs to the table.
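
To make the AWS side a little more concrete, here is a minimal, hypothetical sketch of a Lambda handler consuming a Kinesis stream; the event shape follows the standard Kinesis trigger, and the downstream step is only indicated in a comment:

```python
import base64
import json

def handler(event, context):
    """Minimal sketch of an AWS Lambda handler triggered by a Kinesis stream."""
    decoded = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        decoded.append(json.loads(payload))
    # Downstream steps (e.g. writing to S3 for Glue/Athena) would go here.
    return {"processed": len(decoded)}
```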

YOUR SKILLS AND EXPERIENCE:
The ideal Principal Data Engineer will have:

  • An expert understanding of AWS (Lambda, Kinesis, Glue, Athena)
  • Good hands on coding skills with Python
  • An understanding of Machine Learning
  • Strong exposure to Spark

HOW TO APPLY:
Please submit your CV to Henry Rodrigues at Harnham via the Apply Now button.
Please note that our client is currently running a fully remote interview process, and is able to onboard and hire remotely as well. This role is intended to be home-working for the duration of the COVID-19 isolation period.

Cloud/DevOps Engineer

15 days ago
Remote | Verstand AI

POSITION: GCP / Confluent DevOps Engineer

LOCATION: Remote / Washington, DC

POSITION HIGHLIGHTS:

Verstand AI (www.verstand.ai) is seeking Google Cloud Platform (GCP) DevOps Engineers with strong SQL expertise and Python and Kafka (Confluent) proficiency. The DevOps engineers will be instrumental in significant initiatives to transform all aspects of environment management and continuous integration and delivery for Verstand's commercial clients. The work will lead to implementing best-practice approaches for enterprise data warehousing, business intelligence and data wrangling/ELT/ETL. This individual will work closely with business stakeholders, software development and support teams. Most importantly, Verstand AI's Cloud DevOps engineers will get an opportunity to work with cutting-edge technologies and be part of data teams that help clients with end-to-end data science programs.
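
As a minimal, hedged sketch of the Kafka (Confluent) proficiency mentioned above, using the confluent-kafka Python client with an invented broker address and topic name:

```python
from confluent_kafka import Consumer, Producer

# Hypothetical broker and topic names.
producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("orders", key="order-123", value='{"status": "shipped"}')
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "elt-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```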

KEY RESPONSIBILITIES:

  • Design and implement a containerization strategy that could be applied to operations in a Google Cloud-based environment
  • Automate management and orchestration tasks.
  • Build CI/CD pipelines for microservices
  • Conduct root cause analysis for container runtime problems
  • Author documentation and procedures for DevOps in a Google cloud-based environment.
  • Monitor, measure, and automate all things to ensure we exceed performance and availability goals
  • Identify bottlenecks in development and deployment processes
  • Participate and potentially lead technical presentations on the work.
  • Understand the current systems, algorithms, and cloud-based HPC architecture
  • Instrument the infrastructure with frameworks that can be appropriately adopted for logging, monitoring, and alerting
  • Participate in team meetings, interface independently with SMEs, and interact with client staff

JOB REQUIREMENTS:

Minimum Experience, Skills and Education:

  • 6+ years of experience in cloud environments, distributed systems, system automation, and real-time platforms.
  • 5+ years of production experience with cloud technologies such as Google Cloud Platform (GCP), Azure and Amazon Web Services (AWS)
  • 2+ years design and maintenance expertise with system administration of Cloud infrastructure, including Amazon Web Services, Google Cloud Platform, and/or Microsoft Azure cloud services.
  • Experience with cloud databases.
  • Experience with batch and stream processing
  • Experience with managing large scale data processing systems
  • Experience with agile software development practices and drive to ship quickly
  • Experience leading change, taking initiative, and driving results
  • Effective communication skills and strong problem-solving skills
  • Proven ability and desire to mentor others in a team environment
  • Bachelor's degree from a four-year college or university in Computer Science, Technology or a related field

Experience That Sets You Apart:

  • Experience with the Google Cloud Platform
  • Experience with Apache Kafka and Confluent
  • Familiarity with Python
  • Experience with microservice platforms, API development, and containers.
  • Retail vertical production experience

Verstand AI is a fast-growing firm that believes in ongoing training and development for its staff. The firm's mission is to help both its commercial and public sector clients resolve data management challenges and move to delivering insight and benefits for stakeholders, customers and constituents.

Based out of Tysons Corner, VA, Verstand does business across the United States and is moving into Europe, Africa and Asia. If you're interested in working with us and have a desire to tackle challenging data problems, we welcome your interest and encourage you to apply.

Job Types: Full-time, Contract, Permanent position opportunity

Experience:

  • Google Cloud Platform: 2 years (Required)
  • DevOps: 5 years (Required)
  • Python: 2 years (Required)
  • Apache Kafka: 2 years (Required)
  • Cloud: 5 years (Required)

Work authorization:

  • United States (Required)

Contract Renewal:

  • Likely

Full Time Opportunity:

  • Yes

Additional Compensation:

  • Bonuses
  • Other forms

Work Location:

  • Fully Remote

Benefits:

  • Health insurance
  • Dental insurance
  • Vision insurance
  • Retirement plan
  • Paid time off
  • Professional development assistance

This Company Describes Its Culture as:

  • Team-oriented -- cooperative and collaborative
  • Outcome-oriented -- results-focused with strong performance culture
  • Innovative -- innovative and risk-taking

Schedule:

  • Monday to Friday

Company's website:

  • www.verstand.ai

Benefit Conditions:

  • Only full-time employees eligible

Work Remotely:

  • Yes

Scientific Data Engineer

28 days ago
$55 - $70/hour (Estimated) | Remote | Allen Institute for Immunology

Bioinformatics Data Engineer

The mission of the Allen Institute is to unlock the complexities of bioscience and advance our knowledge to improve human health. Using an open science, multi-scale, team-oriented approach, the Allen Institute focuses on accelerating foundational research, developing standards and models, and cultivating new ideas to make a broad, transformational impact on science.

The goal of the Allen Institute for Immunology is to advance the fundamental understanding of human immunology through the study of immune health and disease where excessive or impaired immune responses drive pathological processes.

The Allen Institute for Immunology is seeking a Bioinformatics Data Engineer (Data Scientist) with broad experience in developing computer codes/scripts to automate the analysis of omics data, especially next generation sequencing (NGS) data, to join our Informatics and Computational Biology team.

You will be part of a multidisciplinary team and will be responsible for (i) development and implementation of data processing and analysis software as needed, (ii) assisting in both pipeline and exploratory analysis of data from diverse assays and sample types, (iii) working towards visualizations and reports for internal and external dissemination. As such, ideal candidates should have a good understanding of sequencing technologies, and a proven track record of development of analytical software packages. This role includes analysis and integration of “big data” types, and working in close collaboration with the software development team for deployment on our interactive cloud environment to ensure user accessibility and generation of actionable insights. You will also support technology development projects in collaboration with the Molecular Biology and Immunology teams.

Good judgment and problem-solving skills are required for recognizing anomalous data, identifying and fixing code bugs and participating in data-driven algorithm design and improvement. A successful candidate will have demonstrated success in big data science, code optimization and deployment. The Bioinformatics Data Engineer must have excellent attention to detail and the eagerness to work in a team science, deadline-driven atmosphere.

Essential Functions

  • Design and develop software programs to optimize scRNA-seq, scATAC-seq & CITE-seq processing pipelines and analysis algorithms, including PCA and dimensionality reduction (a minimal sketch follows this list)

  • Deploy automated pipelines in our interactive cloud environment with graphical user interface to facilitate user accessibility

  • Publish codebase or software as part of high impact publications or releases

  • Integrate multiple data streams for “Big Data” analysis (examples include scRNA-seq, scATAC-seq, flow cytometry, WGS)

  • Generate interactive data visualizations and work with end users to identify actionable insights

  • Exploratory data mining

  • Meet production deadlines for data analysis and be able to pivot between multiple projects
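
As a minimal sketch of the dimensionality-reduction step named in the first item above, with synthetic data and scikit-learn's PCA standing in for a production pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic cells x genes matrix standing in for a normalised scRNA-seq count matrix.
rng = np.random.default_rng(0)
log_counts = np.log1p(rng.poisson(1.0, size=(500, 2000)).astype(float))

pca = PCA(n_components=50)
embedding = pca.fit_transform(log_counts)  # cells x 50 principal components

print(embedding.shape)
print(pca.explained_variance_ratio_[:5])
```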

Required Qualifications

  • Bachelor's degree in a big data computational field (e.g., Bioinformatics, Computer Science, Biostatistics, Physics, Mathematics) with a minimum of 2 years experience in analyzing omics data.

  • Demonstrated success in a multidisciplinary team environment.

  • Good understanding of sequencing technologies, data processing and integrative analysis

  • Fluency in Java, Python, R and Unix shell scripting.

  • Experience in Big Data analysis, code optimization & parallel programming. Proven experience with big data analysis technologies and languages such as Apache Spark, BigTable, Scala or Rust.

  • Good knowledge of version control systems such as Git

  • Strong organizational, teamwork, and communication skills

  • Attention to detail, and good problem-solving skills

Preferred Qualifications

  • Masters or PhD in Bioinformatics/Computational Biology or similar

  • Familiarity with immunology

  • Understanding of Flow Cytometry and CyTOF analysis a plus

  • Familiarity with cloud computing

  • Ability to implement, test, and share new computational tools quickly, in an iterative manner, after feedback from experimental, data production, and analysis teams

  • Excellent work ethic displayed as a reliable, self-motivated, enthusiastic team player

  • Ability to learn new programming languages and packages

  • Eager to learn new skills

Work Environment

  • Working at a computer and using a mouse for extended periods of time

  • May need to work outside of standard working hours at times

Travel

  • Some travel may be required

Additional Details:

  • This role is currently able to work remotely full-time; this may change, and you may be required to work onsite as safety restrictions are lifted in relation to COVID-19. You must be a Washington State resident to work remotely.

  • We are open to full-time, part-time, and/or contract work for this role. When you apply, please specify which work arrangement you desire. We are flexible.


Additional Comments

**Please note, this opportunity does sponsor work visas**

**Please note, this opportunity offers relocation assistance**

Data Engineer

27 days ago
Remote | Georgia IT Inc.

We are looking for strong Data Engineers, skilled in Hadoop, Scala, Spark, Kafka, Python, and AWS. I've included the job description below.
Here is what we are looking for:

Overall Responsibility:

  • Develop sustainable, data-driven solutions with current new-gen data technologies to meet the needs of our organization and business customers.
  • Apply domain-driven design practices to build out data applications; experience in building conceptual and logical models.
  • Build out data consumption views and provision self-service reporting via demonstrated dimensional modeling skills.
  • Measure data quality and make improvements to data standards, helping application teams publish data in the correct format so it is easy for downstream consumption.
  • Build Big Data applications using open-source frameworks like Apache Spark, Scala and Kafka on AWS, and cloud-based data warehousing services such as Snowflake.
  • Build pipelines to enable features to be provisioned for machine learning models; familiarity with data science model-building concepts, as well as consuming data from the data lake, is expected (a brief streaming sketch follows this list).
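
For illustration only, a minimal sketch of a Kafka-to-S3 Spark Structured Streaming job in the spirit of the responsibilities above; the broker, topic, schema and paths are invented, and the job assumes the spark-sql-kafka connector is on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical broker and topic; real values would come from the platform team.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.from_json(F.col("value").cast("string"), "userId STRING, amount DOUBLE").alias("payload"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/landing/events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .outputMode("append")
    .start()
)
# query.awaitTermination() would block until the stream is stopped.
```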

Basic Qualifications:

  • At least 8 years of experience with the Software Development Life Cycle (SDLC)
  • At least 5 years of experience working on a big data platform
  • At least 3 years of experience working with unstructured datasets
  • At least 3 years of experience developing microservices: Python, Java, or Scala
  • At least 1 year of experience building data pipelines, CICD pipelines, and fit for purpose data stores
  • At least 1 year of experience in cloud technologies: AWS, Docker, Ansible, or Terraform
  • At least 1 year of Agile experience
  • At least 1 year of experience with a streaming data platform including Apache Kafka and Spark

Preferred Qualifications:

  • 5+ years of data modeling and data engineering skills
  • 3+ years of microservices architecture & RESTful web service frameworks
  • 3+ years of experience with JSON, Parquet, or Avro formats
  • 2+ years of creating data quality dashboards and establishing data standards
  • 2+ years experience in RDS, NoSQL or Graph Databases
  • 2+ years of experience working with AWS platforms, services, and component technologies, including S3, RDS and Amazon EMR

Job Type: Contract

Schedule:

  • Monday to Friday

Experience:

  • AWS: 1 year (Preferred)
  • Hadoop: 1 year (Required)
  • Spark: 1 year (Required)
  • Big Data: 1 year (Preferred)
  • Scala: 1 year (Preferred)
  • Data Engineering: 1 year (Required)

Contract Renewal:

  • Possible

Full Time Opportunity:

  • Yes

Work Location:

  • Fully Remote

Work Remotely:

  • Yes