
Remote Ansible contract jobs


13 remote Ansible contracts

DevSecOps Engineer (Remote)

25 days ago
$60 - $75/hour (Estimated) · Remote · HireVergence

Work to be done:

  • Assist with Google Cloud Platform (GCP) operations supporting the FedRAMP initiatives
  • Implement CI/CD Pipelines initiating code builds from repositories and orchestrated deployment within Google Kubernetes Engine (GKE) containers
  • Configure native security tools available within Google Cloud Platform to leverage available controls that enhance security posture
  • Experience with Cloud Security Tools such as Twistlock, C3M, Fortify
  • 4+ years of experience developing and/or administering software in public cloud
  • Experience managing Infrastructure as code via tools such as Terraform or CloudFormation
  • Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.
  • Experience in languages such as Python, Ruby, Bash, Java, Go, Perl
  • Demonstrable cross-functional knowledge with systems, storage, networking, security and databases
  • System administration skills, including automation and orchestration of Linux/Windows using Chef, Puppet, Ansible, Salt Stack and/or containers (Docker, Kubernetes, etc.)
  • Proficiency with continuous integration and continuous delivery tooling and practices
  • Strong analytical and troubleshooting skills
  • Ability and willingness to participate in on-call rotation

Deliverables:

  • Build and implement endpoint security tools in the FedRAMP environment
  • Write and implement security automation code in the FedRAMP environment
  • Build and implement security tools in the FedRAMP environment
  • Complete the operational readiness assessment for all FedRAMP tools
  • Complete the operational transition for all FedRAMP tools
  • Develop operational SLAs for security tools in the FedRAMP environment
  • Instrument monitoring and reporting for deviations or variances in SLAs
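Deliverables like these are typically implemented with configuration management rather than by hand. A minimal Ansible sketch, assuming hypothetical package, service, and host-group names (none of these come from the posting):

```yaml
# deploy_security_agent.yml -- illustrative sketch; all names are placeholders
- name: Deploy an endpoint security agent and verify it is healthy
  hosts: fedramp_nodes          # hypothetical inventory group
  become: true
  tasks:
    - name: Install the endpoint agent package
      ansible.builtin.yum:
        name: example-endpoint-agent
        state: present

    - name: Ensure the agent service is running and enabled at boot
      ansible.builtin.service:
        name: example-endpoint-agent
        state: started
        enabled: true

    - name: Check agent health for SLA monitoring
      ansible.builtin.command: example-agent-ctl status
      register: agent_status
      changed_when: false
      failed_when: "'healthy' not in agent_status.stdout"
```

A playbook run like this can double as the SLA instrumentation hook: a failed health-check task surfaces as a deviation in whatever reporting wraps the run.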

DataOps Tech Lead

1 month ago
£650 - £700/day · Remote · Harnham

Data Ops Tech Lead
Remote/London
6-month Contract
£700 per day

As a Tech Lead, you will be building the infrastructure of an AWS platform for a niche IoT company. You will be working alongside the Data Scientists.

THE COMPANY:
This company is a niche start-up financially backed by a large finance company. Their mission is to support people and businesses by tracking harmful content on social media and removing it from feeds. They are established in the US and are now targeting the UK. They have a team of Data Scientists working with real-time data, which they supply to the Software team to build their products.

THE ROLE:
You are required to have expertise working with AWS, as you will be building the infrastructure for the Engineers to store their data in. You will be introducing Kubernetes for deployment and Ansible/Terraform for automation. You will also be required to build the CI/CD pipelines, and experience working with Kafka is desirable. This is a fantastic opportunity to set up a cloud infrastructure for a niche start-up with access to Big Data.

YOUR SKILLS AND EXPERIENCE:
The ideal Tech Lead will have:

  • Built a greenfield infrastructure using AWS, from scratch
  • Worked in an Agile environment building CI/CD pipelines
  • Introduced Kubernetes for deployment
  • Implemented infrastructure-as-code methodologies
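Greenfield AWS infrastructure-as-code of the kind described usually begins with a skeleton like the following Terraform sketch (the region, CIDR ranges, and resource names are illustrative assumptions, not from the posting):

```hcl
# main.tf -- minimal greenfield sketch; all values are placeholders
provider "aws" {
  region = "eu-west-2"
}

resource "aws_vpc" "platform" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "data-platform-vpc" }
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.platform.id
  cidr_block = "10.0.1.0/24"
}

# An EKS cluster would back the Kubernetes deployments mentioned in the role;
# the IAM role it references is assumed to be defined elsewhere.
resource "aws_eks_cluster" "main" {
  name     = "data-platform"
  role_arn = aws_iam_role.eks_cluster.arn
  vpc_config {
    subnet_ids = [aws_subnet.private.id]
  }
}
```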

HOW TO APPLY:
Please submit your CV to Henry Rodrigues at Harnham via the Apply Now button.
Please note that our client is currently running a fully remote interview process, and is able to onboard and hire remotely as well.

Mid to Senior Level DevOps Engineer - Contract

1 day ago
Remote · Persefoni AI, Inc


Company Introduction

Our mission is the enablement of every organization and person with the technology to positively impact the health of planet Earth. Persefoni is creating an all-in-one platform that allows organizations to measure, analyze, and reduce their Enterprise Carbon Footprint. Our goal is to provide our customers unprecedented visibility and insights into the impact their organization has on the environment. Leveraging the latest breakthroughs in data science and software, our technology will empower teams and leaders to mobilize their organizations to continuously improve their greenhouse gas emissions metrics.

Our Core Values

Sustainability - We are committed to sustainable business practices across our entire operation and culture. We go beyond achieving balance. We are a net-positive contributor to the environment, our employees' lives, and the global community.

Impact - We are focused on and passionate about tackling the biggest and hardest problems that will have the greatest impact. We create significant, not incremental, solutions.

Collaboration - We are always aligned in our goals and efforts to create the most impactful technologies possible. Constant cooperation across our company, customers, and partners is our standard mode of operating.

Equality - We value and respect people and organizations of all backgrounds. Ours is a culture of innovation, creativity, diversity of thought, and inclusion.

Job Description

We are in search of an experienced DevOps Engineer to collaborate with software developers, data engineers and other IT staff members to manage our CI/CD pipeline and ensure efficient and stable automated code releases. This position may also work with various departments to create, develop, and optimize systems to increase productivity across the company.

Successful candidates should have a minimum of two years of recent professional experience in positions requiring the skills listed below, with an emphasis on CI/CD pipeline design, implementation, and support.

Our project entails implementing, automating, and maturing a CI/CD pipeline used by our global delivery team, which utilizes React, NodeJS, Go, and MySQL. At present, our CI/CD stack includes Bitbucket Cloud, Prettier, ESLint, StyleLint, Enzyme, Husky, Docker, AWS CodePipeline, AWS Elastic Container Registry, and AWS Elastic Container Service.
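With that stack, the pipeline definition would typically be a bitbucket-pipelines.yml along these lines (the image tag, repository variables, and ECS cluster/service names are illustrative assumptions):

```yaml
# bitbucket-pipelines.yml -- illustrative skeleton; names and variables are placeholders
image: node:18

pipelines:
  branches:
    main:
      - step:
          name: Lint and test
          script:
            - npm ci
            - npx eslint .
            - npm test
      - step:
          name: Build and push container image
          services:
            - docker
          script:
            - docker build -t "$ECR_REPO:$BITBUCKET_COMMIT" .
            - docker push "$ECR_REPO:$BITBUCKET_COMMIT"
      - step:
          name: Deploy to ECS
          script:
            - aws ecs update-service --cluster app-cluster --service web --force-new-deployment
```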

Responsibilities

· Work closely with engineering professionals within the company to support and maintain CI/CD software and solutions needed for projects to be completed efficiently while maintaining regular communication of progress

· Design and implement future CI/CD pipeline revisions for both code promotion and database configuration

· Integration of CI/CD pipeline with code scanning, functional testing, security scanning and enterprise collaboration tools

· Build and manage automated code deployments, fixes, updates, and related processes; maintain and monitor configuration standards

· Testing and evaluating market-leading CI/CD tools for pipeline orchestration, as well as point solutions

· Ensuring the CI/CD pipeline is built for scalability, reusability, and accessibility

· Troubleshooting promotion, scanning, integration, and communication issues with the current and future pipeline versions

· Help establish and follow DevOps Engineering best practices including creating requirements and procedures for implementing routine maintenance

· Install and configure solutions, implement reusable components, translate technical requirements, assist with all stages of test data, develop interface stubs and simulators, and perform script maintenance and updates

· Monitor metrics and give recommendations for enhancing performance via gap analysis, identifying the most practical alternative solutions, and assisting with modifications

· Assist with automation of our operational processes such as infrastructure provisioning as needed, with accuracy and in compliance with our security requirements

· Analyze current technology utilized within the company and develop steps and processes to improve and expand upon them

· Stay current with industry trends and source new ways for continuous improvement

Skills

· More than two years of experience in a DevOps Engineer or similar role; experience in software development and infrastructure development is a plus

· Minimum two years’ experience with CI/CD pipeline design, documentation, implementation, administration, support, and troubleshooting with tools such as AWS CodePipeline, CircleCI, Travis CI, or Jenkins

· Familiarity with current cloud components including BitBucket, Docker, and AWS Elastic Container Service

· Strong cloud experience with Linux-based infrastructures, Linux/Unix administration, and AWS, including creation of CloudFormation templates

· At least four years working with SQL and NoSQL databases such as MySQL, AWS RDS Aurora, Elasticsearch, Redis, Cassandra, and/or MongoDB, including automating database schema changes and other configuration items between environments

· Proven experience designing, implementing, and testing highly performant and scalable infrastructures

· Experience with various code scanning, security scanning and testing packages which can be integrated with a CI/CD pipeline

· Understanding of containers, microservice hosting, and application security practices

· Appreciation for clean and well documented systems and attention to detail

· Demonstrated understanding of best practices regarding system security measures

· Knowledge of Agile process management systems such as Jira

· Hands-on experience with Ansible, AWS CloudFormation, or HashiCorp Terraform preferred

· Strong communication skills and the ability to explain protocols and processes to the team and management

Job Types: Full-time, Contract

Pay: $65,000.00 - $115,000.00 per year

Schedule:

  • Monday to Friday

COVID-19 considerations:
All positions have been made remote with video conferencing for collaboration. For anyone in the Tempe area that would like to meet in person we have hand sanitizer and masks available.

Experience:

  • CI/CD: 2 years (Required)
  • DevOps: 2 years (Required)

Contract Renewal:

  • Likely

Full Time Opportunity:

  • Yes

Additional Compensation:

  • Other forms

Work Location:

  • Fully Remote

This Job Is Ideal for Someone Who Is:

  • Dependable -- more reliable than spontaneous
  • Adaptable/flexible -- enjoys doing work that requires frequent shifts in direction
  • Detail-oriented -- would rather focus on the details of work than the bigger picture
  • Achievement-oriented -- enjoys taking on challenges, even if they might fail
  • Autonomous/Independent -- enjoys working with little direction
  • Innovative -- prefers working in unconventional ways or on tasks that require creativity

This Job Is:

  • A job for which military experienced candidates are encouraged to apply
  • A job for which all ages, including older job seekers, are encouraged to apply
  • Open to applicants who do not have a college diploma
  • A job for which people with disabilities are encouraged to apply

Company's website:

  • https://www.persefoni.com/

Work Remotely:

  • Yes

AWS/DevOps Engineer with Secret Clearance (100% Remote Posit...

1 day ago
Remote · Pentrogon security

KEY RESPONSIBILITIES:

· Performs systems administration functions on Amazon Web Services (AWS) GovCloud infrastructure

· Maintains and creates new, as needed, cloud compute instances, storage, and other cloud services required to support development team

· Performs and verifies Red Hat Enterprise Linux server systems installation, configuration, optimization, security hardening and administration for servers running on Amazon Web Services Cloud Environment.

· Cloud monitoring experience with CloudWatch and Nagios XI

· Maintain cloud-based servers – patching vulnerabilities, backup/restore operations, provision new servers, configure firewalls, configure monitoring systems
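The patching and maintenance duties above map naturally onto configuration management. A minimal sketch for RHEL patching with Ansible, assuming a hypothetical inventory group and the standard yum/needs-restarting tooling:

```yaml
# patch_rhel.yml -- illustrative sketch; the host group is a placeholder
- name: Apply security updates to RHEL servers and reboot if required
  hosts: govcloud_rhel
  become: true
  tasks:
    - name: Apply all available security errata
      ansible.builtin.yum:
        name: "*"
        security: true
        state: latest

    - name: Check whether a reboot is needed
      ansible.builtin.command: needs-restarting -r
      register: reboot_check
      changed_when: false
      failed_when: reboot_check.rc not in [0, 1]   # rc 1 means reboot required

    - name: Reboot when the kernel or core libraries changed
      ansible.builtin.reboot:
      when: reboot_check.rc == 1
```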

REQUIRED SKILLS:

· Experience in Red Hat Linux administration, including Bash/shell scripting

· Experience administering services on AWS

· Experience w/ monitoring tools

· Experience with configuration management tools

· Patch Management

· Backup / Recovery

PREFERRED SKILLS:

· Experience working with Continuous Integration/Continuous Deployment technologies, including JIRA, Git, Jenkins, Chef, and Ansible

· Federal Government experience is a plus

· B.S./Bachelor's degree

Job Types: Full-time, Contract

Experience:

  • AWS: 5 years (Preferred)
  • Docker: 1 year (Preferred)
  • Ansible: 1 year (Preferred)

License:

  • TS/Secret Clearance (Preferred)

DevOps Engineer

6 days ago
Remote · StackOverdrive.io

StackOverdrive.io is hiring DevOps Engineers for permanent and contract positions

As a DevOps Consultant at StackOverdrive.io you will work with large to medium-sized companies helping them streamline their infrastructure and deployment strategies.

As a DevOps Engineer you will:

  • Create automated scripts that will build, configure, deploy and test applications deployed to different environments; maintain, support, and enhance our continuous integration environment.
  • Develop system-engineering solutions to help achieve highly available, highly scalable systems.

Requirements:

  • Bachelor’s degree in one or more following disciplines: Computer Science, Computer Engineering, Electrical Engineering, Software Engineering, Informatics, Symbolic Systems, Mathematics or Physics. An advanced degree in one of the above disciplines is preferred.
  • Minimum of 3 years as a Software Developer, Systems Engineer, or DevOps Engineer at a startup or mid- to large-size company is required.
  • In-depth automation experience using configuration management tools such as Chef, Ansible, and CloudFormation
  • Experience with a CI system (e.g. GitLab or Jenkins) and monitoring tools (New Relic, Datadog, Sensu).
  • Experience with Kubernetes
  • Prior experience developing or working with any of the following cloud computing services: Amazon Web Services (preferred), Google Compute Engine, Nutanix, and OpenStack private clouds
  • Strong Python or Ruby coding/scripting skills (other scripting languages may be considered).
  • Ability to work with teams both internally and externally
  • Ability to communicate effectively to both technical and non-technical teammates.
  • Flexibility to work within different environments.
  • Willingness to learn and apply new technologies and skills.

Location:

We are located in New York City

Telecommute, Remote OK (Must be located in the United States)

We are an equal opportunity employer.

Java AWS Developer : W2 only

9 days ago
$60 - $80/hour · Remote · Neev Systems LLC

Title : Java Developer

Location : Atlanta, GA
Duration : 24+ Months
Start date : ASAP

Key skills required for the job are:

  • Self-starter, seasoned Java developer (10+ years) with working experience with microservices, Docker, and Kubernetes.
  • DevOps experience (UNIX scripting tools; Ansible, Chef, or Puppet; etc.) is required. Groovy knowledge is desired.
  • Working experience with various AWS components and cloud based development is required (5+ years).
  • Hands-on experience in Virtual Assistant/Bot, & NLP/NLU technology is desired, but not required. Machine learning, data modelling and mining is desired.
  • Excellent critical thinking skills, combined with the ability to present your ideas clearly in both verbal and written form.
  • Being a self-motivated self-starter, a quick learner, and an easy-to-work-with team player is a must.
  • Work is based in the Atlanta office. The candidate can work remotely in the current pandemic situation, with daily check-ins with other members of the team working on the project.

Job Types: Full-time, Contract

Pay: $60.00 - $80.00 per hour

Experience:

  • AWS: 1 year (Preferred)
  • spring: 1 year (Preferred)
  • Java: 1 year (Preferred)

DevOps Engineer

17 days ago
Remote · Sanderson Recruitment Plc

DevOps Engineer

IR35 Status: Inside
Pay: £495.00 per day

Duration: End date 30/09/2020
Clearance required: BPSS
Location: Newcastle (can work remotely during the COVID-19 pandemic)

Our client requires a DevOps Engineer to engage with their public sector customer.

Key skills

  • Version Control Systems - GitLab/GitHub, with the ability to build pipelines into version control systems
  • Continuous Integration Tools - Jenkins
  • DB Management Tools - NoSQL (MongoDB), PostgreSQL/EDB, Windows SQL, AWS RDS
  • Cloud Services - Azure, AWS (Networking, VPC, EC2, S3 & KMS)
  • Operating Systems - Windows, Linux admin (with Bash scripting)
  • Configuration Management Tools - Ansible, Puppet, Packer (creating AMIs)
  • Containerisation & Orchestration Tools - Docker, Kubernetes, OpenShift
  • Monitoring Tools - Prometheus, Grafana
  • Programming Languages - Java, Python & .NET, Bash scripting (CLI)

Responsibilities:

  • Specify and apply for all Azure accounts via HCS
  • Specify ZScaler access
  • Create CIDR block for Dev with 3 segregated subnets
  • Specify AMIs with end client hardened images and test using Packer
  • Create Development pipeline into GitLab
  • Using IaC, develop code for Dev subnets using Terraform for deployment
  • Create RPM packages to deploy code set into VPC subnets using HCS deployers
  • Ensure version control is applied to the RPM packages
  • Ensure all AWS end client CloudWatch, Grafana, and Prometheus agents are deployed into the Management VPC
  • Create all Workspaces within Dev to test automated deployments away from the live Dev environment
  • Create all Ansible code for all application deployments and configurations
  • Ensure the environment is in accordance with SRE principles (by engaging with SRE)
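The hardened-AMI responsibility above (specify AMIs with end-client hardened images and test using Packer) usually starts from a template like this HCL sketch; the source AMI, region, and playbook path are placeholder assumptions:

```hcl
# hardened-ami.pkr.hcl -- illustrative sketch; all values are placeholders
source "amazon-ebs" "hardened" {
  region        = "eu-west-2"
  source_ami    = "ami-0123456789abcdef0"   # client-supplied hardened base image
  instance_type = "t3.micro"
  ssh_username  = "ec2-user"
  ami_name      = "dev-hardened-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.hardened"]

  # Apply the client's hardening baseline with Ansible, then run a smoke test
  provisioner "ansible" {
    playbook_file = "harden.yml"   # placeholder playbook
  }
  provisioner "shell" {
    inline = ["echo 'smoke test placeholder'"]
  }
}
```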

DevOps Engineer

18 days ago
£450 - £495/day · Remote · Sanderson

DevOps Engineer

IR35 Status: Inside
Pay: £495.00 per day

Duration: End date 30/09/2020
Clearance required: BPSS
Location: Newcastle (can work remotely during the COVID-19 pandemic)

Our client requires a DevOps Engineer to engage with their public sector customer.

Key skills

  • Version Control Systems - GitLab/GitHub, with the ability to build pipelines into version control systems
  • Continuous Integration Tools - Jenkins
  • DB Management Tools - NoSQL (MongoDB), PostgreSQL/EDB, Windows SQL, AWS RDS
  • Cloud Services - Azure, AWS (Networking, VPC, EC2, S3 & KMS)
  • Operating Systems - Windows, Linux admin (with Bash scripting)
  • Configuration Management Tools - Ansible, Puppet, Packer (creating AMIs)
  • Containerisation & Orchestration Tools - Docker, Kubernetes, OpenShift
  • Monitoring Tools - Prometheus, Grafana
  • Programming Languages - Java, Python & .NET, Bash scripting (CLI)

Responsibilities:

  1. Specify and apply for all Azure accounts via HCS
  2. Specify ZScaler access
  3. Create CIDR block for Dev with 3 segregated subnets
  4. Specify AMIs with end client hardened images and test using Packer
  5. Create Development pipeline into GitLab
  6. Using IaC, develop code for Dev subnets using Terraform for deployment
  7. Create RPM packages to deploy code set into VPC subnets using HCS deployers
  8. Ensure version control is applied to the RPM packages
  9. Ensure all AWS end client CloudWatch, Grafana, and Prometheus agents are deployed into the Management VPC
  10. Create all Workspaces within Dev to test automated deployments away from the live Dev environment
  11. Create all Ansible code for all application deployments and configurations
  12. Ensure the environment is in accordance with SRE principles (by engaging with SRE)

DevOps Engineer

18 days ago
£450 - £495/day · Remote · Jumar Solutions

Tyne and Wear

Contract

£450 - £495 per day

Job Title – DevOps Engineer

IR35 Status: Inside
Clearance required: BPSS
Location: Newcastle (can work remotely during the COVID-19 pandemic)

Key skills
  • Version Control Systems – GitLab/GitHub, with the ability to build pipelines into version control systems
  • Continuous Integration Tools – Jenkins
  • DB Management Tools – NoSQL (MongoDB), PostgreSQL/EDB, Windows SQL, AWS RDS
  • Cloud Services – Azure, AWS (Networking, VPC, EC2, S3 & KMS)
  • Operating Systems – Windows, Linux admin (with Bash scripting)
  • Configuration Management Tools – Ansible, Puppet, Packer (creating AMIs)
  • Containerisation & Orchestration Tools – Docker, Kubernetes, OpenShift
  • Monitoring Tools – Prometheus, Grafana
  • Programming Languages – Java, Python & .NET, Bash scripting (CLI)

Activities likely undertaken –

Dev Environment
1. Specify and apply for all Azure accounts via HCS
2. Specify ZScaler access
3. Create CIDR block for Dev with 3 segregated subnets
4. Specify AMIs with end client hardened images and test using Packer
5. Create Development pipeline into GitLab
6. Using IaC, develop code for Dev subnets using Terraform for deployment
7. Create RPM packages to deploy code set into VPC subnets using HCS deployers
8. Ensure version control is applied to the RPM packages
9. Ensure all AWS end client CloudWatch, Grafana, and Prometheus agents are deployed into the Management VPC
10. Create all Workspaces within Dev to test automated deployments away from the live Dev environment
11. Create all Ansible code for all application deployments and configurations
12. Ensure the environment is in accordance with SRE principles (by engaging with SRE)
Rinse and repeat for Test Environment.

Contact details:

Email: ryan.hargreaves@jumar-solutions.com

Tel: 07471228141

AWS DevOps Engineer

20 days ago
$60 - $70/hour (Estimated) · Remote · AVMSI Technologies

DevOps Engineer

  • Design, architect, deploy and maintain cloud solutions using AWS
  • Advise engineering and software engineering team as they migrate from on-premise to cloud infrastructure
  • Maintain and improve existing infrastructure, e.g., autoscaling, new services, optimizations, etc.
  • Help build a highly automated infrastructure
  • Optimize cloud workloads for cost, scalability, availability, governance, compliance, etc.
  • Guide and/or provide hands-on support to administer production, staging, and deployment environments
  • Advise and assist on Red Hat Enterprise Linux capabilities (preferred but not required)
  • Write Terraform or CloudFormation scripts to create Infrastructure
  • Familiarity with most AWS Services - EC2, ECS, RDS, ECR, S3, SNS, SQS, and more
  • Articulate solutions on various topics, e.g., cloud migrations, enterprise implementation, private and hybrid clouds, etc.
  • Partner with multi-disciplinary teams to understand requirements and plan architecture and solutions
  • Use automation tools like Ansible for provisioning, configuration, deployment, etc.
  • Advise and assist on architecture and strategy across the enterprise
  • Use continuous integration and deployment (CI/CD) tools like Jenkins, Ansible/Chef, Nexus, etc.
  • Experience working in a SAFe Agile environment
  • Apply knowledge on how to integrate with SonarQube for finding Code Quality issues
  • Maintain and improve existing build and deployment processing using CI/CD tools
  • Apply insight and expertise across AWS services
  • Work with virtual machines
  • Apply knowledge of scripting and automation using tools like PowerShell, Python, Bash, Ruby, Perl, etc.
  • Apply knowledge of enterprise-level systems design, networking, software, hardware, integration, etc.
  • Apply DevOps best practices
  • Help protect and preserve the enterprise’s security posture
  • Enable and empower engineering teams to implement STIGs (security technical implementation guides)
  • Collaborate across multi-disciplinary teams to refine the security baseline and harden key components
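The 'Write Terraform or CloudFormation scripts' item above reduces to templates like this minimal CloudFormation sketch for an autoscaled service (the AMI ID, instance type, and sizing are illustrative assumptions):

```yaml
# autoscaled-app.yaml -- illustrative CloudFormation sketch; values are placeholders
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # placeholder AMI
        InstanceType: t3.medium
  AppAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "6"
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
      AvailabilityZones: !GetAZs ""
```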

BACKGROUND

  • Competency in AWS
  • Demonstrated experience with DevOps
  • Ability to write CloudFormation and or Terraform scripts to create infrastructure
  • Competencies in the cloud, storage, networking, etc.

EDUCATION

  • Bachelor’s degree
  • AWS DevOps Developer or Architect Certification. Strong preference for AWS certification (but not necessary)
  • Preference for Agile certification(s)

Send resumes to hr.at.avmsi.com or call 443-860-0834.

Job Type: Contract

Pay: $99,419.00 - $180,791.00 per year

Schedule:

  • 8 Hour Shift
  • Day Shift

Experience:

  • DevOps in AWS: 3 years (Preferred)

Education:

  • High school or equivalent (Preferred)

Location:

  • Seattle, WA (Preferred)

Work authorization:

  • United States (Required)

Full Time Opportunity:

  • No

Work Location:

  • Multiple locations
  • Fully Remote

This Job Is:

  • A good fit for applicants with gaps in their resume, or who have been out of the workforce for the past 6 months or more
  • A good job for someone just entering the workforce or returning to the workforce with limited experience and education
  • A job for which all ages, including older job seekers, are encouraged to apply
  • Open to applicants who do not have a college diploma

Company's website:

  • avmsi.com

DevOps/Infrastructure Engineer

1 month ago
£75.68 - £84.09/hour · Remote · Harnham

DevOps Engineer
Stockholm, Sweden/Remote
6-month Contract
1000 SEK per hour


As a DevOps Engineer, you will be responsible for stabilising a Hadoop cluster using Ansible, Terraform and Kubernetes.


THE COMPANY:
This company is a globally established gaming and betting firm that has seen a steady increase in activity since sport started to return. In order to store and secure customer data, the big data team needs to ensure their infrastructure is secure enough to house a Hadoop cluster. You will be automating the development process, introducing CI/CD to the team.

THE ROLE:
As a Big Data DevOps Engineer, you are required to help advise on how to architect and deploy a big data solution to assist with the vast transformation project that the company is undertaking. If you have experience working with the Hadoop ecosystem, this will be the perfect role for you. As for the DevOps side of the role, you will be building CI/CD pipelines in Python. You will also get the chance to install Docker containers as well as working with Kubernetes and Ansible.

YOUR SKILLS AND EXPERIENCE:
The ideal DevOps Engineer will have:

  • Experience working with the Hadoop ecosystem
  • Expertise with Ansible, Docker and Jenkins
  • Reviewed production level code in Java or Python
  • Built CI/CD pipelines


HOW TO APPLY:
Please submit your CV to Henry Rodrigues at Harnham via the Apply Now button.
Please note that our client is currently running a fully remote interview process, and is able to onboard and hire remotely as well.

Python / Unix administrator

1 month ago
Remote · Zensar Technologies
Python / Unix administrator - (0054555)
Description

About Zensar Technologies

Zensar is a leading digital solutions and technology services company partnering with global organizations on their digital transformation journey. A technology partner of choice, with a strong track record of innovation, credible investment in digital solutions, and a commitment to clients' success, Zensar's comprehensive range of services and solutions enables clients to achieve new thresholds of performance. Part of the $40 billion APAX Partners portfolio of companies, Zensar is uniquely positioned to help existing businesses run efficiently, manage legacy transformation, and plan business growth through its innovative digital platform.


Working at Zensar

Working at Zensar is an enriching experience. While work is driven by innovation and passion, fun is taken seriously too. An open environment is encouraged, making it easy to brainstorm with colleagues. Creative thinking is encouraged through time-out activities. Moreover, the offices have been designed to foster creativity and communication, bringing a little bit of home into work every day. Zensar provides a comprehensive benefits package for all full-time and contract employees.


Zensar Technologies is seeking a DevOps/Unix Administrator to work remotely in the USA. The full-time role provides great benefits.


Requirements:


Minimum Requirements: (“Must have” Qualifications)


  • Expert knowledge of Python.
  • Expert knowledge of Bash.
  • UNIX admin, comfortable with Root access to servers
  • Container orchestration and management (deploying Docker containers, logging, troubleshooting in a container environment, backup and restore in a container environment)
  • Expert knowledge of CI/CD; DevOps experience with multiple source code management systems, multiple package managers, and orchestration tools like Ansible and Puppet
  • Good communication skills
  • Good time management skill


Desired Skills/Qualifications/System Experience requirements: (“Nice to have Qualifications”)

  • Developing shared libraries and plugins using Groovy
  • Familiarity with managing infrastructure within a public cloud provider (AWS/GCP/Azure)
  • Familiarity with multiple package managers for Python, NPM, C++
  • Black Duck (or other open-source scanning software) a plus
  • Operations support functions
  • Postgres for basic database interactions


Typical expectations include:

  • Leverage REST APIs to feed information between Black Duck and dependent applications in the ecosystem (integrations)
  • Manage daily operations of the Black Duck host computers by monitoring the application's availability, configuration, and performance.
  • Apply security recommendations and be in compliance with security policies
  • Develop custom scripts for monitoring and alerting
  • Work with the vendor on troubleshooting issues with the product.
  • Provision accounts for end user.
  • Develop backup strategies using industry technologies.
  • Be responsible for product upgrades and apply patches as needed.
  • Integrate with multiple package managers


Education:

Bachelor's degree in Information Technology, Computer Science, or a related field


Primary Location: United States of America-North Carolina-Raleigh
Job Posting: May 26, 2020, 7:43:21 PM
Total Experience (In Years): 4 To 6

Data Engineer

1 month ago
Remote · Georgia IT Inc.

We are looking for strong Data Engineers, skilled in Hadoop, Scala, Spark, Kafka, Python, and AWS. I've included the job description below.
Here is what we are looking for:

Overall Responsibility:

  • Develop sustainable data driven solutions with current new gen data technologies to meet the needs of our organization and business customers.
  • Apply domain driven design practices to build out data applications. Experience in building out conceptual and logical models.
  • Build out data consumption views and provisioning self-service reporting needs via demonstrated dimensional modeling skills.
  • Measuring data quality and making improvements to data standards, helping application teams to publish data in the correct format so it becomes easy for downstream consumption.
  • Build Big Data applications using open-source frameworks like Apache Spark, Scala, and Kafka on AWS, and cloud-based data warehousing services such as Snowflake.
  • Build pipelines to enable features to be provisioned for machine learning models. Familiarity with data science model-building concepts, as well as with consuming data from a data lake.
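The data-quality responsibility above (measuring quality so data "becomes easy for downstream consumption") can be sketched as a small completeness check. The field names and sample data below are illustrative, and only the standard library is used:

```python
# Measure per-field completeness: the share of rows with a non-empty value.
import csv
import io

def field_completeness(rows, fields):
    """Return {field: fraction of rows with a non-empty value}."""
    total = 0
    present = {f: 0 for f in fields}
    for row in rows:
        total += 1
        for f in fields:
            if row.get(f) not in (None, ""):
                present[f] += 1
    return {f: present[f] / total for f in fields} if total else {}

# Hypothetical sample: one of three rows is missing an email.
sample = io.StringIO("id,email\n1,a@example.com\n2,\n3,c@example.com\n")
completeness = field_completeness(csv.DictReader(sample), ["id", "email"])
# completeness["id"] is 1.0; completeness["email"] is 2/3
```

A check like this can gate publication: rows only land in the consumption view once every required field clears an agreed completeness threshold.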

Basic Qualifications:

  • At least 8 years of experience with the Software Development Life Cycle (SDLC)
  • At least 5 years of experience working on a big data platform
  • At least 3 years of experience working with unstructured datasets
  • At least 3 years of experience developing microservices: Python, Java, or Scala
  • At least 1 year of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores
  • At least 1 year of experience in cloud technologies: AWS, Docker, Ansible, or Terraform
  • At least 1 year of Agile experience
  • At least 1 year of experience with a streaming data platform including Apache Kafka and Spark

Preferred Qualifications:

  • 5+ years of data modeling and data engineering skills
  • 3+ years of microservices architecture & RESTful web service frameworks
  • 3+ years of experience with JSON, Parquet, or Avro formats
  • 2+ years of creating data quality dashboards establishing data standards
  • 2+ years of experience in RDS, NoSQL, or graph databases
  • 2+ years of experience working with AWS platforms, services, and component technologies, including S3, RDS and Amazon EMR

Job Type: Contract

Schedule:

  • Monday to Friday

Experience:

  • AWS: 1 year (Preferred)
  • Hadoop: 1 year (Required)
  • Spark: 1 year (Required)
  • Big Data: 1 year (Preferred)
  • Scala: 1 year (Preferred)
  • Data Engineering: 1 year (Required)

Contract Renewal:

  • Possible

Full Time Opportunity:

  • Yes

Work Location:

  • Fully Remote

Work Remotely:

  • Yes