4 remote Apache Solr contracts

Senior DevOps Engineer

1 day ago
£540 - £660/day (wellpaid.io estimate) · Remote · Alexander Mann Solutions (Contingent)

Alexander Mann Solutions (AMS) is the world's leading provider of Talent Acquisition and Management Services. We deliver award-winning solutions to over 65 outsourcing clients and consulting services to hundreds more. Our Contingent Workforce Solutions (CWS) service acts as an extension of our clients' recruitment team and provides professional interim and temporary resources.

Our investment banking client has been present in the UK for more than 150 years and is a long-term partner to British business. Today, the Group comprises 10 divisions and employs 9,300 staff based in 21 core locations right across the country. Their role is simply stated: help clients achieve their goals by combining local know-how and global reach. In so doing, they seek to make a positive, sustainable contribution to both the UK economy and society.

On behalf of this organisation, AMS are looking for a Senior DevOps Engineer for a 6-month contract, based remotely initially (then 50/50 remote/office post-Covid).

Purpose of the Role:

An architecture transformation strategy is under way to develop cloud-native applications with a technology stack based on microservice architecture, APIs, Docker and Kubernetes. Over the next year a number of migrations will take place; alongside these, you would be expected to support development teams, upgrades and continuous improvement.

As a Senior DevOps Engineer you will be responsible for:

  • Continuous Integration and Continuous Delivery
  • Configuration management
  • Automating tasks (Ruby, Python, Bash)
  • Setting up logging and monitoring for applications and infrastructure
  • Containerization of applications and deployment into production environments

What we require from the candidate:

  • Expert-level knowledge of Continuous Delivery, BDD and testing strategies.
  • Knowledge of build, deployment and testing tools (Bitbucket/Git, JIRA, TeamCity, Artifactory, Ansible, Puppet, JBehave).
  • Java and C# knowledge, including hands-on experience with Spring and REST.
  • Distributed systems (Oracle Coherence, Apache Cassandra, IBM Spectrum Symphony).
  • An appreciation of distributed systems techniques.

If you are interested in applying for this position and meet the criteria outlined above, please click the link to apply and we will contact you with an update in due course.

This client will only accept workers operating via an Umbrella or PAYE engagement model.

Alexander Mann Solutions, a Recruitment Process Outsourcing Company, may in the delivery of some of its services be deemed to operate as an Employment Agency or an Employment Business.


Senior Software Developer (Java)

16 days ago
Remote · BDR Solutions

BDR Solutions, LLC, (BDR) supports the U.S. Federal Government in successfully achieving their mission and goals. Our service and solution delivery starts with understanding each client’s end-state, and then seamlessly integrating within each Agency’s organization to improve and enhance business and technical operations and deployments.

BDR is an SBA approved 8(a) program participant, Service-Disabled Veteran-Owned Small Business (SDVOSB), certified Historically Underutilized Business Zone (HUBZone), and Minority-Owned Small Disadvantaged Business (SDB).

BDR is seeking a Senior Software Developer (Java) to support the Department of Veterans Affairs (VA), Community Care Reimbursement System (CCRS) Development contract. The place of performance for this position is Vienna, VA; however, the individual will work remotely until it is safe to return to the office environment. When business resumes to working ‘in office’, the individual will have the option of working remotely on a part-time basis. The ideal candidate will be local to the Vienna, VA area and able to meet with team members as needed.

 

Senior Software Developer (Java) (Military Veterans are highly encouraged to apply)

 

Role Overview

Senior Software Developer (Java) will be responsible for providing sustainment support and development for defects, updates and enhancement requests for CCRS supported applications.  The developer will also be responsible for serving in a DevOps role as the administrator for the CCRS database and WebLogic servers as well as providing configuration management support.

 

Essential Job Functions and Responsibilities

  • Perform complex analysis, design, development, testing, and debugging of computer software
  • Maintain responsibility for activities spanning software design, coding, unit testing, and orchestration
  • Participate as an active member of the scrum team, providing input to team velocity and sprint ceremonies, planning, demonstrations and retrospectives
  • Apply knowledge of one or more systems and one or more platforms and programming languages, including Java, J2EE, JavaScript, etc.
  • Support back-end developers in coding the extraction, transformation, standardization, aggregation and mining of data for the repository, including coding web interfaces using SOAP/XML, REST and others
  • Develop using a rules-engine model to build and deploy middleware business service applications from code through test and production environments
  • Participate in planning effective management, operations, and maintenance of systems and/or networks
  • Support and assist in the maintenance of a wide variety of systems and networks, including high-volume/high-availability systems
  • Support and assist in maintaining the integrity and security of servers and systems
  • Configure, maintain and troubleshoot network-related interfaces on servers and implement corrective actions
  • Conduct systems analysis and development to keep systems current with changing technologies
  • Perform, maintain, troubleshoot and analyze alerts on servers
  • Contribute to and execute plans, designs, and necessary configurations to ensure compliance with back-up and disaster recovery policies and procedures for servers
  • Support the maintenance of security compliance processes and policies

 

Required Skills & Experience

  • Strong knowledge of object-oriented concepts, principles and patterns
  • Full-stack development
  • Strong working knowledge of:
    • Java 8+
    • OOD, design patterns
    • Distributed messaging, JMS
  • Spring and its frameworks, such as Spring Boot and Spring Data
  • Relational databases (such as Oracle), SQL, PL/SQL, JDBC, and JPA
  • Multi-threaded server-side development
  • Experience in Java performance tuning, debugging and memory profiling
  • Able to work productively under pressure
  • Experience in Db2, SQL Server 2012/2016, and SSRS
  • Proficient in the Java language (1.6+)
  • J2EE hands-on knowledge, including JSP and EJB authoring experience, struts/servlet technologies, and related J2EE componentry, including JDBC and persistence frameworks like Spring/Hibernate/etc.
  • Proficient knowledge of WildFly/JBoss and Java compatibility
  • Working knowledge of WebLogic
  • Hands-on knowledge of Linux/Apache; hands-on experience with Apache extensions a plus
  • Comfortable with basic command-line server administrative functions, including server restarts, startups, administration of httpd/apache configuration files, etc.
  • Comfortable with basic Linux/Unix server utilities, including crontab, bash/shell scripting, and server log management and forensics
  • Experience integrating with IAM & 2-Factor Authentication models
  • Experience integrating with Oracle/MSSQL and other enterprise-grade RDBMS solutions
  • Experience with web-service development, in SOAP/XML, REST, JSON, WSDL
  • Experience using Gradle, Maven, Jenkins, Hibernate and Spring and/or Boot

 

Desired Skills & Experience

  • Experience working under Agile methodologies or SAFe
  • Identify opportunities for process improvement
  • Strong interpersonal skills
  • Ability to work effectively and efficiently
  • Strong time management skills

 

Required Minimum Qualifications

  • Bachelor’s degree in computer science, electronics engineering, or other engineering or technical discipline is required OR an additional 8 years’ experience
  • Minimum of 5 years of relevant experience on similar projects

 

Physical Requirements

  • Ability to safely and successfully perform the essential job functions consistent with the ADA, FMLA and other federal, state and local standards, including meeting qualitative and/or quantitative productivity standards
  • Ability to maintain regular, punctual attendance consistent with the ADA, FMLA and other federal, state and local standards
  • Must be able to talk, listen and speak clearly on the telephone

 

In addition, U.S. citizenship is required. Applicants selected will be subject to a government security investigation and must meet eligibility requirements for access to classified information and be able to obtain a government-granted security clearance. Individuals may also be subject to a background investigation including, but not limited to, criminal history, employment and education verification, drug testing, and creditworthiness.

 

BDR is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin, marital status, disability, veteran status, sexual orientation, or genetic information.


DevOps Engineer

18 days ago
Remote · Apex Systems

Apex Systems, the 2nd largest IT staffing firm in the nation, is seeking an experienced DevOps Engineer to join our client’s team. This W2 contract position is slated for 6 months with the possibility of extension/conversion and is FULLY REMOTE (PST hours).

**Must be comfortable sitting on Apex Systems' W2**

Job Description:

We are on a mission to connect every member of the global workforce with economic opportunity, and that starts right here. Talent is our number one priority, and we make sure to apply that philosophy both to our customers and to our own employees as well. Explore cutting-edge technology and flex your creativity. Work and learn from the best. Push your skills higher. Tackle big problems. Innovate. Create. Write code that makes a difference in professionals’ lives.

Gobblin is a distributed data integration framework that was born at client and was later released as an open-source project under the Apache Software Foundation. Gobblin is a critical component in client's data ecosystem, and is the main bridge between the different data platforms, allowing efficient data movement between our AI, analytics, and member-facing services. Gobblin utilizes and integrates with the latest open-source big data technologies, including Hadoop, Spark, Presto, Iceberg, Pinot, ORC, Avro, and Kubernetes. Gobblin is a key piece of client's data lake, operating at a massive scale of hundreds of petabytes.

Our latest work involves integrations with cutting-edge technologies such as Apache Iceberg to allow near-real-time ingestion of data from various sources onto our persistent datasets, which allow complex and highly scalable query processing for various business logic applications, serving machine-learning and data-science engineers. Furthermore, we play an instrumental role in client's transformation from on-prem-oriented deployment to Azure cloud-based environments. This transformation prompted a massive modernization and rebuilding effort for Gobblin, transforming it from a managed set of Hadoop batch jobs into an agile, auto-scalable, real-time streaming-oriented PaaS with user-friendly self-management capabilities that will boost productivity across our customers. This is an exciting opportunity to take part in shaping the next generation of the platform.

What is the Job

You will be working closely with development and site reliability teams to better understand their challenges in aspects like:

  • Increasing development velocity of data management pipelines by automating testing and deployment processes
  • Improving the quality of data management software without compromising agility

You will create and maintain fully automated CI/CD processes across multiple environments and make them reproducible, measurable, and controllable for data pipelines that handle petabytes every day. With your skills as a DevOps engineer, you will also influence the broader teams and cultivate a DevOps culture across the organization.

Why it matters

CI/CD for big data management pipelines has been a long-standing challenge for the industry, and it is becoming more critical as we evolve our tech stack into the cloud age (Azure). With infrastructure shifts and data lake features being developed and deployed at an ever-faster pace, our integration and deployment processes must evolve to ensure the highest quality and fulfill customer commitments. The reliability of our software greatly influences the analytical workloads and decision-making processes across many company-wide business units, and the velocity of our delivery plays a critical role in making it easier and more efficient to mine insights from our massive-scale data lake.

What You’ll Be Doing

  • Work collaboratively in an agile, CI/CD environment
  • Analyze, document, implement, and maintain CI/CD pipelines/workflows in cooperation with the data lake development and SRE teams
  • Build, improve, and maintain CI/CD tooling for data management pipelines
  • Identify areas for improvement for the development processes in data management teams
  • Evangelize CI/CD best practices and principles
Technical Skills

  • Experienced in building and maintaining successful CI/CD pipelines
  • Self-driven and independent
  • Has experience with Java, Scala, Python or other programming language
  • Great communication skills
  • Master of automation
Years of Experience

  • 5+
Preferred Skills

  • Proficient in Java/Scala
  • Proficient in Python
  • Experienced in working with:
    • Big Data environments: Hadoop, Kafka, Hive, Yarn, HDFS, K8S
    • ETL pipelines and distributed systems

DevOps Engineer

1 month ago
Remote · Apex Life Sciences

Apex Systems, the 2nd largest IT staffing firm in the nation, is seeking an experienced DevOps Engineer to join our client’s team. This W2 contract position is slated for 6 months with the possibility of extension/conversion and is FULLY REMOTE (PST hours).

**Must be comfortable sitting on Apex Systems' W2**

If you are interested, send all qualified resumes to Nathan Castillo (Professional Recruiter with Apex Systems) at Ncastillo@apexsystems.com!

Job Description:

We are on a mission to connect every member of the global workforce with economic opportunity, and that starts right here. Talent is our number one priority, and we make sure to apply that philosophy both to our customers and to our own employees as well. Explore cutting-edge technology and flex your creativity. Work and learn from the best. Push your skills higher. Tackle big problems. Innovate. Create. Write code that makes a difference in professionals’ lives.

Gobblin is a distributed data integration framework that was born at client and was later released as an open-source project under the Apache Software Foundation. Gobblin is a critical component in client's data ecosystem, and is the main bridge between the different data platforms, allowing efficient data movement between our AI, analytics, and member-facing services. Gobblin utilizes and integrates with the latest open-source big data technologies, including Hadoop, Spark, Presto, Iceberg, Pinot, ORC, Avro, and Kubernetes. Gobblin is a key piece of client's data lake, operating at a massive scale of hundreds of petabytes.

Our latest work involves integrations with cutting-edge technologies such as Apache Iceberg to allow near-real-time ingestion of data from various sources onto our persistent datasets, which allow complex and highly scalable query processing for various business logic applications, serving machine-learning and data-science engineers. Furthermore, we play an instrumental role in client's transformation from on-prem-oriented deployment to Azure cloud-based environments. This transformation prompted a massive modernization and rebuilding effort for Gobblin, transforming it from a managed set of Hadoop batch jobs into an agile, auto-scalable, real-time streaming-oriented PaaS with user-friendly self-management capabilities that will boost productivity across our customers. This is an exciting opportunity to take part in shaping the next generation of the platform.

What is the Job

You will be working closely with development and site reliability teams to better understand their challenges in aspects like:

  • Increasing development velocity of data management pipelines by automating testing and deployment processes
  • Improving the quality of data management software without compromising agility

You will create and maintain fully automated CI/CD processes across multiple environments and make them reproducible, measurable, and controllable for data pipelines that handle petabytes every day. With your skills as a DevOps engineer, you will also influence the broader teams and cultivate a DevOps culture across the organization.

Why it matters

CI/CD for big data management pipelines has been a long-standing challenge for the industry, and it is becoming more critical as we evolve our tech stack into the cloud age (Azure). With infrastructure shifts and data lake features being developed and deployed at an ever-faster pace, our integration and deployment processes must evolve to ensure the highest quality and fulfill customer commitments. The reliability of our software greatly influences the analytical workloads and decision-making processes across many company-wide business units, and the velocity of our delivery plays a critical role in making it easier and more efficient to mine insights from our massive-scale data lake.

What You’ll Be Doing

  • Work collaboratively in an agile, CI/CD environment
  • Analyze, document, implement, and maintain CI/CD pipelines/workflows in cooperation with the data lake development and SRE teams
  • Build, improve, and maintain CI/CD tooling for data management pipelines
  • Identify areas for improvement for the development processes in data management teams
  • Evangelize CI/CD best practices and principles
Technical Skills

  • Experienced in building and maintaining successful CI/CD pipelines
  • Self-driven and independent
  • Has experience with Java, Scala, Python or other programming language
  • Great communication skills
  • Master of automation
Years of Experience

  • 5+
Preferred Skills

  • Proficient in Java/Scala
  • Proficient in Python
  • Experienced in working with:
    • Big Data environments: Hadoop, Kafka, Hive, Yarn, HDFS, K8S
    • ETL pipelines and distributed systems