6 remote Scala contracts

Scala Developer (Java, API, Microservices, Kafka, Cloud, NOS...

25 days ago
$55 - $70/hour (Estimated) · Remote · Spiceorb
Title: Scala Developer (Java, API, Microservices, Kafka, Cloud, NOSQL)
Location: Bentonville, Arkansas
No. of Positions: 8

REQUIREMENTS:
  • Minimum of three years of hands-on project experience with Scala & Kafka
  • Must have worked with the Play / Akka frameworks
  • Expertise in toolkits such as Akka, sbt, and the Lift JSON parser (see the sketch after this list)
  • Server-side experience with JDBC, JSP, SAX/DOM, Web Services, SOAP, WSDL, UDDI, JAXB
  • Must be able to write complex SQL statements
  • Good understanding of the JVM
  • Should be able to demonstrate experience with:
    - SCM: Git, SVN, ClearCase
    - Build: NAnt, sbt, FMake, NuGet, gulp
    - Application Containers: Apache, Tomcat, Jetty
    - Web: WebLogic 5.x/6.x, WebSphere 3.5/4, Play Framework, Spray
  • Should have worked in an Azure, Google, or AWS environment
  • Experience with NoSQL solutions such as MongoDB or Cassandra is a plus
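For illustration only, a minimal sketch of the Lift JSON parsing mentioned above. The `Order` case class, field names, and payload are hypothetical, and it assumes the lift-json library is on the classpath.

```scala
// Hypothetical example: parsing a JSON payload into a case class with Lift JSON.
// Assumes "net.liftweb" %% "lift-json" is available; all names are illustrative.
import net.liftweb.json._

case class Order(id: Int, sku: String, quantity: Int)

object LiftJsonExample extends App {
  implicit val formats: Formats = DefaultFormats

  val payload = """{"id": 42, "sku": "SKU-1001", "quantity": 3}"""
  val order   = parse(payload).extract[Order]   // JValue -> Order

  println(order)                                 // Order(42,SKU-1001,3)
}
```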

We are looking for someone with the following experience in recent projects:

  • May have started their career as a Java Developer, but recent experience must be in Scala
  • Scala with API Microservices Development (at least 1 year of experience is a must)
  • Complete backend development with Scala
  • Advanced functional programming in Scala
  • Scala Play / Akka
  • Kafka as a message service (see the sketch after this list)
  • Cloud experience
  • NoSQL databases
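For illustration only, a minimal sketch of using Kafka as a message service from Scala. It uses the standard kafka-clients producer API; the broker address, topic name, and payload are placeholders rather than anything from this posting.

```scala
// Minimal sketch: publishing one event to Kafka from Scala with the standard
// Java client (org.apache.kafka:kafka-clients). Broker and topic are placeholders.
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object OrderEventProducer extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092")               // placeholder broker
  props.put("key.serializer", classOf[StringSerializer].getName)
  props.put("value.serializer", classOf[StringSerializer].getName)

  val producer = new KafkaProducer[String, String](props)
  // Send a single event to a hypothetical "orders" topic and wait for the broker ack.
  producer.send(new ProducerRecord[String, String]("orders", "order-42", """{"status":"CREATED"}""")).get()
  producer.close()
}
```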

This position is based in Bentonville, Arkansas, though remote working options can be discussed if required.

Scala Developer Remote

1 month ago
$60/hour · Remote · Xyant Services

We are looking for Java Developers with Scala.

Location: Bentonville, AR. Remote

- Candidates should have hands-on experience in recent Scala projects.

- Should have at least two years of experience in Scala API development (not Hadoop).

- Scala with API Microservices Development (see the sketch below)
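For illustration only, a minimal sketch of a Scala API microservice endpoint of the kind described above, written against Akka HTTP. The service name, route, and port are hypothetical.

```scala
// Minimal sketch of a Scala microservice endpoint with Akka HTTP
// (assumes akka-http and akka-stream on the classpath; names and port are illustrative).
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object OrderApi extends App {
  implicit val system: ActorSystem = ActorSystem("order-api")

  // GET /orders/<id> returns a small JSON string; a real service would hit a repository.
  val route =
    pathPrefix("orders" / Segment) { orderId =>
      get {
        complete(s"""{"orderId":"$orderId","status":"CREATED"}""")
      }
    }

  Http().newServerAt("0.0.0.0", 8080).bind(route)  // serve until the JVM stops
}
```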

Requirements:

  • Minimum of three years of hands-on project experience with Scala & Kafka
  • Must have worked with the Play / Akka frameworks
  • Should have worked in an Azure, Google, or AWS environment
  • Five-plus years of Java development experience (Java 8 functional programming is a must)
  • Expertise in toolkits such as Akka, sbt, and the Lift JSON parser
  • Server-side experience with JDBC, JSP, SAX/DOM, Web Services, SOAP, WSDL, UDDI, JAXB
  • Must be able to write complex SQL statements
  • Should be able to demonstrate experience with:
    - SCM: Git, SVN, ClearCase
    - Build: NAnt, sbt, FMake, NuGet, gulp
    - Application Containers: Apache, Tomcat, Jetty
    - Web: WebLogic 5.x/6.x, WebSphere 3.5/4, Play Framework, Spray
  • Good understanding of the JVM
  • Experience with NoSQL solutions such as MongoDB or Cassandra is a plus

Job Type: Contract

Salary: $60.00 /hour

Senior Hadoop Developer w/ Spark/Scala/ETL (Remote)

13 days ago
Remote · cloudteam

A Hadoop developer is responsible for the design, development and operations of systems that store and manage large amounts of data. Most Hadoop developers have a computer software background and have a degree in information systems, software engineering, computer science, or mathematics.

IT Developers are responsible for the development, programming, and coding of Information Technology solutions. They document detailed system specifications, participate in unit testing and maintenance of planned and unplanned internally developed applications, and evaluate and performance-test purchased products. They are also responsible for including IT Controls to protect the confidentiality, integrity, and availability of the application and the data it processes or outputs. IT Developers are assigned to moderately complex development projects.

Essential Functions:

  • Write code for moderately complex system designs. Write programs that span platforms. Code and/or create Application Programming Interfaces (APIs).
  • Write code for enhancing existing programs or developing new programs.
  • Review code developed by other IT Developers.
  • Provide input to and drive programming standards.
  • Write detailed technical specifications for subsystems. Identify integration points.
  • Report missing elements found in system and functional requirements and explain impacts on subsystem to team members.
  • Consult with other IT Developers, Business Analysts, Systems Analysts, Project Managers and vendors.
  • “Scope” time, resources, etc., required to complete programming projects. Seek review from other IT Developers, Business Analysts, Systems Analysts or Project Managers on estimates.
  • Perform unit testing and debugging. Set test conditions based upon code specifications (see the sketch after this list). May need assistance from other IT Developers and team members to debug more complex errors.
  • Support transition of the application throughout the Product Development life cycle. Document what has to be migrated. May require more coordination points for subsystems.
  • Research vendor products / alternatives. Conduct vendor product gap analysis / comparison.
  • Accountable for including IT Controls and following standard corporate practices to protect the confidentiality, integrity, as well as availability of the application and data processed or output by the application.
  • The essential functions listed represent the major duties of this role; additional duties may be assigned.
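As a small illustration of the unit-testing duty above, here is a hedged ScalaTest sketch in which the test conditions come from a hypothetical specification ("an order total is the sum of its line prices, and an empty order totals zero"). The `Orders.total` function exists only for this example; it assumes ScalaTest is on the test classpath.

```scala
// Illustrative only: test conditions derived from a hypothetical code specification.
import org.scalatest.funsuite.AnyFunSuite

object Orders {
  // Hypothetical function under test.
  def total(linePrices: Seq[BigDecimal]): BigDecimal = linePrices.sum
}

class OrdersSpec extends AnyFunSuite {
  test("total sums all line prices") {
    assert(Orders.total(Seq(BigDecimal("19.99"), BigDecimal("5.01"))) == BigDecimal("25.00"))
  }

  test("an empty order totals zero") {
    assert(Orders.total(Nil) == BigDecimal(0))
  }
}
```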


Job Requirements:

  • Experience and understanding with unit testing, release procedures, coding design and documentation protocol as well as change management procedures
  • Proficiency using versioning tools
  • Thorough knowledge of Information Technology fields and computer systems
  • Demonstrated organizational, analytical and interpersonal skills
  • Flexible team player
  • Ability to manage tasks independently and take ownership of responsibilities
  • Ability to learn from mistakes and apply constructive feedback to improve performance
  • Must demonstrate initiative and effective independent decision-making skills
  • Ability to communicate technical information clearly and articulately
  • Ability to adapt to a rapidly changing environment
  • In-depth understanding of the systems development life cycle
  • Proficiency programming in more than one object-oriented programming language
  • Proficiency using standard desktop applications such as MS Suite and flowcharting tools such as Visio
  • Proficiency using debugging tools
  • High critical thinking skills to evaluate alternatives and present solutions that are consistent with business objectives and strategy


Specific Tools/Languages Required:

  • Hadoop
  • Spark

Experience:

  • 5-8 years of related work experience or an equivalent combination of transferable experience and education
  • Hadoop: 4 years' experience
  • ETL / Data Warehousing: 7+ years of experience
  • Ab Initio conversion to Spark, while also maintaining Ab Initio (7+ years of Ab Initio experience); see the sketch after this list
  • Experience with Agile Methodology
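For illustration only, a minimal sketch of the kind of batch ETL job an Ab Initio graph might be re-implemented as in Spark/Scala. The paths, table layout, and column names are hypothetical and not taken from this posting.

```scala
// Minimal Spark/Scala batch ETL sketch: extract raw files, apply simple transforms,
// load partitioned Parquet. All locations and columns are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClaimsEtl extends App {
  val spark = SparkSession.builder()
    .appName("claims-etl")
    .getOrCreate()

  // Extract: read raw delimited files from the landing zone.
  val raw = spark.read
    .option("header", "true")
    .csv("hdfs:///landing/claims/")          // placeholder path

  // Transform: basic cleansing and a derived column, as a stand-in for graph logic.
  val cleaned = raw
    .filter(col("claim_id").isNotNull)
    .withColumn("claim_amount", col("claim_amount").cast("decimal(12,2)"))
    .withColumn("load_date", current_date())

  // Load: write partitioned Parquet to the warehouse zone.
  cleaned.write
    .mode("overwrite")
    .partitionBy("load_date")
    .parquet("hdfs:///warehouse/claims/")    // placeholder path

  spark.stop()
}
```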


Required Education:

  • Bachelor's degree in an IT-related field or relevant work experience

BigData Solutions Engineer

14 days ago
$62 - $68/hour · Remote · Omnipoint Services Inc

Our Fortune client is looking for a talented Solutions Engineer. This is one of our top clients, and we have been successful in building out entire teams for this organization. This role will be temp-to-permanent, 40 hours/week, paid at an hourly rate plus very highly subsidized benefits. The role will start remote, but after Covid restrictions are lifted the goal is to have this person onsite in Hartford, CT.

  • 6+ years as a Hortonworks HDP Solution Architect, helping re-solution migration projects from HDP 2.6 to 3.1.
  • Thorough understanding of the HDP 2.6 and 3.1 platforms and the related tech stack.
  • Good documentation (Visio) and presentation (PPT) skills.
  • HDP 2.x and HDP 3.x

Deliverables:

  • Review the project's current solution, document the proposed solution, review it with the involved groups, and help engineering teams implement the solution end to end with low-level technical recommendations and code review.
  • Document existing and new solution patterns.

Tools involved:

  • Apache Hadoop 3.1.1 (Hadoop File System)
  • Apache HBase 2.0.0 (Java APIs)
  • Apache Hive 3.1.0 (Hive Query Language)
  • Apache Kafka 1.1.1 (Java/Python/Spark streaming APIs; see the sketch below)
  • Apache Phoenix 5.0.0 (Standard SQL, JDBC, ODBC)
  • Apache Pig 0.16.0
  • Apache Ranger 1.1.0
  • Apache Spark 2.3.1 (Java, Scala, Python)
  • Apache Sqoop 1.4.7
  • Apache Tez 0.9.1

Java-based web services APIs and Python clients.
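For illustration only, a minimal sketch of the Kafka-to-Spark streaming piece of a stack like the one listed above, using Spark Structured Streaming in Scala. The broker, topic, and storage paths are placeholders, and it assumes the spark-sql-kafka connector is available.

```scala
// Minimal sketch: ingest a Kafka topic with Spark Structured Streaming and land it as Parquet.
// Broker, topic, and paths are placeholders; downstream Hive/HBase loads would follow.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaIngest extends App {
  val spark = SparkSession.builder().appName("kafka-ingest").getOrCreate()

  val events = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  // placeholder broker
    .option("subscribe", "events")                         // placeholder topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))

  events.writeStream
    .format("parquet")
    .option("path", "hdfs:///raw/events/")                 // placeholder path
    .option("checkpointLocation", "hdfs:///chk/events/")   // placeholder path
    .start()
    .awaitTermination()
}
```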

Job Types: Full-time, Contract

Pay: $62.00 - $68.00 per hour

Experience:

  • Apache Hive 3.1.0 (Hive Query Language): 4 years (Required)
  • Apache Kafka 1.1.1 (Java/Python/Spark streaming APIs): 4 years (Required)
  • Apache HBase 2.0.0 (Java APIs): 4 years (Required)
  • Apache Ranger 1.1.0: 4 years (Required)
  • Java-based web services APIs and Python clients: 2 years (Required)
  • Apache Spark 2.3.1 (Java, Scala, Python): 4 years (Required)
  • Hortonworks HDP Solution Architect: 8 years (Required)
  • Apache Pig 0.16.0: 4 years (Required)
  • Apache Hadoop 3.1.1 (Hadoop File System): 4 years (Required)
  • HDP 2.6 and 3.1 platforms and related tech stack: 5 years (Required)
  • Apache Phoenix 5.0.0 (Standard SQL, JDBC, ODBC): 4 years (Required)

Work Remotely:

  • No

Scientific Data Engineer

1 month ago
$55 - $70/hour (Estimated) · Remote · Allen Institute for Immunology

Bioinformatics Data Engineer

The mission of the Allen Institute is to unlock the complexities of bioscience and advance our knowledge to improve human health. Using an open science, multi-scale, team-oriented approach, the Allen Institute focuses on accelerating foundational research, developing standards and models, and cultivating new ideas to make a broad, transformational impact on science.

The goal of the Allen Institute for Immunology is to advance the fundamental understanding of human immunology through the study of immune health and disease where excessive or impaired immune responses drive pathological processes.

The Allen Institute for Immunology is seeking a Bioinformatics Data Engineer (Data Scientist) with broad experience in developing computer codes/scripts to automate the analysis of omics data, especially next generation sequencing (NGS) data, to join our Informatics and Computational Biology team.

You will be part of a multidisciplinary team and will be responsible for (i) development and implementation of data processing and analysis software as needed, (ii) assisting in both pipeline and exploratory analysis of data from diverse assays and sample types, (iii) working towards visualizations and reports for internal and external dissemination. As such, ideal candidates should have a good understanding of sequencing technologies, and a proven track record of development of analytical software packages. This role includes analysis and integration of “big data” types, and working in close collaboration with the software development team for deployment on our interactive cloud environment to ensure user accessibility and generation of actionable insights. You will also support technology development projects in collaboration with the Molecular Biology and Immunology teams.

Good judgment and problem-solving skills are required for recognizing anomalous data, identifying and fixing code bugs and participating in data-driven algorithm design and improvement. A successful candidate will have demonstrated success in big data science, code optimization and deployment. The Bioinformatics Data Engineer must have excellent attention to detail and the eagerness to work in a team science, deadline-driven atmosphere.

Essential Functions

  • Design and develop software programs to optimize scRNA-seq, scATAC-seq, and CITE-seq processing pipelines and analysis algorithms, including PCA and dimensionality reduction (see the sketch after this list)

  • Deploy automated pipelines in our interactive cloud environment with graphical user interface to facilitate user accessibility

  • Publish codebase or software as part of high impact publications or releases

  • Integrate multiple data streams for “Big Data” analysis (examples include scRNA-seq, scATAC-seq, flow cytometry, WGS)

  • Generate interactive data visualizations and work with end users to identify actionable insights

  • Exploratory data mining

  • Meet production deadlines for data analysis and be able to pivot between multiple projects
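As a hedged illustration of the PCA / dimensionality-reduction bullet above (not the Institute's actual pipeline), here is a short Spark MLlib sketch in Scala. The expression matrix location and column names are hypothetical.

```scala
// Illustrative only: dimensionality reduction with PCA via Spark MLlib, roughly the
// kind of step an scRNA-seq analysis pipeline might include. Input/output are placeholders.
import org.apache.spark.ml.feature.{PCA, VectorAssembler}
import org.apache.spark.sql.SparkSession

object ExpressionPca extends App {
  val spark = SparkSession.builder().appName("expression-pca").getOrCreate()

  // Hypothetical cell-by-gene expression matrix stored as Parquet.
  val expression = spark.read.parquet("s3://example-bucket/expression/")

  // Assemble per-gene columns into one feature vector, then project to 50 components.
  val geneCols  = expression.columns.filterNot(_ == "cell_id")
  val assembler = new VectorAssembler().setInputCols(geneCols).setOutputCol("features")
  val pca       = new PCA().setInputCol("features").setOutputCol("pca_features").setK(50)

  val assembled = assembler.transform(expression)
  val projected = pca.fit(assembled).transform(assembled)

  projected.select("cell_id", "pca_features").write.parquet("s3://example-bucket/pca/")
  spark.stop()
}
```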

Required Qualifications

  • Bachelor's degree in a big data computational field (e.g., Bioinformatics, Computer Science, Biostatistics, Physics, Mathematics) with a minimum of 2 years' experience analyzing omics data.

  • Demonstrated success in a multidisciplinary team environment.

  • Good understanding of sequencing technologies, data processing and integrative analysis

  • Fluency in Java, Python, R and Unix shell scripting.

  • Experience in Big Data analysis, code optimization & parallel programming. Proven experience with big data analysis technologies and languages such as Apache Spark, BigTable, Scala, or Rust.

  • Good knowledge of version control systems such as Git

  • Strong organizational, teamwork, and communication skills

  • Attention to detail, and good problem-solving skills

Preferred Qualifications

  • Masters or PhD in Bioinformatics/Computational Biology or similar

  • Familiarity with immunology

  • Understanding of Flow Cytometry and CyTOF analysis is a plus

  • Familiarity with cloud computing

  • Ability to implement, test, and share new computational tools quickly, in an iterative manner, after feedback from experimental, data production, and analysis teams

  • Excellent work ethic displayed as a reliable, self-motivated, enthusiastic team player

  • Ability to learn new programming languages and packages

  • Eager to learn new skills

Work Environment

  • Working at a computer and using a mouse for extended periods of time

  • May need to work outside of standard working hours at times

Travel

  • Some travel may be required

Additional Details:

  • This role is currently able to work remotely full-time; this may change, and you may be required to work onsite as safety restrictions related to Covid-19 are lifted. You must be a Washington State resident to work remotely.

  • We are open to full-time, part-time, and/or contract work for this role. When you apply, please specify which work arrangement you desire. We are flexible.


Additional Comments

**Please note, this opportunity does sponsor work visas**

**Please note, this opportunity offers relocation assistance**

Data Engineer

1 month ago
Remote · Georgia IT Inc.

We are looking for strong Data Engineers, skilled in Hadoop, Scala, Spark, Kafka, Python, and AWS. I've included the job description below.
Here is what we are looking for:

Overall Responsibility:

  • Develop sustainable, data-driven solutions with current, new-generation data technologies to meet the needs of our organization and business customers.
  • Apply domain-driven design practices to build out data applications. Experience building conceptual and logical models.
  • Build out data consumption views and provision self-service reporting needs via demonstrated dimensional modeling skills.
  • Measure data quality and make improvements to data standards, helping application teams publish data in the correct format so it is easy for downstream consumers.
  • Build Big Data applications using open-source frameworks such as Apache Spark, Scala, and Kafka on AWS, and cloud-based data warehousing services such as Snowflake (see the sketch after this list).
  • Build pipelines to enable features to be provisioned for machine learning models. Familiarity with data science model-building concepts as well as consuming data from a data lake.
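For illustration only, and not the client's actual pipeline, a short Spark/Scala sketch of building a consumption view from a data lake and writing it to S3 as Parquet. Bucket names, tables, and columns are all hypothetical.

```scala
// Minimal sketch: dimensional join plus a daily aggregate for self-service reporting,
// read from and written back to a hypothetical S3 data lake as Parquet.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailySalesView extends App {
  val spark = SparkSession.builder().appName("daily-sales-view").getOrCreate()

  val orders    = spark.read.parquet("s3://example-lake/raw/orders/")     // placeholder
  val customers = spark.read.parquet("s3://example-lake/raw/customers/")  // placeholder

  val dailySales = orders
    .join(customers, Seq("customer_id"))
    .groupBy(col("order_date"), col("customer_segment"))
    .agg(sum("order_total").as("total_sales"), countDistinct("order_id").as("order_count"))

  dailySales.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-lake/curated/daily_sales/")

  spark.stop()
}
```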

Basic Qualifications:

  • At least 8 years of experience with the Software Development Life Cycle (SDLC)
  • At least 5 years of experience working on a big data platform
  • At least 3 years of experience working with unstructured datasets
  • At least 3 years of experience developing microservices: Python, Java, or Scala
  • At least 1 year of experience building data pipelines, CICD pipelines, and fit for purpose data stores
  • At least 1 year of experience in cloud technologies: AWS, Docker, Ansible, or Terraform
  • At least 1 year of Agile experience
  • At least 1 year of experience with a streaming data platform including Apache Kafka and Spark

Preferred Qualifications:

  • 5+ years of data modeling and data engineering skills
  • 3+ years of microservices architecture & RESTful web service frameworks
  • 3+ years of experience with JSON, Parquet, or Avro formats
  • 2+ years of creating data quality dashboards and establishing data standards
  • 2+ years of experience with RDS, NoSQL, or graph databases
  • 2+ years of experience working with AWS platforms, services, and component technologies, including S3, RDS and Amazon EMR

Job Type: Contract

Schedule:

  • Monday to Friday

Experience:

  • AWS: 1 year (Preferred)
  • Hadoop: 1 year (Required)
  • Spark: 1 year (Required)
  • Big Data: 1 year (Preferred)
  • Scala: 1 year (Preferred)
  • Data Engineering: 1 year (Required)

Contract Renewal:

  • Possible

Full Time Opportunity:

  • Yes

Work Location:

  • Fully Remote

Work Remotely:

  • Yes