Scala contract jobs near you / remote

Scala
£146/day

Hadoop Developer

12 days ago
Reston, VA · DAtec Solutions

Requirements:

5+ years with Java, Python, or Scala programming languages

Advanced-level experience (3+ years) in Hadoop, YARN, HDFS, MapReduce, Sqoop, Oozie, Hive, Spark and other related Big Data technologies

Experience tuning Hadoop/Spark parameters for optimal performance

Experience with Big Data querying tools, including Impala

Advanced experience with SQL and at least one major RDBMS (Oracle, DB2)
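Tuning Spark parameters of the kind this listing mentions is usually applied on the session builder; a minimal Scala sketch, with illustrative values that are assumptions rather than recommendations for any particular workload:

```scala
// Hedged sketch: tuning common Spark parameters on the session builder.
// Values are illustrative assumptions, not recommendations for any workload.
import org.apache.spark.sql.SparkSession

object TuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("tuning-sketch")
      .master("local[*]") // local mode for illustration only
      .config("spark.sql.shuffle.partitions", "64") // down from the default 200
      .config("spark.executor.memory", "4g")        // hypothetical sizing
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    // Confirm the setting took effect, then shut down.
    println(spark.conf.get("spark.sql.shuffle.partitions"))
    spark.stop()
  }
}
```

In practice these values are set per cluster and per job (often via `spark-submit --conf`), and interviews for roles like this tend to probe why a given knob matters rather than specific numbers.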

Job Type: Contract


Scala Developer

12 days ago
$65 - $75/hour (Estimated) · Reston, VA · DAtec Solutions

Looking for a Scala Engineer.

8+ years IT experience

Experience with Scala and Spark

Experience / Knowledge of Hadoop ecosystem

Design and develop ETL for their reporting.

Responsibilities:

  • Building ETL for Data & analytics team.
  • Lead the development capabilities and manage the team.

Job Type: Contract

Senior Software Developer

13 days ago
$60 - $80/hour · Remote · AMPLY Power

RESPONSIBILITIES:

The ideal candidate has built contract-first, highly scalable, event-driven, microservice IoT platforms in the electric vehicle / charging platform space, with a focus on integrating best-of-breed, value-add partner applications in the fleet management, billing, scheduling, and real-time vehicle-to-grid control space. The successful candidate is a hands-on developer and technology leader, with strong experience integrating third-party products and applications into an enterprise platform.

Remote work okay within U.S. only with ability for occasional travel to Mountain View, CA. This is a consulting to possible full time hire position.

  • You will bootstrap the development of the AMPLY platform (developer No. 1), including the tools and infrastructure needed (CI/CD, testing)
  • You will design the technical architecture of the AMPLY service platform, based on a 99.99% target service availability
  • You will participate in and own the cloud software part of the third-party EVSE and vehicle OEM validation process, with emphasis on end-to-end system validation, to achieve robust end-to-end service availability, resilience and redundancy
  • You will heavily leverage public cloud / AWS services
  • Work in a continuous integration / continuous delivery environment where you help maintain test-driven discipline and enforce the 'do not break the build' rule right from the beginning; establish and maintain continuous integration and continuous delivery pipelines
  • Establish and maintain technical excellence of additional individual contributors
  • Lead technical and architecture reviews
  • In the near future: mentor team members in coding and development skills, including software architecture, cloud-native design, contract-first design and test-driven development

REQUIRED QUALIFICATIONS:

  • Bachelor's Degree in Computer Science, Engineering or a related field
  • 10 to 15 years of professional experience in event-driven (server + client) software development and delivery
  • Professional software engineering management experience is an asset
  • Experience with Docker containerization and container orchestration
  • Deep experience with AWS services (ECS, managed DB services such as DynamoDB, Lambda, API Gateway, multi-region setup, CloudFront, CloudFormation, CodePipeline)
  • Demonstrated experience designing API-first contracts and microservices
  • Demonstrated experience maintaining continuous delivery pipelines
  • Demonstrated experience running an agile Scrum process
  • Technical excellence in service-oriented / object-oriented / event-driven development; deep experience in modern internet event-driven languages (Node.js, Python, Scala)
  • Deep experience in HTTP RESTful API design, Linux and Python; experience with IoT frameworks; additional asset: embedded C, C++
  • Knowledge of M2M communication standards and platforms
  • Strong troubleshooting/analytical skills
  • Additional asset: engineering experience in low/medium-voltage EV charging stations (EVSE); software validation experience in the EV, automotive or electrical industry is a plus
  • Additional asset: good knowledge of evolving Level 2/3 charging protocols (CCS / OCPP / CharIN), and preferably hands-on experience with vehicle-to-charger-to-cloud integration testing

At Amply Power, you become a driver of the electric vehicle revolution. We are building an EV charging as a service business for fleets, accelerating and simplifying the electrification of buses, trucks, cars, and autonomous vehicles.

AMPLY builds fully automated charging systems, based on our IoT real time control platform, optimized for lowest electricity cost, while delivering a per-electric-mile-driven EV charging service to our fleet customers.

AMPLY removes the risk for a Fleet Operator of choosing charging stations. We purchase and operate the charging stations and are the account holder of the utility meter. Amply supports all major charging stations on the market, both DC fast chargers and AC Level 2 chargers.

AMPLY removes the risk of time-of-use and demand-charge driven pricing variance, to provide a known, consistent cost for electric fuel-as-a-service, by using our breadth of knowledge, access to electricity markets, and real-time energy-flow management technology that keeps your fleet charged at the best cost.

As an IoT platform software engineer, you will join a company that makes a difference on the journey to sustainable eMobility. Amply was honored by Fast Company's 2019 World Changing Ideas Awards for its Innovative Charging-as-a-Service Business Model for Fleets of Electric Vehicles.

Job Type: Contract

Salary: $60.00 to $80.00 /hour

Experience:

  • Nodejs: 5 years (Required)
  • Microservice IOT Platforms: 10 years (Required)

Contract Length:

  • 5 - 6 months

Contract Renewal:

  • Likely

Full Time Opportunity:

  • Yes

SENIOR SCALA ENGINEER

13 days ago
£520 - £630/day (Estimated) · Remote · Intec Select

Senior Scala Engineer

Scala, Python, Spark, Kafka, AWS

Overview:
Our client, a leading digital distributor of household financial services, requires an experienced Senior Scala Engineer to join a talented team of Big Data Engineers, subject matter experts and thought leaders in this cloud-based Big Data business unit.

As a Senior Engineer in my client's team, you will be driving the development of quality solutions for our customers and be proactive in adopting and championing best practices.

You'll make a real impact by taking an active role in the team's agile practices, technical decision-making and development, generating value and continuously striving to improve the quality and reliability of our data.

You will be working on complex, challenging and exciting enterprise-level Big Data solutions and programmes at the cutting edge.

For this Senior Scala Engineer / Senior Scala Developer role, you must have:

Scala, Spark, Python, AWS

Strong grasp of Big Data, SQL and database technologies

Enthusiasm for agile and lean development

Exposure to operation of business critical data products

AWS, RedShift, Kinesis experience

Git, CI/CD, Test driven approach (TDD/BDD)

Package:

£100,000 package

15% bonus

Dress Down

Remote Working (usually on a Friday/Monday)

Flexible Working (very relaxed/not a clock watch company)

Private HealthCare

Scala, Python, Spark, Kafka, AWS

Please respond to this advert with an up-to-date version of your CV and the lead consultant will be in touch.

Senior Scala Engineer

DATA MODELLING / ARCHITECT

16 days ago
Reston, VA · DAtec

Responsibilities

  • Creating and maintaining optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional/non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing code and infrastructure for greater scalability.
  • Build re-usable data pipelines to ingest, standardize and shape data from various zones in the Hadoop data lake.
  • Build analytic tools that utilize the data pipeline to provide actionable insights into customer acquisition, revenue management, digital and marketing areas for operational efficiency and KPIs.
  • Design and build BI APIs on established enterprise architecture patterns, for data sharing from various sources.
  • Design and integrate data using big data tools - Spark, Scala, Hive, etc.
  • Helping manage the library of all deployed Application Programming Interfaces (APIs).
  • Supporting API documentation of classes, methods, scenarios, code, design rationales, and contracts.
  • Design, build and maintain a small set of highly flexible and scalable models linked to the client's specific business needs.
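A reusable ingest-and-standardize step of the kind these responsibilities describe might be sketched in Spark/Scala as follows; the zone paths and column names (`customer_id`, `channel`) are hypothetical examples, not details from the listing:

```scala
// Hedged sketch of one reusable ingest-and-standardize step on a data lake zone.
// Paths and column names (customer_id, channel) are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.{col, lower, trim}

object StandardizeStep {
  def run(spark: SparkSession, rawZonePath: String, curatedZonePath: String): Unit = {
    // Ingest from the raw zone.
    val raw: DataFrame = spark.read.option("header", "true").csv(rawZonePath)

    // Standardize: trim identifiers, normalize categorical casing.
    val shaped = raw
      .withColumn("customer_id", trim(col("customer_id")))
      .withColumn("channel", lower(col("channel")))

    // Write into the curated zone as Parquet for downstream BI/analytics tools.
    shaped.write.mode("overwrite").parquet(curatedZonePath)
  }
}
```

Keeping each step a small function over `(SparkSession, inPath, outPath)` is one common way to make pipelines "re-usable" across zones, as the bullet above asks.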

Required Qualifications:

  • 5+ years of experience in a data engineering / data integration role.
  • 5+ years of advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience building and optimizing 'big data' data pipelines, architecture and data sets.
  • Build programs/processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience in Data Warehouse / Data Mart ETL implementations using big data technologies.
  • Working knowledge of message queuing, stream processing and scalable data stores.
  • Experience with relational SQL and NoSQL databases; graph databases (preferred).
  • Strong experience with object-oriented programming - Java, C++, Scala (preferred).
  • Experience with AWS cloud services: Storm, Spark Streaming, etc.
  • Experience with API and web services design and development.

Preferred Qualifications:

  • Functional experience in hospitality.
  • End-to-end experience in building data flow processes (from ingestion to consumption layer).
  • Solid working experience with surrounding and supporting disciplines (data quality, data operations, BI / Data WH / Data Lake).
  • Effective communicator, collaborator, influencer and solution seeker across a variety of opinions.
  • Self-starter, well organized, extremely detail-oriented and an assertive team player, willing to take ownership of responsibilities, with a high level of positive energy and drive.
  • Excellent time management and organizational skills.
  • Ability to manage multiple priorities, work well under pressure and effectively handle concurrent demands to prioritize responsibilities.

Job Type: Contract

Azure Big Data Developer

20 days ago
Reston, VA · DAtec Solutions

Essential Job Functions:

  • Production experience in large-scale SQL, NoSQL data infrastructures such as Cosmos DB, Cassandra, MongoDB, HBase, CouchDB, Apache Spark etc.
  • Application experience with SQL databases such as Azure SQL Data Warehouse, MS SQL, Oracle, PostgreSQL, etc.
  • Proficient understanding of code versioning tools {such as Git, CVS or SVN}
  • Strong debugging skills with the ability to reach out and work with peers to solve complex problems
  • Ability to quickly learn, adapt, and implement Open Source technologies.
  • Familiarity with continuous integration (DevOps)
  • Proven ability to design, implement and document high-quality code in a timely manner.
  • Excellent interpersonal and communication skills, both written and oral.

Educational Qualifications and Experience:

  • Role Specific Experience: 2+ years of experience in Big Data platform development.

Certification Requirements (desired):

Azure Designing and Implementing Big Data Analytics Solutions

Required Skills/Abilities:

  • Experience with NoSQL databases, such as HBase, Cassandra or MongoDB.
  • Proficient in designing efficient and robust ETL/ELT using Data Factory, workflows, schedulers, and event-based triggers.
  • 1+ years of experience with SQL databases (Oracle, MS SQL, PostgreSQL, etc.).
  • 1+ years of hands-on experience with data lake implementations, core modernization and data ingestion.
  • 3+ years of Visual Studio C# or core Java.
  • Experience at least in one of the following programming languages: R, Scala, Python, Clojure, F#.
  • 1+ years of experience in Spark systems.
  • Good understanding of multi-temperature data management solution.
  • Practical knowledge in design patterns.
  • In depth knowledge of developing large distributed systems.
  • Good understanding of DevOps tools and automation framework.

Desired Skills/Abilities (not required but a plus):

  • Experience in designing and implementing scalable, distributed systems leveraging cloud computing technologies.
  • Experience with Data Integration on traditional and Hadoop environments.
  • Experience with Azure Time Series Insights.
  • Some knowledge of machine learning tools and libraries such as TensorFlow, Turi, H2O, Spark MLlib, and caret (R).
  • Understanding of AWS data storage and integration with Azure.
  • Some knowledge of graph database.

Job Type: Contract

Senior Scala Developer

20 days ago
$65 - $75/hour (Estimated) · Remote · Enterprise Peak

Please apply with an ORIGINAL resume (NOT an Indeed-generated version) in order to be considered for the interviewing process. Any applications submitted without a resume in its ORIGINAL version will NOT be processed for review.

OUR PROJECT

Immediate opportunity to join our team with a digital healthcare company to develop a suite of products that will assist users in selecting healthcare plans and living healthier lives.

This is a contract-to-hire position with a competitive hourly rate. You can expect very competitive pay as our median compensation is $180,000.

WHO WE ARE LOOKING FOR

We are looking for a Senior Scala Developer to work on the product's middle tier and backend development. This position is available in Chicago, Washington, D.C., San Francisco, or Minneapolis with limited remote work. Industry experience building highly scalable distributed systems in Scala is required. You will be joining a team of developers to bring this product to fruition. This is a fantastic opportunity to develop your skills in Scala for a growing healthcare company.

We are interviewing qualified candidates immediately and intend to move into offer stage quickly. If you are interested, please apply with an updated resume.

QUALIFICATIONS

  • Must demonstrate industry experience building highly scalable distributed systems and webservices
  • Scala development experience is required
  • Must have prior experience using AngularJS, ReactJS, or similar JavaScript-based frameworks
  • Must demonstrate strong unit testing and version control experience; Scalatest, Spec2, ScalaCheck, Git preferred
  • Database and caching experience with SQL or NoSQL is required; Postgres, Elasticsearch, MongoDB preferred
  • Must have experience with cloud-based hosting and deployment
  • Must have experience working in Agile environment
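The unit-testing stack this listing prefers (ScalaTest alongside Spec2 and ScalaCheck) can be illustrated with a minimal `AnyFunSuite`; the `PlanPricer` class is a made-up example, and scalatest is assumed to be on the classpath:

```scala
// Hedged sketch of the unit-testing style the listing prefers (ScalaTest).
// PlanPricer is a made-up example class; scalatest is assumed on the classpath.
import org.scalatest.funsuite.AnyFunSuite

class PlanPricer { // hypothetical domain class
  def monthly(annualPremium: Double): Double = annualPremium / 12.0
}

class PlanPricerSpec extends AnyFunSuite {
  test("monthly premium is the annual premium divided by twelve") {
    assert(new PlanPricer().monthly(1200.0) == 100.0)
  }
}
```

With sbt, suites like this run under `sbt test`; ScalaCheck would add property-based tests on top of the same build.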

Effective written and verbal communication skills are absolutely required for this role. You must be able to work LEGALLY in the United States as NO SPONSORSHIP will be provided. NO 3rd PARTIES.

Job Type: Contract

Work authorization:

  • United States (Required)

Big Data Engineer

21 days ago
£600 - £650/day · Great rate · Remote · Harnham

Big Data Engineer
£600 - £650 per day
Initial 3-month contract
London / Sweden

As a Big Data Engineer you will be working heavily with Kafka for streaming purposes, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Engineer you will be assisting a Scandinavian client in introducing Kafka, and therefore your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Engineer, you will be heavily involved in introducing Kafka as a technology. Therefore it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Big Data Engineer you will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Big Data Engineer, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.
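The Kafka-plus-Scala pairing this role describes can be sketched with a minimal producer using the plain kafka-clients API; the broker address and the "events" topic are assumptions for illustration:

```scala
// Hedged sketch: a minimal Kafka producer in Scala via the plain kafka-clients
// API. Broker address and topic name are assumptions for illustration.
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object ProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumed local broker
    props.put("key.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer",
      "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Fire-and-forget send to a hypothetical "events" topic.
      producer.send(new ProducerRecord[String, String]("events", "key-1", "hello"))
    } finally {
      producer.close() // flushes pending records before shutdown
    }
  }
}
```

Confluent's Scala-friendly clients and Kafka Streams build on this same producer/consumer foundation.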

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £650 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Architect

25 days ago
£600 - £700/day · Great rate · Remote · Harnham

Big Data Architect
£600 - £700 per day
Initial 3-month contract
London / Sweden

As a Big Data Architect you will be introducing Kafka as a streaming technology for a financial client - from roadmapping to deployment.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Architect you will be assisting a Scandinavian client in helping them introduce Kafka and therefore your time will be split 50/50 between being based in Sweden and London - with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Architect, you will be heavily involved in introducing Kafka as a technology. Therefore it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Big Data Architect you will be working on the Kafka roadmap, in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Big Data Architect, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable. Though this will be a client-facing role, you must also be prepared to be hands-on when necessary, and therefore experience with Scala and Java is desirable.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Architect will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Engineer

25 days ago
£600 - £700/day · Great rate · Remote · Harnham

Big Data Engineer
£600 - £700 per day
Initial 3-month contract
London / Sweden

As a Big Data Engineer you will be working heavily with Kafka for streaming purposes, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Engineer you will be assisting a Scandinavian client in introducing Kafka, and therefore your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Big Data Engineer, you will be heavily involved in introducing Kafka as a technology. Therefore it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Big Data Engineer you will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Big Data Engineer, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Software Engineer

26 days ago
$60 - $70/hour (Estimated) · Herndon, VA · Amazon Web Services, Inc.
  • 4+ years of professional software development experience
  • 3+ years of programming experience with at least one modern language such as Java, C++, or C# including object-oriented design
  • 2+ years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
  • Bachelor's degree in Computer Science or related field
  • Minimum of eight years software development experience in a modern programming language, such as C, C++, Java, or Scala
  • Alternatively (no degree) minimum of ten years of professional software development experience
  • 3 years experience leading a team of software engineers in an agile development process
  • 3 years experience running services on Linux

Amazon Web Services (AWS) EC2 Core Platform is looking for experienced engineers to join our development team in Herndon, VA (DMV) or Seattle, WA. Next month the team will be in Atlanta for a hiring fair.
We build software core to the EC2 network virtualization systems supporting the massive AWS cloud. We program mostly in Java and use scripting (Ruby/Python/etc.) whenever possible to automate. Each individual on our team is expected to be a leader: be proactive in acting on behalf of our customers to exceed expectations on quality, speed of delivery, and communication. We challenge each other to put forth deliberate efforts to grow and improve our skills. At the end of the day, we know what's important in our lives: family, friends, and recreation; creating a healthy balance is important to maintain the high level of execution we expect from each other.

To learn more about EC2, check us out here: https://aws.amazon.com/ec2/.


  • Computer Science fundamentals in data structures, problem solving, algorithm design and analysis
  • Expert skill in one modern programming language such as C, C++, Java, or Python and proficiency with one other programming language
  • Networking experience (HTTP, UDP, TCP/IP) and/or virtualization
  • Experience building complex software systems that have been successfully delivered to customers
  • Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • Ability to take a project from scoping requirement through actual launch of the project
  • Strong distributed systems and web services design and implementation experience

For more information, contact me:
Twitter: DariusAWS
LinkedIn: https://www.linkedin.com/in/dariusr/

"Amazon is an Equal Opportunity Employer - Minority / Women / Disability / Veteran / Gender Identity / Sexual Orientation / Age."

Data Scientist

1 month ago
$55 - $70/hour (Estimated) · Remote · Stage 19 Management

MOST IMPORTANT PART ABOUT THE GIG: We believe in making yourself rich, not just your company rich. That's why, for this position, you and your team (3-6 people) will be given 33.3% of all distributed profits, in addition to an employee stock option program. There is no salary or hourly rate for this position. You can work remotely, and part time when you have the time.

Responsibilities:

  • Design and implement data science systems (ML, AI, optimization, and statistics)
  • Improve the scale and efficiency of data science approaches
  • Pair with data engineers to design deployments and pipelines
  • Elevate team technical craft with peer reviews, paper reading, and tech prototyping
  • Identify new product opportunities and collaboratively develop them with partners
  • Represent data products to non-technical partners and collaborators

Desired Qualifications:

  • 5+ years data science industry experience
  • 3+ years "productionalizing" code into live systems
  • 2+ years hands on experience with petabyte-processing-capable technologies such as Spark or Hadoop
  • Experience setting up and interacting with cloud infrastructure (e.g., AWS)
  • Professional proficiency in Python and SQL
  • An MS or PhD in Engineering, Computer Science, Physics, Statistics, or related field

Applicable Technologies/Skills:

  • Understand trade-offs between data infrastructure and database systems
  • Familiarity with iterative development methodologies like Scrum, Agile, etc.
  • Familiarity with Java, Scala, C++
  • Familiarity with git

Job Type: Contract

Experience:

  • Data Scientist: 1 year (Required)

Education:

  • Master's (Preferred)

IT - Applications Development Consultant III

1 month ago
$50 - $65/hour · Remote · Essani International

Title: Data Engineer

Location: Franklin, Tennessee 37067

Duration: 6+ months Contract Role

Mode of Interview: Webex/Skype

Cloud and Big Data related Projects

BIE - Analytics

What are the top 5-10 responsibilities for this position:

  • Designing and building production data pipelines from ingestion to consumption within a big data architecture, using Azure Services, Python, Scala or other custom programming
  • Provide guidance around modern data warehouse design, implementation for migration from on premises to Azure
  • Perform detail assessments of current state data platform and create an appropriate transition path to Cloud technologies
  • Responsible for developing, and implementing Big Data platforms using Cloud Platform with structured and unstructured data sources.
  • Connecting and automating data sources, along with building visualizations.
  • Configuring, connecting, and setting up the infrastructure.
  • What software tools/skills are needed to perform these daily responsibilities?
  • Python, Scala, Azure, ELT, ETL

What skills/attributes are a must have?

  • 3+ years of Data Warehousing and Big Data tools and technologies
  • Demonstrated expertise with object-oriented development languages (.NET, Java, etc.) preferred
  • 1+ years of experience with programming languages such as Python, Java, Scala
  • 2+ years of advanced experience with both relational and NoSQL databases
  • 3+ years of advanced experience in Data Modeling, Data Structures, and Algorithms
  • Experience with cloud environments such as AWS or Azure
  • Advanced knowledge of Linux & shell programming

What skills/attributes are nice to have?

  • Strong analytical skills in problem solving, troubleshooting, and issue resolution
  • Ability to communicate effectively (oral and written) across multiple teams, facilitate meetings, and coordinate activities
  • Experience with Kafka; understanding of CI/CD tools and technologies
  • Experience in Data Sciences & Machine Learning is a plus
  • Healthcare industry experience
  • Where is the work to be performed? (Please list preferred UHG facility, if other please specify i.e. remote work, rural, etc.)

Job Type: Contract

Salary: $50.00 to $65.00 /hour

Experience:

  • Python, Java, Scala: 1 year (Preferred)
  • .NET: 1 year (Preferred)
  • AWS: 1 year (Preferred)
  • Scala: 1 year (Preferred)
  • Machine Learning: 1 year (Preferred)
  • Cloud Environments such as AWS OR Azure: 1 year (Preferred)
  • Relational and NoSQL Databases: 2 years (Preferred)
  • Data Modeling, Data Structures, and Algorithms: 3 years (Preferred)
  • Java: 1 year (Preferred)
  • Data Sciences & Machine Learning: 1 year (Preferred)
  • Healthcare industry: 1 year (Preferred)
  • Kafka; CI/CD tools and technologies: 1 year (Preferred)

Work Location:

  • One location

Kafka Data Engineer

1 month ago
£600 - £700/day · Great rate · Remote · Harnham

Kafka Data Engineer
£600 - £700 per day
Initial 3-month contract
London / Sweden

As a Kafka Data Engineer you will be working heavily with Kafka for streaming purposes, alongside Scala for programming.

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Data Engineer you will be assisting a Scandinavian client in introducing Kafka, and therefore your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Kafka Data Engineer, you will be heavily involved in introducing Kafka as a technology. Therefore it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. As a Kafka Data Engineer you will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments. It is most valuable if you have good exposure to either AWS or Azure as a platform. Though Kafka will be the main technology you will be focusing on introducing as a Kafka Data Engineer, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.

YOUR SKILLS AND EXPERIENCE:

The successful Kafka Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/ Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Big Data Architect

1 month ago
£650 - £700/day · Great rate · Remote · Harnham

Big Data Architect
£650 - £700 per day
Initial 3-month contract
London / Sweden

As a Big Data Architect you will be helping to create the Kafka architecture and outline the strategy for migration to the cloud!

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Big Data Architect you will be assisting a Scandinavian client in introducing Kafka, and therefore your time will be split 50/50 between Sweden and London, with the option to work remotely. You will be working in an agile environment alongside Data Engineers.

THE ROLE:

As a Big Data Architect, your main responsibility will be creating the Kafka architecture, from design to implementation. Therefore it is imperative that you have extensive experience with Kafka for large implementations, ideally Confluent Kafka. As a Big Data Architect it is essential you have a good understanding of technologies such as Spark and Hadoop, as you will be helping to implement these. You will be working in both on-premise and cloud environments, and so it is valuable if you have worked with either AWS or Azure as a platform. Though you will be heavily involved in the planning and writing of roadmaps, you must be prepared to be hands-on, and therefore previous experience programming in Scala / Java is valuable. As you will be working for a consultancy, it is essential that you are confident speaking with non-technical people, as this role will be very client-facing.

YOUR SKILLS AND EXPERIENCE:

The successful Big Data Architect will have the following skills and experience:

  • Extensive experience implementing strategies using Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/Hadoop
  • Experience speaking to stakeholders
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

Data Engineer

1 month ago
£600 - £700/dayRemoteHarnham

Data Engineer
£600-£700 per day
Initial 3 month contract
London/ Sweden

As a Data Engineer you will be working heavily with Kafka for streaming purposes alongside Scala for programming

THE COMPANY:

You will be working for a leading consultancy who specialise in Data Engineering, DevOps and Data Science. As a Data Engineer you will be assisting a Scandinavian client in introducing Kafka, so your time will be split 50/50 between Sweden and London - with the option to work remotely. You will be working in an agile environment alongside a Data Architect and other Data Engineers.

THE ROLE:

As a Data Engineer, you will be heavily involved in introducing Kafka as a technology, so it is imperative that you have extensive experience with Kafka, ideally Confluent Kafka. You will be coding primarily in Scala, with some Java, and will be working in both on-premise and cloud environments; good exposure to either AWS or Azure as a platform is most valuable. Though Kafka will be the main technology you focus on introducing, experience with Spark and exposure to big data platforms such as Hadoop is highly valuable.
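
Day to day, streaming work like this largely reduces to keyed, stateful aggregation over event streams. A minimal plain-Scala sketch of that pattern, with no Kafka or Spark dependency (`Event` is a hypothetical type standing in for a deserialized record):

```scala
object StreamSketch {
  // Hypothetical event type standing in for a deserialized Kafka record
  final case class Event(key: String, value: Long)

  // Per-key count: the kind of stateful aggregation a Kafka Streams
  // or Spark job would maintain over an unbounded stream
  def countByKey(events: Seq[Event]): Map[String, Int] =
    events.groupBy(_.key).map { case (k, es) => k -> es.size }

  def main(args: Array[String]): Unit = {
    val events = Seq(Event("a", 1), Event("b", 2), Event("a", 3))
    println(countByKey(events)) // counts: a -> 2, b -> 1
  }
}
```

In a real pipeline the input would be unbounded and the state windowed and checkpointed, but the aggregation logic itself stays this shape.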

YOUR SKILLS AND EXPERIENCE:

The successful Data Engineer will have the following skills and experience:

  • Extensive experience with Kafka
  • Good knowledge of either AWS or Azure
  • Good experience coding in Scala and Java
  • Previous experience using Big Data technologies such as Spark/Hadoop
  • The ideal candidate will also have commercial experience in Confluent Kafka

THE BENEFITS:

  • A competitive day rate of up to £700 per day

HOW TO APPLY:

Please register your interest by sending your CV to Anna Greenhill via the Apply link on this page.

GIS Software Engineer

1 month ago
Reston, VA 20191Octo Consulting Group

Our team is what makes Octo great. At Octo you'll work beside some of the smartest and most accomplished staff you'll find in your career. Octo offers fantastic benefits and an amazing workplace culture where you will feel valued while you perform mission critical work for our government. Voted one of the region's best places to work multiple times, Octo is an employer of choice!

Octo Consulting Group has an immediate need for a GIS Software Engineer. In this role, the successful candidate will apply their experience with Java, Scala, and Python to develop mission critical GIS applications. Preferably, they will also have experience with Spark as well as some familiarity with AWS or PCF. The software engineer must have adequate domain knowledge and hands-on experience in developing and implementing software programs. As a mid-level coder, this role is responsible for maintaining and improving the performance of existing software code, with duties to write and update software code under contract and direction from the assigned Government Product Manager. Clear communication skills are required, as is an astute ability to write test scripts in an agile software development environment, where building in automated test procedures alongside functional code is paramount to continuous integration and continuous delivery of software. The role also involves testing and maintaining software products to ensure strong functionality and optimization, and recommending improvements to existing software programs as necessary. The mid-level coder shall be capable of performing the software tasks identified in the contract requirements while forming and working on Government/Contractor software coding teams.

  • 4 years' experience in full stack development to include Java, Web services, Database, and micro-service development.
  • 3 years' experience with agile and lean philosophies, serving as scrum or team lead.
  • Experience working independently with clients or stakeholders conducting interviews, observations, and surveys, to develop user-stories in support of full-service consumer and business applications.
  • Experience with Continuous Delivery and Continuous Integration (CI/CD) techniques, test-driven development, and automated testing practices.
  • Development of customized code, scripts, modules, macro procedures, and libraries to implement specialized spatial analysis functions using Python, Java, and Scala
  • Understanding of and familiarity with current and developing Geospatial data formats
  • Understanding and familiarity with Government and Military common geospatial formats
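
As an illustration of the kind of "specialized spatial analysis functions" the bullets above describe, here is a self-contained great-circle (haversine) distance in Scala. This is a standard geospatial primitive, not code from this role; the coordinates in the example are only illustrative:

```scala
object GeoSketch {
  // Mean Earth radius in kilometres (spherical approximation)
  private val EarthRadiusKm = 6371.0

  // Great-circle distance between two WGS84 lat/lon points (haversine),
  // a common building block for spatial analysis functions
  def haversineKm(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double = {
    import math._
    val dLat = toRadians(lat2 - lat1)
    val dLon = toRadians(lon2 - lon1)
    val a = pow(sin(dLat / 2), 2) +
      cos(toRadians(lat1)) * cos(toRadians(lat2)) * pow(sin(dLon / 2), 2)
    EarthRadiusKm * 2 * asin(sqrt(a))
  }

  def main(args: Array[String]): Unit = {
    // Reston, VA to Washington, DC is roughly 30 km
    val d = haversineKm(38.9586, -77.3570, 38.9072, -77.0369)
    println(f"$d%.1f km")
  }
}
```

Production GIS code would typically delegate this to a library such as PostGIS or GeoTools rather than hand-rolling it, but the formula is what those tools compute under the hood for spherical distance.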

    Desired Education/Experience:

  • BS or equivalent in Computer Science, Engineering, Mathematics or equivalent technical degree.
  • 5 years' experience in full stack development to include Java, Web services, Database, and micro-service development.
  • 4 years' experience with agile and lean philosophies, serving as scrum or team lead.
  • Integration and tailoring of geospatial Commercial Off-The-Shelf (COTS) software applications; specialized software and database development and maintenance; integration of related specialized hardware; engineering studies to identify and remedy geodata processing bottlenecks.
  • Expertise in multiple Commercial, Open Source and Government created Geospatial Information Systems/Platforms
  • Use and development with common geospatial tools, data, and operating platforms. These may include, but are not limited to:
    • Tools - Boundless Spatial suite, Remote View, Postgres, and/or other geospatial databases, ArcGIS Desktop, ArcGIS Server, Image Server, and File Geodatabases.
    • Data formats including KML, KMZ, NITF, TIFF, JPEG, GeoPDF, and similar geo-related formats.
    • Operating Platforms - ESRI, OpenGeo Suite, or similar
    • Advanced knowledge of and ability to work and develop with geospatial information systems (GIS) to include open-source and proprietary geospatial formats.
  • A TS//SCI level, or above, security clearance is desired.

Octo is a growth-oriented firm that provides a unique, differentiated employee culture relative to our Federal market peers. We leverage this culture to attract and retain a higher grade of talent than our peers to be an employer of choice.


Please no Corp-to-Corp or 1099 candidates; this position is W-2 only. Octo Consulting Group is an Equal Opportunity/Affirmative Action employer. All qualified candidates will receive consideration for employment without regard to disability, protected veteran status, race, color, religious creed, national origin, citizenship, marital status, sex, sexual orientation/gender identity, age, or genetic information. Selected applicant will be subject to a background investigation.

Java AWS Developer

1 month ago
$55 - $70/hour (Estimated)Herndon, VA 20170SV Professionals LLC

Job title:

Java Developer(AWS)

Location:

Tampa FL

Duration:

12+ Months

Job Description:

8+ Years Experience

Must have SQL and AWS experience.

Strong in Python, Scala, or Java, preferably Python.

Good experience in SQL - prefer someone who has experience with Postgres.

AWS and Docker/container experience is good to have.

Job Type: Contract

Experience:

  • software development: 1 year (Preferred)
  • Java: 1 year (Preferred)

Work authorization:

  • United States (Preferred)

Work Location:

  • One location