· AWS Lambda
· Java Core Experience
· DynamoDB or MongoDB
· AWS API Gateway
Job Type: Contract
Design, develop, test and deploy the API solution
Ensure quality deliverables following the defined API standards
Escalate issues as appropriate
Frame deliverables for the sprint and estimate time and effort
Perform life cycle management of APIs
Bachelor’s Degree in Computer Science or equivalent experience
5+ years of strong software development experience
AWS Lambda experience
Strong understanding of various API development environments
Working knowledge of automating CI/CD pipelines
Strong understanding of data structures and integration patterns
Microservice based architecture exposure would be a plus
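For context on the Lambda and API Gateway skills listed above, a minimal handler might look like the following. This is an illustrative sketch only (the posting asks for Java; Python is used here for brevity), and the event fields follow the standard API Gateway proxy integration shape:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy integration.

    Reads an optional ?name= query parameter and returns the standard
    {statusCode, headers, body} response shape API Gateway expects.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Deploying such a handler behind an API Gateway route is what "life cycle management of APIs" typically refers to: versioning, staging, and retiring routes like this one.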
Pay: $60.00 - $70.00 per hour
Full Stack Developer - Node.js / AWS / React
Full Stack Developer needed for a leading digital agency in London, where you'll use Node.js and React to develop new features and applications for one of their client's core commercial products.
The role is based remotely.
Please apply now for more details.
As a Senior Big Data Engineer, you will develop new data engineering
patterns that leverage new cloud architectures, and will extend or
migrate existing data pipelines to those architectures as needed.
You will be responsible for designing and implementing complex ETL
pipelines on a big data platform and other BI solutions to support the
rapidly growing and dynamic business demand for data, delivering data
as a service that has an immediate influence on day-to-day decision
making at Amazon.com.
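The ETL pipelines described above would in practice be built with Spark or Glue; the three-stage shape can be sketched in plain Python. All record fields and function names here are hypothetical, for illustration only:

```python
# Toy extract-transform-load sketch. Real pipelines at this scale would
# run on Spark/EMR or Glue; the staging is the same in miniature.

def extract(raw_rows):
    """Extract: parse raw "order_id,amount" strings into records."""
    for row in raw_rows:
        order_id, amount = row.split(",")
        yield {"order_id": order_id, "amount": float(amount)}

def transform(records):
    """Transform: drop invalid (non-positive) amounts, derive a total."""
    for r in records:
        if r["amount"] > 0:
            yield {**r, "total": round(r["amount"] * 1.08, 2)}

def load(records, store):
    """Load: write each record into a keyed store (stand-in for a table)."""
    for r in records:
        store[r["order_id"]] = r
    return store

store = load(transform(extract(["a1,10.0", "a2,-3.0", "a3,5.5"])), {})
```

"Data as a service" then amounts to exposing `store` (in reality, a warehouse table) behind a query interface that downstream teams consume directly.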
This position requires a Bachelor's Degree in Computer Science or a
related technical field, and 5+ years of relevant industry experience.
· 5+ years of relevant work experience in Big data engineering, ETL,
Data Modeling, and Data Architecture.
· Expert-level skills in writing and optimizing SQL.
· Experience with Big Data technologies such as Hive/Spark, AWS EMR,
AWS Glue, AWS Lambda and Kinesis.
· Proficiency in at least one scripting language - Python, Ruby, Java, or similar.
· Experience operating very large data warehouses, data lakes and
building streaming data pipelines.
· Proven interpersonal skills; a standout colleague.
· A real passion for technology. We are looking for someone who is
keen to demonstrate their existing skills while trying new approaches.
· Master's or Bachelor's degree in Computer Science, Engineering, or a related field
· Expert in ETL optimization - designing, coding, and tuning big
data processes using Apache Spark or similar technologies.
· Experience with building data pipelines and applications to stream
and process datasets at low latencies.
· Demonstrated efficiency in handling data - tracking data lineage,
ensuring data quality, and improving the discoverability of data.
· Sound knowledge of distributed systems and data architecture
(Lambda architecture) - design and implement batch and stream data
processing pipelines, and optimize the distribution, partitioning, and
MPP processing of high-level data structures.
· Knowledge of Engineering and Operational Excellence using standard
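The Lambda architecture mentioned in the requirements above pairs a periodically recomputed batch view with an incremental speed layer, merged at query time. A minimal sketch (illustrative only; in practice the batch layer would be Spark/EMR and the speed layer Kinesis):

```python
from collections import Counter

def batch_view(historical_events):
    """Batch layer: periodically recompute counts over all history."""
    return Counter(e["key"] for e in historical_events)

def speed_view(recent_events):
    """Speed layer: count events arrived since the last batch recompute."""
    return Counter(e["key"] for e in recent_events)

def query(batch, speed, key):
    """Serving layer: merge batch and real-time views at query time."""
    return batch.get(key, 0) + speed.get(key, 0)

# Three historical "clicks", one recent "clicks", one recent "views".
history = [{"key": "clicks"}] * 3
recent = [{"key": "clicks"}, {"key": "views"}]
assert query(batch_view(history), speed_view(recent), "clicks") == 4
```

The design choice being tested for here is exactly this split: the batch layer gives correctness over full history, while the speed layer keeps query results fresh between batch runs.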
Job Types: Temporary, Contract
Pay: $100,239.00 - $131,000.00 per year
Full Time Opportunity: