
3 remote Hadoop contracts

Scala Expert Developer/Engineer

3 days ago
Remote · Next Ventures
  • Practice Development & Integration

  • Technologies Design Development Skills

  • Scala Expert Developer/Engineer – Netherlands – Start ASAP – Remote work

    Short contract with extensions possible.

    We are currently looking for a Developer (Scala/PySpark, Kafka).

    Job Purpose and primary objectives:

    Key responsibilities (please specify if the position is an individual one or part of a team):
    The associate should have good knowledge of the Big Data/Hadoop ecosystem, Kafka, Scala and PySpark.

    Key Skills/Knowledge:

    Primary skills: Data Collector, Data Collector Edge, banking domain knowledge, Big Data, Kafka, Scala, PySpark and Hadoop (a minimal ingestion sketch follows below).

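    In practice, the stack above usually means reading events off Kafka and landing them on Hadoop. A minimal sketch of that pattern in Scala with Spark Structured Streaming (the broker address, topic name and HDFS paths are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object KafkaIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-ingest").getOrCreate()

    // Subscribe to a Kafka topic; broker and topic are placeholder values.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "transactions")
      .load()

    // Kafka delivers key/value as binary; cast the payload to a string.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Land the stream on HDFS as Parquet for downstream reporting.
    events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/landing/transactions")
      .option("checkpointLocation", "hdfs:///data/checkpoints/transactions")
      .start()
      .awaitTermination()
  }
}
```

    The same pipeline could equally be written in PySpark; the listing treats the two interchangeably.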



    Spotfire Developer

    3 days ago
    Remote · Next Ventures
  • Practice Development & Integration

  • Technologies Design Development Skills

  • Spotfire Developer – Pharmaceuticals – Contract – 12 months – Remote working – India/Bangalore

    Job Title: Tibco Spotfire Developer

    Next Ventures seeks two skilled Spotfire Developers for a long-term project with a European client, delivered from Bangalore.

    Roles & Responsibilities:

    The candidate will be a critical development and support resource, using Tibco Spotfire to deliver integrated reporting and analytics services. The candidate will also help train end users and the core internal and external development teams, and support building visualizations using Spotfire, Python, SAS and R.

    • Work with analytics, account and customer stakeholders to define and assimilate requirements, and to design and architect the client's Spotfire reporting and measurement solution(s). Create and support the application of Spotfire development best practices and assist in troubleshooting complex challenges.
    • Develop reusable, medium-complexity reports and visualizations, reusable data sets for reporting, and analytic integrations, working with the customer, internal teams, and analytic/data scientists where required.
    • Lead training of users of varying roles on the use of Tibco Spotfire and the reporting solution developed. Assist with the development of reusable client training materials to be used in training sessions.
    • Work with internal technology teams to optimize Spotfire and big data environments to support the reporting and analytics solutions.

    Qualification, Skills & Experience:

    • Bachelor’s Degree in Informatics, Information Systems, Engineering or a related field, with 6+ years’ relevant experience
    • Proven specialist knowledge of the pharma development process, e.g. Regulatory Affairs, Biostatistics, Drug Safety, Clinical Operations
    • Extensive knowledge of technologies and trends related to research and development, including big data, advanced analytics and the Internet of Things
    • Prior experience completing a full computerized systems validation and testing methodology, with awareness of the risks, issues, complications and activities involved in these processes
    • Versed in TIBCO Spotfire best practices and able to incorporate and influence their use in reporting solutions
    • Advanced Oracle SQL, data visualization, modelling and wrangling skills using Spotfire are required
    • Strong customer-facing skills; experienced in training end users of all levels in the use of Tibco Spotfire
    • Prior experience integrating R analytics, Python/CSS/HTML and JavaScript, and developing integrated data solutions to support high-performance reporting and analytics
    • Experienced working in iterative and agile reporting environments
    • Versed in Tibco Spotfire Server administration, configuration and troubleshooting
    • Leading, influencing and consulting skills
    • Advanced SQL and data blending skills preferred

    Expert-level Tibco Spotfire report and dashboard development experience in the biotech industry, with exposure to Informatica, Hadoop and other related big data technologies.
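    The "data blending" and Hadoop exposure mentioned above typically amount to joining Oracle-resident reference data with datasets already landed on the cluster, then publishing a table a Spotfire report can query. A minimal sketch, assuming Spark is available on the Hadoop side (connection details, table and path names are all hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object BlendSources {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("blend-sources").getOrCreate()

    // Clinical results already landed on Hadoop as Parquet (hypothetical path).
    val results = spark.read.parquet("hdfs:///data/clinical/results")

    // Reference data pulled from Oracle over JDBC (hypothetical connection details).
    val studies = spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "STUDY_MASTER")
      .option("user", "report_user")
      .option("password", sys.env("ORACLE_PW")) // read the secret from the environment
      .load()

    // Blend the two sources into one reporting table for Spotfire to consume.
    results.join(studies, Seq("study_id"))
      .write.mode("overwrite")
      .parquet("hdfs:///data/reporting/results_by_study")
  }
}
```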

    Excellent written and verbal communication skills; able to communicate effectively with internal and external contact persons.

    Excellent negotiation and conflict management skills. Analytical skills, quick perception and excellent judgment; able to identify risks and problems, to develop adequate problem-solving strategies even in complex situations, and to take appropriate measures when required. Strategic thinking beyond own function; is familiar with and considers overall business objectives and company strategy.

    Call or email for more info:

    Matt@next-ventures.com
    +442075494034


    Senior Data Engineer – Scala, Spark, NoSQL

    1 month ago
    £490 - £600/day (wellpaid.io estimate) · Remote · Salt Search

    Senior Data Engineer (Freelance Contractor needed) - Banking Client - Brussels

    Rate: 700 - 900 per day

    Duration: 1 year contract

    *** Remote working until the end of 2021; must be prepared to work onsite in Belgium in 2022 ***

    Job description

    The Advanced Analytics team is currently looking for a Senior Data Engineer with design skills whose core objectives will be to:

    - Collect, clean, prepare and load the necessary data onto Hadoop, our Data Analytics Platform, so that it can be used for reporting purposes: creating insights and responding to business challenges.

    - Act as a liaison between the team and other stakeholders, and contribute to supporting the Hadoop cluster and the compatibility of all the different software that runs on the platform (Scala, Spark, Python, …).

    • Identify the most appropriate data sources to use for a given purpose and understand their structures and contents, in collaboration with subject matter experts.
    • Extract structured and unstructured data from the source systems (relational databases, data warehouses, document repositories, file systems, …), prepare such data (cleanse, re-structure, aggregate, …) and load it onto Hadoop (sketched after this list).
    • Actively support the reporting teams in the data exploration and data preparation phases.
    • Implement data quality controls and, where data quality issues are detected, liaise with the data supplier for joint root cause analysis.
    • Autonomously design data pipelines, develop them and prepare the launch activities.
    • Properly document your code, and share and transfer your knowledge with the rest of the team to ensure a smooth transition into maintenance and support of production applications.
    • Liaise with IT infrastructure teams to address infrastructure issues and to ensure that the components and software used on the platform are all consistent.
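    As a concrete illustration of the extract/prepare/load cycle described in this list: a minimal Spark batch job in Scala (the file layout, column names and paths are all hypothetical), including the kind of simple data quality control the fourth bullet describes:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object LoadTrades {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("load-trades").getOrCreate()

    // Extract: a raw CSV export from a source system.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/trades")

    // Prepare: cleanse and re-structure before loading.
    val cleansed = raw
      .filter(col("trade_id").isNotNull) // drop unusable rows
      .withColumn("trade_date", to_date(col("trade_date"), "yyyy-MM-dd"))
      .dropDuplicates("trade_id")

    // Data quality control: flag rows the date parser rejected.
    val badDates = cleansed.filter(col("trade_date").isNull).count()
    if (badDates > 0) println(s"DQ warning: $badDates rows with unparseable dates")

    // Load: aggregate and land on Hadoop as Parquet for the reporting teams.
    cleansed
      .groupBy("trade_date", "desk")
      .agg(sum("notional").as("total_notional"))
      .write.mode("overwrite")
      .parquet("hdfs:///data/curated/trades_daily")
  }
}
```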

    Qualifications

    Required skills

    • Experience with analysis and creation of data pipelines, data architecture, ETL/ELT development and with processing structured and unstructured data
    • Proven experience with using data stored in RDBMSs and experience or good understanding of NoSQL databases
    • Ability to write performant Scala code and SQL statements (see the SQL sketch after this list)
    • Ability to design with focus on solutions that are fit for purpose whilst keeping options open for future needs
    • Ability to analyze data, identify issues (e.g. gaps, inconsistencies) and troubleshoot these
    • Have a true agile mindset; be capable of and willing to take on tasks outside of their core competencies to help the team
    • Experience in working with customers to identify and clarify requirements
    • Strong verbal and written communication skills, good customer relationship skills
    • Strong interest in the financial industry and related data.
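    A small sketch of the Scala-plus-SQL pairing from the list above: Scala driving a Spark SQL statement over a curated dataset (the view, column and path names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object SqlOverHadoop {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sql-over-hadoop").getOrCreate()

    // Expose a curated Parquet dataset to SQL.
    spark.read.parquet("hdfs:///data/curated/trades_daily")
      .createOrReplaceTempView("trades_daily")

    // Plain SQL, issued from Scala.
    spark.sql(
      """SELECT desk, SUM(total_notional) AS notional
        |FROM trades_daily
        |GROUP BY desk
        |ORDER BY notional DESC
        |LIMIT 10""".stripMargin
    ).show()
  }
}
```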

    The following will be considered assets:

    • Knowledge of Python and Spark
    • Understanding of the Hadoop ecosystem including Hadoop file formats like Parquet and ORC
    • Experience with open source technologies used in Data Analytics like Spark, Pig, Hive, HBase, Kafka, …
    • Ability to write MapReduce & Spark jobs (see the sketch below)
    • Knowledge of Cloudera
    • Knowledge of IBM mainframe
    • Knowledge of agile development methods such as Scrum
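    For the "MapReduce & Spark jobs" item, here is the classic MapReduce pattern expressed as a Spark RDD job in Scala (input and output paths are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object LogWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("log-word-count").getOrCreate()
    val sc = spark.sparkContext

    // Map each line to (word, 1) pairs, then reduce by key:
    // the same shape as a hand-written MapReduce job, in a few lines.
    sc.textFile("hdfs:///data/logs/app.log")
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile("hdfs:///data/out/word-counts")

    spark.stop()
  }
}
```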