Key Responsibilities
Build, maintain, and evolve our graph ecosystem: create graph models, feed data at scale by connecting nodes and relationships, and enable graph algorithms at scale
Apply your knowledge of and experience with graph technologies (e.g. Neo4j, SPARQL, GraphQL, REST) to our problem space
Build fundamental traversal algorithms (breadth-first, depth-first) across graphs with varied properties: directed/undirected, cyclic/acyclic, weighted/unweighted, sparse/dense
Collaborate with the data science engineering team to enable new graph engineering and graph analytics functionality, leveraging Spark, GraphX, and Spark SQL through the Neo4j Spark connector
Invest in deeply understanding the use cases of teams across a range of dimensions, such as data modeling, data fetching and mutation, error handling, cache management, and performance
Improve productivity through better tooling and insights into our graph systems: discoverability, versioning, error rates, and frequency analysis
Optimize server and protocol performance to improve efficiency and reduce latency
Partner with central engineering teams to build, integrate, and/or evolve platform and infrastructure
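To make the graph-modeling responsibility above concrete, here is a minimal sketch of a property-graph model of the kind Neo4j popularized: labeled nodes and typed, directed relationships, each carrying arbitrary properties. All class and field names here are illustrative assumptions, not part of any specific codebase.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    # Each node carries a label (its type) and arbitrary key/value properties.
    node_id: int
    label: str
    properties: dict = field(default_factory=dict)


@dataclass
class Relationship:
    # A directed, typed edge between two nodes, with its own properties.
    start: int
    end: int
    rel_type: str
    properties: dict = field(default_factory=dict)


class PropertyGraph:
    """In-memory property graph: nodes plus typed relationships."""

    def __init__(self):
        self.nodes = {}
        self.rels = []
        self.adjacency = {}  # node_id -> list of (neighbor_id, rel_type)

    def add_node(self, node):
        self.nodes[node.node_id] = node
        self.adjacency.setdefault(node.node_id, [])

    def add_relationship(self, rel):
        self.rels.append(rel)
        self.adjacency[rel.start].append((rel.end, rel.rel_type))


# Usage: two people connected by a KNOWS relationship.
g = PropertyGraph()
g.add_node(Node(1, "Person", {"name": "Ada"}))
g.add_node(Node(2, "Person", {"name": "Grace"}))
g.add_relationship(Relationship(1, 2, "KNOWS", {"since": 2020}))
print(g.adjacency[1])  # [(2, 'KNOWS')]
```

The adjacency map keeps neighbor lookups O(1) per node, which is the shape traversal algorithms consume directly.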
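The traversal-algorithm responsibility can be sketched as the two canonical graph walks, breadth-first and depth-first, over a plain adjacency list. This is a generic illustration of the technique, assuming nothing about the actual graph stack in use; the adjacency-list shape (node -> list of neighbors) covers directed graphs directly, and undirected ones by listing each edge in both directions.

```python
from collections import deque


def bfs(adj, start):
    """Breadth-first traversal: visit nodes in order of hop distance."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order


def dfs(adj, start):
    """Iterative depth-first traversal over the same adjacency-list shape."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push neighbors in reverse so they are explored in listed order.
        for nbr in reversed(adj.get(node, [])):
            if nbr not in visited:
                stack.append(nbr)
    return order


# A small directed, cyclic graph; the visited sets keep cycles from looping.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["a"]}
print(bfs(graph, "a"))  # ['a', 'b', 'c', 'd']
print(dfs(graph, "a"))  # ['a', 'b', 'd', 'c']
```

Both run in O(V + E); weighted variants swap the queue for a priority queue (Dijkstra), but the visited-set skeleton stays the same.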