With the widespread adoption of Rest.li since its inception in 2013, LinkedIn has built thousands of microservices to enable the exchange of data with our engineers and our external partners. Though this microservice architecture has worked out really well for our API engineers, when our clients need to fetch data they find themselves talking to several of these microservices. Over time, this has become a challenge, as every client faces a few key issues:

- Figuring out which microservice serves the right data and making several round trips to fetch the data they need.
- Addressing partial failures and resilience issues due to multiple network calls in the distributed microservices architecture.
- Dealing with inefficiencies due to duplicated downstream calls while resolving a tree of nodes.

In our previous blog post on GraphQL, we explained how LinkedIn uses GraphQL to expedite the process of onboarding new use cases for external API partners. In this blog post, we will cover how the GraphQL layer is architected for use by our internal engineers to build member- and customer-facing applications. Specifically, we will dive into some of the architectural choices that are unique to LinkedIn and why we made each one of them.

To address the client-side issues mentioned earlier, LinkedIn created an internal library called Deco, which allowed our client engineers to express the data they wanted using a proprietary query language and let Deco figure out an efficient way to fetch the requested data. Since it was built as a library, several of our mid-tier services adopted Deco to hand off the complex logic of fanning out and fetching data from downstream services. Though Deco addressed the functional gaps mentioned above, its usability problems became pronounced when LinkedIn adopted the technology on our frontend stack (commonly referred to as "backend-for-frontend" or "BFF"), where hundreds of our frontend client engineers started using the proprietary query language to express their application data needs.

We started noticing some real shortcomings of Deco, which had a direct impact on our productivity. Because Deco was schema-less, client queries and response data couldn't be validated, which caused unexpected issues in production. On the developer experience front, Deco's cryptic query language made writing, testing, and maintaining queries hard, especially with no developer tools to assist our engineers during development. While Deco was growing in application and use, we saw an alternative, GraphQL, emerging in the industry.

GraphQL had already become public, and the industry was showing widespread adoption. We had two choices in hand: invest more in Deco to address the pressing issues, or adopt GraphQL and build the necessary infrastructure around it for our needs. When choosing GraphQL as a technology, we took sufficient time to understand how GraphQL would fit into our current tech stack and the best way to adopt it without setting aside the infrastructure we had invested in and built over time. We also made sure GraphQL wasn't solving a specific problem for a specific team, but rather introducing a paradigm shift that would boost productivity across the company. After careful consideration and discussions with several partner teams, we decided to adopt GraphQL at LinkedIn.

Moving to GraphQL was a huge initiative that changed the development workflow for thousands of engineers who work tirelessly building products at LinkedIn. To keep the core team's focus clear and the logistics under our control, we decided to start with a smaller scope: enabling only read operations through GraphQL and targeting only our frontend stack. Taking this approach helped us reduce distractions while we built the infrastructure for all of our future needs at LinkedIn.

The GraphQL architecture we use at LinkedIn is unique due to some of the decisions we made during the design phase. Here are some of the primary differences compared to the widely used architecture in the industry:

- The GraphQL type system used at LinkedIn is completely autogenerated.
- The GraphQL query execution endpoint is distributed and available on each individual frontend microservice.
- Only pre-registered queries are allowed for execution on our production servers.

In the next few sections, we will go over why we took this approach while also introducing the different components that enabled us to adopt GraphQL with this architecture.
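To make the pre-registered query model concrete, here is a minimal sketch in Python of how such a gate can work. All names (`register_query`, `execute`, the registry, the SHA-256 IDs) are hypothetical illustrations, not LinkedIn's actual implementation: queries are registered at build time, and production servers accept only the IDs of registered queries, never raw query text.

```python
# Hypothetical sketch of a pre-registered (persisted) query gate.
# Names and structure are illustrative, not LinkedIn's implementation.
import hashlib

# Populated at build/deploy time from queries checked into client code.
QUERY_REGISTRY: dict[str, str] = {}


def register_query(query_text: str) -> str:
    """Register a query at build time and return its stable ID."""
    query_id = hashlib.sha256(query_text.encode()).hexdigest()[:16]
    QUERY_REGISTRY[query_id] = query_text
    return query_id


def execute(query_id: str, variables: dict) -> dict:
    """Production servers execute only pre-registered queries."""
    query_text = QUERY_REGISTRY.get(query_id)
    if query_text is None:
        raise PermissionError(f"Query {query_id!r} is not registered")
    # ... hand query_text and variables to the GraphQL engine here ...
    return {"query": query_text, "variables": variables}


# Build time: client tooling registers the queries it ships with.
qid = register_query("query Profile($id: ID!) { member(id: $id) { name } }")

# Runtime: the client sends only the ID plus variables, never raw text.
result = execute(qid, {"id": "urn:li:member:123"})
```

One design consequence worth noting: because arbitrary query text is rejected at runtime, the server's attack surface shrinks and queries can be analyzed, optimized, and capacity-planned ahead of deployment.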