{"id":74969,"date":"2023-10-30T09:22:59","date_gmt":"2023-10-30T16:22:59","guid":{"rendered":"https:\/\/github.blog\/?p=74969"},"modified":"2024-02-07T18:02:20","modified_gmt":"2024-02-08T02:02:20","slug":"the-architecture-of-todays-llm-applications","status":"publish","type":"post","link":"https:\/\/github.blog\/ai-and-ml\/llms\/the-architecture-of-todays-llm-applications\/","title":{"rendered":"The architecture of today&#8217;s LLM applications"},"content":{"rendered":"<p>We want to empower you to experiment with LLM models, build your own applications, and discover untapped problem spaces. That\u2019s why we sat down with GitHub\u2019s <a href=\"https:\/\/github.com\/whatsinfinitum\" target=\"_blank\" rel=\"noopener\">Alireza Goudarzi<\/a>, a senior machine learning researcher, and <a href=\"https:\/\/github.com\/wunderalbert\" target=\"_blank\" rel=\"noopener\">Albert Ziegler<\/a>, a principal machine learning engineer, to discuss the emerging architecture of today\u2019s LLMs.<\/p>\n<p>In this post, we\u2019ll cover five major steps to building your own LLM app, the emerging architecture of today&#8217;s LLM apps, and problem areas that you can start exploring today.<\/p>\n<h2 id=\"five-steps-to-building-an-llm-app\"><a class=\"heading-link\" href=\"#five-steps-to-building-an-llm-app\">Five steps to building an LLM app<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>Building software with LLMs, or any machine learning (ML) model, is <a href=\"https:\/\/karpathy.medium.com\/software-2-0-a64152b37c35\" target=\"_blank\" rel=\"noopener\">fundamentally different<\/a> from building software without them. For one, rather than compiling source code into binary to run a series of commands, developers need to navigate datasets, embeddings, and parameter weights to generate consistent and accurate outputs. 
After all, LLM outputs are probabilistic and don\u2019t produce the same predictable outcomes.<\/p>\n<figure id=\"attachment_74972\"  class=\"wp-caption alignnone mx-0\"><a href=\"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/FivestepstobuildingLLMapp.png?w=1022\" target=\"_blank\" rel=\"noopener\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"1022\" height=\"537\" loading=\"lazy\" class=\"width-fit size-full wp-image-74972 width-fit\" src=\"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/FivestepstobuildingLLMapp.png?resize=1022%2C537\" alt=\"Diagram that lists the five steps to building a large language model application. Data source for diagram is detailed here: https:\/\/github.blog\/?p=74969&amp;preview=true#five-steps-to-building-an-llm-app\" srcset=\"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/FivestepstobuildingLLMapp.png?w=1022 1022w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/FivestepstobuildingLLMapp.png?w=300 300w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/FivestepstobuildingLLMapp.png?w=768 768w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/a><figcaption class=\"text-mono color-fg-muted mt-14px f5-mktg\">Click on diagram to enlarge and save.<\/figcaption><\/figure>\n<p><strong>Let\u2019s break down, at a high level, the steps to build an LLM app today. <g-emoji fallback-src=\"https:\/\/github.githubassets.com\/images\/icons\/emoji\/unicode\/1f447.png?v8\" alias=\"point_down\">&#128071;<\/g-emoji><\/strong><\/p>\n<p><strong>1. Focus on a single problem, first<\/strong>. The key? 
Find a problem that\u2019s the right size: one that\u2019s focused enough so you can quickly iterate and make progress, but also big enough so that the right solution will wow users.<\/p>\n<p>For instance, rather than trying to address all developer problems with AI, the GitHub Copilot team initially focused on one part of the software development lifecycle: <a href=\"https:\/\/github.blog\/2023-09-06-how-to-build-an-enterprise-llm-application-lessons-from-github-copilot\/\" target=\"_blank\" rel=\"noopener\">coding functions in the IDE<\/a>.<\/p>\n<p><strong>2. Choose the right LLM<\/strong>. You\u2019re saving costs by building an LLM app with a pre-trained model, but how do you pick the right one? Here are some factors to consider:<\/p>\n<ul>\n<li><strong>Licensing<\/strong>. If you hope to eventually sell your LLM app, you\u2019ll need to use a model that has an API licensed for commercial use. To get you started on your search, here\u2019s a community-sourced <a href=\"https:\/\/github.com\/eugeneyan\/open-llms\" target=\"_blank\" rel=\"noopener\">list of open LLMs that are licensed for commercial use<\/a>.<\/li>\n<li><strong>Model size.<\/strong> The size of LLMs can range from 7 to 175 billion parameters\u2014and some, like <a href=\"https:\/\/learn.microsoft.com\/en-us\/semantic-kernel\/prompt-engineering\/llm-models\" target=\"_blank\" rel=\"noopener\">Ada<\/a>, are even as small as 350 million parameters. Most LLMs (at the time of writing this post) range in size from 7-13 billion parameters.<\/li>\n<\/ul>\n<p>Conventional wisdom tells us that if a model has more parameters (variables that can be adjusted to improve a model\u2019s output), the better the model is at learning new information and providing predictions. However, the <a href=\"https:\/\/spectrum.ieee.org\/large-language-models-size\" target=\"_blank\" rel=\"noopener\">improved performance of smaller models<\/a> is challenging that belief. 
Smaller models are also usually faster and cheaper, so improvements to the quality of their predictions make them a viable contender compared to big-name models that might be out of scope for many apps.<\/p>\n<aside class=\"post-aside--small float-sm-right col-sm-5 col-md-6 col-lg-5 my-5 my-sm-2 ml-sm-4 ml-lg-6\"><p class=\"h6-mktg gh-aside-title\">Looking for open source LLMs?<\/p><p>Check out our <a href=\"https:\/\/github.blog\/2023-10-05-a-developers-guide-to-open-source-llms-and-generative-ai\/#open-source-llms-available-today\" target=\"_blank\" rel=\"noopener\">developer\u2019s guide to open source LLMs and generative AI<\/a>, which includes a list of models like OpenLLaMA and Falcon-Series.<\/p>\n<\/aside>\n<ul>\n<li><strong>Model performance<\/strong>. Before you customize your LLM using techniques like fine-tuning and in-context learning (which we\u2019ll cover below), evaluate how well and fast\u2014and how consistently\u2014the model generates your desired output. To measure model performance, you can use <strong>offline evaluations<\/strong>.<\/li>\n<\/ul>\n<aside class=\"p-4 p-md-6 post-aside--large\"><p class=\"h5-mktg gh-aside-title\">What are offline evaluations?<\/p><p>They&#8217;re tests that assess the model and ensure it meets a performance standard before advancing it to the next step of interacting with a human. These tests measure latency, accuracy, and contextual relevance of a model&#8217;s outputs by asking it questions, to which there are either correct or incorrect answers that the human knows.<\/p>\n<p>There&#8217;s also a subset of tests that account for ambiguous answers, called incremental scoring. This type of offline evaluation allows you to score a model&#8217;s output as incrementally correct (for example, 80% correct) rather than just either right or wrong.<\/p>\n<\/aside>\n<p><strong>3&#046; Customize the LLM<\/strong>. When you train an LLM, you\u2019re building the scaffolding and neural networks to enable deep learning. 
When you customize a pre-trained LLM, you\u2019re adapting the LLM to specific tasks, such as generating text around a specific topic or in a particular style. The section below will focus on techniques for the latter. To customize a pre-trained LLM to your specific needs, you can try in-context learning, reinforcement learning from human feedback (RLHF), or fine-tuning.<\/p>\n<ul>\n<li><strong>In-context learning,<\/strong> sometimes referred to as <a href=\"https:\/\/github.blog\/2023-06-20-how-to-write-better-prompts-for-github-copilot\/\" target=\"_blank\" rel=\"noopener\">prompt engineering<\/a> by end users, is when you provide the model with specific instructions or examples at the time of inference\u2014or the time you\u2019re querying the model\u2014and ask it to infer what you need and generate a contextually relevant output.<\/li>\n<\/ul>\n<p>In-context learning can be done in a variety of ways, like providing examples, rephrasing your queries, and adding a sentence that states your goal at a high level.<\/p>\n<ul>\n<li><strong>RLHF<\/strong> comprises a reward model for the pre-trained LLM. The reward model is trained to predict if a user will accept or reject the output from the pre-trained LLM. The learnings from the reward model are passed to the pre-trained LLM, which will adjust its outputs based on user acceptance rate.<\/li>\n<\/ul>\n<p>The benefit of RLHF is that it doesn&#8217;t require supervised learning and, consequently, expands the criteria for what\u2019s an acceptable output. With enough human feedback, the LLM can learn that if there\u2019s an 80% probability that a user will accept an output, then it\u2019s fine to generate. Want to try it out? Check out these <a href=\"https:\/\/github.com\/opendilab\/awesome-RLHF\" target=\"_blank\" rel=\"noopener\">resources, including codebases, for RLHF<\/a>.<\/p>\n<ul>\n<li><strong>Fine-tuning<\/strong> is when the model\u2019s generated output is evaluated against an intended or known output. 
For example, you know that the sentiment behind a statement like this is negative: \u201cThe soup is too salty.\u201d To evaluate the LLM, you\u2019d feed this sentence to the model and query it to label the sentiment as positive or negative. If the model labels it as positive, then you\u2019d adjust the model\u2019s parameters and try prompting it again to see if it can classify the sentiment as negative.<\/li>\n<\/ul>\n<p>Fine-tuning can result in a highly customized LLM that excels at a specific task, but it uses supervised learning, which requires time-intensive labeling. In other words, each input sample requires an output that&#8217;s labeled with exactly the correct answer. That way, the actual output can be measured against the labeled one and adjustments can be made to the model&#8217;s parameters. The advantage of RLHF, as mentioned above, is that you don&#8217;t need an exact label.<\/p>\n<p><strong>4&#046; Set up the app\u2019s architecture<\/strong>. The different components you\u2019ll need to set up your LLM app can be roughly grouped into three categories:<\/p>\n<ul>\n<li><strong>User input<\/strong> which requires a UI, an LLM, and an app hosting platform.<\/li>\n<li><strong>Input enrichment and prompt construction tools.<\/strong> This includes your data source, embedding model, a vector database, prompt construction and optimization tools, and a data filter.<\/p>\n<\/li>\n<li>\n<p><strong>Efficient and responsible AI tooling,<\/strong> which includes an LLM cache, LLM content classifier or filter, and a telemetry service to evaluate the output of your LLM app.<\/p>\n<\/li>\n<\/ul>\n<p><strong>5&#046; Conduct online evaluations of your app.<\/strong> These evaluations are considered \u201conline\u201d because they assess the LLM\u2019s performance during user interaction. 
For example, online evaluations for GitHub Copilot are measured through acceptance rate (how often a developer accepts a completion shown to them), as well as the retention rate (how often and to what extent a developer edits an accepted completion).<\/p>\n<aside class=\"p-4 p-md-6 post-aside--large\"><p class=\"h5-mktg gh-aside-title\">Why are online evaluations important?<\/p><p>Although a model might pass an offline test with flying colors, its output quality could change when the app is in the hands of users. This is because it\u2019s difficult to predict how end users will interact with the UI, so it\u2019s hard to model their behavior in offline tests.<\/p>\n<\/aside>\n<hr \/>\n<h2 id=\"the-emerging-architecture-of-llm-apps\"><a class=\"heading-link\" href=\"#the-emerging-architecture-of-llm-apps\">The emerging architecture of LLM apps<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>Let\u2019s get started on architecture. We\u2019re going to revisit our friend <a href=\"https:\/\/github.blog\/2023-07-17-prompt-engineering-guide-generative-ai-llms\/#building-applications-using-llms\" target=\"_blank\" rel=\"noopener\">Dave<\/a>, whose Wi-Fi went out on the day of his World Cup watch party. Fortunately, Dave was able to get his Wi-Fi running in time for the game, thanks to an LLM-powered assistant.<\/p>\n<p><strong>We\u2019ll use this example and the diagram below to walk through a user flow with an LLM app, and break down the kinds of tools you\u2019d need to build it. 
<g-emoji fallback-src=\"https:\/\/github.githubassets.com\/images\/icons\/emoji\/unicode\/1f447.png?v8\" alias=\"point_down\">&#128071;<\/g-emoji><\/strong><\/p>\n<figure id=\"attachment_74991\"  class=\"wp-caption alignnone mx-0\"><a href=\"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=1536\" target=\"_blank\" rel=\"noopener\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"4088\" height=\"2148\" loading=\"lazy\" class=\"width-fit size-full wp-image-74991 width-fit\" src=\"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?resize=4088%2C2148\" alt=\"Flow chart that reads from right to left, showing components of a large language model application and how they all work together. Data source for diagram is detailed here: https:\/\/github.blog\/?p=74969&amp;preview=true#the-emerging-architecture-of-llm-apps\" srcset=\"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=4088 4088w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=300 300w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=768 768w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=1024 1024w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=1536 1536w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=2048 2048w, https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?w=3000 3000w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><\/a><figcaption class=\"text-mono color-fg-muted mt-14px f5-mktg\">Click diagram to enlarge and save.<\/figcaption><\/figure>\n<h3 id=\"user-input-tools\"><a class=\"heading-link\" href=\"#user-input-tools\">User input tools<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h3>\n<p>When Dave\u2019s Wi-Fi 
crashes, he calls his internet service provider (ISP) and is directed to an LLM-powered assistant. The assistant asks Dave to explain his emergency, and Dave responds, \u201cMy TV was connected to my Wi-Fi, but I bumped the counter, and the Wi-Fi box fell off! Now, we can\u2019t watch the game.\u201d<\/p>\n<p>In order for Dave to interact with the LLM, we need four tools:<\/p>\n<ul>\n<li><strong>LLM API and host<\/strong>: Is the LLM app running on a local machine or in the cloud? In an ISP\u2019s case, it\u2019s probably hosted in the cloud to handle the volume of calls like Dave\u2019s. <a href=\"https:\/\/github.com\/vercel\" target=\"_blank\" rel=\"noopener\">Vercel<\/a> and early projects like <a href=\"https:\/\/github.com\/jina-ai\/rungpt\" target=\"_blank\" rel=\"noopener\">jina-ai\/rungpt<\/a> aim to provide a cloud-native solution to deploy and scale LLM apps.<\/li>\n<\/ul>\n<p>But if you want to build an LLM app to tinker, hosting the model on your machine might be more cost effective so that you\u2019re not paying to spin up your cloud environment every time you want to experiment. 
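To gauge whether local hosting is even feasible, you can start with a back-of-the-envelope memory estimate: parameter count times bytes per parameter (2 bytes per parameter at 16-bit precision), plus some overhead for activations and the key-value cache. Here is a minimal sketch; the 20% overhead factor is an illustrative assumption, not a measured value:

```python
def estimated_memory_gb(n_params_billion, bytes_per_param=2, overhead=0.2):
    '''Rough memory estimate for inference: weights plus an illustrative
    fudge factor for activations and the KV cache.'''
    # billions of params * bytes per param = GB of weights
    weights_gb = n_params_billion * bytes_per_param
    return weights_gb * (1 + overhead)

# A 7B-parameter model at 16-bit precision:
print(round(estimated_memory_gb(7), 1))  # 16.8
```

Quantizing weights to 8-bit or 4-bit shrinks the weight term proportionally, which is part of why quantized 7B models can fit on consumer hardware.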
You can find conversations on GitHub Discussions about hardware requirements for models like LLaMA; two such discussions can be found <a href=\"https:\/\/github.com\/facebookresearch\/llama\/issues\/79\" target=\"_blank\" rel=\"noopener\">here<\/a> and <a href=\"https:\/\/github.com\/facebookresearch\/llama\/issues\/425\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n<ul>\n<li><strong>The UI<\/strong>: Dave\u2019s keypad is essentially the UI, but in order for Dave to use his keypad to switch from the menu of options to the emergency line, the UI needs to include a router tool.<\/li>\n<li><strong>Speech-to-text translation tool<\/strong>: Dave\u2019s verbal query then needs to be fed through a speech-to-text translation tool that works in the background.<\/li>\n<\/ul>\n<h3 id=\"input-enrichment-and-prompt-construction-tools\"><a class=\"heading-link\" href=\"#input-enrichment-and-prompt-construction-tools\">Input enrichment and prompt construction tools<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h3>\n<p>Let\u2019s go back to Dave. The LLM can analyze the sequence of words in Dave\u2019s transcript, classify it as an IT complaint, and provide a contextually relevant response. (The LLM is able to do this because it\u2019s been trained on the internet\u2019s entire corpus, which includes IT support documentation.)<\/p>\n<p><strong>Input enrichment tools<\/strong> aim to contextualize and package the user\u2019s query in a way that will generate the most useful response from the LLM.<\/p>\n<ul>\n<li>A <strong>vector database<\/strong> is where you can store embeddings, or index high-dimensional vectors. 
It also increases the probability that the LLM\u2019s response is helpful by providing additional information to further contextualize your user\u2019s query.<\/li>\n<\/ul>\n<p>Let\u2019s say the LLM assistant has access to the company\u2019s complaints search engine, and those complaints and solutions are stored as embeddings in a vector database. Now, the LLM assistant uses information not only from the internet\u2019s IT support documentation, but also from documentation specific to customer problems with the ISP.<\/p>\n<ul>\n<li>But in order to retrieve information from the vector database that\u2019s relevant to a user\u2019s query, we need an <strong>embedding model<\/strong> to translate the query into an embedding. Because the embeddings in the vector database, as well as Dave\u2019s query, are translated into high-dimensional vectors, the vectors will capture both the semantics and intention of the natural language, not just its syntax.<\/li>\n<\/ul>\n<p>Here\u2019s a list of <a href=\"https:\/\/github.com\/topics\/text-embedding\" target=\"_blank\" rel=\"noopener\">open source text embedding models<\/a>. <a href=\"https:\/\/platform.openai.com\/docs\/guides\/embeddings\/embedding-models\" target=\"_blank\" rel=\"noopener\">OpenAI<\/a> and <a href=\"https:\/\/huggingface.co\/blog\/getting-started-with-embeddings\" target=\"_blank\" rel=\"noopener\">Hugging Face<\/a> also provide embedding models.<\/p>\n<p>Dave\u2019s contextualized query would then read like this:<\/p>\n<pre><code class=\"language-none\">\/\/ Pay attention to the following relevant information:\n\/\/ the colors and blinking pattern.\n\n\/\/ The following is an IT complaint from Dave Anderson.\nAnswers to Dave's questions, from an IT support expert, should serve as an example\nof the excellent support provided by the ISP to its customers.\n\n*Dave: Oh it's awful! This is the big game day. 
My TV was connected to my\nWi-Fi, but I bumped the counter and the Wi-Fi box fell off and broke! Now we\ncan't watch the game.\n<\/code><\/pre>\n<p>Not only does this series of prompts contextualize Dave\u2019s issue as an IT complaint, it also pulls in context from the company\u2019s complaints search engine. That context includes common internet connectivity issues and solutions.<\/p>\n<p>MongoDB released a public preview of <a href=\"https:\/\/www.mongodb.com\/developer\/products\/atlas\/building-generative-ai-applications-vector-search-open-source-models\/\" target=\"_blank\" rel=\"noopener\">Atlas Vector Search<\/a>, which indexes high-dimensional vectors within MongoDB. <a href=\"https:\/\/github.com\/qdrant\" target=\"_blank\" rel=\"noopener\">Qdrant<\/a>, <a href=\"https:\/\/github.com\/pinecone-io\" target=\"_blank\" rel=\"noopener\">Pinecone<\/a>, and <a href=\"https:\/\/github.com\/milvus-io\" target=\"_blank\" rel=\"noopener\">Milvus<\/a> also provide free or open source vector databases.<\/p>\n<aside class=\"post-aside--small float-sm-right col-sm-5 col-md-6 col-lg-5 my-5 my-sm-2 ml-sm-4 ml-lg-6\"><p class=\"h6-mktg gh-aside-title\">Want to learn more about vector databases?<\/p><p>Read how the GitHub Copilot team is experimenting with them to create a <a href=\"https:\/\/github.blog\/2023-05-17-how-github-copilot-is-getting-better-at-understanding-your-code\/#improving-semantic-understanding\" target=\"_blank\" rel=\"noopener\">customized coding experience<\/a>.<\/p>\n<\/aside>\n<ul>\n<li>A <strong>data filter<\/strong> will ensure that the LLM isn\u2019t processing unauthorized data, like personally identifiable information. Preliminary projects like <a href=\"https:\/\/github.com\/amoffat\/HeimdaLLM\" target=\"_blank\" rel=\"noopener\">amoffat\/HeimdaLLM<\/a> are working to ensure LLMs access only authorized data.<\/li>\n<li>A <strong>prompt optimization tool<\/strong> will then help to package the end user\u2019s query with all this context. 
In other words, the tool will help to prioritize which context embeddings are most relevant, and in which order those embeddings should be organized in order for the LLM to produce the most contextually relevant response. This step is what ML researchers call prompt engineering, where a series of algorithms create a prompt. (A note that this is different from the prompt engineering that end users do, which is also known as in-context learning).<\/li>\n<\/ul>\n<p>Prompt optimization tools like <a href=\"https:\/\/github.com\/langchain-ai\/langchain\" target=\"_blank\" rel=\"noopener\">langchain-ai\/langchain<\/a> help you to compile prompts for your end users. Otherwise, you\u2019ll need to DIY a series of algorithms that retrieve embeddings from the vector database, grab snippets of the relevant context, and order them. If you go this latter route, you could use <a href=\"https:\/\/github.blog\/2023-09-20-github-copilot-chat-beta-now-available-for-all-individuals\/\" target=\"_blank\" rel=\"noopener\">GitHub Copilot Chat<\/a> or ChatGPT to assist you.<\/p>\n<div data-target=\"content-table-wrap.container\" class=\"content-table-wrap\"><content-table-wrap><table style=\"border: 1px black\">\n<tbody>\n<tr>\n<td>Learn how the GitHub Copilot team uses the <a href=\"https:\/\/github.blog\/2023-07-17-prompt-engineering-guide-generative-ai-llms\/\" target=\"_blank\" rel=\"noopener\">Jaccard similarity<\/a> to decide which pieces of context are most relevant to a user&#8217;s query &gt;<\/td>\n<\/tr>\n<\/tbody>\n<\/table><\/content-table-wrap><\/div>\n<h3 id=\"efficient-and-responsible-ai-tooling\"><a class=\"heading-link\" href=\"#efficient-and-responsible-ai-tooling\">Efficient and responsible AI tooling<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h3>\n<p>To ensure that Dave doesn\u2019t become even more frustrated by waiting for the LLM assistant to generate a response, the LLM can quickly retrieve an output from a cache. 
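The caching idea can be sketched in a few lines. This toy version only matches an exact normalized query; production caching layers typically match on embedding similarity instead, so paraphrased queries can also hit the cache:

```python
# Toy exact-match LLM cache. Real caching tools usually key on embedding
# similarity rather than normalized strings (illustrative sketch only).
class LLMCache:
    def __init__(self):
        self._store = {}

    @staticmethod
    def _normalize(query):
        # Lowercase and collapse whitespace so trivially different
        # phrasings of the same query share one cache entry.
        return ' '.join(query.lower().split())

    def get(self, query):
        return self._store.get(self._normalize(query))

    def put(self, query, response):
        self._store[self._normalize(query)] = response

cache = LLMCache()
cache.put('My Wi-Fi box fell off!', 'Try reconnecting the power cable first.')
# A later, identically worded query skips the model call entirely:
print(cache.get('my  wi-fi box FELL off!'))  # Try reconnecting the power cable first.
```

Every cache hit is a model call you never make, which is where the latency and cost savings come from.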
And in the case that Dave does have an outburst, we can use a content classifier to make sure the LLM app doesn\u2019t respond in kind. The telemetry service will also evaluate Dave\u2019s interaction with the UI so that you, the developer, can improve the user experience based on Dave\u2019s behavior.<\/p>\n<ul>\n<li>An <strong>LLM cache<\/strong> stores outputs. This means instead of generating new responses to the same query (because Dave isn\u2019t the first person whose internet has gone down), the LLM can retrieve outputs from the cache that have been used for similar queries. Caching outputs can reduce latency, computational costs, and variability in suggestions.<\/li>\n<\/ul>\n<p>You can experiment with a tool like <a href=\"https:\/\/github.com\/zilliztech\/GPTCache\" target=\"_blank\" rel=\"noopener\">zilliztech\/GPTcache<\/a> to cache your app\u2019s responses.<\/p>\n<ul>\n<li>A <strong>content classifier or filter<\/strong> can prevent your automated assistant from responding with harmful or offensive suggestions (in the case that your end users take their frustration out on your LLM app).<\/li>\n<\/ul>\n<p>Tools like <a href=\"https:\/\/github.com\/derwiki\/llm-prompt-injection-filtering\" target=\"_blank\" rel=\"noopener\">derwiki\/llm-prompt-injection-filtering<\/a> and <a href=\"https:\/\/github.com\/laiyer-ai\/llm-guard\" target=\"_blank\" rel=\"noopener\">laiyer-ai\/llm-guard<\/a> are in their early stages but working toward preventing this problem.<\/p>\n<ul>\n<li>A <strong>telemetry service<\/strong> will allow you to evaluate how well your app is working with actual users. 
A service that responsibly and transparently monitors user activity (like how often they accept or change a suggestion) can share useful data to help improve your app and make it more useful.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/github.com\/open-telemetry\" target=\"_blank\" rel=\"noopener\">OpenTelemetry<\/a>, for example, is an open source framework that gives developers a standardized way to collect, process, and export telemetry data across development, testing, staging, and production environments.<\/p>\n<p>Learn <a href=\"https:\/\/github.blog\/2023-10-16-measuring-git-performance-with-opentelemetry\/\" target=\"_blank\" rel=\"noopener\">how GitHub uses OpenTelemetry<\/a> to measure Git performance &gt;<\/p>\n<aside class=\"p-4 p-md-6 post-aside--large\"><p class=\"h5-mktg gh-aside-title\">Looking for more responsible AI tooling?<\/p><p>Developers are creating projects around <a href=\"https:\/\/github.com\/topics\/responsible-ai\" target=\"_blank\" rel=\"noopener\">responsible AI<\/a>, <a href=\"https:\/\/www.google.com\/url?q=https:\/\/github.com\/topics\/fairness-ai&amp;sa=D&amp;source=docs&amp;ust=1698462219410800&amp;usg=AOvVaw0iPTnTWezP8Swrv1z21GbD\" target=\"_blank\" rel=\"noopener\">fairness in AI<\/a>, <a href=\"https:\/\/github.com\/topics\/responsible-ml\" target=\"_blank\" rel=\"noopener\">responsible machine learning<\/a>, and <a href=\"https:\/\/www.google.com\/url?q=https:\/\/github.com\/topics\/ethical-artificial-intelligence&amp;sa=D&amp;source=docs&amp;ust=1698462244374929&amp;usg=AOvVaw1R1Yn23WrF65d37edoAhDb\" target=\"_blank\" rel=\"noopener\">ethical AI<\/a> on GitHub.<\/p>\n<\/aside>\n<p>Woohoo! \ud83e\udd73 Your LLM assistant has effectively answered Dave\u2019s many queries. His router is up and working, and he\u2019s ready for his World Cup watch party. 
Mission accomplished!<\/p>\n<hr \/>\n<h2 id=\"real-world-impact-of-llms\"><a class=\"heading-link\" href=\"#real-world-impact-of-llms\">Real-world impact of LLMs<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>Looking for inspiration or a problem space to start exploring? Here\u2019s a list of ongoing projects where LLM apps and models are making real-world impact.<\/p>\n<ul>\n<li>NASA and IBM recently open sourced the <a href=\"https:\/\/www.earthdata.nasa.gov\/news\/impact-ibm-hls-foundation-model\" target=\"_blank\" rel=\"noopener\">largest geospatial AI model<\/a> to increase access to NASA earth science data. The hope is to accelerate discovery and understanding of climate effects.<\/li>\n<li>Read how the Johns Hopkins Applied Physics Laboratory is designing a <a href=\"https:\/\/www.jhuapl.edu\/news\/news-releases\/230817a-cpg-ai-battlefield-medical-assistance\" target=\"_blank\" rel=\"noopener\">conversational AI agent<\/a> that provides, in plain English, medical guidance to untrained soldiers in the field based on established care procedures.<\/li>\n<li>Companies like <a href=\"https:\/\/github.com\/customer-stories\/duolingo\" target=\"_blank\" rel=\"noopener\">Duolingo<\/a> and <a href=\"https:\/\/github.com\/customer-stories\/mercado-libre\" target=\"_blank\" rel=\"noopener\">Mercado Libre<\/a> are using <a href=\"https:\/\/github.com\/features\/copilot\" target=\"_blank\" rel=\"noopener\">GitHub Copilot<\/a> to help more people learn another language (for free) and democratize ecommerce in Latin America, respectively.<\/li>\n<\/ul>\n<hr \/>\n<h3 id=\"further-reading\"><a class=\"heading-link\" href=\"#further-reading\">Further reading<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h3>\n<ul>\n<li><a href=\"https:\/\/github.blog\/2023-10-05-a-developers-guide-to-open-source-llms-and-generative-ai\/#open-source-llms-available-today\">A developer\u2019s guide 
to open source LLMs and generative AI<\/a><\/li>\n<li><a href=\"https:\/\/github.blog\/2023-10-27-demystifying-llms-how-they-can-do-things-they-werent-trained-to-do\/\">Demystifying LLMs: How they can do things they weren\u2019t trained to do<\/a><\/li>\n<li><a href=\"https:\/\/github.blog\/2023-07-17-prompt-engineering-guide-generative-ai-llms\/\">A developer\u2019s guide to prompt engineering and LLMs<\/a><\/li>\n<li><a href=\"https:\/\/github.blog\/2023-09-06-how-to-build-an-enterprise-llm-application-lessons-from-github-copilot\/\">How to build an enterprise LLM application: Lessons from GitHub Copilot<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Here\u2019s everything you need to know to build your first LLM app and problem spaces you can start exploring today.<\/p>\n","protected":false},"author":2123,"featured_media":74991,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_gh_post_show_toc":"no","_gh_post_is_no_robots":"no","_gh_post_is_featured":"no","_gh_post_is_excluded":"no","_gh_post_is_unlisted":"no","_gh_post_related_link_1":"","_gh_post_related_link_2":"","_gh_post_related_link_3":"","_gh_post_sq_img":"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram-1.png","_gh_post_sq_img_id":"75098","_gh_post_cta_title":"","_gh_post_cta_text":"","_gh_post_cta_link":"","_gh_post_cta_button":"Click Here to Learn 
More","_gh_post_recirc_hide":"no","_gh_post_recirc_col_1":"","_gh_post_recirc_col_2":"","_gh_post_recirc_col_3":"","_gh_post_recirc_col_4":"","_featured_video":"","_gh_post_additional_query_params":"","_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2},"_wpas_customize_per_network":false,"_links_to":"","_links_to_target":""},"categories":[3293,3296],"tags":[2837,3241,3028,3064],"coauthors":[3118],"class_list":["post-74969","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-and-ml","category-llms","tag-ai","tag-ai-insights","tag-generative-ai","tag-llm"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.3 (Yoast SEO v27.3) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>The architecture of today&#039;s LLM applications - The GitHub Blog<\/title>\n<meta name=\"description\" content=\"Here\u2019s everything you need to know to build your first LLM app and problem spaces you can start exploring today.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/github.blog\/ai-and-ml\/llms\/the-architecture-of-todays-llm-applications\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The architecture of today&#039;s LLM applications\" \/>\n<meta property=\"og:description\" content=\"Here\u2019s everything you need to know to build your first LLM app and problem spaces you can start exploring today.\" \/>\n<meta property=\"og:url\" 
content=\"https:\/\/github.blog\/ai-and-ml\/llms\/the-architecture-of-todays-llm-applications\/\" \/>\n<meta property=\"og:site_name\" content=\"The GitHub Blog\" \/>\n<meta property=\"article:published_time\" content=\"2023-10-30T16:22:59+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-02-08T02:02:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?fit=4088%2C2148\" \/>\n\t<meta property=\"og:image:width\" content=\"4088\" \/>\n\t<meta property=\"og:image:height\" content=\"2148\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Nicole Choi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Nicole Choi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"13 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/\"},\"author\":{\"name\":\"Nicole Choi\",\"@id\":\"https:\\\/\\\/github.blog\\\/#\\\/schema\\\/person\\\/8a8cb984893a6f3fa8f80dcbe2afff20\"},\"headline\":\"The architecture of today&#8217;s LLM 
applications\",\"datePublished\":\"2023-10-30T16:22:59+00:00\",\"dateModified\":\"2024-02-08T02:02:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/\"},\"wordCount\":2707,\"image\":{\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/github.blog\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/LLMapparchitecturediagram.png?fit=4088%2C2148\",\"keywords\":[\"AI\",\"AI Insights\",\"generative AI\",\"LLM\"],\"articleSection\":[\"AI &amp; ML\",\"LLMs\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/\",\"url\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/\",\"name\":\"The architecture of today's LLM applications - The GitHub Blog\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/github.blog\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/github.blog\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/LLMapparchitecturediagram.png?fit=4088%2C2148\",\"datePublished\":\"2023-10-30T16:22:59+00:00\",\"dateModified\":\"2024-02-08T02:02:20+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/github.blog\\\/#\\\/schema\\\/person\\\/8a8cb984893a6f3fa8f80dcbe2afff20\"},\"description\":\"Here\u2019s everything you need to know to build your first LLM app and problem spaces you can start exploring 
today.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/#primaryimage\",\"url\":\"https:\\\/\\\/github.blog\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/LLMapparchitecturediagram.png?fit=4088%2C2148\",\"contentUrl\":\"https:\\\/\\\/github.blog\\\/wp-content\\\/uploads\\\/2023\\\/10\\\/LLMapparchitecturediagram.png?fit=4088%2C2148\",\"width\":4088,\"height\":2148,\"caption\":\"Flow chart that reads from right to left, showing components of a large language model application and how they all work together.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/the-architecture-of-todays-llm-applications\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/github.blog\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI &amp; ML\",\"item\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"LLMs\",\"item\":\"https:\\\/\\\/github.blog\\\/ai-and-ml\\\/llms\\\/\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"The architecture of today&#8217;s LLM applications\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/github.blog\\\/#website\",\"url\":\"https:\\\/\\\/github.blog\\\/\",\"name\":\"The GitHub Blog\",\"description\":\"Updates, ideas, and inspiration from GitHub to help developers build and design 
software.\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/github.blog\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/github.blog\\\/#\\\/schema\\\/person\\\/8a8cb984893a6f3fa8f80dcbe2afff20\",\"name\":\"Nicole Choi\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/119fcc9cb84bbe63c1822706312e4272564f304917839735f4f45d53ac06e2f1?s=96&d=mm&r=gcabf0f5bcb7699cf6311cda62f32cd74\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/119fcc9cb84bbe63c1822706312e4272564f304917839735f4f45d53ac06e2f1?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/119fcc9cb84bbe63c1822706312e4272564f304917839735f4f45d53ac06e2f1?s=96&d=mm&r=g\",\"caption\":\"Nicole Choi\"},\"url\":\"https:\\\/\\\/github.blog\\\/author\\\/nicchoi29\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/github.blog\/wp-content\/uploads\/2023\/10\/LLMapparchitecturediagram.png?fit=4088%2C2148","jetpack_shortlink":"https:\/\/wp.me\/pamS32-jvb","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/posts\/74969","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/users\/2123"}],"replies":[{"embeddable":true,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/comments?post=74969"}],"version-history":[{"count":154,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/posts\/74969\/revisions"}],"predecessor-version":[{"id":75389,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/posts\/74969\/revisions\/75389"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/media\/74991"}],"wp:attachment":[{"hre
f":"https:\/\/github.blog\/wp-json\/wp\/v2\/media?parent=74969"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/categories?post=74969"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/tags?post=74969"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/github.blog\/wp-json\/wp\/v2\/coauthors?post=74969"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}