{"id":66164,"date":"2022-07-27T10:00:35","date_gmt":"2022-07-27T17:00:35","guid":{"rendered":"https:\/\/github.blog\/?p=66164"},"modified":"2022-08-01T13:21:01","modified_gmt":"2022-08-01T20:21:01","slug":"corrupting-memory-without-memory-corruption","status":"publish","type":"post","link":"https:\/\/github.blog\/security\/vulnerability-research\/corrupting-memory-without-memory-corruption\/","title":{"rendered":"Corrupting memory without memory corruption"},"content":{"rendered":"<p>In this post I&#8217;ll cover the details of CVE-2022-20186, a vulnerability in the Arm Mali GPU that I reported to the Android security team, which was fixed in the <a href=\"https:\/\/source.android.com\/security\/bulletin\/pixel\/2022-06-01\">June update for Pixel<\/a>. This bug exists in the memory management code of the Arm Mali GPU kernel driver, which is exploitable to map arbitrary physical pages to the GPU memory with both read and write access. This gives a very strong primitive that allows me to gain arbitrary kernel code execution and root on a Pixel 6 with ease.<\/p>\n<p>As explained in my <a href=\"https:\/\/github.blog\/2022-06-16-the-android-kernel-mitigations-obstacle-race\/\">previous post<\/a>, the GPU driver on Android is a very attractive target for an attacker, due to the following reasons:<\/p>\n<ol>\n<li>On all Android devices, the GPU driver can be accessed from the untrusted app domain, so any compromised or malicious app can launch an attack on the kernel.<\/li>\n<li>Most Android devices use either Qualcomm&#8217;s Adreno GPU (which was covered in the <a href=\"https:\/\/github.blog\/2022-06-16-the-android-kernel-mitigations-obstacle-race\/\">previous post<\/a>), or the Arm Mali GPU. 
So by just attacking two GPU drivers, it is possible to gain universal root on all Android devices with relatively few bugs.<\/li>\n<li>As we&#8217;ll see in this post, a large part of the GPU driver is responsible for creating shared memory between the GPU and user applications, and to achieve this, GPU drivers often contain fairly elaborate memory management code that is complex and error prone. Errors in the GPU driver can often lead to bugs that are undetectable as memory corruptions and also immune to existing mitigations, such as the bug in this post.<\/li>\n<\/ol>\n<p>In fact, of the seven Android 0-days that were detected as exploited in the wild in 2021, five targeted GPU drivers. As of the date of writing, another bug that was exploited in the wild \u2014 <a href=\"https:\/\/source.android.com\/security\/bulletin\/pixel\/2022-03-01\">CVE-2021-39793<\/a>, disclosed in March 2022 \u2014 also targeted the GPU driver. Together, of these six exploited in-the-wild bugs that targeted Android GPU, three bugs targeted the Qualcomm GPU, while the other three targeted the Arm Mali GPU.<\/p>\n<h2 id=\"the-arm-mali-gpu\"><a class=\"heading-link\" href=\"#the-arm-mali-gpu\">The Arm Mali GPU<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>The Arm Mali GPU can be integrated in different chipsets (for example, see &#8220;Implementations&#8221; in the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Mali_(GPU)\">Mali(GPU) Wikipedia entry<\/a> for a list of chipsets that have the Mali GPU) and is used on Android devices. 
For example, all of the international versions of the Samsung S series phones up to the S21 use the Mali GPU, as well as Pixel 6 and Pixel 6 Pro.<\/p>\n<p>There are many good articles about the architecture of the Mali GPU (for example, <a href=\"https:\/\/community.arm.com\/arm-community-blogs\/b\/graphics-gaming-and-vr-blog\/posts\/the-mali-gpu-an-abstract-machine-part-1---frame-pipelining\">&#8220;The Mali GPU: An abstract machine&#8221;<\/a> series by Peter Harris, and <a href=\"https:\/\/www.anandtech.com\/show\/14385\/arm-announces-malig77-gpu\/2\">&#8220;Arm&#8217;s new Mali-G77 &amp; Valhall gpu architecture: a major leap&#8221;<\/a> by Andrei Frumusanu).<\/p>\n<p>The names of the Mali GPU architectures are inspired by Norse mythology, starting from &#8220;Utgard&#8221;, &#8220;Midgard&#8221;, &#8220;Bifrost&#8221; to the most recent &#8220;Valhall&#8221;. Most modern Android phones are running either &#8220;Valhall&#8221; or &#8220;Bifrost&#8221; architecture and their kernel drivers share much of the code. As these newer architectures are based largely on the &#8220;Midgard&#8221; architecture, there are sometimes macros in the &#8220;Valhall&#8221; or &#8220;Bifrost&#8221; driver with the &#8220;MIDGARD&#8221; prefix (e.g. <code>MIDGARD_MMU_LEVEL<\/code>). These macros may still be in active use in the newer drivers and the &#8220;MIDGARD&#8221; prefix merely reflects their historic origin.<\/p>\n<p>The Mali GPU driver consists of two different parts. The kernel driver is open source and new versions are released regularly on the <a href=\"https:\/\/developer.arm.com\/downloads\/-\/mali-drivers\/valhall-kernel\">Arm Developer page<\/a>. Apart from the open source kernel driver, there is also a proprietary user space driver responsible for compiling programs written in shading languages (e.g. OpenGL) into instruction sets of the Mali GPU. 
This post will only cover the open source kernel driver and will simply call it the Mali driver.<\/p>\n<p>In order to use the Mali driver, a <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_defs.h#1747\">kbase_context<\/a><\/code> first has to be created by calling a sequence of <code>ioctl<\/code> calls. The <code>kbase_context<\/code> defines an execution environment for the user space application to interact with the GPU. Each device file that interacts with the GPU has a separate <code>kbase_context<\/code>. Amongst other things, the <code>kbase_context<\/code> defines its own GPU address space and manages user space and GPU memory sharing.<\/p>\n<h2 id=\"memory-management-in-the-mali-kernel-driver\"><a class=\"heading-link\" href=\"#memory-management-in-the-mali-kernel-driver\">Memory management in the Mali kernel driver<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>There are different ways to share memory between the GPU and user space process, but for the purpose of this post, I&#8217;ll only cover the case where the shared memory is managed by the driver. In this case, the user first calls the <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_linux.c#292\">KBASE_IOCTL_MEM_ALLOC<\/a><\/code> <code>ioctl<\/code> to allocate pages from the <code>kbase_context<\/code>. These pages are allocated from a per-context memory pool in the <code>kbase_context<\/code> (<code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_defs.h#1811\">mem_pools<\/a><\/code>) and do not get mapped to the GPU nor to the user space immediately. 
The <code>ioctl<\/code> returns a cookie to the user, which is then used as the <code>offset<\/code> to <code>mmap<\/code> the device file and map these pages to the GPU and to user space. The backing page is then recycled back to the <code>mem_pools<\/code> when the memory is unmapped with <code>munmap<\/code>.<\/p>\n<p>The <code>KBASE_IOCTL_MEM_ALLOC<\/code> <code>ioctl<\/code> is implemented in <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_linux.c#292\">kbase_mem_alloc<\/a><\/code>. This function creates a <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem.h#322\">kbase_va_region<\/a><\/code> object to store data relevant to the memory region:<\/p>\n<pre><code>struct kbase_va_region *kbase_mem_alloc(struct kbase_context *kctx,\n                    u64 va_pages, u64 commit_pages,\n                    u64 extension, u64 *flags, u64 *gpu_va)\n{\n    ...\n    struct kbase_va_region *reg;\n    ...\n    reg = kbase_alloc_free_region(rbtree, PFN_DOWN(*gpu_va),\n            va_pages, zone);\n    ...\n<\/code><\/pre>\n<p>It also allocates backing pages for the memory region from <code>mem_pools<\/code> of the <code>kbase_context<\/code> by calling <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem.c#2139\">kbase_alloc_phy_pages<\/a><\/code>.<\/p>\n<p>When calling from a 64 bit process, the created region is stored in the <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_defs.h#1802\">pending_regions<\/a><\/code> of <code>kbase_context<\/code>, instead of mapping it immediately:<\/p>\n<pre><code>    if (*flags &amp; BASE_MEM_SAME_VA) {\n        ...\n        kctx-&gt;pending_regions[cookie_nr] = 
reg;\n\n        \/* relocate to correct base *\/\n        cookie = cookie_nr + PFN_DOWN(BASE_MEM_COOKIE_BASE);\n        cookie &lt;&lt;= PAGE_SHIFT;\n\n        *gpu_va = (u64) cookie;\n    }...\n<\/code><\/pre>\n<p>The <code>cookie<\/code> from the above is then returned to the user and can be used as the <code>offset<\/code> parameter in <code>mmap<\/code> to map this memory.<\/p>\n<h3 ><a class=\"heading-link\" href=\"#\"><strong><span id=\"mapping\">Mapping pages to user space<\/span><\/strong><span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h3>\n<p>Although I won&#8217;t be accessing the memory region through the user space mapping when exploiting the vulnerability, it is important to understand how virtual addresses are assigned when <code>mmap<\/code> is called to map the region, so I&#8217;ll go through the user space mapping briefly. When <code>mmap<\/code> is called, <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/thirdparty\/mali_kbase_mmap.c#237\">kbase_context_get_unmapped_area<\/a><\/code> is used to find a free region for the mapping:<\/p>\n<pre><code>unsigned long kbase_context_get_unmapped_area(struct kbase_context *const kctx,\n        const unsigned long addr, const unsigned long len,\n        const unsigned long pgoff, const unsigned long flags)\n{\n    ...\n    ret = kbase_unmapped_area_topdown(&amp;info, is_shader_code,\n            is_same_4gb_page);\n    ...\n    return ret;\n}\n<\/code><\/pre>\n<p>This function does not allow mapping the region to a fixed virtual address with the <code>MAP_FIXED<\/code> flag. Instead, it uses <code>kbase_unmapped_area_topdown<\/code> to look for a free region large enough to fit the requested memory and returns its address. As its name suggests, <code>kbase_unmapped_area_topdown<\/code> returns the highest available address. 
The mapped address is then stored as the <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem.h#326\">start_pfn<\/a><\/code> field in the <code>kbase_va_region<\/code>. In particular, this means that relative addresses of consecutively mmapped regions are predictable:<\/p>\n<pre><code>int fd = open(\"\/dev\/mali0\", O_RDWR);\nunion kbase_ioctl_mem_alloc alloc;\nunion kbase_ioctl_mem_alloc alloc2;\n...\nioctl(fd, KBASE_IOCTL_MEM_ALLOC, alloc);\nioctl(fd, KBASE_IOCTL_MEM_ALLOC, alloc2);\nvoid* region1 = mmap(NULL, 0x1000, prot, MAP_SHARED, fd, alloc.out.gpu_va);\nvoid* region2 = mmap(NULL, 0x1000, prot, MAP_SHARED, fd, alloc2.out.gpu_va);\n<\/code><\/pre>\n<p>In the above, the <code>region2<\/code> will be <code>region1 - 0x1000<\/code> because of how <code>kbase_unmapped_area_topdown<\/code> works.<\/p>\n<h3 ><a class=\"heading-link\" href=\"#\"><strong><span id=\"gpu\">Mapping pages to the GPU<\/span><\/strong><span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h3>\n<p>The GPU mapping is the more interesting part. Each <code>kbase_context<\/code> maintains its own GPU address space and also manages its own GPU page table. Each <code>kbase_context<\/code> maintains a four-level page table that is used for translating the GPU address to the backing physical page. It has a <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_defs.h#1751\">mmut<\/a><\/code> field that stores the top level <a href=\"https:\/\/www.kernel.org\/doc\/gorman\/html\/understand\/understand006.html\">page table global directory (PGD)<\/a> as the <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_defs.h#293\">pgd<\/a><\/code> field. 
The implementation is standard, with <code>mmut->pgd<\/code> being a page interpreted as a 512 element <code>int64_t<\/code> array whose entries point to the page frames that store the next level PGD, until it reaches the bottom level, where the page table entries (PTE) store the backing physical page (as well as page permissions) instead.<\/p>\n<p>As most of the addresses are unused, the various PGD and PTE of the page table are only created when they are needed for an access:<\/p>\n<pre><code>static int mmu_get_next_pgd(struct kbase_device *kbdev,\n        struct kbase_mmu_table *mmut,\n        phys_addr_t *pgd, u64 vpfn, int level)\n{\n    ...\n    p = pfn_to_page(PFN_DOWN(*pgd));\n    page = kmap(p);\n    ...\n    target_pgd = kbdev-&gt;mmu_mode-&gt;pte_to_phy_addr(page[vpfn]);   \/\/&lt;------- 1.\n\n    if (!target_pgd) {\n        target_pgd = kbase_mmu_alloc_pgd(kbdev, mmut);           \/\/&lt;------- 2.\n        ...\n        kbdev-&gt;mmu_mode-&gt;entry_set_pte(&amp;page[vpfn], target_pgd); \/\/&lt;------- 3.\n<\/code><\/pre>\n<p>When an access requires a certain PGD, it&#8217;ll look for the entry from the previous level PGD (1 in the above). As all entries of a PGD are initialized to a magic value that indicates the entry is invalid, if the entry had not been accessed before, then 1 will return a <code>NULL<\/code> pointer, which would lead to <code>target_pgd<\/code> being allocated (2 in the above). The address of <code>target_pgd<\/code> is then added as an entry in the previous PGD (3 in the above). 
The page frame that is backing <code>target_pgd<\/code> is allocated via the <code>mem_pools<\/code> of the global <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_defs.h#969\">kbase_device<\/a><\/code> <code>kbdev<\/code>, which is a global memory pool shared by all contexts.<\/p>\n<p>When mapping memory to GPU, <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem.c#1461\">kbase_gpu_mmap<\/a><\/code> will call <code>kbase_mmu_insert_pages<\/code> to add the backing pages to the GPU page table:<\/p>\n<pre><code>int kbase_gpu_mmap(struct kbase_context *kctx, struct kbase_va_region *reg, u64 addr, size_t nr_pages, size_t align)\n{\n    ...\n    alloc = reg-&gt;gpu_alloc;\n    ...\n    if (reg-&gt;gpu_alloc-&gt;type == KBASE_MEM_TYPE_ALIAS) {\n      ...\n    } else {\n        err = kbase_mmu_insert_pages(kctx-&gt;kbdev,\n                &amp;kctx-&gt;mmu,\n                reg-&gt;start_pfn,                        \/\/&lt;------ virtual address\n                kbase_get_gpu_phy_pages(reg),          \/\/&lt;------ backing pages\n                kbase_reg_current_backed_size(reg),    \n                reg-&gt;flags &amp; gwt_mask,\n                kctx-&gt;as_nr,\n                group_id);\n        ...\n    }\n    ...\n}\n<\/code><\/pre>\n<p>This will insert the backing pages at the address specified by <code>reg-&gt;start_pfn<\/code>, which is also the address of the memory region in the user space (see <a href=\"#mapping\">Mapping pages to user space<\/a>).<\/p>\n<h3 id=\"memory-alias\"><a class=\"heading-link\" href=\"#memory-alias\">Memory alias<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h3>\n<p>The <code>KBASE_IOCTL_MEM_ALIAS<\/code> is an interesting <code>ioctl<\/code> that allows multiple memory regions to share the same underlying 
backing pages. It is implemented in <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_linux.c#1710\">kbase_mem_alias<\/a><\/code>. It accepts a <code>stride<\/code> parameter, as well as an array of <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/common\/include\/uapi\/gpu\/arm\/midgard\/mali_base_kernel.h#206\">base_mem_aliasing_info<\/a><\/code> to specify the memory regions that back the alias region:<\/p>\n<pre><code> union kbase_ioctl_mem_alias alias = {0};\n  alias.in.flags = BASE_MEM_PROT_CPU_RD | BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_CPU_WR | BASE_MEM_PROT_GPU_WR;\n  alias.in.stride = 4;\n  alias.in.nents = 2;\n  struct base_mem_aliasing_info ai[2];\n  ai[0].handle.basep.handle = region1;\n  ai[1].handle.basep.handle = region2;\n  ai[0].length = 0x3;\n  ai[1].length = 0x3;\n  ai[0].offset = 0;\n  ai[1].offset = 0;\n  alias.in.aliasing_info = (uint64_t)(&amp;(ai[0]));\n  ioctl(mali_fd, KBASE_IOCTL_MEM_ALIAS, &amp;alias);\n<\/code><\/pre>\n<p>In the above, an alias region backed by <code>region1<\/code> and <code>region2<\/code> (both are regions that are already mapped to the GPU) is created by passing the addresses of these regions as <code>base_mem_aliasing_info::handle::basep::handle<\/code>. The <code>stride<\/code> parameter indicates the distance, in pages, between the starts of consecutive aliased regions within the alias region, and <code>nents<\/code> is the number of backing regions. 
The resulting region that is created is of size <code>stride * nents<\/code> pages:<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" src=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-1.png?resize=585%2C435\" alt=\"\" width=\"585\" height=\"435\" class=\"aligncenter size-full wp-image-66165 width-fit\" srcset=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-1.png?w=585 585w, https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-1.png?w=300 300w\" sizes=\"auto, (max-width: 585px) 100vw, 585px\" \/><\/p>\n<p>The orange region indicates the entire alias region, which is of <code>2 * 4 = 8<\/code> pages. Only six pages are actually mapped and are backed by the pages of <code>region1<\/code> and <code>region2<\/code> respectively. If the starting address of the alias region is <code>alias_start<\/code>, then the addresses between <code>alias_start<\/code> and <code>alias_start + 0x3000<\/code> (three pages) are aliased with <code>region1<\/code>, while <code>region2<\/code> is aliased with the addresses between <code>alias_start + stride * 0x1000<\/code> and <code>alias_start + (stride + 3) * 0x1000<\/code>. This leaves some gaps in the alias region unmapped. 
This can be seen from the handling of a <code>KBASE_MEM_TYPE_ALIAS<\/code> memory region in <code>kbase_gpu_mmap<\/code>:<\/p>\n<pre><code>    if (reg-&gt;gpu_alloc-&gt;type == KBASE_MEM_TYPE_ALIAS) {\n        u64 const stride = alloc-&gt;imported.alias.stride;\n\n        KBASE_DEBUG_ASSERT(alloc-&gt;imported.alias.aliased);\n        for (i = 0; i &lt; alloc-&gt;imported.alias.nents; i++) {\n            if (alloc-&gt;imported.alias.aliased[i].alloc) {\n                err = kbase_mmu_insert_pages(kctx-&gt;kbdev,\n                        &amp;kctx-&gt;mmu,\n                        reg-&gt;start_pfn + (i * stride),                  \/\/&lt;------ each region maps at reg-&gt;start_pfn + (i * stride)\n                        alloc-&gt;imported.alias.aliased[i].alloc-&gt;pages + alloc-&gt;imported.alias.aliased[i].offset,\n                        alloc-&gt;imported.alias.aliased[i].length,\n                        reg-&gt;flags &amp; gwt_mask,\n                        kctx-&gt;as_nr,\n                        group_id);\n                ...\n       }\n       ...\n    }\n<\/code><\/pre>\n<p>From the above code, we can see that page table entries are inserted at <code>reg-&gt;start_pfn + (i * stride)<\/code>, where <code>reg-&gt;start_pfn<\/code> is the starting address of the alias region.<\/p>\n<h2 ><a class=\"heading-link\" href=\"#\"><strong><span id=\"vulnerability\">The vulnerability<\/span><\/strong><span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>As explained in the previous section, the size of an alias region is <code>stride * nents<\/code>, which can be seen from <code>kbase_mem_alias<\/code>:<\/p>\n<pre><code>u64 kbase_mem_alias(struct kbase_context *kctx, u64 *flags, u64 stride,\n            u64 nents, struct base_mem_aliasing_info *ai,\n            u64 *num_pages)\n{\n    ...\n    if ((nents * stride) &gt; (U64_MAX \/ PAGE_SIZE))\n        \/* 64-bit address range is the max *\/\n        goto bad_size;\n\n    
\/* calculate the number of pages this alias will cover *\/\n    *num_pages = nents * stride;         \/\/&lt;---- size of region\n<\/code><\/pre>\n<p>Although there is a check to make sure <code>nents * stride<\/code> is within a limit, there is no integer overflow check, which means a large <code>stride<\/code> can be used to overflow <code>nents * stride<\/code>. As a result, the alias region may end up smaller than its backing regions. Let&#8217;s see what this means in practice. First allocate and map three three-page regions (<code>region1<\/code>, <code>region2<\/code> and <code>region3<\/code>) to the GPU and denote their start addresses as <code>region1_start<\/code>, <code>region2_start<\/code> and <code>region3_start<\/code>. Then create and map an alias region with <code>stride = 2 ** 63 + 1<\/code> and <code>nents = 2<\/code>. Because of the integer overflow, the size of the alias region becomes <code>2<\/code> (pages). In particular, the starting address of the alias region, <code>alias_start<\/code>, will be <code>region3_start - 0x2000<\/code>, where <code>0x2000<\/code> is the size of the alias region, so the virtual addresses of the alias region and <code>region3<\/code> are contiguous. 
However, when the alias region is mapped to the GPU, <code>kbase_gpu_mmap<\/code> will insert three pages (the size of the backing region <code>region1<\/code>) at <code>alias_start<\/code>:<\/p>\n<pre><code>    if (reg-&gt;gpu_alloc-&gt;type == KBASE_MEM_TYPE_ALIAS) {\n        u64 const stride = alloc-&gt;imported.alias.stride;\n\n        KBASE_DEBUG_ASSERT(alloc-&gt;imported.alias.aliased);\n        for (i = 0; i &lt; alloc-&gt;imported.alias.nents; i++) {\n            if (alloc-&gt;imported.alias.aliased[i].alloc) {\n                err = kbase_mmu_insert_pages(kctx-&gt;kbdev,\n                        &amp;kctx-&gt;mmu,\n                        reg-&gt;start_pfn + (i * stride),           \/\/&lt;------- insert pages at reg-&gt;start_pfn, which is alias_start\n                        alloc-&gt;imported.alias.aliased[i].alloc-&gt;pages + alloc-&gt;imported.alias.aliased[i].offset,\n                        alloc-&gt;imported.alias.aliased[i].length, \/\/&lt;------- length is the length of the aliased region (region1), which is 3\n                        reg-&gt;flags &amp; gwt_mask,\n                        kctx-&gt;as_nr,\n                        group_id);\n                ...\n       }\n       ...\n    }\n<\/code><\/pre>\n<p>This, in particular, means that the address <code>region3_start = alias_start + 0x2000<\/code> gets remapped and is now backed by a page in <code>region1<\/code>:<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" src=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-2.png?resize=758%2C323\" alt=\"\" width=\"758\" height=\"323\" class=\"aligncenter size-full wp-image-66166 width-fit\" srcset=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-2.png?w=758 758w, https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-2.png?w=300 300w\" sizes=\"auto, (max-width: 758px) 100vw, 758px\" \/><\/p>\n<p>The red rectangle on the right-hand side of the figure indicates a page that is backing both 
<code>region1_start + 0x2000<\/code> and <code>region3_start<\/code> after remapping took place. This is interesting because the backing page marked in red is &#8220;owned&#8221; by <code>region1<\/code> and the alias region jointly, in the sense that if both regions are unmapped, then the page will get freed and recycled to the memory pool. So if I now unmap both regions, the GPU address corresponding to <code>region3_start<\/code> will be backed by a free&#8217;d page, meaning that the GPU can still access this free&#8217;d page by accessing the address at <code>region3_start<\/code>.<\/p>\n<p>While this allows free&#8217;d pages to be accessed, it is not entirely clear how this may lead to security problems at this point. Recall that backing pages for a memory region are allocated from the <code>mem_pools<\/code> of the <code>kbase_context<\/code> that is associated with the device file. This means that when a page is free&#8217;d, it&#8217;ll go back to the <code>mem_pools<\/code> and only be used again as a backing page for another region in the same <code>kbase_context<\/code>, which is only used by the calling process. So it is not immediately obvious what an attacker can gain from this vulnerability.<\/p>\n<h2 id=\"breaking-out-of-the-context\"><a class=\"heading-link\" href=\"#breaking-out-of-the-context\">Breaking out of the context<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>To understand how this bug can be exploited, we need to take a look at how <code>kbase_mem_pool<\/code> works in more detail. To begin with, let&#8217;s see how <code>kbase_mem_pool<\/code> allocates and frees pages. 
The function <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_pool.c#529\">kbase_mem_pool_alloc_pages<\/a><\/code> is used to allocate pages from a <code>kbase_mem_pool<\/code>:<\/p>\n<pre><code>int kbase_mem_pool_alloc_pages(struct kbase_mem_pool *pool, size_t nr_4k_pages,\n        struct tagged_addr *pages, bool partial_allowed)\n{\n    ...\n    \/* Get pages from this pool *\/\n    while (nr_from_pool--) {\n        p = kbase_mem_pool_remove_locked(pool);     \/\/&lt;------- 1.\n        ...\n    }\n    ...\n    if (i != nr_4k_pages &amp;&amp; pool-&gt;next_pool) {\n        \/* Allocate via next pool *\/\n        err = kbase_mem_pool_alloc_pages(pool-&gt;next_pool,      \/\/&lt;----- 2.\n                nr_4k_pages - i, pages + i, partial_allowed);\n        ...\n    } else {\n        \/* Get any remaining pages from kernel *\/\n        while (i != nr_4k_pages) {\n            p = kbase_mem_alloc_page(pool);     \/\/&lt;------- 3.\n            ...\n        }\n        ...\n    }\n    ...\n}\n<\/code><\/pre>\n<p>As the comments suggest, the allocation is done in tiers. First, pages are allocated from the current <code>kbase_mem_pool<\/code> using <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_pool.c#96\">kbase_mem_pool_remove_locked<\/a><\/code> (1 in the above). If there is not enough capacity in the current <code>kbase_mem_pool<\/code> to meet the request, then <code>pool-&gt;next_pool<\/code> is used to allocate the pages (2 in the above). 
If even <code>pool-&gt;next_pool<\/code> does not have the capacity, then <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_pool.c#153\">kbase_mem_alloc_page<\/a><\/code> is used to allocate pages directly from the kernel via the buddy allocator (the kernel&#8217;s page allocator).<\/p>\n<p>When freeing a page, the opposite happens: <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_pool.c#738\">kbase_mem_pool_free_pages<\/a><\/code> first tries to return the pages to the current <code>kbase_mem_pool<\/code> (1 in the below). If the current pool is full, it&#8217;ll try to return the remaining pages to <code>pool-&gt;next_pool<\/code> (2 in the below). If the next pool is also full, the remaining pages are returned to the kernel by freeing them via the buddy allocator (3 in the below).<\/p>\n<pre><code>void kbase_mem_pool_free_pages(struct kbase_mem_pool *pool, size_t nr_pages,\n        struct tagged_addr *pages, bool dirty, bool reclaimed)\n{\n    struct kbase_mem_pool *next_pool = pool-&gt;next_pool;\n    ...\n    if (!reclaimed) {\n        \/* Add to this pool *\/\n        ...\n        kbase_mem_pool_add_array(pool, nr_to_pool, pages, false, dirty);     \/\/&lt;------- 1.\n        ...\n        if (i != nr_pages &amp;&amp; next_pool) {\n            \/* Spill to next pool (may overspill) *\/\n            ...\n            kbase_mem_pool_add_array(next_pool, nr_to_pool,     \/\/&lt;------ 2.\n                    pages + i, true, dirty);\n            ...\n        }\n    }\n    \/* Free any remaining pages to kernel *\/\n    for (; i &lt; nr_pages; i++) {\n        ...\n        kbase_mem_pool_free_page(pool, p);    \/\/&lt;------ 3.\n        ...\n    }\n    ...\n}\n<\/code><\/pre>\n<p>So it seems that, by freeing a large number of pages to fill out both the <code>kbase_mem_pool<\/code> and <code>next_pool<\/code>, it 
is possible to return a page allocated from the per context <code>kbase_mem_pool<\/code> back to kernel memory. Then by using the bug and some heap feng shui in the buddy allocator, I should be able to access kernel memory via the GPU. As we shall see, to exploit the bug, I only need to return a page to <code>next_pool<\/code> instead of the kernel. So in what follows, I&#8217;ll aim to find a reliable way to return a free&#8217;d page to <code>next_pool<\/code>.<\/p>\n<p>First, let&#8217;s find out what <code>next_pool<\/code> is for the per context <code>kbase_mem_pool<\/code>. The <code>mem_pools<\/code> in a <code>kbase_context<\/code> is initialized in <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/context\/mali_kbase_context.c#318\">kbase_context_mem_pool_group_init<\/a><\/code>:<\/p>\n<pre><code>int kbase_context_mem_pool_group_init(struct kbase_context *kctx)\n{\n    return kbase_mem_pool_group_init(&amp;kctx-&gt;mem_pools,\n        kctx-&gt;kbdev,\n        &amp;kctx-&gt;kbdev-&gt;mem_pool_defaults,\n        &amp;kctx-&gt;kbdev-&gt;mem_pools);         \/\/&lt;----- becomes next_pool\n}\n<\/code><\/pre>\n<p>The last argument, <code>kctx-&gt;kbdev-&gt;mem_pools<\/code>, passed to <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/tags\/android-12.0.0_r0.42\/mali_kbase\/mali_kbase_mem_pool_group.c#46\">kbase_mem_pool_group_init<\/a><\/code> becomes the <code>next_pool<\/code> of <code>kctx-&gt;mem_pools<\/code>. This is the global memory pool of the <code>kbase_device<\/code>, which is also used for allocating the GPU page table global directories in the section <a href=\"#gpu\">Mapping pages to the GPU<\/a>. This means that, by freeing the doubly mapped page created by the bug to <code>next_pool<\/code>, it is possible to have the page reused as a PGD. 
Then by modifying it from the GPU, I can install arbitrary backing pages in the PGD, which would allow arbitrary memory access. This is the path that I&#8217;m going to take to exploit the bug. In order to free a page into <code>next_pool<\/code>, I first need to know the capacity of the memory pools.<\/p>\n<p>On a Pixel 6, the capacity of the memory pool can be found using the <code>debugfs<\/code> file <code>\/sys\/module\/mali_kbase\/drivers\/platform\\:mali\/1c500000.mali\/mempool\/max_size<\/code> (the file name may differ slightly between devices). It can be read from a rooted phone:<\/p>\n<pre><code>oriole:\/ # cat \/sys\/module\/mali_kbase\/drivers\/platform\\:mali\/1c500000.mali\/mempool\/max_size\n16384\n<\/code><\/pre>\n<p>This is the capacity of the memory pool configured for the device. Note that a <code>kbase_mem_pool<\/code> is empty when it is first created. This means that when pages are first allocated from the memory pool, they are allocated from <code>next_pool<\/code>, but when those pages are freed, they are returned to the <code>kbase_mem_pool<\/code>, which is empty.<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" src=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-3.png?resize=864%2C482\" alt=\"\" width=\"864\" height=\"482\" class=\"aligncenter size-full wp-image-66167 width-fit\" srcset=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-3.png?w=864 864w, https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-3.png?w=300 300w, https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-3.png?w=768 768w\" sizes=\"auto, (max-width: 864px) 100vw, 864px\" \/><\/p>\n<p>In the above, the gray boxes indicate the full capacities of the memory pools and green regions indicate available pages in the pool. 
A memory pool is full when the available pages reach its capacity (no gray region left).<\/p>\n<p>While the per context memory pool is used by my process only and I can control its size precisely, the same cannot be said of the device memory pool (<code>next_pool<\/code>). At any time, I must assume there is an unknown number of pages available in the memory pool. It is, however, not difficult to drain the device memory pool and to manipulate its layout.<\/p>\n<ol>\n<li>From an empty per context memory pool, such as when it is first created, allocate a page that I want to place in the device memory pool. As the context memory pool is empty, this page will either be allocated from the device memory pool (<code>next_pool<\/code>) or from kernel (if the device memory pool is full). After this, the context memory pool will still be empty.<\/li>\n<li>Allocate <code>16384<\/code> (capacity of the memory pool) pages from the context memory pool. As the context memory pool is empty, these pages will be allocated from the device memory pool. As the device memory pool has at most 16384 free pages, it will become empty after the allocation.<\/li>\n<li>At this point, both the context memory pool and the device memory pool are empty. If I now free the 16384 pages allocated in step two, the context memory pool will be full and none of the pages is returned to the device memory pool. 
So after this, the context memory pool is full and the device memory pool is empty.<\/li>\n<li>Free the page created in step one and it&#8217;ll be returned to an empty device memory pool.<\/li>\n<\/ol>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" src=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-4.png?resize=960%2C540\" alt=\"\" width=\"960\" height=\"540\" class=\"aligncenter size-full wp-image-66168 width-fit\" srcset=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-4.png?w=960 960w, https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-4.png?w=300 300w, https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-4.png?w=768 768w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/p>\n<p>In the figure, green regions indicate free pages in the memory pool, while red regions indicate pages that are taken by the allocation.<\/p>\n<p>After these steps, the device memory pool will only contain the page that I just freed. In particular, I can use the bug to hold on to a page, and then follow these steps so that it becomes the only page in the otherwise empty device memory pool. This means that when a GPU PGD is next allocated, the page that I freed and am still able to access will be used for the allocation and I&#8217;ll be able to write to this PGD.<\/p>\n<p>To cause an allocation of a PGD, recall that the entries in the GPU page table are allocated lazily and when mapping pages to the GPU, the driver will allocate addresses in a continuous and descending manner (See <a href=\"#mapping\">Mapping pages to user space<\/a>). 
As each PGD contains 512 entries, by mapping 512 pages, I am guaranteed to reach addresses that require the allocation of a new PGD.<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" src=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-5.png?resize=591%2C495\" alt=\"\" width=\"591\" height=\"495\" class=\"aligncenter size-full wp-image-66169 width-fit\" srcset=\"https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-5.png?w=591 591w, https:\/\/github.blog\/wp-content\/uploads\/2022\/07\/blog-5.png?w=300 300w\" sizes=\"auto, (max-width: 591px) 100vw, 591px\" \/><\/p>\n<p>In the figure, the gray boxes indicate PGDs in different levels of the page table with arrows showing the indices at each level for the address of the same color. The indices in the level 0 and level 1 PGDs are the same for all the addresses shown (computed as <code>((address &gt;&gt; 12) &gt;&gt; ((3 - level) * 9)) &amp; 0x1FF<\/code>), but addresses separated by 512 pages, such as the orange address and the black address, are guaranteed to be in a different level 3 PGD. So by allocating 512 pages, a new level 3 PGD is needed. Moreover, as the context memory pool is now full, these 512 newly allocated pages are taken from the context memory pool, without affecting the device memory pool. (These 512 pages can also be allocated in advance and only mapped to the GPU at this stage, which will still create a new PGD, so they do not actually need to be allocated at this stage.) This means that the new PGD will be allocated using the page that I still have access to because of the bug. I can then rewrite the entries in this PGD and map GPU addresses to arbitrary physical pages. 
This allows me to read and write arbitrary kernel memory.<\/p>\n<p>To recap, the exploit involves the following steps:<\/p>\n<ol>\n<li>Allocate and map three three-page memory regions (<code>region1<\/code>, <code>region2<\/code> and <code>region3<\/code>), and an alias region with <code>stride<\/code> <code>2 ** 63 + 1<\/code> and <code>nents<\/code> 2 backed by <code>region1<\/code> and <code>region2<\/code>.<\/li>\n<li>Allocate 16384 pages to drain the device memory pool.<\/li>\n<li>Free 16384 pages to fill up the context memory pool.<\/li>\n<li>Then unmap both <code>region1<\/code> and the alias region. This will put three pages in the device memory pool as the context memory pool is full. As explained in the section <a href=\"#vulnerability\">&#8220;The vulnerability&#8221;<\/a> one of these pages is still used as the backing page in <code>region3<\/code> and can be accessed from the GPU.<\/li>\n<li>Allocate and map <code>512 * 3 = 1536<\/code> pages to ensure three new PGDs are created. (In fact only two new PGDs are sufficient, which is what is used in the actual exploit). One of these PGDs will use the page that I can access via GPU addresses in <code>region3<\/code>.<\/li>\n<\/ol>\n<h2 id=\"writing-to-gpu-memory\"><a class=\"heading-link\" href=\"#writing-to-gpu-memory\">Writing to GPU memory<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>The question now is: how do I access memory using the GPU? While this is certainly achievable by compiling a shader program and running it on the GPU, it seems rather overkill for the task and it would be good if I could use the kernel driver to do it directly.<\/p>\n<p>The <code>ioctl<\/code> for running GPU instructions is the <code>KBASE_IOCTL_JOB_SUBMIT<\/code>. I can use this <code>ioctl<\/code> to submit a &#8220;job chain&#8221; to the GPU for processing. 
Each job chain is basically a list of jobs, which are opaque data structures that contain job headers, followed by payloads that contain the specific instructions. Although ARM never releases any details about the format of these data structures, nor of the GPU instruction sets, there is an extensive amount of research on reverse-engineering the Mali GPU \u2014 mostly for creating an open source Mali user space driver, <a href=\"https:\/\/gitlab.freedesktop.org\/panfrost\">Panfrost<\/a>. In particular, the instruction sets for the Bifrost and Midgard architectures were reversed by <a href=\"https:\/\/github.com\/cwabbott0\/mali-isa-docs\">Connor Abbott<\/a> and the Valhall instruction set was reversed by <a href=\"https:\/\/www.collabora.com\/news-and-blog\/news-and-events\/reverse-engineering-the-mali-g78.html\">Alyssa Rosenzweig<\/a>. Their work, as well as the Panfrost driver, was indispensable to this current work.<\/p>\n<p>Within the Panfrost driver, the <code><a href=\"https:\/\/gitlab.freedesktop.org\/panfrost\/pandecode-standalone\">pandecode-standalone<\/a><\/code> project is a tool that can be used to decode the jobs that have been submitted to the GPU. In particular, the project contains the data format for GPU jobs that can be used with the <code>KBASE_IOCTL_JOB_SUBMIT<\/code>. Each job contains a header and a payload, and the type of the job is specified in the header. 
The structure of the payload differs depending on the type of the job, and the following types of jobs are available:<\/p>\n<pre><code>enum mali_job_type {\n        MALI_JOB_TYPE_NOT_STARTED            =      0,\n        MALI_JOB_TYPE_NULL                   =      1,\n        MALI_JOB_TYPE_WRITE_VALUE            =      2,\n        MALI_JOB_TYPE_CACHE_FLUSH            =      3,\n        MALI_JOB_TYPE_COMPUTE                =      4,\n        MALI_JOB_TYPE_VERTEX                 =      5,\n        MALI_JOB_TYPE_GEOMETRY               =      6,\n        MALI_JOB_TYPE_TILER                  =      7,\n        MALI_JOB_TYPE_FUSED                  =      8,\n        MALI_JOB_TYPE_FRAGMENT               =      9,\n};\n<\/code><\/pre>\n<p>Many of these jobs are related to specific types of shaders, but the <code>MALI_JOB_TYPE_WRITE_VALUE<\/code> provides a simple way to write to a GPU address without the need to write any GPU assembly. The payload of this job type has the following structure:<\/p>\n<pre><code>struct MALI_WRITE_VALUE_JOB_PAYLOAD {\n   uint64_t                             address;\n   enum mali_write_value_type           type;\n   uint64_t                             immediate_value;\n};\n<\/code><\/pre>\n<p>The fields are fairly self explanatory: The <code>address<\/code> field is the GPU address to write to, <code>immediate_value<\/code> is the value to write, and <code>type<\/code> specifies the size of the write:<\/p>\n<pre><code>enum mali_write_value_type {\n        MALI_WRITE_VALUE_TYPE_CYCLE_COUNTER  =      1,\n        MALI_WRITE_VALUE_TYPE_SYSTEM_TIMESTAMP =    2,\n        MALI_WRITE_VALUE_TYPE_ZERO           =      3,\n        MALI_WRITE_VALUE_TYPE_IMMEDIATE_8    =      4,\n        MALI_WRITE_VALUE_TYPE_IMMEDIATE_16   =      5,\n        MALI_WRITE_VALUE_TYPE_IMMEDIATE_32   =      6,\n        MALI_WRITE_VALUE_TYPE_IMMEDIATE_64   =      7,\n};\n<\/code><\/pre>\n<p>with <code>MALI_WRITE_VALUE_TYPE_IMMEDIATE_8<\/code> writing 8 bits to the address, for 
example. Note that the memory layout and padding of the structure used by the GPU are not always the same as its representation in <code>C<\/code>, and a simple packing\/unpacking is needed to convert the job struct in <code>C<\/code> to data that can be consumed by the GPU. This packing\/unpacking code is also available in <code>pandecode-standalone<\/code>.<\/p>\n<h2 id=\"arbitrary-kernel-code-execution\"><a class=\"heading-link\" href=\"#arbitrary-kernel-code-execution\">Arbitrary kernel code execution<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>At this point, it is fairly easy to achieve arbitrary kernel code execution. As page tables specify the backing page using its page frame, which is a simple shift of the physical address, I can simply write a page frame to the GPU PGD that I control to gain write access to any physical address. On non-Samsung devices, physical addresses in the kernel image are fixed and depend only on the firmware, so having an arbitrary physical address write primitive allows me to overwrite any kernel function with my own shellcode. I can then use this to disable SELinux and overwrite the credentials of my own process to become root. On Samsung devices, I can follow the steps from my <a href=\"https:\/\/github.blog\/2022-06-16-the-android-kernel-mitigations-obstacle-race\/\">previous post<\/a> to disable SELinux and then hijack a <code>kworker<\/code> to gain root.<\/p>\n<p>The exploit for Pixel 6 can be found <a href=\"https:\/\/github.com\/github\/securitylab\/tree\/main\/SecurityExploits\/Android\/Mali\/CVE_2022_20186\">here<\/a> with some setup notes.<\/p>\n<h2 id=\"conclusions\"><a class=\"heading-link\" href=\"#conclusions\">Conclusions<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>In this post I exploited CVE-2022-20186 in the Mali GPU driver. 
What is interesting about this bug is that the exploit abuses the memory management logic in the GPU driver to achieve arbitrary physical memory access, and because of this, there is no control flow hijacking involved in exploiting this bug, which renders mitigations such as <a href=\"https:\/\/source.android.com\/devices\/tech\/debug\/kcfi\">kernel control flow integrity<\/a> ineffective.<\/p>\n<p>What is perhaps more important and unusual is that this bug does not involve the usual type of memory corruption that is associated with memory safety. During the exploit, objects are corrupted in the following places:<\/p>\n<ol>\n<li>When an existing page table entry is overwritten with a different backing page by mapping the alias region.<\/li>\n<li>When the above backing page is freed and then reused as a page table.<\/li>\n<\/ol>\n<p>As the first point above involves only overwriting the page table entry using existing kernel functions and there is no invalid memory access or type confusion, it could happen even if the code were written in a memory safe language. Similarly, while point two can be considered a use-after-free, there is no unsafe dereferencing of pointer addresses involved; rather, a stale physical address is used, and access to the address is done via the GPU. As such, these problems could very well happen even if the code were written in a memory safe language or when mitigations targeting memory safety (such as <a href=\"https:\/\/community.arm.com\/arm-community-blogs\/b\/architectures-and-processors-blog\/posts\/enhancing-memory-safety\">Memory Tagging Extension (MTE)<\/a>) are enabled. 
When dealing with code that is responsible for accessing physical memory directly, the margin of error is very small and strong attack primitives can often be gained without exploiting memory corruptions, as both this and previous vulnerabilities in GPU drivers have shown.<\/p>\n<h2 id=\"patching-time-and-patch-gapping\"><a class=\"heading-link\" href=\"#patching-time-and-patch-gapping\">Patching time and patch gapping<span class=\"heading-hash pl-2 text-italic text-bold\" aria-hidden=\"true\"><\/span><\/a><\/h2>\n<p>The bug was reported to the Android security team on January 15, 2022, and was fixed in the June update for Pixel, which was released on June 6, 2022. The time it took is similar to that for fixing issues in the Qualcomm GPU (see, for example, the section &#8220;Disclosure practices, patching time and patch gapping&#8221; in my <a href=\"https:\/\/github.blog\/2022-06-16-the-android-kernel-mitigations-obstacle-race\/\">previous article<\/a>), though it is past the 90-day disclosure standard set by Google&#8217;s Project Zero team. The cause of such delays is not clear to me, although this kind of disclosure time frame is not uncommon for Android kernel drivers.<\/p>\n<p>Just like the Qualcomm GPU bug in my previous post, the patch for this bug was publicly visible before an official release. I first noticed the <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/86e5f385e9d8d83c040c7104df0fc7046c713323%5E%21\/#F0\">patch<\/a><\/code> appearing in the <code><a href=\"https:\/\/android.googlesource.com\/kernel\/google-modules\/gpu\/+\/refs\/heads\/android-gs-raviole-5.10-s-qpr3-beta-3\">android-gs-raviole-5.10-s-qpr3-beta-3<\/a><\/code> branch on May 24, 2022. Unfortunately, I hadn&#8217;t checked this branch before, so I cannot verify when the patch was first made visible. (Although the commit date was March 18, 2022, that is unlikely to be the date when the patch was first publicly visible.) 
At the very least, this leaves a two-week gap between the patch being visible and the official release. Once again, this highlights the complexity of the branching system in the Android kernel and the potential for exploiting one-day vulnerabilities via patch gapping.<\/p>\n<p><em><strong>August 1, 2022 Update: <\/strong> Shortly after the initial publication of this blog post, we were informed by Vitaly Nikolenko and Jann Horn that a different CVE ID, <a href=\"https:\/\/github.com\/advisories\/GHSA-r85c-7543-8wq6\">CVE-2022-28348<\/a>, may have been used by Arm in their <a href=\"https:\/\/developer.arm.com\/Arm%20Security%20Center\/Mali%20GPU%20Driver%20Vulnerabilities\">vulnerability list<\/a>. Judging from the affected software version, release date, and patch analysis, it seems likely that <a href=\"https:\/\/github.com\/advisories\/GHSA-r85c-7543-8wq6\">CVE-2022-28348<\/a> and <a href=\"https:\/\/github.com\/advisories\/GHSA-f396-p774-5c2p\">CVE-2022-20186<\/a> do indeed refer to the same bug. It is unclear to me why a separate CVE ID was assigned. Judging from the date on Arm&#8217;s website, they may well have released a public patch for the bug in April 2022, while Pixel devices were only patched in June (I have tested that the May patch level of Pixel 6 was still vulnerable to the bug). However, since the only CVE ID that the vendor communicated to me is <a href=\"https:\/\/github.com\/advisories\/GHSA-f396-p774-5c2p\">CVE-2022-20186<\/a>, this is the only CVE ID that I&#8217;m certain is associated with this bug, and as such, I decided to keep using this CVE ID throughout the post and in our <a href=\"https:\/\/securitylab.github.com\/advisories\/GHSL-2022-053_Arm_Mali\/\">advisory<\/a>.<\/em><\/p>\n<hr>\n","protected":false},"excerpt":{"rendered":"<p>In this post I\u2019ll exploit CVE-2022-20186, a vulnerability in the Arm Mali GPU kernel driver and use it to gain arbitrary kernel memory access from an untrusted app on a Pixel 6. 
This then allows me to gain root and disable SELinux. This vulnerability highlights the strong primitives that an attacker may gain by exploiting errors in the memory management code of GPU drivers.<\/p>\n","protected":false}}
4"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}