
Conversation

@hnyls2002
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @hnyls2002, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the speculative decoding mechanism, specifically the Eagle algorithm, by adding robust support for memory page sizes greater than one. The change lets the system manage memory more efficiently when dealing with larger page allocations, improving performance and resource utilization in scenarios where speculative decoding is employed, and removing previous limitations on paged memory usage.

Highlights

  • Paged Memory Support for Eagle Speculative Decoding: The pull request introduces support for page_size > 1 within the Eagle speculative decoding algorithm, enabling more flexible memory management.
  • Refined Token Freeing Logic: The logic for freeing token indices has been updated to correctly handle paged memory allocations when requests are finished and overlap is enabled.
  • Enhanced Token Allocation for Paged Memory: Token allocation within the Eagle speculative decoding process now differentiates between page_size = 1 and page_size > 1, utilizing specific paged memory management functions for the latter.
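The page-alignment arithmetic behind the highlights above can be sketched as follows. This is a minimal illustration, not SGLang's actual implementation; the function name pages_needed is hypothetical:

```python
def pages_needed(allocate_len: int, num_new_tokens: int, page_size: int) -> int:
    """How many whole new pages must be reserved so that a sequence currently
    holding allocate_len tokens can grow by num_new_tokens, given that KV-cache
    slots are handed out in pages of page_size slots."""
    # Pages already held (the last page may be partially filled).
    cur_pages = (allocate_len + page_size - 1) // page_size
    # Pages needed after the sequence grows.
    new_pages = (allocate_len + num_new_tokens + page_size - 1) // page_size
    return new_pages - cur_pages
```

With page_size = 1 this degenerates to exactly num_new_tokens, which is why a page_size = 1 path can keep a simpler per-token allocator while page_size > 1 needs the page-aware variant.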
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE, at no cost, with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for page_size > 1 in speculative decoding with overlap, which is a great enhancement. The changes correctly handle paged KV cache allocation and deallocation by aligning to page boundaries. I've included a couple of suggestions to improve performance by moving an import out of a loop and optimizing a tensor calculation to reduce GPU-CPU synchronization.

last_loc = get_last_loc(
    batch.req_to_token_pool.req_to_token,
    batch.req_pool_indices,
    self.allocate_lens,
)
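For context, get_last_loc looks up, for each request, the KV-cache slot of its last allocated token; the paged allocator uses this to decide whether new tokens can continue the request's current page or must start a new one. A rough sketch of the idea (a simplification for illustration, not SGLang's actual kernel):

```python
import torch

def get_last_loc_sketch(req_to_token: torch.Tensor,
                        req_pool_indices: torch.Tensor,
                        allocate_lens: torch.Tensor) -> torch.Tensor:
    # req_to_token maps (request slot, position) -> KV-cache slot.
    # Return the cache slot of each request's last token, or -1 if it has none.
    pos = torch.clamp(allocate_lens - 1, min=0)
    last = req_to_token[req_pool_indices, pos]
    return torch.where(allocate_lens > 0, last, torch.full_like(last, -1))
```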
Contributor

If self.allocate_lens is None (on the first call), I get a CUDA illegal memory access (IMA). The fix:

    if self.allocate_lens is None:
        self.allocate_lens = batch.seq_lens.clone()

Tests to validate: topk=1 with page_size=2, 4, 8, etc.

You can test it like:
SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --speculative-algorithm EAGLE3 --speculative-draft-model-path lmsys/sglang-EAGLE3-LLama3.1-Instruct-8B --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 --page-size 2 --enable-beta-spec --dtype float16

I am trying to implement topk > 1 and page_size > 1, based off this branch.
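The guard being suggested can be written out as a tiny self-contained sketch; SpecState here is a hypothetical stand-in for the worker object that owns allocate_lens:

```python
import torch

class SpecState:
    """Hypothetical stand-in for the draft-worker state under discussion."""

    def __init__(self):
        self.allocate_lens = None

    def ensure_allocate_lens(self, seq_lens: torch.Tensor) -> torch.Tensor:
        # Lazily fall back to the batch's seq_lens on the first call, so that
        # indexing req_to_token with allocate_lens never reads uninitialized
        # memory. clone() keeps later in-place updates from aliasing the batch.
        if self.allocate_lens is None:
            self.allocate_lens = seq_lens.clone()
        return self.allocate_lens
```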

Collaborator Author

The allocate_lens would not be None even on the first call (the first decode step after prefill). It is initialized here:

    allocate_lens=batch.seq_lens,

Or there are some corner cases I haven't considered; please provide more reproduction scripts. The code on this branch passes the unit test of test_eagle_infer_beta.py (with the page size set to 64).

@hnyls2002 hnyls2002 merged commit a93f10a into main Oct 18, 2025
69 of 74 checks passed
@hnyls2002 hnyls2002 deleted the lsyin/spec-overlap-page-size branch October 18, 2025 18:09