Add 6 new LLM systems papers from 1/29 #52

Merged

AmberLJC merged 1 commit into main from claude/issue-51-20260130-0722 on Jan 30, 2026
Conversation

AmberLJC (Owner) commented Jan 30, 2026

Added 6 new papers from the 1/29 submission:

Training papers:

  • Kareus: Energy optimization in large model training
  • AXLearn: Modular training on heterogeneous infrastructure (MLSys' 26)

Serving papers:

  • SuperInfer: SLO-aware scheduling for LLM inference (MLSys' 26)
  • Scaling Up Small LMs: Efficient serving and deployment (MLSys' 26)
  • BestServe: Serving strategies with optimal goodput
  • OptiKIT: Automated enterprise LLM optimization (MLSys' 26)

All entries follow the repository's formatting guidelines.

Resolves #51

Generated with Claude Code


Note: Low Risk

Documentation-only changes (new links/entries in README.md) with no code or runtime impact; the main risk is broken or incorrect citations or formatting.

Overview
Adds six new paper entries to README.md to expand the curated LLM systems list.

Two papers are added under Training / Pre-training (Kareus, AXLearn) and four under Serving / LLM serving (SuperInfer, Scaling Up Efficient Small Language Models Serving, BestServe, OptiKIT).

Written by Cursor Bugbot for commit 19fd6d3.


Co-authored-by: Jiachen LIU <AmberLJC@users.noreply.github.com>
@AmberLJC merged commit 921b3f2 into main on Jan 30, 2026
2 checks passed
@AmberLJC deleted the claude/issue-51-20260130-0722 branch on January 30, 2026 at 07:26
claude bot commented Jan 30, 2026

Pull Request Review

Thank you for the contribution! I've reviewed PR #52 and overall the changes look good. Here's my detailed feedback:

Strengths

  1. Excellent Formatting Compliance: All entries follow the repository's formatting guidelines from CLAUDE.md perfectly:

    • Correct arXiv URL format (no www. prefix)
    • Proper structure: [Title](URL): Description | Venue
    • Consistent spacing and punctuation
    • Venue notation follows convention (e.g., MLSys' 26)
  2. Appropriate Section Placement: Papers are correctly categorized:

    • Training papers → Pre-training section
    • Serving papers → LLM serving section
  3. Good Descriptions: Brief, informative descriptions that highlight each paper's contribution

  4. Chronological Ordering: New entries are appended appropriately, maintaining the repository's structure
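As an illustration of the entry format the review praises above, a minimal checker could validate lines against the `[Title](URL): Description | Venue` pattern and the no-`www.`-prefix convention. This is a hypothetical sketch, not a tool in the repository, and the arXiv ID shown is a placeholder (the real links are not reproduced on this page):

```python
import re

# Hypothetical checker for the entry format described above:
#   [Title](URL): Description | Venue   (the venue part is optional)
ENTRY_RE = re.compile(
    r"^\[(?P<title>[^\]]+)\]"       # [Title]
    r"\((?P<url>https?://[^)]+)\)"  # (URL)
    r": (?P<desc>[^|]+?)"           # : Description (no pipes allowed)
    r"(?: \| (?P<venue>.+))?$"      # optional " | Venue" suffix
)

def check_entry(line):
    """Return the parsed fields, or None if the line breaks the format."""
    m = ENTRY_RE.match(line.strip())
    if m is None:
        return None
    fields = m.groupdict()
    # Repo convention noted in the review: arXiv links must not use "www."
    if "//www.arxiv.org" in fields["url"]:
        return None
    return fields

entry = ("[SuperInfer](https://arxiv.org/abs/0000.00000): "
         "SLO-aware scheduling for LLM inference | MLSys' 26")
print(check_entry(entry)["venue"])  # MLSys' 26
```

A check like this could run in CI to catch formatting drift before merge.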

🔍 Minor Observations

  1. Title Length Consistency (Line 245):

    • Scaling Up Efficient Small Language Models Serving has a somewhat verbose title. While this appears to be the actual paper title, consider verifying if the official title matches exactly.
  2. Description Specificity (Line 245):

    • "Serving and Deployment for Semantic Job Search" - This description is quite application-specific. Consider whether a more general description of the serving techniques would be more valuable to readers (e.g., highlighting the efficiency methods rather than the specific use case).
  3. Venue Consistency:

    • Lines 85, 86, 244, 245, 247: MLSys' 26 papers are mixed with arXiv-only papers. This is fine, but once the official MLSys proceedings are published, consider updating the links to use the official conference URLs for consistency with other venues (e.g., how SOSP and OSDI papers link to conference sites).

📋 Code Quality Assessment

Since this is a documentation-only change:

  • No code bugs: N/A
  • No security concerns: All links are to arxiv.org, a trusted domain
  • No performance impact: Documentation-only
  • No test coverage needed: N/A for documentation

🔗 URL Verification

I recommend verifying that the newly added arXiv links are accessible.
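The link verification suggested above could be scripted. The sketch below (hypothetical helpers, with a placeholder arXiv ID since the real links are not reproduced here) extracts arXiv URLs from README-style entries and probes each with an HTTP HEAD request:

```python
import re
import urllib.request

def extract_arxiv_links(markdown_text):
    """Pull arXiv URLs out of README-style [Title](URL) entries."""
    return re.findall(r"\((https?://arxiv\.org/[^)]+)\)", markdown_text)

def is_reachable(url, timeout=10.0):
    """HEAD-request the URL; True on any non-error HTTP response."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except Exception:
        return False

# Placeholder entry; the real arXiv IDs are not shown on this PR page.
sample = ("[Kareus](https://arxiv.org/abs/0000.00000): "
          "Energy optimization in large model training")
for url in extract_arxiv_links(sample):
    print(url, "ok" if is_reachable(url) else "unreachable")
```

Pointing `sample` at the full README text would check every listed paper in one pass.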

Overall Assessment

Status: ✅ Approved (Post-merge review)

This is a high-quality contribution that adds valuable recent papers to the repository. The formatting is exemplary and follows all guidelines. The minor observations above are suggestions for potential future improvements but do not detract from the quality of this PR.

Great work maintaining the repository's high standards! 🎉


Review generated via Claude Code



Development

Successfully merging this pull request may close these issues.

1/29 New papers
