Quantifying prompt quality using information theory: entropy and mutual information analysis of 1,800 LLM generations
Updated Nov 20, 2025 - Jupyter Notebook
lawhead-extractor parses legal headlines, extracting parties, claim type, and outcome, using an LLM combined with pattern matching for accuracy.
A new package that takes user-provided text (such as a blog post title or a short article snippet) and generates a structured summary highlighting key advantages or claims. It uses an LLM to analyze the text.
A new package that processes user complaints or descriptions about logging systems, extracting structured insights such as common pain points, root causes, or improvement suggestions. It uses an LLM to perform the extraction.
A Python CLI tool that collects and analyzes Discourse forum discussions using Claude AI to identify common problems, categorize issues by severity, and provide natural language querying of forum insights.
A Python-based tool for comparing translated .docx documents against their original versions. It highlights differences, calculates similarity metrics, and generates detailed comparison reports, including suggested corrections.
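A simple way to compute a similarity metric between an original and a translated segment is a sequence-matching ratio; the sketch below uses Python's standard-library `difflib` as one illustrative choice, not necessarily the metric the package above implements. The sample sentences are hypothetical.

```python
# Hypothetical sketch: a 0..1 similarity ratio between two text segments,
# the kind of per-segment metric a document-comparison report might include.
from difflib import SequenceMatcher

def similarity(original: str, translated: str) -> float:
    """Return a similarity ratio in [0, 1] between two strings."""
    return SequenceMatcher(None, original, translated).ratio()

# Illustrative example: one changed word keeps the ratio close to 1.0
a = "The agreement shall remain in force for five years."
b = "The agreement will remain in force for five years."
print(round(similarity(a, b), 2))
```

In a full tool, this would run per paragraph after extracting text from the .docx files, with low-ratio segments flagged for a suggested correction.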
📊 Explore how Shannon entropy and mutual information can quantify prompt quality in generative AI systems across various temperature settings.
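The core quantity behind this kind of analysis is Shannon entropy, H(p) = -Σ p·log2(p), which measures how spread out a model's output distribution is; low-temperature sampling concentrates probability mass (low entropy), while high temperatures flatten it (high entropy). A minimal sketch, with illustrative probabilities rather than real model outputs:

```python
# Sketch: Shannon entropy of a next-token distribution, in bits.
# The probability vectors below are made up for illustration only.
import math

def shannon_entropy(probs):
    """H(p) = -sum(p * log2(p)); terms with p == 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Sharply peaked distribution, as at low temperature
low_temp = [0.90, 0.05, 0.03, 0.02]
# Uniform distribution over 4 tokens, as at very high temperature
high_temp = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(low_temp))   # well under 1 bit: confident generation
print(shannon_entropy(high_temp))  # 2.0 bits: maximal uncertainty over 4 tokens
```

Averaging this quantity over many generations per temperature setting gives one axis of the prompt-quality comparison; mutual information between prompt and output can then be estimated from the same distributions.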