Warning Rate limit exceeded
⌛ How to resolve this issue?
After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.
🚦 How do rate limits work?
CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.
📝 Walkthrough
Introduces a complete LaTeX resume builder feature enabling users to generate tailored resumes from templates, with a multi-step workflow, version management, AI-assisted compilation fixes, and integration into search history.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Client
    participant GenerateAPI as /api/generate-resume-latex
    participant GeminiAI as Gemini AI
    participant Database
    participant RenderAPI as /api/render-latex
    participant LatexService as LaTeX Render Service
    User->>Client: Submit resume & job description
    Client->>GenerateAPI: POST (resume, jobDesc, template)
    GenerateAPI->>Database: Check cached tailored resume
    alt Resume cached
        Database-->>GenerateAPI: Return cached result
        GenerateAPI-->>Client: Return cached LaTeX (200)
    else Resume not cached
        GenerateAPI->>GeminiAI: generateTailoredResumeData()
        GeminiAI-->>GenerateAPI: Structured resume data
        GenerateAPI->>GenerateAPI: buildLatexResume(template, data)
        GenerateAPI->>Database: saveTailoredResume()
        Database-->>GenerateAPI: Saved document ID
        GenerateAPI-->>Client: Return generated LaTeX (200)
    end
    Client->>RenderAPI: POST LaTeX source
    RenderAPI->>LatexService: POST LaTeX / buildCompileUrl
    alt LaTeX too long
        RenderAPI->>LatexService: compileLatexViaUpload (tar)
    else LaTeX short
        RenderAPI->>LatexService: GET compile URL
    end
    LatexService-->>RenderAPI: PDF bytes
    RenderAPI-->>Client: PDF (application/pdf)
    Client->>User: Display PDF preview
```
```mermaid
sequenceDiagram
    participant Editor as Editor Page
    participant FixAPI as /api/fix-latex
    participant GeminiAI as Gemini AI
    participant RenderAPI as /api/render-latex
    Editor->>RenderAPI: POST current LaTeX
    RenderAPI-->>Editor: Compile error with log
    Editor->>Editor: Display error to user
    Editor->>FixAPI: POST (latexSource, compileLog)
    FixAPI->>GeminiAI: fixLatexCompilationError()
    GeminiAI-->>FixAPI: Corrected LaTeX
    FixAPI-->>Editor: Fixed LaTeX (200)
    Editor->>Editor: Update editor with fixed code
    Editor->>RenderAPI: POST fixed LaTeX
    RenderAPI-->>Editor: PDF bytes
    Editor->>Editor: Display updated PDF
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/app/dashboard/page.tsx (1)
161-165: ⚠️ Potential issue | 🟡 Minor
Tip section doesn't mention the new Resume Builder workflow.
The onboarding tip still only references Upload → Analysis → Cover Letter. Consider updating it to include the new resume builder step so new users discover the feature.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/page.tsx` around lines 161 - 165, Update the onboarding tip JSX in the section inside src/app/dashboard/page.tsx to include the new Resume Builder step: modify the sentence that currently lists "Upload → Analysis → Cover Letter" to insert the Resume Builder link (e.g., <Link href="/dashboard/resume-builder" ...>Resume Builder</Link>) in the correct order (suggested: Upload → Resume Builder → Analysis → Cover Letter), preserving the existing classes and styling for links and the surrounding <section> and <p> elements.
src/app/dashboard/history/page.tsx (1)
264-267: ⚠️ Potential issue | 🟡 Minor
Ellipsis shown after short preview for resume items.
`getPreview` returns the short string `"Generated LaTeX resume source"` for LaTeX content, but the `...` suffix is controlled by `item.result.length > 200`, which checks the full LaTeX source. This means the preview will display as `"Generated LaTeX resume source..."` — the trailing ellipsis is misleading since the preview is already a complete summary, not a truncation.
Suggested fix
```diff
 <p className="text-sm text-foreground/80 font-medium leading-relaxed whitespace-pre-wrap">
-  {getPreview(item.result)}
-  {item.result.length > 200 && '...'}
+  {(() => {
+    const preview = getPreview(item.result)
+    const isTruncated = preview !== item.result && item.result.length > 200
+    return `${preview}${isTruncated ? '...' : ''}`
+  })()}
 </p>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/history/page.tsx` around lines 264 - 267, getPreview currently returns a short summary for LaTeX content but the ellipsis is gated by item.result.length, causing a misleading "..." after a complete preview; modify the rendering to call getPreview(item.result) once into a local variable (e.g., preview) and use preview.length (or a defined previewMax) to decide whether to append '...' so the ellipsis reflects truncation of the shown preview (update the JSX around getPreview and the conditional that appends '...').
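The fix above can also be isolated in a small pure helper so the ellipsis is derived from the preview itself rather than the raw result length. A sketch only: the `getPreview` stand-in and the `PREVIEW_MAX` constant are assumptions for illustration, not the component's actual code.

```typescript
const PREVIEW_MAX = 200;

// Stand-in for the component's existing helper: LaTeX sources get a fixed
// summary string, everything else is truncated to PREVIEW_MAX characters.
function getPreview(result: string): string {
  if (result.includes('\\documentclass')) return 'Generated LaTeX resume source';
  return result.slice(0, PREVIEW_MAX);
}

function formatPreview(result: string): string {
  const preview = getPreview(result);
  // Append an ellipsis only when the preview is a literal truncation of the
  // full result, never when it is a standalone summary string.
  const isTruncated = result.startsWith(preview) && preview.length < result.length;
  return isTruncated ? `${preview}...` : preview;
}
```

With this shape, the summary case can never pick up a misleading `...`, and the rule is unit-testable outside the JSX.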
🧹 Nitpick comments (26)
test/app/dashboard/cover-letter/[slug]/page.test.tsx (1)
83-103: Add a `mockFetch` call-count assertion here too.
Same concern as in the analysis test: verifying the redirect target alone doesn't prove the retry loop fired exactly once. Adding `expect(mockFetch).toHaveBeenCalledTimes(2)` after the `waitFor` block closes this gap.
♻️ Proposed addition
```diff
   await waitFor(() => {
     expect(mockReplace).toHaveBeenCalledWith('/dashboard/cover-letter/cover-retry-1');
   });
+
+  expect(mockFetch).toHaveBeenCalledTimes(2);
 });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/app/dashboard/cover-letter/`[slug]/page.test.tsx around lines 83 - 103, The test lacks an assertion that mockFetch was called twice to confirm the retry occurred; after rendering CoverLetterSlugPage and awaiting the waitFor that asserts mockReplace was called with '/dashboard/cover-letter/cover-retry-1', add an assertion expect(mockFetch).toHaveBeenCalledTimes(2) so the test verifies the first transient 500 response and the subsequent successful retry; locate this near the existing waitFor/assertion in the test block using the mockFetch and mockReplace symbols.
test/app/dashboard/analysis/[slug]/page.test.tsx (2)
83-103: Add a `mockFetch` call-count assertion to confirm exactly one retry occurred.
The test only verifies the redirect destination, but doesn't confirm that the component called `fetch` exactly twice (initial attempt + one retry). Without this, the test would still pass if the retry count is higher (e.g., the second successful mock is consumed on the third call, leaving the earlier calls returning `undefined`).
♻️ Proposed addition
```diff
   await waitFor(() => {
     expect(mockReplace).toHaveBeenCalledWith('/dashboard/analysis/analysis-retry-1');
   });
+
+  expect(mockFetch).toHaveBeenCalledTimes(2);
 });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/app/dashboard/analysis/`[slug]/page.test.tsx around lines 83 - 103, Add an assertion that mockFetch was called exactly twice to verify one initial attempt plus one retry: after rendering AnalysisSlugPage and before the waitFor redirect assertion, assert mockFetch (the mocked global fetch used in the test) has been calledTimes(2) or that mockFetch.mock.calls.length === 2 so the test fails if more or fewer fetch attempts occur; keep the existing mockResolvedValueOnce setup and mockReplace assertion unchanged.
23-29: Extract `createResponse` to a shared test utility.
This helper is duplicated verbatim in the cover-letter test file. Moving it to a shared file (e.g., `test/helpers/response.ts`) would eliminate the duplication.
♻️ Proposed extraction
Create `test/helpers/response.ts`:

```typescript
import { vi } from 'vitest';

export function createResponse(ok: boolean, status: number, payload: unknown): Response {
  return {
    ok,
    status,
    json: vi.fn().mockResolvedValue(payload),
  } as unknown as Response;
}
```

Then in both test files:

```diff
-function createResponse(ok: boolean, status: number, payload: unknown): Response {
-  return {
-    ok,
-    status,
-    json: vi.fn().mockResolvedValue(payload),
-  } as unknown as Response;
-}
+import { createResponse } from '@/test/helpers/response';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/app/dashboard/analysis/`[slug]/page.test.tsx around lines 23 - 29, The createResponse helper is duplicated; extract the createResponse function into a shared test helper module and replace the in-file copies with an import. Specifically, move the function definition for createResponse(ok: boolean, status: number, payload: unknown): Response into a single shared helper file, export it, and update the tests (including the one in cover-letter tests and this dashboard test) to import and use createResponse instead of defining it inline. Ensure the exported helper still uses vi.fn().mockResolvedValue for json so callers behave the same.
README.md (1)
72-73: New environment variables documented — looks good.
Minor note: `LATEX_RENDER_API_BASE` defaults to `https://latexonline.cc`, a third-party service. Consider adding a brief comment in the README or `.env.example` noting this external dependency and that the service's availability is outside your control.
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 72 - 73, Add a short comment/documentation near the LATEX_RENDER_API_BASE env var (and in .env.example if present) indicating it defaults to the third‑party service https://latexonline.cc, that this is an external dependency whose uptime and behavior are out of our control, and suggest that users may override LATEX_RENDER_API_BASE to point to a self‑hosted or alternative renderer; update references to LATEX_RENDER_API_BASE in README.md (and .env.example) accordingly to include this note.
src/lib/contracts/api.ts (1)
56-67: Consider reusing field definitions from `analyzeRequestSchema` to reduce duplication.
The `resumeText` and `jobDescription` validators are duplicated verbatim from `analyzeRequestSchema` (lines 24–25). You could extract these into shared constants or use `.pick()` + `.extend()` like `coverLetterRequestSchema` does, to keep field constraints in sync.
♻️ Example refactor
```diff
-export const tailoredResumeRequestSchema = z.object({
-  resumeText: z.string().trim().min(1, 'Resume text is required').max(50000, 'Resume text is too long (max 50,000 characters)'),
-  jobDescription: z.string().trim().min(1, 'Job description is required').max(15000, 'Job description is too long (max 15,000 characters)'),
-  templateId: resumeTemplateIdSchema.default('awesome-classic'),
+export const tailoredResumeRequestSchema = analyzeRequestSchema
+  .pick({
+    resumeText: true,
+    jobDescription: true,
+    idempotencyKey: true,
+    forceRegenerate: true,
+    resumeName: true,
+  })
+  .extend({
+    templateId: resumeTemplateIdSchema.default('awesome-classic'),
+    builderSlug: z.string().trim().regex(/^[a-z0-9]+(?:-[a-z0-9]+)*$/, 'Invalid builder slug format').min(4).max(120).optional(),
+    sourceAnalysisId: z.string().trim().min(1).max(128).optional(),
+    customTemplateName: optionalFreeTextSchema,
+    customTemplateLatex: z.string().trim().min(1).max(180000).optional(),
+  });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/contracts/api.ts` around lines 56 - 67, The resumeText and jobDescription validators in tailoredResumeRequestSchema are duplicated from analyzeRequestSchema; refactor to reuse those definitions by either extracting shared schemas (e.g., resumeTextSchema and jobDescriptionSchema) and replacing the inline z.string() validators in tailoredResumeRequestSchema, or by composing from analyzeRequestSchema via analyzeRequestSchema.pick({ resumeText: true, jobDescription: true }).extend({...}) (similar to how coverLetterRequestSchema is composed) so constraints stay in sync.
convex/functions.ts (1)
464-476: History returns full `latexSource` — consider truncating or omitting for list views.
`result: doc.latexSource` sends the entire LaTeX source (up to 180 KB per resume) in every history listing item. When a user has many tailored resumes, this significantly bloats the response payload. Analysis and cover letter results are also included, but LaTeX sources tend to be larger.
Consider returning a truncated preview or a flag (e.g., `hasLatexSource: true`) in the history list, and let the detail view fetch the full source on demand.
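The suggested slimming can be sketched as a pure mapping function over plain objects standing in for Convex documents. The `hasLatexSource` flag and the preview length are assumptions drawn from this finding, not existing code.

```typescript
// Minimal stand-in for the relevant fields of a tailored-resume document.
interface TailoredResumeDoc {
  _id: string;
  latexSource: string;
}

const HISTORY_PREVIEW_CHARS = 160; // assumed cap for list views

// List views get a short preview plus a flag; the detail view fetches the
// full (potentially ~180 KB) source on demand.
function toHistoryItem(doc: TailoredResumeDoc) {
  return {
    id: doc._id,
    preview: doc.latexSource.slice(0, HISTORY_PREVIEW_CHARS),
    hasLatexSource: doc.latexSource.length > 0,
  };
}
```

The payload for a list of N resumes then stays O(N · 160 chars) instead of O(N · full source).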
Verify each finding against the current code and only fix it if needed. In `@convex/functions.ts` around lines 464 - 476, The history list is returning the full LaTeX source via the mapping in tailoredResumes.map (see the property result: doc.latexSource), which bloats payloads; change the mapping to avoid sending full latexSource — either include a truncated preview (e.g., preview field with a small substring of doc.latexSource) or replace result with a boolean flag (e.g., hasLatexSource: !!doc.latexSource) and remove the full source from the list; ensure the detail endpoint (or a dedicated fetch) returns the complete doc.latexSource on demand instead of in the list view.
test/app/api/generate-resume-latex/route.test.ts (1)
57-81: Consider also asserting `saveTailoredResume` was not called on cache hit.
The cached-resume test correctly verifies that `generateTailoredResumeData` is skipped, but does not assert that `saveTailoredResume` is also skipped. Adding this would strengthen the cache-hit contract.
Proposed addition
```diff
   expect(generateTailoredResumeData).not.toHaveBeenCalled();
+  expect(saveTailoredResume).not.toHaveBeenCalled();
 });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/app/api/generate-resume-latex/route.test.ts` around lines 57 - 81, In the 'returns cached tailored resume when available' test add an assertion that saveTailoredResume was not invoked to ensure cache hits don't persist again: after calling POST(req) and asserting generateTailoredResumeData wasn't called, also assert vi.mocked(saveTailoredResume).not.toHaveBeenCalled() (references: getTailoredResume, generateTailoredResumeData, saveTailoredResume, and the POST test case).
src/app/api/fix-latex/route.ts (1)
52-58: String-based error classification is fragile and case-sensitive.
`error.message.includes('quota')` and `includes('rate')` are case-sensitive and could miss variations like `"Quota exceeded"` or `"Rate limit"`. The word `"rate"` is also quite broad. Consider using a case-insensitive regex or matching on error codes/types from the upstream SDK if available.
Proposed approach
```diff
-    if (error.message.includes('quota') || error.message.includes('rate')) {
+    if (/quota|rate.limit/i.test(error.message)) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/api/fix-latex/route.ts` around lines 52 - 58, The current error classification in the LATEX fix route uses fragile, case-sensitive substring checks on error.message (the block that returns apiError using requestId), so change those checks to use case-insensitive matching and tighter patterns: prefer checking error.code or error.type from the upstream SDK first (if present) and fall back to a case-insensitive regex like /\bquota\b/i or /\brate limit\b|\brate-limited\b/i on error.message rather than plain includes('quota')/includes('rate'); update the branch that returns 429 ('UPSTREAM_RATE_LIMITED') accordingly and ensure the timed-out check (includes('timed out')) is also made robust (e.g., case-insensitive) to avoid false negatives.
src/app/api/render-latex/route.ts (2)
32-38: No timeout on upstream LaTeX service calls.
Both `compileLatexViaUpload` and the direct `fetch` lack timeouts. If the upstream LaTeX rendering service is slow or unresponsive, this request will hang indefinitely until the infrastructure-level timeout kicks in (if any). The `generate-resume-latex` route already uses `withTimeout` for its AI call — consider wrapping the upstream call here similarly.
Proposed fix
```diff
+import { withTimeout } from '@/lib/async-timeout';
+
+const RENDER_TIMEOUT_MS = Number(process.env.LATEX_RENDER_TIMEOUT_MS || 30000);
+
 ...
-  const upstream = shouldUseLatexUploadMode(latexSource)
-    ? await compileLatexViaUpload(latexSource)
-    : await fetch(buildLatexCompileUrl(latexSource), {
-        method: 'GET',
-        headers: { Accept: 'application/pdf' },
-        cache: 'no-store',
-      });
+  const upstream = await withTimeout(
+    shouldUseLatexUploadMode(latexSource)
+      ? compileLatexViaUpload(latexSource)
+      : fetch(buildLatexCompileUrl(latexSource), {
+          method: 'GET',
+          headers: { Accept: 'application/pdf' },
+          cache: 'no-store',
+        }),
+    RENDER_TIMEOUT_MS,
+    'LaTeX rendering timed out',
+  );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/api/render-latex/route.ts` around lines 32 - 38, The upstream LaTeX calls currently lack a timeout and can hang; wrap the call path used in route.ts so both branches (when shouldUseLatexUploadMode(latexSource) is true and false) are executed through the same timeout helper (e.g., use the existing withTimeout pattern used in generate-resume-latex). Specifically, call withTimeout around compileLatexViaUpload(latexSource) and around the fetch(buildLatexCompileUrl(latexSource), {...}) branch (or wrap the whole conditional result) so the request aborts on timeout and surfaces a controlled error; reference shouldUseLatexUploadMode, compileLatexViaUpload, buildLatexCompileUrl and reuse the same timeout duration/abort semantics as generate-resume-latex.
17-58: No rate limiting on the render endpoint.
Unlike `generate-resume-latex` (which enforces per-user rate limits), this endpoint has no rate limiting. Since it proxies to an external LaTeX compilation service, an authenticated user could send a high volume of requests and overwhelm the upstream service or incur costs. Consider adding a rate limit similar to the one in the generation route.
Verify each finding against the current code and only fix it if needed. In `@src/app/api/render-latex/route.ts` around lines 17 - 58, Add a per-user rate limit check in the POST handler: after obtaining userId via getAuthenticatedUser() and before starting any upstream compile/fetch (i.e., before calling shouldUseLatexUploadMode / compileLatexViaUpload / fetch(buildLatexCompileUrl...)), invoke the same rate-limiting helper used in the generate-resume-latex flow (or implement a check that returns a 429 via apiError when the user has exceeded their quota). If the limit is exceeded return apiError(requestId, 429, 'RATE_LIMIT_EXCEEDED', 'Too many requests') immediately; otherwise proceed with the existing compilation logic.
src/app/api/resume-builder/[slug]/route.ts (1)
61-118: POST endpoint lacks rate limiting.
Unlike the generation route, this endpoint has no per-user rate limit. Since each POST creates a new Convex document, an attacker (or buggy client) could rapidly create a large number of versions. Consider adding a rate limit similar to the generation route.
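Both rate-limiting findings (this route and the render endpoint) could share one small helper. Below is a minimal fixed-window sketch under the assumption that no limiter is currently exported for reuse; all names are hypothetical, and the limiter already used by `generate-resume-latex` should be preferred if it can be imported.

```typescript
// Hypothetical in-memory fixed-window limiter, one window per user.
const windows = new Map<string, { windowStart: number; count: number }>();

function allowRequest(
  userId: string,
  limit: number,
  windowMs: number,
  now: number = Date.now(),
): boolean {
  const entry = windows.get(userId);
  // First request, or the previous window has expired: start a new window.
  if (!entry || now - entry.windowStart >= windowMs) {
    windows.set(userId, { windowStart: now, count: 1 });
    return true;
  }
  // Within the current window: reject once the limit is reached.
  if (entry.count >= limit) return false;
  entry.count += 1;
  return true;
}
```

In a handler this would gate the work right after authentication, returning a 429 when `allowRequest` is false. Like the in-memory idempotency store noted further down, this is per-process state and would need a durable backend to work across instances.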
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/api/resume-builder/`[slug]/route.ts around lines 61 - 118, The POST handler creates new Convex documents without any per-user rate limiting; add the same per-user rate-limit check you use in the generation route at the start of POST (after getAuthenticatedUser() succeeds) and reject requests that exceed the limit with an appropriate apiError. Concretely: after obtaining userId in POST, call the shared rate-limit helper used by the generation endpoint (the project’s rateLimiter or allowRequestForUser helper) and return a 429 (or apiError with rate-limit code) if the check denies the request, ensuring you perform this check before calling getTailoredResumeVersionsBySlug and saveTailoredResume so attackers cannot spam version creation.
src/app/dashboard/resume-builder/page.tsx (1)
19-19: Local `HistoryType` shadows the domain type and is incomplete.
This file defines a local `HistoryType = 'analysis' | 'cover-letter'` that omits `'resume'`, while `src/types/domain.ts` exports the canonical `HistoryType` that now includes `'resume'`. The local type is only used for `SearchHistoryItem`, but shadowing the domain type is confusing. Consider importing and using the domain type, or renaming this local type to avoid confusion.
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/resume-builder/page.tsx` at line 19, The local type alias HistoryType ('analysis' | 'cover-letter') shadows and is incomplete compared to the canonical HistoryType exported from the domain; remove the local declaration and import the domain's exported HistoryType, then update the SearchHistoryItem type to use that imported HistoryType (so it includes 'resume'); alternatively, if you intentionally need a different narrower type, rename the local alias (e.g., LocalHistoryType) and keep it consistent where only that narrower set is required.
src/app/dashboard/resume-builder/new/page.tsx (1)
118-122: Redundant null check for `template`.
`template` is already guaranteed to be truthy by the guard on line 111 (`!draft.template`). The check on lines 120–122 is dead code.
Suggested fix
```diff
 const source = draft.source
 const template = draft.template
-if (!template) {
-  throw new Error('Template is missing. Return to Step 2.')
-}
+// template is guaranteed non-null by the guard on line 111
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/resume-builder/new/page.tsx` around lines 118 - 122, Remove the redundant null-check: the template variable is already guaranteed by the earlier guard that checks !draft.template, so delete the subsequent if (!template) { throw new Error('Template is missing. Return to Step 2.') } block (the const template = draft.template may remain); ensure any error handling relies on the existing guard (the draft and template variables are the identifiers to edit).
src/app/api/generate-resume-latex/route.ts (2)
194-205: Error classification via string matching is fragile.
Matching on substrings like `'timed out'`, `'quota'`, `'rate'`, and `'API key'` couples this handler to the exact wording of upstream error messages. If the Gemini SDK or `withTimeout` changes its message text, the classification silently degrades to a generic 500. Consider using typed error classes or error codes instead.
Verify each finding against the current code and only fix it if needed. In `@src/app/api/generate-resume-latex/route.ts` around lines 194 - 205, The current error handler in the generate-resume-latex route uses fragile substring checks on error.message; update the error classification logic in the catch block (the block that currently calls apiError(requestId, ...)) to detect specific error types or codes instead — e.g., check instanceof TimeoutError or a withTimeout-specific error, check for SDK error.code values like "RATE_LIMIT" or "INSUFFICIENT_CREDITS", or detect authentication errors via an AuthenticationError/invalid_api_key code — and map those to the corresponding apiError responses (504, 429, 500) while keeping a safe default (RESUME_GENERATION_FAILED) for unknown errors; locate this logic where apiError is called and replace message substring checks with type/code checks and preserve existing requestId usage and logging.
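One way to harden both this route's and `fix-latex`'s error handling is a shared, testable classifier that prefers structured codes and falls back to case-insensitive patterns. A sketch only: the error codes shown are illustrative assumptions and should be checked against the actual SDK error shape.

```typescript
type UpstreamErrorKind = 'timeout' | 'rate_limited' | 'auth' | 'unknown';

// Hypothetical classifier: structured codes first, message patterns second.
function classifyUpstreamError(err: { code?: string; message: string }): UpstreamErrorKind {
  // Prefer machine-readable codes when the SDK provides them (assumed names).
  if (err.code === 'RESOURCE_EXHAUSTED') return 'rate_limited';
  if (err.code === 'DEADLINE_EXCEEDED') return 'timeout';
  const msg = err.message;
  if (/timed?\s*out/i.test(msg)) return 'timeout';
  // Require "limit" after "rate" so a bare "rate" no longer matches.
  if (/quota|rate[\s-]?limit/i.test(msg)) return 'rate_limited';
  if (/api key/i.test(msg)) return 'auth';
  return 'unknown';
}
```

Each kind then maps to one HTTP status (504, 429, 500) in a single place, so a wording change upstream degrades to `'unknown'` visibly in tests rather than silently in production.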
69-74: In-memory idempotency store won't survive restarts or scale horizontally.
The idempotency layer uses an in-process `Map` (per `src/lib/idempotency.ts`). This means idempotency keys are lost on server restart and aren't shared across instances. Fine for an initial implementation, but worth a TODO or follow-up task to move to a durable store (e.g., Redis or Convex) if this route sees production traffic.
Verify each finding against the current code and only fix it if needed. In `@src/app/api/generate-resume-latex/route.ts` around lines 69 - 74, The current idempotency check uses an in-memory Map via getIdempotentResponse which means effectiveIdempotencyKey values are lost on process restart and aren't shared across instances; add a TODO and change the design to use a durable, pluggable backing store (e.g., Redis or Convex) for the idempotency layer: refactor getIdempotentResponse (and its counterpart setIdempotentResponse) into an interface/adapter and update the route that uses effectiveIdempotencyKey and apiSuccess to call the adapter so production deployments can swap in a persistent store, or at minimum log a warning and bypass idempotency when no durable store is configured.
src/app/dashboard/resume-builder/step-2/page.tsx (3)
52-68: `readDraft` uses an unsafe type assertion on parsed JSON.
`JSON.parse(raw) as ResumeBuilderDraft` provides no runtime guarantees. If sessionStorage contains stale data from a previous schema version (e.g., the `source` field is missing or has a different shape), downstream code at line 132 (`loaded?.source`) may not catch all malformed cases, potentially leading to subtle bugs.
At minimum, consider validating the presence of `source` and `source.kind`:
Sketch
```diff
 try {
-  return JSON.parse(raw) as ResumeBuilderDraft
+  const parsed = JSON.parse(raw)
+  if (!parsed?.source?.kind) {
+    window.sessionStorage.removeItem(RESUME_BUILDER_DRAFT_KEY)
+    return null
+  }
+  return parsed as ResumeBuilderDraft
 } catch {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/resume-builder/step-2/page.tsx` around lines 52 - 68, readDraft currently casts JSON.parse(raw) to ResumeBuilderDraft without runtime checks; parse the raw payload, verify it is an object and that required fields exist (at minimum check that parsed.source is an object and parsed.source.kind exists and is a valid string/enum) before returning—if validation fails, remove RESUME_BUILDER_DRAFT_KEY from sessionStorage and return null so downstream access like loaded?.source is safe; update the readDraft function to perform these shape checks for resilience against stale/malformed drafts.
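The shape check can also live in a small standalone parser that is easy to unit test apart from sessionStorage. A sketch only: the `ResumeBuilderDraft` fields shown are assumptions based on this review, not the real type.

```typescript
// Assumed minimal shape of the draft; the real type likely has more fields.
interface ResumeBuilderDraft {
  source: { kind: string; text?: string };
  template?: { id: string };
}

// Pure parser: returns null for missing, malformed, or stale drafts so the
// caller can treat every failure mode uniformly (and clear storage if desired).
function parseDraft(raw: string | null): ResumeBuilderDraft | null {
  if (!raw) return null;
  try {
    const parsed: unknown = JSON.parse(raw);
    const candidate = parsed as { source?: { kind?: unknown } } | null;
    if (
      typeof candidate === 'object' && candidate !== null &&
      typeof candidate.source === 'object' && candidate.source !== null &&
      typeof candidate.source.kind === 'string'
    ) {
      return parsed as ResumeBuilderDraft;
    }
    return null; // stale or malformed draft
  } catch {
    return null; // not JSON at all
  }
}
```

`readDraft` would then reduce to reading sessionStorage, calling `parseDraft`, and removing the key when the result is null.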
92-128: Duplicated `trackUrl`/`revokeUrl`/`renderLatex` pattern across pages.
The `trackUrl`, `revokeUrl`, and render-to-blob-URL logic is nearly identical to `src/app/dashboard/resume-builder/[slug]/page.tsx` (lines 92–138). Consider extracting a shared hook (e.g., `useLatexRenderer`) to avoid maintaining the same logic in multiple places.
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/resume-builder/step-2/page.tsx` around lines 92 - 128, Extract the duplicated URL-tracking and LaTeX-rendering logic into a shared hook (e.g., useLatexRenderer) that encapsulates objectUrlsRef, trackUrl, revokeUrl and a renderLatexToPdfUrl function; replace the inline implementations in both page components with calls to this hook so they call const { trackUrl, revokeUrl, renderLatexToPdfUrl } = useLatexRenderer(), ensure the hook performs the same fetch('/api/render-latex') error handling and URL.createObjectURL + tracking behavior and exposes a stable API for revoking tracked URLs when components unmount.
435-457: Modal lacks keyboard dismiss — same issue as the `[slug]` page.
The template preview modal (lines 435–457) doesn't support Escape-to-close or backdrop-click-to-close, mirroring the same accessibility gap noted in the `[slug]/page.tsx` modal.
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/resume-builder/step-2/page.tsx` around lines 435 - 457, The modal opened when viewerUrl is truthy doesn't close on Escape or backdrop clicks; add keyboard and backdrop dismiss behavior by (1) wrapping the backdrop div click handler to call setViewerUrl(null) and stop inner container propagation (ensure the inner container div handling the modal content prevents propagation so clicks inside the modal don't close it), and (2) in this component add a useEffect that registers a keydown listener on mount which calls setViewerUrl(null) when event.key === 'Escape' and cleans up on unmount; reference the viewerUrl conditional, setViewerUrl state setter, viewerEmbedSrc/iframe and the outer backdrop div and inner modal container to apply these changes.
src/lib/gemini.ts (4)
170-179: `maxOutputTokens` is hardcoded for all analysis types — consider tuning per type.
All analysis types share `maxOutputTokens: 16384`. The `tailoredResume` type generates a large structured JSON and may benefit from a higher limit, while simpler types like `keywords` could use less. This is unlikely to cause immediate issues but worth noting for future tuning.
More importantly, the `'tailoredResume'` and `'latexFix'` literal types are only accepted here but not part of the exported `AnalysisType` union at line 133. This is fine since the parameter type is explicitly widened, but consider whether a single union type would be cleaner.
Verify each finding against the current code and only fix it if needed. In `@src/lib/gemini.ts` around lines 170 - 179, Adjust createGeminiModel so maxOutputTokens is chosen per analysisType (e.g., increase for 'tailoredResume', decrease for simple types like 'keywords') by branching on analysisType inside generationConfig and setting appropriate values instead of the constant 16384; also reconcile the types by adding 'tailoredResume' and 'latexFix' to the exported AnalysisType union (or remove the literal widening in the createGeminiModel signature) so the parameter type matches the exported type.
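If per-type budgets are introduced, a lookup table keeps the tuning in one place and doubles as the single union type suggested above. A sketch only: the token numbers are illustrative, and union members other than `keywords`, `tailoredResume`, and `latexFix` are assumptions about the existing `AnalysisType`.

```typescript
// Single widened union, replacing the ad-hoc parameter widening.
type ExtendedAnalysisType =
  | 'full' | 'keywords' | 'coverLetter' | 'tailoredResume' | 'latexFix';

// Illustrative budgets, not tuned values.
const OUTPUT_TOKEN_BUDGETS: Record<ExtendedAnalysisType, number> = {
  full: 16384,
  keywords: 4096,        // small structured list
  coverLetter: 8192,
  tailoredResume: 24576, // large structured JSON
  latexFix: 16384,
};

function maxOutputTokensFor(type: ExtendedAnalysisType): number {
  return OUTPUT_TOKEN_BUDGETS[type] ?? 16384;
}
```

`createGeminiModel` could then pass `maxOutputTokens: maxOutputTokensFor(analysisType)` into its `generationConfig` instead of the shared constant.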
353-389: `fixLatexCompilationError` doesn't check for empty response before stripping fences.
`result.response.text()` could potentially return an empty string. The fence-stripping is applied, then the emptiness check follows. This is fine logically since `stripMarkdownCodeFence` handles empty/non-fenced input, but worth noting that there's no error handling around the `model.generateContent` call itself — if it throws, the error propagates raw to the caller, unlike `analyzeResume` which wraps the call in try/catch with logging.
Proposed improvement
```diff
-  const result = await model.generateContent(prompt);
-  const text = result.response.text();
-  const normalized = stripMarkdownCodeFence(text);
+  let normalized: string;
+  try {
+    const result = await model.generateContent(prompt);
+    const text = result.response.text();
+    normalized = stripMarkdownCodeFence(text);
+  } catch (error) {
+    console.error('Gemini API error (latexFix):', error);
+    throw error;
+  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/gemini.ts` around lines 353 - 389, Wrap the call to model.generateContent in a try/catch inside fixLatexCompilationError and handle failures consistently: catch errors from createGeminiModel(...).generateContent and log or rethrow a descriptive Error so raw exceptions don't propagate; after a successful call, verify that the returned result object (the variable result) and result.response exist before calling result.response.text(), then proceed to call stripMarkdownCodeFence(text) and validate non-empty normalized output as before (reference generateContent, result, result.response.text(), stripMarkdownCodeFence).
337-351: No runtime validation of the parsed JSON against `TailoredResumeData`.
`JSON.parse(normalized) as TailoredResumeData` is a type assertion with no runtime guarantee. If the AI omits required fields like `summary`, `skills`, `experience`, etc., downstream code (e.g., `buildLatexResume` in `resume-latex.ts`) will receive unexpected `undefined` values. The `resume-latex.ts` code does have null-coalescing (`??`) for arrays, but `summary` is typed as required `string` — if the AI returns `null` or omits it, you'll get `undefined` at runtime.
Consider adding lightweight validation for the critical required fields after parsing:
Proposed validation
  const normalized = stripMarkdownJsonFence(text);
  try {
-   return JSON.parse(normalized) as TailoredResumeData;
+   const parsed = JSON.parse(normalized);
+   if (typeof parsed !== 'object' || parsed === null) {
+     throw new Error('Expected a JSON object');
+   }
+   return {
+     ...parsed,
+     summary: typeof parsed.summary === 'string' ? parsed.summary : '',
+     skills: Array.isArray(parsed.skills) ? parsed.skills : [],
+     experience: Array.isArray(parsed.experience) ? parsed.experience : [],
+     projects: Array.isArray(parsed.projects) ? parsed.projects : [],
+     education: Array.isArray(parsed.education) ? parsed.education : [],
+     certifications: Array.isArray(parsed.certifications) ? parsed.certifications : [],
+     additional: Array.isArray(parsed.additional) ? parsed.additional : [],
+     keywordsUsed: Array.isArray(parsed.keywordsUsed) ? parsed.keywordsUsed : [],
+   } as TailoredResumeData;
  } catch (error) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/gemini.ts` around lines 337 - 351, After parsing the AI response in the function that calls model.generateContent and uses stripMarkdownJsonFence, add a runtime validation step for the parsed object (instead of relying on the TypeScript assertion JSON.parse(...) as TailoredResumeData) by implementing and invoking a lightweight type-guard like validateTailoredResumeData(parsed): boolean that checks required fields such as summary (non-empty string), skills (array of strings), experience (array of objects with at least role/company/dates or similar required keys), and education (if required); if validation fails, log the parsed payload and throw a clear Error('AI returned invalid structured resume data: missing or invalid fields') so downstream consumers like buildLatexResume in resume-latex.ts never receive undefined for required properties.
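The guard described in the prompt above can be sketched standalone. The interface shape and field names here are assumptions drawn from the finding, not the project's actual types:

```typescript
// Hypothetical shape matching the fields named in the finding.
interface TailoredResumeData {
  summary: string
  skills: string[]
  experience: unknown[]
  education: unknown[]
}

// Lightweight runtime guard: checks only the critical required fields, so
// downstream consumers never see `undefined` where a string/array is expected.
function validateTailoredResumeData(value: unknown): value is TailoredResumeData {
  if (typeof value !== 'object' || value === null) return false
  const v = value as Record<string, unknown>
  return (
    typeof v.summary === 'string' &&
    Array.isArray(v.skills) &&
    Array.isArray(v.experience) &&
    Array.isArray(v.education)
  )
}
```

A type predicate like this also narrows the parsed value for TypeScript, so the `as TailoredResumeData` assertion becomes unnecessary at the call site.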
181-199: Fence-stripping regexes only handle a single code fence block.

If the AI response contains multiple code fence blocks (e.g., an explanation followed by a fenced block), these regexes will strip the first opening fence and the last closing fence, potentially including non-JSON/non-LaTeX content in between. In practice, the prompts instruct the model to return only the payload, so this is low risk.
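A standalone illustration of extracting only the first fenced block (a sketch, not the repo's actual helpers; the fence string is built up at runtime to keep this snippet self-contained):

```typescript
// Matches only the FIRST fenced block: a non-greedy capture between the first
// opening fence and the nearest closing fence.
const FENCE = '`'.repeat(3) // three backticks
const FIRST_BLOCK = new RegExp(
  FENCE + '(?:json|latex|tex|text)?\\s*([\\s\\S]*?)\\s*' + FENCE,
  'i'
)

// Return the first fenced block's contents, or the trimmed input when no
// complete fence pair exists.
function extractFirstFencedBlock(text: string): string {
  const match = text.match(FIRST_BLOCK)
  return match ? match[1] : text.trim()
}
```

Because the capture is non-greedy, a second fenced block later in the response is never swept into the result.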
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/gemini.ts` around lines 181 - 199, The current stripMarkdownJsonFence and stripMarkdownCodeFence implementations naïvely remove the first opening fence and the last closing fence, which can collapse multiple fenced blocks; change each to explicitly extract the first fenced code block's inner content using a non-greedy multi-line regex that captures between a matching opening and closing triple-backtick fence for the intended language (e.g., /^```(?:json)?\s*([\s\S]*?)\s*```/i for JSON and /^```(?:latex|tex|text)?\s*([\s\S]*?)\s*```/i for code/LaTeX) and return the captured group or the trimmed input when no such single fenced block is found.

src/lib/resume-latex.ts (2)
91-103: `renderEntry` computes `left` (line 93) but discards it in the `right.length > 0` branch.

When `right` is non-empty (lines 97–98), the header is rebuilt differently — using `\textbf` + `\textit` formatting instead of reusing `left`. The `headerParts`/`left` computation at lines 92–93 is wasted work in that case. This is a minor clarity issue, not a bug, but the dual formatting paths are confusing to maintain.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/resume-latex.ts` around lines 91 - 103, renderEntry currently builds headerParts/left then ignores left when right is non-empty, rebuilding the header with different formatting; change the header construction to reuse left so the title/subtitle formatting is consistent and avoid wasted work: compute headerParts (as currently) and set header = right.length > 0 ? `${left} \\hfill ${right}` : left, updating renderEntry (and related variables headerParts, left, right) so you don't reconstruct or re-escape title/subtitle in the right-branch.
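The consolidation the prompt describes boils down to reusing `left` for both branches. A minimal standalone sketch, with plain strings in place of the file's real escaping helpers:

```typescript
// Build the entry header once from `left`; append the right-hand column
// (typically dates) only when present, instead of reconstructing the
// title/subtitle formatting in a second code path.
function buildEntryHeader(left: string, right: string): string {
  return right.length > 0 ? `${left} \\hfill ${right}` : left
}
```

One construction site means one place to change the header formatting later.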
185-207: `renderJakeContact` doesn't escape the `mailto:` href target.

At line 194, the email address is placed directly into `\href{mailto:...}` without escaping special LaTeX characters. If an email contains `_` (e.g., john_doe@example.com), it could break the `\href` command. However, `\href` from the hyperref package treats its first argument somewhat literally, so `_` in URLs typically works. Still, characters like `%` or `#` in an email address would break compilation.

Since the hyperref package handles URL escaping differently from text escaping, the current approach is fragile but works for typical email addresses.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/resume-latex.ts` around lines 185 - 207, The email address is inserted into the \href target without escaping, which can break LaTeX for addresses containing characters like % or #; in renderJakeContact replace the raw email in the mailto target with a URL-escaped version (use the same URL-safe helper used for other links—e.g. call buildHref(email) or an equivalent percent-encoding routine) while still using escapeLatex(email) for the visible underlined text so the \href target is safe and the label remains properly escaped.

src/app/dashboard/resume-builder/[slug]/page.tsx (2)
43-64: DRY: `downloadText` and `downloadBlob` share identical logic.

Both functions create an object URL, append a temporary link, click it, then clean up. Only the Blob creation differs. Consider consolidating into a single helper.

Proposed refactor
Proposed refactor
-function downloadText(content: string, fileName: string) {
-  const blob = new Blob([content], { type: 'application/x-tex;charset=utf-8' })
-  const url = URL.createObjectURL(blob)
-  const link = document.createElement('a')
-  link.href = url
-  link.download = fileName
-  document.body.appendChild(link)
-  link.click()
-  link.remove()
-  URL.revokeObjectURL(url)
-}
-
-function downloadBlob(blob: Blob, fileName: string) {
-  const url = URL.createObjectURL(blob)
+function downloadBlob(blob: Blob, fileName: string) {
+  const url = URL.createObjectURL(blob)
   const link = document.createElement('a')
   link.href = url
   link.download = fileName
   document.body.appendChild(link)
   link.click()
   link.remove()
   URL.revokeObjectURL(url)
 }
+
+function downloadText(content: string, fileName: string) {
+  downloadBlob(new Blob([content], { type: 'application/x-tex;charset=utf-8' }), fileName)
+}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/resume-builder/`[slug]/page.tsx around lines 43 - 64, Both downloadText and downloadBlob duplicate the same DOM/download logic; consolidate them into a single helper (e.g., downloadBlobFromUrl or createDownload) that accepts either a Blob or a string+MIME type and a fileName, creates the object URL, builds and clicks the temporary anchor, then revokes the URL and removes the anchor; replace calls to downloadText and downloadBlob with the new helper and move the Blob creation (previously in downloadText) into the call site or into the helper when given raw content so all URL creation/cleanup is centralized (ensure URL.revokeObjectURL runs after click).
481-503: Modal lacks keyboard dismiss and backdrop click handling.

The full PDF viewer modal cannot be closed with the Escape key or by clicking the backdrop overlay. This is an accessibility and UX gap.
Suggested approach
+ useEffect(() => {
+   if (!fullViewerOpen) return
+   const handler = (e: KeyboardEvent) => {
+     if (e.key === 'Escape') setFullViewerOpen(false)
+   }
+   window.addEventListener('keydown', handler)
+   return () => window.removeEventListener('keydown', handler)
+ }, [fullViewerOpen])

And for backdrop click, add `onClick={() => setFullViewerOpen(false)}` to the outer overlay `div` (line 482), and `onClick={(e) => e.stopPropagation()}` on the inner content `div` (line 483) to prevent closing when clicking the modal content.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/dashboard/resume-builder/`[slug]/page.tsx around lines 481 - 503, The modal overlay (rendered when fullViewerOpen && previewUrl) should close on Escape and on backdrop click: add an onClick handler to the outer overlay div to call setFullViewerOpen(false) and add onClick={(e) => e.stopPropagation()} to the inner content div to prevent clicks inside from closing it; also add a useEffect that registers a keydown listener which calls setFullViewerOpen(false) when event.key === 'Escape' and cleans up the listener on unmount/update (reference fullViewerOpen, previewUrl, setFullViewerOpen, and the overlay/inner divs).
export const getTailoredResume = query({
  args: {
    userId: v.string(),
    resumeHash: v.string(),
    jobDescriptionHash: v.string(),
    templateId: v.string(),
  },
  handler: async (ctx, args) => {
    const doc = await ctx.db
      .query("tailoredResumes")
      .filter((q) =>
        q.and(
          q.eq(q.field("userId"), args.userId),
          q.eq(q.field("resumeHash"), args.resumeHash),
          q.eq(q.field("jobDescriptionHash"), args.jobDescriptionHash),
          q.eq(q.field("templateId"), args.templateId),
        )
      )
      .order("desc")
      .first();
    return doc;
  },
});
🧩 Analysis chain
🏁 Script executed:

# Find schema files
fd -e ts -e js | grep -i schema | head -20

Repository: Aditya190803/Application-Tracking-System
Length of output: 100

🏁 Script executed:

# Search for tailoredResumes table definition
rg -n "tailoredResumes" --type=ts -B2 -A10 | head -100

Length of output: 5460

🏁 Script executed:

# Look for by_lookup index definition
rg -n "by_lookup" --type=ts -C5

Length of output: 1887
getTailoredResume performs a full table scan — use the by_lookup index instead.
The schema defines a by_lookup index on [userId, resumeHash, jobDescriptionHash, templateId], but this query uses .filter() which scans the entire table. For any non-trivial data volume this will degrade performance.
⚡ Proposed fix
handler: async (ctx, args) => {
const doc = await ctx.db
.query("tailoredResumes")
- .filter((q) =>
- q.and(
- q.eq(q.field("userId"), args.userId),
- q.eq(q.field("resumeHash"), args.resumeHash),
- q.eq(q.field("jobDescriptionHash"), args.jobDescriptionHash),
- q.eq(q.field("templateId"), args.templateId),
- )
- )
+ .withIndex("by_lookup", (q) =>
+ q.eq("userId", args.userId)
+ .eq("resumeHash", args.resumeHash)
+ .eq("jobDescriptionHash", args.jobDescriptionHash)
+ .eq("templateId", args.templateId)
+ )
.order("desc")
.first();
return doc;
},🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@convex/functions.ts` around lines 275 - 297, The query in getTailoredResume
on tailoredResumes does a full table scan via .filter(); change it to use the
by_lookup indexed lookup by calling the Convex index-based API (e.g.,
ctx.db.query("tailoredResumes").withIndex("by_lookup").get([...]) or equivalent)
passing the tuple [args.userId, args.resumeHash, args.jobDescriptionHash,
args.templateId] in the same order as the index, and return the single result
(or .first() if using a query wrapper) so the lookup uses the by_lookup index
instead of scanning the table.
| export const getTailoredResumeVersionsBySlug = query({ | ||
| args: { | ||
| userId: v.string(), | ||
| builderSlug: v.string(), | ||
| limit: v.optional(v.number()), | ||
| }, | ||
| handler: async (ctx, args) => { | ||
| const limit = args.limit ?? 30; | ||
| const docs = await ctx.db | ||
| .query("tailoredResumes") | ||
| .withIndex("by_userId_builderSlug", (q) => | ||
| q.eq("userId", args.userId) | ||
| ) | ||
| .filter((q) => q.eq(q.field("builderSlug"), args.builderSlug)) | ||
| .order("desc") | ||
| .take(limit); | ||
| return docs; | ||
| }, | ||
| }); |
🧩 Analysis chain
🏁 Script executed:

#!/bin/bash
# Find the by_userId_builderSlug index definition
rg -n 'by_userId_builderSlug' --type=ts -C 5

Length of output: 1201

🏁 Script executed:

#!/bin/bash
# Look for index definitions in the schema/database setup
fd . -type f -name "*.ts" | head -20
git ls-files | grep -E "(schema|index|table)" | head -20

Length of output: 353

🏁 Script executed:

#!/bin/bash
# Find other withIndex usages to see the pattern
rg -n 'withIndex' --type=ts -A 3 | head -50

Length of output: 2518

🏁 Script executed:

#!/bin/bash
# Search for examples of multiple .eq() chaining in the codebase
rg -n '\.eq\(' --type=ts -B 1 -A 1 | grep -A 2 "\.eq" | head -40

Length of output: 2121

🏁 Script executed:

#!/bin/bash
# Look for the complete context around lines 126-130 to understand the .and() pattern
sed -n '124,131p' convex/functions.ts

Length of output: 441

🏁 Script executed:

#!/bin/bash
# Search for .and() usage with compound index or withIndex
rg -n '\.and\(' convex/functions.ts -B 2 -A 2 | head -60

Length of output: 775

🏁 Script executed:

#!/bin/bash
# Search for any examples of withIndex with multiple conditions
rg -n 'withIndex.*\(.*q\s*=>' convex/functions.ts -A 5 | head -80

Length of output: 66

🏁 Script executed:

#!/bin/bash
# Check if there's any documentation or test files showing Convex API usage
fd . -type f \( -name "*.md" -o -name "*.json" \) | xargs grep -l "withIndex" 2>/dev/null

Length of output: 256

🏁 Script executed:

#!/bin/bash
# Search for documentation in package.json or convex config
cat convex.json 2>/dev/null || echo "No convex.json"
cat package.json | grep -A 5 "convex" || echo "No convex in package.json"

Length of output: 451

🏁 Script executed:

#!/bin/bash
# Look at the complete function to understand the context better
sed -n '330,350p' convex/functions.ts

Length of output: 752

🏁 Script executed:

#!/bin/bash
# Check if there are any README files or documentation
find . -name "README*" -o -name "*.md" | head -10

Length of output: 731

🏁 Script executed:

#!/bin/bash
# Search online for Convex compound index documentation pattern
# Let's also check the actual Convex version to understand capabilities
cat package.json | grep '"convex"'

Length of output: 108
🌐 Web query:
Convex withIndex compound index multiple eq chaining API
💡 Result:
In Convex, compound (multi-field) indexes are queried by chaining comparisons inside withIndex, and you can chain multiple .eq() calls as long as you do it in the same order as the index fields.
1) Define a compound index
defineTable({
  teamId: v.id("teams"),
  status: v.string(),
}).index("by_teamId_status", ["teamId", "status"]);

(Example pattern shown in Convex guidance.) [1]
2) Query it with multiple .eq() (chaining)
const rows = await ctx.db
  .query("members")
  .withIndex("by_teamId_status", (q) =>
    q.eq("teamId", args.teamId).eq("status", "active")
  )
  .collect();

This is the intended "multiple eq chaining" API for compound indexes. [1][2]
3) Rules: what chaining is allowed in an index range expression
An index range expression is a chained list of: (a) zero or more .eq(), then (b) optional .gt/.gte, then (c) optional .lt/.lte, and you must step through fields in index order. [2][3]
So if your index is ["a","b","c"], these are valid:
- q.eq("a", 1)
- q.eq("a", 1).eq("b", 2)
- q.eq("a", 1).eq("b", 2).gte("c", 10).lt("c", 20)
And these are not valid:
- q.eq("b", 2) (skips a)
- q.eq("a", 1).eq("c", 3) (skips b)
- q.gt("a", 1).eq("b", 2) (.eq must come before range bounds) [2]
4) If you need extra predicates not supported by the index range
Use .filter(...) after .withIndex(...), but note performance is then proportional to how many docs are in the indexed range you selected. [2][3]
Sources: Convex docs on indexes and withIndex/index range expressions, plus examples of multi-field indexes and chained .eq. [1][2][3]
🌐 Web query:
Convex database query withIndex documentation compound index
💡 Result:
To do a compound (multi-field) index query in Convex, you:
- Define a compound index in your schema (field order matters):
// convex/schema.ts
defineTable({
author: v.string(),
title: v.string(),
text: v.string(),
}).index("by_author_title", ["author", "title"]);

[1]
- Query it with `withIndex`, matching fields from left to right in the index definition (typically `eq` on a prefix, then an optional range on the next field):
const book = await ctx.db
.query("books")
.withIndex("by_author_title", (q) =>
q.eq("author", "Isaac Asimov").eq("title", "Foundation")
)
  .unique();

[2]
- Range query on the next field after the equality prefix (still in index order):
const books = await ctx.db
.query("books")
.withIndex("by_author_title", (q) =>
q.eq("author", "Isaac Asimov").gte("title", "F").lt("title", "G")
)
  .collect();

[2]
Key rule from the docs: your withIndex “range expression” is a chain of 0+ eq, then optionally gt/gte, then optionally lt/lte, and you must step through fields in index order (you can’t skip the leading field and range on a later one). [1]
getTailoredResumeVersionsBySlug under-utilizes the compound index.
The by_userId_builderSlug index covers both userId and builderSlug, but only userId is matched via .withIndex() while builderSlug falls back to .filter(). Chain both .eq() calls in the index range expression to leverage the full compound index and avoid scanning all of the user's resumes.
⚡ Proposed fix
const docs = await ctx.db
.query("tailoredResumes")
.withIndex("by_userId_builderSlug", (q) =>
- q.eq("userId", args.userId)
+ q.eq("userId", args.userId).eq("builderSlug", args.builderSlug)
)
- .filter((q) => q.eq(q.field("builderSlug"), args.builderSlug))
.order("desc")
.take(limit);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@convex/functions.ts` around lines 330 - 348, getTailoredResumeVersionsBySlug
is not using the full compound index by_userId_builderSlug because builderSlug
is applied in a .filter() instead of the index range; update the .withIndex call
to chain both equality predicates (e.g., in the withIndex callback call
q.eq("userId", args.userId).eq("builderSlug", args.builderSlug)) and remove the
separate .filter(...) so the query leverages the compound index and then keep
the .order("desc").take(limit) and return docs as before.
@@ -0,0 +1,218 @@
%-------------------------
Apostrophe in filename (jake's_resume.tex) may cause cross-platform and URL-encoding issues.
The ' character in the filename can break shell scripts, some build tools, and requires percent-encoding in URLs (jake%27s_resume.tex). Consider renaming to jakes_resume.tex or jake-resume.tex.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@public/jake`'s_resume.tex at line 1, The filename public/jake's_resume.tex
contains an apostrophe which can break shells, URLs and build tools; rename the
file to a safe name (e.g., jakes_resume.tex or jake-resume.tex) and update all
references to that filename across the repo (build scripts, CI configs,
documentation, HTML links, and any imports) so they point to the new symbol (the
new filename) to avoid URL-encoding and quoting issues.
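A hedged sketch of the kind of renaming helper that sidesteps such characters entirely (hypothetical, not part of the repo; the simpler fix is just renaming the file by hand):

```typescript
// Replace characters that are unsafe in shells and URLs (apostrophes, spaces,
// percent signs, etc.) with hyphens, then collapse runs and trim the ends.
function toSafeFileName(name: string): string {
  return name
    .replace(/[^A-Za-z0-9._-]+/g, '-')
    .replace(/-+/g, '-')
    .replace(/^-|-$/g, '')
}
```

The allowed set (letters, digits, `.`, `_`, `-`) needs no percent-encoding in URL path segments and no quoting in POSIX shells.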
{\textbf{Simple Paintball} $|$ \emph{Spigot API, Java, Maven, TravisCI, Git}}{May 2018 -- May 2020}
  \resumeItemListStart
    \resumeItem{Developed a Minecraft server plugin to entertain kids during free time for a previous job}
    \resumeItem{Published plugin to websites gaining 2K+ downloads and an average 4.5/5-star review}
    \resumeItem{Implemented continuous delivery using TravisCI to build the plugin upon new a release}
    \resumeItem{Collaborated with Minecraft server administrators to suggest features and get feedback about the plugin}
Minor text issues in sample content.
Line 173: double space in "Conducted a human subject study".
Line 197: "upon new a release" → "upon a new release".
These are in sample/template content, so low priority, but worth fixing if the file is user-facing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@public/jake`'s_resume.tex around lines 193 - 198, Fix two minor typos in the
resume LaTeX content: change the double space in "Conducted a human subject
study" to a single space, and in the Simple Paintball entry (the line containing
"Implemented continuous delivery using TravisCI to build the plugin upon new a
release") change "upon new a release" to "upon a new release". Locate and edit
the corresponding text in the resume content (the string "Conducted a human
subject study" and the Simple Paintball resumeItem line) to correct spacing and
word order.
if (error instanceof Error) {
  if (error.message.includes('timed out')) {
    return apiError(requestId, 504, 'LATEX_FIX_TIMEOUT', error.message);
  }
  if (error.message.includes('quota') || error.message.includes('rate')) {
    return apiError(requestId, 429, 'UPSTREAM_RATE_LIMITED', 'AI rate limit exceeded. Please try again shortly.');
  }
  if (error.message.includes('API key')) {
    return apiError(requestId, 500, 'AI_CONFIG_ERROR', 'AI service configuration error. Please contact support.');
  }
  return apiError(requestId, 500, 'LATEX_FIX_FAILED', error.message);
Raw error.message may leak internal details to clients.
Line 62 forwards the raw error message to the API response. If fixLatexCompilationError throws with an unexpected exception (e.g., a network-level error or library internals), the message will be exposed verbatim. Consider using a generic message for the fallback case, consistent with the render-latex route's 'Failed to render LaTeX' pattern at Line 65.
Proposed fix
- return apiError(requestId, 500, 'LATEX_FIX_FAILED', error.message);
+ return apiError(requestId, 500, 'LATEX_FIX_FAILED', 'Failed to auto-fix LaTeX. Please try again.');
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/api/fix-latex/route.ts` around lines 52 - 62, The fallback branch in
the error handling for fixLatexCompilationError currently returns raw
error.message to clients; change the final return in route.ts (the block that
calls apiError(requestId, 500, 'LATEX_FIX_FAILED', ...)) to return a generic
message like 'Failed to fix LaTeX compilation' instead of error.message to avoid
leaking internal details, and log the original Error internally (e.g., via
existing logger or console.error) before returning the generic apiError; keep
the existing specific checks (timeout, quota/rate, API key) intact.
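The specific checks can be kept while guaranteeing the raw message never reaches the client by funneling everything through a small classifier. A standalone sketch: the status/code pairs mirror the route's existing ones, and the fallback text is illustrative:

```typescript
// Map an internal error message to a (status, code, safe message) triple.
// The raw message is inspected but never returned to the client.
function classifyLatexFixError(message: string): { status: number; code: string; safeMessage: string } {
  if (message.includes('timed out')) {
    return { status: 504, code: 'LATEX_FIX_TIMEOUT', safeMessage: 'The LaTeX fix request timed out.' }
  }
  if (message.includes('quota') || message.includes('rate')) {
    return { status: 429, code: 'UPSTREAM_RATE_LIMITED', safeMessage: 'AI rate limit exceeded. Please try again shortly.' }
  }
  if (message.includes('API key')) {
    return { status: 500, code: 'AI_CONFIG_ERROR', safeMessage: 'AI service configuration error. Please contact support.' }
  }
  // Fallback: generic text, so library internals never leak verbatim.
  return { status: 500, code: 'LATEX_FIX_FAILED', safeMessage: 'Failed to auto-fix LaTeX. Please try again.' }
}
```

The route would log the original error server-side and pass only `safeMessage` to `apiError`.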
import { useResumes } from '@/hooks/useResumes'

type HistoryType = 'analysis' | 'cover-letter'
type InputMode = 'manual' | 'analysis'

const RESUME_BUILDER_DRAFT_KEY = 'resumeBuilderFlowDraftV1'

interface SearchHistoryItem {
  id: string
  type: HistoryType
  analysisType?: string
  companyName?: string
  resumeName?: string
  jobTitle?: string
  jobDescription?: string
  createdAt: string
}

type RecentAnalysis = SearchHistoryItem & { type: 'analysis' }

type ResumeBuilderSourceDraft =
  | {
      kind: 'manual'
      resumeText: string
      resumeName: string
      jobDescription: string
    }
  | {
      kind: 'analysis'
      analysisId: string
      resumeName: string
      jobDescription: string
      jobTitle?: string
      companyName?: string
    }

interface ResumeBuilderDraft {
  source: ResumeBuilderSourceDraft
  template?: {
    templateId: 'awesome-classic' | 'deedy-modern' | 'sb2nov-ats' | 'custom'
    customTemplateName?: string
    customTemplateLatex?: string
  }
}

function readDraft(): ResumeBuilderDraft | null {
  if (typeof window === 'undefined') {
    return null
  }

  const raw = window.sessionStorage.getItem(RESUME_BUILDER_DRAFT_KEY)
  if (!raw) {
    return null
  }

  try {
    return JSON.parse(raw) as ResumeBuilderDraft
  } catch {
    window.sessionStorage.removeItem(RESUME_BUILDER_DRAFT_KEY)
    return null
  }
}

function writeDraft(draft: ResumeBuilderDraft) {
  if (typeof window === 'undefined') {
    return
  }
  window.sessionStorage.setItem(RESUME_BUILDER_DRAFT_KEY, JSON.stringify(draft))
}
🛠️ Refactor suggestion | 🟠 Major
Duplicated types and helpers across resume-builder pages.
ResumeBuilderSourceDraft, ResumeBuilderDraft, readDraft, writeDraft, and RESUME_BUILDER_DRAFT_KEY are copy-pasted between this file and src/app/dashboard/resume-builder/new/page.tsx. If the draft shape changes, only one file may be updated, causing subtle bugs. Extract these into a shared module (e.g., src/lib/resume-builder-draft.ts).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/dashboard/resume-builder/page.tsx` around lines 17 - 85, Extract the
duplicated constants, types, and helpers into a single shared module: export
RESUME_BUILDER_DRAFT_KEY, types ResumeBuilderSourceDraft and ResumeBuilderDraft,
and functions readDraft and writeDraft from the new module, then remove their
local definitions in both page.tsx and new/page.tsx and replace them with
imports from that module; ensure all references (e.g., ResumeBuilderSourceDraft,
ResumeBuilderDraft, readDraft, writeDraft, RESUME_BUILDER_DRAFT_KEY) are updated
to the imported symbols and that JSON parse/serialise behavior is preserved so
the code compiles and runtime behavior is unchanged.
const handleCustomTemplateUpload = async (file: File) => {
  if (!file.name.toLowerCase().endsWith('.tex')) {
    setError('Please upload a .tex file')
    return
  }

  const content = await file.text()
  setCustomTemplateName(file.name)
  setCustomTemplateLatex(content)
  setTemplateId('custom')
  setError(null)
}
No file size validation on custom .tex upload.
handleCustomTemplateUpload validates the file extension but not its size. A user could upload a very large file, which would be read entirely into memory (line 252) and stored in sessionStorage (which has a ~5 MB limit). If the file exceeds the sessionStorage quota, writeDraft at line 74 will throw silently.
Consider adding a size check:
Proposed fix
const handleCustomTemplateUpload = async (file: File) => {
if (!file.name.toLowerCase().endsWith('.tex')) {
setError('Please upload a .tex file')
return
}
+ if (file.size > 512 * 1024) {
+ setError('Template file must be under 512 KB')
+ return
+ }
const content = await file.text()📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const handleCustomTemplateUpload = async (file: File) => {
  if (!file.name.toLowerCase().endsWith('.tex')) {
    setError('Please upload a .tex file')
    return
  }
  if (file.size > 512 * 1024) {
    setError('Template file must be under 512 KB')
    return
  }
  const content = await file.text()
  setCustomTemplateName(file.name)
  setCustomTemplateLatex(content)
  setTemplateId('custom')
  setError(null)
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/dashboard/resume-builder/step-2/page.tsx` around lines 246 - 257,
handleCustomTemplateUpload currently only checks extension and reads the entire
file into memory and sessionStorage via setCustomTemplateLatex/writeDraft; add a
file.size check at the start of handleCustomTemplateUpload (e.g., enforce a
maxBytes constant such as 1-4 MB) and if file.size > maxBytes setError('Template
is too large; max X MB') and return without calling file.text(),
setCustomTemplateLatex, setCustomTemplateName, or setTemplateId; also wrap the
subsequent writeDraft/sessionStorage actions in a try/catch and surface any
storage errors via setError so oversized writes don't fail silently.
@@ -0,0 +1,67 @@
const DEFAULT_RENDER_API_BASE = 'https://latexonline.cc';
const URL_MODE_MAX_SOURCE_LENGTH = 6000;
URL length may exceed server limits for sources near the threshold.
URL_MODE_MAX_SOURCE_LENGTH = 6000 allows up to ~6000 chars in a query parameter. After encodeURIComponent, LaTeX source with many special characters (backslashes, braces, etc.) can expand significantly — potentially to 18KB+. Many HTTP servers and proxies enforce URL length limits around 8KB. Consider lowering the threshold (e.g., to ~2000 chars) or measuring encoded length instead of raw length.
Suggested fix
-const URL_MODE_MAX_SOURCE_LENGTH = 6000;
+const URL_MODE_MAX_SOURCE_LENGTH = 2000;

Or, alternatively, check the encoded length:
export function shouldUseLatexUploadMode(latexSource: string): boolean {
- return latexSource.length > URL_MODE_MAX_SOURCE_LENGTH;
+ return encodeURIComponent(latexSource).length > URL_MODE_MAX_SOURCE_LENGTH;
 }

Also applies to: 8-11
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/latex-render.ts` at line 2, The constant URL_MODE_MAX_SOURCE_LENGTH
(and any logic that decides "URL mode") currently uses raw source length = 6000
which can produce much longer encoded URLs; update the check to either lower the
raw threshold to ~2000 (replace 6000 with 2000) or, better, compute
encodeURIComponent(source).length and compare that encoded length against a safe
URL byte limit (e.g., 8000) before selecting URL mode; modify the code paths
that reference URL_MODE_MAX_SOURCE_LENGTH so they use the new threshold or
encoded-length check.
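The encoded-length approach can be demonstrated in isolation; backslash- and brace-heavy LaTeX is exactly what expands under encodeURIComponent. The 8000-character bound here is illustrative, not the route's actual constant:

```typescript
// Decide upload-vs-URL mode by the length the source will actually occupy in
// the query string, not its raw character count. Each backslash or brace
// becomes a three-character percent escape (%5C, %7B, %7D).
const ENCODED_URL_LIMIT = 8000 // illustrative bound, under common 8 KB server caps

function shouldUseUploadMode(latexSource: string): boolean {
  return encodeURIComponent(latexSource).length > ENCODED_URL_LIMIT
}
```

Measuring after encoding makes the threshold independent of how escape-dense a particular document happens to be.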
function normalizeLatexText(input: string): string {
  return input
    .replace(/\r\n?/g, '\n')
    .replace(/\u00A0/g, ' ')
    .replace(/[‘’]/g, '\'')
    .replace(/[“”]/g, '"')
    .replace(/[–—]/g, '-')
    .replace(/•/g, '-')
    .replace(/…/g, '...')
    .replace(/[^\x09\x0A\x0D\x20-\x7E]/g, '');
}
normalizeLatexText silently strips all non-ASCII characters, including accented letters.
The regex at line 65 ([^\x09\x0A\x0D\x20-\x7E]) removes everything outside the ASCII printable range. This means names like "José García" become "Jos Garca" and locations like "São Paulo" become "So Paulo". For a resume builder, this is likely to affect many users with international names or credentials.
Consider either preserving accented characters (the [utf8]{inputenc} and [T1]{fontenc} packages in the preambles already support them), or transliterating them instead of dropping them.
Proposed fix — preserve Latin-1 Supplement range at minimum

```diff
-    .replace(/[^\x09\x0A\x0D\x20-\x7E]/g, '');
+    .replace(/[^\x09\x0A\x0D\x20-\x7E\u00A1-\u00FF]/g, '');
```

Note: The preamble already includes `\usepackage[utf8]{inputenc}` and `\usepackage[T1]{fontenc}`, so pdflatex should handle Latin-1 characters. For broader Unicode support, consider lualatex or xelatex, but that's a larger scope change.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
 function normalizeLatexText(input: string): string {
   return input
     .replace(/\r\n?/g, '\n')
     .replace(/\u00A0/g, ' ')
     .replace(/[‘’]/g, '\'')
     .replace(/[“”]/g, '"')
     .replace(/[–—]/g, '-')
     .replace(/•/g, '-')
     .replace(/…/g, '...')
-    .replace(/[^\x09\x0A\x0D\x20-\x7E]/g, '');
+    .replace(/[^\x09\x0A\x0D\x20-\x7E\u00A1-\u00FF]/g, '');
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/resume-latex.ts` around lines 56 - 66, The current normalizeLatexText
function strips all non-ASCII chars via the regex [^\x09\x0A\x0D\x20-\x7E],
removing accented letters; update the regex in normalizeLatexText to preserve
accented/Unicode letters — either expand the allowed range to include Latin-1
(e.g. include \u00A0-\u00FF) or, better, use Unicode property escapes to keep
letters/numbers/punctuation/space like /[^\p{L}\p{N}\p{P}\p{Zs}\x09\x0A\x0D]/gu
so names like "José" and "São" are retained. Ensure the new regex uses the u
flag if using \p{} escapes and replace the old pattern in normalizeLatexText
accordingly.
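The Unicode-property-escape variant suggested above can be sketched as follows. This is a standalone illustration, not the committed code: the curly-quote and dash replacements are written with `\u` escapes for clarity, and `\p{S}` is added beyond the prompt's suggestion so ASCII symbols like `$` and `+` also survive.

```typescript
// Sketch: keep letters, digits, punctuation, symbols, and spaces instead of
// stripping everything outside printable ASCII.
function normalizeLatexText(input: string): string {
  return input
    .replace(/\r\n?/g, '\n')
    .replace(/\u00A0/g, ' ')            // non-breaking space -> plain space
    .replace(/[\u2018\u2019]/g, "'")    // curly single quotes
    .replace(/[\u201C\u201D]/g, '"')    // curly double quotes
    .replace(/[\u2013\u2014]/g, '-')    // en/em dashes
    .replace(/\u2022/g, '-')            // bullet
    .replace(/\u2026/g, '...')          // ellipsis
    // Drop only chars that are not letters, digits, punctuation, symbols,
    // or spaces (plus tab/newline/CR). Requires the 'u' flag.
    .replace(/[^\p{L}\p{N}\p{P}\p{S}\p{Zs}\t\n\r]/gu, '');
}

console.log(normalizeLatexText('José García — São Paulo'));
// José García - São Paulo
```

Accented letters now pass through to the `[utf8]{inputenc}` preamble intact, while the typographic-character substitutions still run first.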
```ts
export function buildLatexResumeFromCustomTemplate(templateSource: string, rawData: TailoredResumeData): string {
  const builtInFallback = buildLatexResume('awesome-classic', rawData);
  const experience = cleanSectionItems(rawData.experience, 6);
  const projects = cleanSectionItems(rawData.projects, 6);
  const education = cleanSectionItems(rawData.education, 4);
  const skills = cleanList(rawData.skills, 30);
  const certifications = cleanList(rawData.certifications, 15);
  const additional = cleanList(rawData.additional, 15);

  const replacements: Record<string, string> = {
    '{{fullName}}': escapeLatex(rawData.fullName?.trim() || ''),
    '{{email}}': escapeLatex(rawData.email?.trim() || ''),
    '{{phone}}': escapeLatex(rawData.phone?.trim() || ''),
    '{{location}}': escapeLatex(rawData.location?.trim() || ''),
    '{{linkedin}}': escapeLatex(rawData.linkedin?.trim() || ''),
    '{{github}}': escapeLatex(rawData.github?.trim() || ''),
    '{{website}}': escapeLatex(rawData.website?.trim() || ''),
    '{{summary}}': escapeLatex(rawData.summary?.trim() || ''),
    '{{targetTitle}}': escapeLatex(rawData.targetTitle?.trim() || ''),
    '{{skills}}': escapeLatex(skills.join(', ')),
    '{{skills_latex}}': renderSkills(skills),
    '{{experience_entries}}': experience.map(renderEntry).join('\n\n'),
    '{{projects_entries}}': projects.map(renderEntry).join('\n\n'),
    '{{education_entries}}': education.map(renderEntry).join('\n\n'),
    '{{certifications}}': certifications.map((item) => `\\textbullet{} ${escapeLatex(item)}`).join('\\\\\n'),
    '{{additional}}': additional.map((item) => `\\textbullet{} ${escapeLatex(item)}`).join('\\\\\n'),
    '{{keywordsUsed}}': escapeLatex((rawData.keywordsUsed ?? []).join(', ')),
    '{{structuredDataJson}}': escapeLatex(toJsonString(rawData)),
    '{{generated_resume}}': builtInFallback,
  };

  let output = templateSource;
  for (const [placeholder, value] of Object.entries(replacements)) {
    output = output.split(placeholder).join(value);
  }
  return output;
}
```
{{generated_resume}} injects a full \documentclass...\end{document} into the custom template.
Line 404 sets {{generated_resume}} to builtInFallback, which is the output of buildLatexResume('awesome-classic', rawData) — a complete LaTeX document including \documentclass, \begin{document}, and \end{document}. If a custom template uses this placeholder inside its own \begin{document}...\end{document}, the result will be invalid LaTeX with nested document environments.
Consider either:

- Stripping the preamble/document wrapper from the fallback before injection, or
- Documenting that `{{generated_resume}}` replaces the entire template content (i.e., the custom template should contain only `{{generated_resume}}` and nothing else).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/resume-latex.ts` around lines 376 - 412, The injected built-in resume
(builtInFallback) contains a full LaTeX preamble and document wrapper which will
produce nested \documentclass/\begin{document}/\end{document} when placed inside
a custom template; in buildLatexResumeFromCustomTemplate replace the assignment
for '{{generated_resume}}' with a preamble-stripped version of builtInFallback
(create or call a small helper to remove the \documentclass... preamble and the
outer \begin{document}...\end{document} block via a targeted regex or parser) so
the placeholder injects only the document body, or alternatively detect if the
template contains no \begin{document} and keep the full document—target symbols:
buildLatexResumeFromCustomTemplate, builtInFallback, and '{{generated_resume}}'.
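A minimal sketch of the first option — stripping the wrapper before injection — could look like this. `stripLatexDocumentWrapper` is a hypothetical helper name; the string-search approach assumes the fallback contains exactly one `\begin{document}` / `\end{document}` pair, as the built-in templates do.

```typescript
// Return only the body between \begin{document} and \end{document};
// fall back to the input unchanged if no complete wrapper is found.
function stripLatexDocumentWrapper(fullDocument: string): string {
  const beginTag = '\\begin{document}';
  const endTag = '\\end{document}';
  const begin = fullDocument.indexOf(beginTag);
  const end = fullDocument.lastIndexOf(endTag);
  if (begin === -1 || end === -1 || end <= begin) {
    return fullDocument; // not a complete document; leave as-is
  }
  return fullDocument.slice(begin + beginTag.length, end).trim();
}

const full = [
  '\\documentclass{article}',
  '\\usepackage[utf8]{inputenc}',
  '\\begin{document}',
  'Hello, resume body.',
  '\\end{document}',
].join('\n');

console.log(stripLatexDocumentWrapper(full)); // Hello, resume body.
```

Calling this on `builtInFallback` before assigning `'{{generated_resume}}'` would avoid nested document environments when the placeholder sits inside a custom template's own `\begin{document}` block.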
This pull request introduces tailored LaTeX resume generation and management functionality, including new database schema, API endpoints, and integration with user history. It also adds a sample LaTeX resume template and updates documentation and environment configuration to support these features.
**Tailored Resume Generation & Management**

- New `tailoredResumes` table with fields for user, resume/job hashes, template, LaTeX source, versioning, and custom template support, plus relevant indexes for efficient querying. (`convex/schema.ts`)
- Supporting queries and mutations for tailored resumes. (`convex/functions.ts`) [1] [2] [3]

**LaTeX Resume Support**

- Sample LaTeX resume template. (`public/jake's_resume.tex`)
- AI-assisted endpoint for fixing LaTeX compilation errors. (`src/app/api/fix-latex/route.ts`)

**Documentation & Configuration**

- Updated `.env.example` and `README.md` to document new environment variables for LaTeX resume generation and the API base URL, and to highlight tailored LaTeX resume generation as a feature. (`.env.example`, `README.md`) [1] [2] [3]
- Revised `README.md` for clarity. (`README.md`)

Summary by CodeRabbit
Release Notes
New Features
Documentation