Add LTI page for Canvas iframe (INDG 100 self-assessment)#504
frasermuller wants to merge 7 commits into main
Conversation
Replaces the old self-assessment page (which had question text hardcoded in URL params) with a proper database-backed system. Professors can now create and manage questions through a settings page, copy an iframe embed code, and paste it into Canvas. Students see the question, type a response, and get AI feedback via the existing chatbot endpoint.

- New IframeQuestion entity, migration, service, controller, and module
- Professor settings page (CRUD + copy embed code)
- Student iframe page with AI feedback component
- Seed data for dev environment
- Cleaned up old self-assessment files
The iframe was under the (embed) layout, which wraps everything in LtiContextProvider. That was sending lti.frameResize to Canvas and blowing up the iframe height. Moved it to /lti/iframe/[cid] outside of (embed) so it doesn't get that context at all. Also added public endpoints so students don't need to be logged into HelpMe to use the iframe:

- GET /iframe-question/public/:courseId/:questionId
- POST /iframe-question/public/:courseId/:questionId/feedback

Other fixes:

- Fixed LtiContext crashing when not actually embedded in Canvas
- Added chatbot/query to LTI restrictPaths
- Exported ChatbotApiService from ChatbotModule
- Updated middleware to allow /lti/iframe/* as public pages
A few thoughts:
Yeah, I know it's definitely not ideal having it public. I could add some sort of rate limiting that could help with the potential "abuse", but I honestly don't know how to fix this problem without it being public. For citations: since this was mainly self-assessments where students are reflecting on their own experience, there's nothing to cite really. But if I were to change it to /ask, I'm pretty sure it needs a user chat token, so they would have to be logged in, which is back to the original problem. I guess a solution might be to make some sort of general token, like a shared token for this feature? But I'm not sure if that's even possible or how to do that. And for the course prompt, I can definitely add that so that it explicitly says it; that's honestly the easiest fix.
Okay, I actually went and double-checked the chatbot repo. So yeah, I guess maybe adjust the "Criteria" tooltip to say as much: basically that the text they put in is the entirety of the prompt. No course prompt, no HelpMe system prompt, and the chatbot knowledge base is not used.

And yeah, as for the public endpoint problem, I don't really have a good idea to fix it. If we had some way of making it so that if the iframe detects that it's embedded in a Canvas page it will auto-log the user into HelpMe, that would be amazing (since then we can also keep track of who is asking what). But something tells me, with general web fundamentals, that that wouldn't be allowed unless there was a way to establish trust between Canvas and the HelpMe iframe. And also, yeah, Bridgette already spent multiple months basically doing that for her LTI embed of the chatbot to get it as good as she could, and even then it's not perfect (you still need to log in once the first time, and I believe trust needs to be established by the prof authorizing a HelpMe page in their course, and I imagine the process is different for just inserted iframes).
AdamFipke
left a comment
Finally some code that isn't just completely AI-generated slop! There are no huge gaping logic holes or completely unfinished code, and I can tell that you actually spent time problem-solving here to solve the root issue, rather than feed the issue to the AI and watch it pump out a steaming pile of garbage that I'd be spending the next half-day picking apart, when it turns out there's an entirely better solution to the root issue that no one will think of, because the person assigned to the issue never actually put in the thought and effort, and since AI is stupid it never would've come up with it. And if it turns out I didn't need to spend the next half-day picking it apart, it's because the issue was so small and simple it would only have taken me a few hours to do (and meanwhile they will spend weeks' worth of time to get the same amount done and learn fuckall in the process).

It's miserable working with AIs: small talk is eliminated, problems are solved poorly, skills aren't improved, nothing is learnt. And I don't get how people who are reliant on AI expect to get anywhere except by being mistakenly hired - all they will do is take all the junior-level issues that the seniors were saving for training purposes, then pump out garbage once they get anything harder and waste the seniors' time. I'd just fire them, since I don't need someone there to prompt the LLM for me. But then you might get seen as rude for calling them out on it, so it's either tolerate AI-parrots and be miserable, or get seen as rude (especially by those with management backgrounds who probably lack the domain expertise to see how garbage the work is), have your social/career opportunities be hampered, and be miserable. I think that's the same reason why you don't see people posting negative comments about other people on LinkedIn - people will see you as mean or judgy, or be afraid to work with you, I think. Like it feels wrong to, but maybe people should.

Or maybe it just needs to be phrased in the right way, since it does feel disrespectful to just get given a bunch of AI-generated garbage, since now I'm the one that needs to filter through all of it. Repeat ad infinitum, since the person never bothered to learn and thus never became better than the AI, and it just feels like an endless cycle of disrespect.
It's so very lonely out here
Anywho, just some small comments.
<style>{`
  html, body, #html {
    height: auto !important;
    min-height: 0 !important;
    background: transparent !important;
  }
  body {
    display: block !important;
    flex-grow: 0 !important;
  }
`}</style>
No question specified. The iframe URL should include a question ID
(e.g. ?q=3).
I would maybe go into more detail with this (y'know, just in case). Something like "Please consider refreshing the page if possible (copy anything important elsewhere first), and/or let your professor know."

(Although it's been a while and I don't know what would actually happen if a student were to refresh the page on a Canvas quiz. It'd be bad if Canvas were to block them from getting back into the quiz after they refresh the page.)
I updated that error text so it’s not just “missing question ID” anymore. It still tells them the URL is missing ?q=..., but now it also gives them actual next steps: if possible, refresh after copying anything important first, and if it keeps happening, let their professor know. So it’s a bit more actionable for students instead of just a technical message.
if (error || !question) {
  return (
    <div className="flex min-h-32 flex-col items-center justify-center px-3 py-2">
      <p className="text-zinc-600">{error || 'Question not found.'}</p>
Similarly, I would also add a "Please let your professor know" at the end of this error message
Updated: when the question fails to load (or isn't found), the message now ends with "Please let your professor know," so it's clearer what the student should do instead of just showing a dead-end error.
@Post('public/:courseId/:questionId/feedback')
async getFeedbackPublic(
I would really add a throttler to this. I think our default global throttler is something like 10 requests per second (probably more than that). Example here: https://github.com/ubco-db/helpme/pull/476/changes#diff-6fe8c3cf86e5e0cda0cfb874144a8de2e1b2a4cd390e4daa0756ae6a5c62ed0cR245

For this endpoint in particular, you'd maybe want to throttle it to something like 2 or 3 requests per minute, or 5 requests per 5 minutes.

But since this throttle would be triggerable by our users (by spamming it), you might want to handle the error on the frontend, saying something like "woah buddy, calm down. No need to spam it. We allow x requests every x minutes."

Though, come to think of it, if for some reason someone were to stick a LOT of iframes in a single quiz, maybe 5 requests per 5 minutes would be too little. Maybe 10 requests per 5 minutes.
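The throttle policy being discussed (N requests per fixed window, keyed per client) can be sketched in plain TypeScript. In the actual codebase this would just be the `@Throttle()` decorator from `@nestjs/throttler` on the endpoint, as the linked PR shows; this dependency-free version only illustrates the windowing logic and the numbers floated above.

```typescript
// Fixed-window rate limiter sketch: allow `limit` requests per `windowMs`
// per key (e.g. client IP). In NestJS, @Throttle() from @nestjs/throttler
// handles this for you; this is only an illustration of the policy.
type WindowState = { start: number; count: number };

class FixedWindowLimiter {
  private windows = new Map<string, WindowState>();

  constructor(
    private limit: number,
    private windowMs: number,
  ) {}

  // Returns true if the request is allowed; false means respond with 429.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      // New key, or the previous window expired: start a fresh window.
      this.windows.set(key, { start: now, count: 1 });
      return true;
    }
    if (w.count >= this.limit) return false;
    w.count += 1;
    return true;
  }
}

// The policy eventually settled on in this thread: 10 requests per 5 minutes.
const feedbackLimiter = new FixedWindowLimiter(10, 5 * 60 * 1000);
```

A fixed window is the simplest scheme; a sliding window (which `@nestjs/throttler` uses internally) avoids the burst at window boundaries, but the configured numbers mean the same thing.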
Already replied to a diff comment with what I changed, but basically it's now 10 req / 5 mins, and I added frontend handling for 429 so users get a clear "wait a few minutes" message.
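A hedged sketch of what that frontend 429 handling could look like in the iframe feedback component. The endpoint path matches this PR, but the function name, the message copy, and the injectable `fetchFn` parameter are illustrative, not the actual component code (in the app, `fetchFn` would just be the global fetch).

```typescript
// Map the feedback endpoint's response to a user-facing result, treating
// 429 specially so students see a rate-limit message instead of a
// generic failure. Message wording here is an assumption.
type FetchLike = (
  url: string,
  init?: unknown,
) => Promise<{ status: number; ok: boolean; json(): Promise<unknown> }>;

async function requestFeedback(
  courseId: number,
  questionId: number,
  responseText: string,
  fetchFn: FetchLike,
): Promise<{ feedback?: string; error?: string }> {
  const res = await fetchFn(
    `/api/v1/iframe-question/public/${courseId}/${questionId}/feedback`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ responseText }),
    },
  );
  if (res.status === 429) {
    // Rate limited: 10 requests per 5 minutes on this endpoint.
    return { error: 'Too many requests. Please wait a few minutes and try again.' };
  }
  if (!res.ok) {
    return { error: 'Something went wrong getting feedback. Please let your professor know.' };
  }
  return (await res.json()) as { feedback: string };
}
```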
@Body() body: { responseText?: string },
): Promise<{ feedback: string }> {
As much as it's literally only an object with a single attribute, the body type should still be put in a DTO so that we can ensure it's a string (since otherwise someone could pass a number or array etc. and cause the backend to error, or worse yet do SQL injection). Details on how to do so here: https://github.com/ubco-db/helpme/pull/474/changes#diff-5c8dab1e31177771ccba285054bc148e65cf55192857db8b5e648a280dc8e18fR303

Same goes for the create() endpoint and update() endpoint.

It would also be good to give the endpoint a proper return DTO as well. Not necessarily to validate the response from our backend (though technically that's still a good idea in case our server gets compromised), but more so for the maintainability benefits (if we ever want the endpoint to return something else, we only have to change the code in one spot instead of multiple). But admittedly this is a bit less important compared to a request DTO.
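For reference, the runtime guarantee a class-validator DTO (roughly `@IsString()` plus `@IsNotEmpty()` on `responseText`, wired through NestJS's ValidationPipe) would give this endpoint can be shown as a dependency-free type guard. The real fix should use the DTO pattern from the linked PR; this sketch only shows the shape being enforced.

```typescript
// What the request DTO enforces at runtime: responseText must be present,
// a string, and non-empty. In the real controller, failure would be a
// BadRequestException thrown by the ValidationPipe, not a plain Error.
interface FeedbackBody {
  responseText: string;
}

function parseFeedbackBody(body: unknown): FeedbackBody {
  const responseText =
    typeof body === 'object' && body !== null
      ? (body as Record<string, unknown>).responseText
      : undefined;
  if (typeof responseText !== 'string' || responseText.length === 0) {
    throw new Error('responseText must be a non-empty string');
  }
  return { responseText };
}
```

The point of the review comment is that without this, `{ responseText: 42 }` or `{ responseText: ['a'] }` sails through the inline `{ responseText?: string }` type (which is erased at runtime) and reaches the service untyped.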
I changed the inline body types into DTOs for create, update, and public feedback, and added the feedback return type too. so now the endpoints enforce the expected shape/types properly instead of just trusting whatever gets sent in.
throw new BadRequestException('responseText is required');
}

const question = await this.iframeQuestionService.findOne(
As a minor point, I'm getting whiplash a bit here, since we already have 3 different types of questions (chatbot, queue, anytime), and this now makes it 4.

I really want to name it something else (on the frontend, backend, everywhere), since I imagine this might also get confusing for professors. But goddam, what would it be? On every quiz, test, assignment, etc. they're always called "questions", I think.

And when I see the text "iframe questions", I don't know if it's questions that I (as the professor) would set, or student questions that came from an iframe (whatever that would mean; I would just be a silly professor that's new to the system and doesn't know what HelpMe does). Same goes if we call them "quizIframeQuestions" - are they questions that students ask or questions that I set?

Maybe if it's called profSetIframeQuestions? But then that's too long for some areas (like the nav). But maybe generally that would work? Like in the changelog and parts of the UI, call them Professor-set Iframe Questions. Maybe. Maybe.
Okay, I thought of some synonyms, but none of them work:

- "Exercise" - they're not really exercises
- "Prompt" - since we're an AI system, that'd get confusing
- "Problem" - not really problems
- "Item" - feels too generic; I wouldn't know what that is

Man, this is kinda BS actually. I blame English for my woes. I will spend the next amount of time pondering this.
lol yeah, I just kept it the same; I honestly don't really know what to call it either, other than "iframe questions". Maybe something with embedding and AI feedback, like "embeddable question with AI feedback" - gives a bit more context as to what it actually is, but it sounds pretty bad and is quite long... Maybe "embeddable question"? Still sounds not great. Yeah, I haven't got a clue either.
questionText: string;

// criteria that the AI uses to evaluate the student's response
// if empty, the course-level default criteria can apply
From what I could tell, I don't think any default criteria is getting applied anywhere. So without any criteria, the entire prompt to the chatbot would just be the question text.
I updated this. I removed the “course-level default criteria” comment and made criteria required end-to-end, so we’re no longer implying fallback behavior that doesn’t exist and we don’t allow empty/omitted criteria anymore.
'r^\\/api\\/v1\\/semesters\\/[0-9]+$',
'r^\\/api\\/v1\\/chatbot\\/ask\\/[0-9]+$',
'r^\\/api\\/v1\\/chatbot\\/askSuggested\\/[0-9]+$',
'r^\\/api\\/v1\\/chatbot\\/query\\/[0-9]+$',
Wait, I might be a little confused here, but I don't believe the frontend calls this endpoint directly, does it? So you probably don't want to allow this here. But I could also be totally misunderstanding what's actually going on.
Yeah, you're right, oops. This was from before I moved it to public endpoints, and I forgot to delete it.
Add stronger validation and throttling around the public iframe feedback flow, clarify iframe UX copy, and align iframe question criteria handling across frontend, backend, tests, and migration history.
Okay, so I updated the criteria helper text to make this explicit: for iframe feedback, the prompt uses only the question text + criteria entered there, and does not use the course prompt, HelpMe system prompt, or chatbot knowledge base. I also made it red so it's very clear to profs. Also, it's still on /query.

About the public endpoint thing: for POST /api/v1/iframe-question/public/:courseId/:questionId/feedback, I added route-level throttling of 10 requests every 5 minutes, so if someone (or a script) spams it, they hit a 429 instead of being able to spam it. I also added frontend handling for 429 in the iframe feedback component, so users get a clear "wait a few minutes" message.

So basically: still public (since the iframe flow needs no login), but less abusable and less confusing when limits are hit.
Backfill existing null criteriaText rows before applying NOT NULL so local and shared environments can run the migration without failing on legacy data.
Description
Closes #not an issue
This is a big rework from what I was originally making before. Instead of passing the question text in the URL, this adds a proper iframe question system for Canvas embeds. Profs can create and manage questions (with optional grading criteria) in the course settings, and get a copy-paste iframe embed code. All questions are stored in the database now so the URLs are clean.
Students see the question in the iframe, type their response, and get instant AI feedback right there. The feedback comes from the existing HelpMe chatbot, and the backend builds a prompt with the question + criteria + student response and sends it to the chatbot query endpoint.
The iframe endpoints are public (no HelpMe login required), so students in Canvas don't need to be logged into their HelpMe account to use it. (Maybe down the line there could be some big changes to this, like saving the feedback for students so it can build off previous feedback - NOT DOING THIS NOW THOUGH.) This was super annoying to figure out, because whenever I would put the iframe link in a Canvas quiz it would just take me to the login screen or the main page.
There are two public endpoints:
GET /api/v1/iframe-question/public/:courseId/:questionId
fetches the question text
POST /api/v1/iframe-question/public/:courseId/:questionId/feedback
submits the student's response and returns the AI-generated feedback
The actual iframe route is at /lti/iframe/[cid]?q=[questionId], so whenever you make a question, you can go to that specific URL and the question will be there.
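As an illustration of the copy-paste embed code described above, here is a hypothetical helper that builds the iframe snippet for a given course and question. The base URL argument and the sizing/attribute choices are assumptions, not the actual settings-page code.

```typescript
// Build the <iframe> embed snippet a professor would paste into Canvas,
// pointing at the public route /lti/iframe/[cid]?q=[questionId].
// width/height/frameborder values here are illustrative defaults.
function buildEmbedCode(
  baseUrl: string,
  courseId: number,
  questionId: number,
): string {
  const src = `${baseUrl}/lti/iframe/${courseId}?q=${questionId}`;
  return `<iframe src="${src}" width="100%" height="400" frameborder="0"></iframe>`;
}
```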
I want to get this on staging so I can test the full AI feedback loop and iframe embed in Canvas, since I can’t fully verify the AI part locally.
Here's a Copilot-generated flow which might help:
Student types answer
↓
POST /api/v1/iframe-question/public/7/8/feedback
{ responseText: "student's text" }
↓
Backend looks up question # from DB
↓
Builds combined prompt (question + criteria + response)
↓
Forwards to chatbot API: POST /chatbot/query
{ query: "Question: ...\nCriteria: ...\nStudent's response: ...", type: "default" }
↓
AI generates feedback
↓
Returns { feedback: "Your response..." }
↓
Rendered in the iframe
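The "Builds combined prompt" step above can be sketched like this. The field names mirror the flow diagram; the exact formatting in the backend service may differ.

```typescript
// Join the stored question, its criteria, and the student's response into
// a single query string, which is then forwarded to the chatbot API as
// { query: ..., type: 'default' } per the flow above.
interface IframeQuestion {
  questionText: string;
  criteriaText: string;
}

function buildFeedbackPrompt(q: IframeQuestion, responseText: string): string {
  return [
    `Question: ${q.questionText}`,
    `Criteria: ${q.criteriaText}`,
    `Student's response: ${responseText}`,
  ].join('\n');
}
```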
Here are some screenshots:

In Canvas:



In HelpMe, from the prof's perspective:


Type of change
Requires yarn install; new migration added.
How Has This Been Tested?
Tested locally with seeded data. The frontend correctly loads questions, renders the form, and sends requests through to the backend chatbot endpoint. Actual AI feedback still needs to be tested on staging where the chatbot service is running.
Iframe page loads at /lti/iframe/[cid]?q=[id] without requiring login
Question text and criteria are fetched from the public endpoint
Form submission hits the public feedback endpoint correctly
Iframe settings page (CRUD) works for profs/TAs
Copy embed code generates correct iframe HTML
Iframe sizing stays compact in Canvas (no more height blowup)
Integration tests pass for all iframe question endpoints
End-to-end AI feedback test (needs staging)
Checklist:
I have checked for debug code (console.logs, leftover unused logic, or anything else that was accidentally committed)