This application is designed, implemented, and maintained by James Ligeralde, Frances Michelle Uy, Jonathan Sapolu and Michelle Ho.
Our project follows the Issue Driven Project Management (IDPM) guidelines. We will hold meetings twice a week, on Mondays and Wednesdays from 6-7 PM, via Discord. Additionally, each team member will provide frequent updates on their assigned tasks in our Discord chat. Tasks will be assigned as GitHub issues to each respective member, though other members are encouraged to help as needed. To track our development, we will establish milestones every 7-14 days.
The goal of this project is to improve the search effectiveness of the University of Hawaii's Ask Us search engine, which takes in user queries and attempts to return a list of IT-related articles that may help users resolve their IT issues. We will attempt to implement an AI search engine that will hopefully alleviate the need to contact the IT help desk.
We want this AI search engine to respond to queries as helpfully as possible. This means being conversational and asking follow-up questions when a query is unclear.
We plan to provide the chat interface to all users on the landing page, but we also want to provide login capabilities so the AI can store previous chat sessions.
When you load the app at http://localhost:3000, this is what should be displayed:
Eventually a chat prompt will be displayed on this page with no previous conversations saved.
The blue text boxes are previous questions asked by the user, and the green text boxes are the responses generated by the AI. Currently, the AI is not connected to our database of articles.
Articles are split into sections of related paragraphs. Each section is given an embedding by the OpenAI API, which is then stored in a vector database. When a user submits a question, it receives an embedding as well, which is compared against the vector database to return the article sections with the highest cosine similarity. These sections are then passed to GPT-3.5, which answers the user's question in a conversational format.
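The retrieval step above can be sketched in a few lines of Python. This is a minimal illustration, not our actual implementation: the section names and the tiny 3-dimensional vectors below are hypothetical stand-ins for real OpenAI embeddings, and a production version would query a vector database rather than a plain dictionary.

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_sections(query_embedding, section_embeddings, k=2):
    # Rank stored article-section embeddings by cosine similarity
    # to the query embedding, highest first.
    ranked = sorted(
        section_embeddings.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [section_id for section_id, _ in ranked[:k]]

# Toy embeddings standing in for real OpenAI embedding vectors.
sections = {
    "reset-password": [0.9, 0.1, 0.0],
    "connect-wifi":   [0.1, 0.9, 0.1],
    "vpn-setup":      [0.2, 0.8, 0.3],
}
query = [0.85, 0.15, 0.05]  # e.g. an embedded "How do I reset my password?"
print(top_k_sections(query, sections, k=1))  # → ['reset-password']
```

The top-ranked sections would then be included in the prompt sent to GPT-3.5 so it can answer using the article content.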

