Credit Application

This is a full-stack AI-powered credit scoring system built with:

  • FastAPI (backend)
  • AWS Bedrock for LLM risk summarization
  • AWS Fraud Detector or Amazon SageMaker for anomaly detection
  • React (frontend with TailwindCSS and lucide-react)
  • MongoDB (recommended for storage and optional vector indexing)

Features

  • AI-generated credit score and breakdown
  • Visual sliders and tabbed UI
  • LLM-generated risk summary and suggestions
  • Anomaly detection service for suspicious applications
  • Modular FastAPI backend

Architecture

User Input → React UI → FastAPI (/score) → Rule-based scoring + Bedrock LLM + Anomaly detection → JSON → UI rendering

How a Credit Score Is Generated

Flow Diagram

(Credit score flow diagram)

Architectural Diagram

(Architecture diagram)
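As a rough illustration of this request flow, here is a minimal FastAPI sketch; the field names and scoring heuristic are hypothetical, not the repository's actual models:

```python
# Hypothetical sketch of the /score flow above; field names and the
# scoring heuristic are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Application(BaseModel):
    income: float
    debt: float
    credit_history_years: int

@app.post("/score")
def score(application: Application) -> dict:
    # 1. Rule-based score (placeholder heuristic, clamped to 300-850).
    raw = (600 + application.income / 1000 - application.debt / 500
           + 5 * application.credit_history_years)
    result = int(max(300, min(850, raw)))
    # 2. The real backend would add a Bedrock LLM risk summary and an
    #    anomaly-detection verdict here before returning JSON to the UI.
    return {"score": result, "summary": None, "anomalous": False}
```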

AWS Configuration

Before running the application, configure AWS access and model settings.

Authenticate to AWS using SSO if required:

```bash
aws sso login --profile default
```

1. Install the AWS CLI and run `aws configure`, or set the credentials as environment variables:

```bash
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-1
```
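A quick way to confirm the credentials are visible to Python (assuming boto3, the usual AWS SDK, is installed):

```python
# Sanity-check that AWS credentials are resolvable from the environment.
import boto3

identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])
```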

2. Enable [Amazon Bedrock](https://aws.amazon.com/bedrock/) and choose models or, if required, their inference profiles:

```bash
# For on-demand models
BEDROCK_TEXT_MODEL_ID=anthropic.claude-v2

# For models that require an inference profile, omit BEDROCK_TEXT_MODEL_ID and set one of:
BEDROCK_TEXT_INFERENCE_PROFILE_ARN=arn:aws:bedrock:REGION:ACCOUNT_ID:inference-profile/my-text-profile
BEDROCK_TEXT_INFERENCE_PROFILE_ID=ip-1234567890abcdef

# Region hosting the text inference profile
BEDROCK_TEXT_REGION=us-west-2

# Optional embedding model or profile
BEDROCK_EMBED_MODEL_ID=amazon.titan-embed-text-v1
BEDROCK_EMBED_REGION=us-west-2
BEDROCK_EMBED_INFERENCE_PROFILE_ARN=arn:aws:bedrock:REGION:ACCOUNT_ID:inference-profile/my-embed-profile
```

When an inference profile environment variable is provided, the backend uses cross-region inference and omits the `modelId` in Bedrock requests.

3. Deploy an anomaly detection service using AWS Fraud Detector or a SageMaker endpoint and capture its identifier (a usage sketch follows below):

```bash
FRAUD_DETECTOR_MODEL_ARN=arn:aws:frauddetector:us-east-1:123456789012:detector/my-detector   # if using Fraud Detector
SAGEMAKER_ENDPOINT_NAME=my-anomaly-endpoint                                                  # if using SageMaker
```

4. Include additional application variables such as the MongoDB connection string:

```bash
MONGODB_URI=mongodb://localhost:27017
```

Store these values in a `.env` file in the `backend` directory or export them in your shell.
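For the anomaly step, a minimal client sketch looks like this, assuming the `.env` is loaded with python-dotenv (an assumption; check `requirements.txt`) and a hypothetical JSON feature payload for the SageMaker endpoint:

```python
import json
import os

import boto3
from dotenv import load_dotenv  # assumes python-dotenv is a dependency

load_dotenv()  # reads backend/.env into the process environment

runtime = boto3.client("sagemaker-runtime",
                       region_name=os.getenv("AWS_REGION", "us-east-1"))
response = runtime.invoke_endpoint(
    EndpointName=os.environ["SAGEMAKER_ENDPOINT_NAME"],
    ContentType="application/json",
    Body=json.dumps({"features": [42000.0, 0.35, 7]}),  # hypothetical payload
)
print(json.loads(response["Body"].read()))
```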

Bedrock (Claude 3.5 Haiku) via Inference Profile

Some Anthropic models (e.g., Claude 3.5 Haiku) cannot be invoked on demand with a plain `modelId`; you must call them through an inference profile. Configure the environment like this:

```bash
# Do NOT set BEDROCK_TEXT_MODEL_ID for Haiku
export BEDROCK_TEXT_INFERENCE_PROFILE_ID=us.anthropic.claude-3-5-haiku-20241022-v1:0
export BEDROCK_TEXT_REGION=us-west-2
```

Or use the full ARN:

```bash
export BEDROCK_TEXT_INFERENCE_PROFILE_ARN=arn:aws:bedrock:us-east-1:<acct>:inference-profile/us.anthropic.claude-3-5-haiku-20241022-v1:0
```

If you use an on-demand model (one that does not require an inference profile), configure:

```bash
export BEDROCK_TEXT_MODEL_ID=anthropic.claude-v2:1
```

The runtime automatically prefers the inference profile if set; otherwise it falls back to `modelId`. If neither is set, the service raises a clear configuration error at call time.
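For illustration, here is a minimal Bedrock call using boto3's Converse API; note that boto3 passes an inference profile ARN or ID through the same `modelId` request field, in place of a plain model ID (this is a sketch, not the repository's client code):

```python
import os

import boto3

# Prefer an inference profile; fall back to a plain model ID.
identifier = (os.getenv("BEDROCK_TEXT_INFERENCE_PROFILE_ARN")
              or os.getenv("BEDROCK_TEXT_INFERENCE_PROFILE_ID")
              or os.environ["BEDROCK_TEXT_MODEL_ID"])

client = boto3.client("bedrock-runtime",
                      region_name=os.getenv("BEDROCK_TEXT_REGION", "us-west-2"))
response = client.converse(
    modelId=identifier,  # accepts inference profile ARNs/IDs as well
    messages=[{"role": "user",
               "content": [{"text": "Summarize the applicant's credit risk."}]}],
    inferenceConfig={"maxTokens": 200},
)
print(response["output"]["message"]["content"][0]["text"])
```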

Backend

```bash
cd backend
# create .env and add the variables described above
pip install -r requirements.txt
uvicorn main:app --reload
```
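Once uvicorn is running (port 8000 by default), you can smoke-test the `/score` endpoint; the payload fields here are hypothetical, so check the backend's request schema for the real names:

```python
import requests

resp = requests.post(
    "http://localhost:8000/score",  # uvicorn's default port
    json={"income": 42000, "debt": 12000, "credit_history_years": 7},
    timeout=30,
)
print(resp.status_code, resp.json())
```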

Frontend

```bash
cd frontend
npm install
npm run dev   # start development server
npm run build # create production build
npm start     # preview production build
```

Visit: http://localhost:5173/
