Conversation

Contributor

@BradSwain BradSwain commented Aug 1, 2025

Potentially replaces #201

Makes the resource allocations configurable via values set in the env file. The values are set automatically during `make setup-local` based on available system resources.
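The detection step described above could look roughly like this. This is a sketch only: the `MINIKUBE_CPUS`/`MINIKUBE_MEMORY` names and the "halve the host's resources" heuristic are assumptions for illustration, not the actual diff.

```shell
#!/bin/sh
# Sketch of automatic resource detection for the env file.
# Assumption: give minikube half the host's resources so the host OS keeps room.

# CPU count: nproc on Linux, sysctl on macOS.
if command -v nproc >/dev/null 2>&1; then
    total_cpus=$(nproc)
else
    total_cpus=$(sysctl -n hw.ncpu)
fi

# Total memory in MiB.
if [ -r /proc/meminfo ]; then
    mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    total_memory_mib=$((mem_kib / 1024))
else
    total_memory_mib=$(( $(sysctl -n hw.memsize) / 1048576 ))
fi

# Write the detected values into the env file.
{
    echo "export MINIKUBE_CPUS=$((total_cpus / 2))"
    echo "export MINIKUBE_MEMORY=$((total_memory_mib / 2))"
} >> .env
```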

TODOS:

  • set minimums that make sense
  • allocate resources based on experience with sample challenges
  • (optional) do not overwrite manually configured values when running `make setup-local`

@ret2libc What do you think of this approach in general compared to #201 ?

@BradSwain BradSwain requested a review from ret2libc August 1, 2025 03:12
Collaborator

@ret2libc ret2libc left a comment


Overall I think it's a nice idea; however, I'm not convinced all this logic is really needed. We are talking anyway about a test-oriented deployment, just for showing the program. For any real usage, people will have to play a bit with the pod allocation themselves and give it the right resources. For example, just having one seed-gen and one fuzzer-bot is not going to work very well. Mid/long-term I think we are better off implementing auto-scaling within k8s, so that we can start everything with just 1 pod per component and scale up/down automatically as needed.

The reason I think this might be a bit over-engineered is that, given it's just for testing/showing the tool, ensuring the system has a minimum of X CPU / Y memory is enough to deploy a basic system that works. Any use case beyond that will require some thinking about the resources allocated to each component.

cc @michaelbrownuc
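In k8s terms, the auto-scaling direction mentioned above would mean pairing each component's Deployment with a HorizontalPodAutoscaler. A minimal illustrative manifest follows; the `fuzzer-bot` name is taken from this thread, and the replica bounds and CPU threshold are made-up values, not tuned recommendations.

```yaml
# Illustrative only: scale fuzzer-bot between 1 and 8 replicas on CPU pressure.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fuzzer-bot
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fuzzer-bot
  minReplicas: 1
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```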

Collaborator

Set defaults for the MINIKUBE_* vars if not present. On a production deployment you probably don't even care about setting those vars in the env file.
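The defaulting suggested here could be a couple of parameter expansions. The `MINIKUBE_*` names come from the diff; the fallback numbers below are illustrative assumptions.

```shell
#!/bin/sh
# Keep any value already set (e.g. by detect-resources or the user);
# otherwise fall back to an illustrative default.
: "${MINIKUBE_CPUS:=4}"
: "${MINIKUBE_MEMORY:=4096}"   # MiB
export MINIKUBE_CPUS MINIKUBE_MEMORY
```

The `: "${VAR:=default}"` idiom assigns only when the variable is unset or empty, so a manually configured env file wins over the defaults.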

# Docker build arguments, useful for local deployment
export FUZZER_BASE_IMAGE="gcr.io/oss-fuzz-base/base-runner"

# Minikube cluster resource allocation

Collaborator

Explain a bit more how/when these are used, noting that they are for the minikube environment only. Also, maybe make these commented out by default and enable them in the detect-resources script? WDYT?

Comment on lines +84 to +85
# Convert memory from bytes to MiB
local total_memory_mib=$((total_memory_bytes / 1048576))

Collaborator

shouldn't this be done somehow by the convert_to_mib function?
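For reference, a helper that centralizes the bytes-to-MiB conversion might look like this. The `convert_to_mib` name appears in the diff per this review, but the body below is a guess at its intent, not the actual implementation.

```shell
#!/bin/sh
# Hypothetical helper: convert a byte count to MiB, rounding down.
convert_to_mib() {
    echo $(( $1 / 1048576 ))
}

total_memory_mib=$(convert_to_mib 8589934592)   # 8 GiB in bytes -> 8192 MiB
```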

Collaborator

and actually that function is never used, I think

Comment on lines +125 to +126
[ "$total_cpus" = "0" ] && total_cpus=4
[ "$total_memory_mib" = "0" ] && total_memory_mib=4096

Collaborator

Suggested change:
-[ "$total_cpus" = "0" ] && total_cpus=4
-[ "$total_memory_mib" = "0" ] && total_memory_mib=4096
+[ "$total_cpus" = "0" ] && total_cpus=5
+[ "$total_memory_mib" = "0" ] && total_memory_mib=9216

@michaelbrownuc
Collaborator

> Overall I think it's a nice idea; however, I'm not convinced all this logic is really needed. We are talking anyway about a test-oriented deployment, just for showing the program. For any real usage, people will have to play a bit with the pod allocation themselves and give it the right resources. For example, just having one seed-gen and one fuzzer-bot is not going to work very well. Mid/long-term I think we are better off implementing auto-scaling within k8s, so that we can start everything with just 1 pod per component and scale up/down automatically as needed.
>
> The reason I think this might be a bit over-engineered is that, given it's just for testing/showing the tool, ensuring the system has a minimum of X CPU / Y memory is enough to deploy a basic system that works. Any use case beyond that will require some thinking about the resources allocated to each component.
>
> cc @michaelbrownuc

I agree here; this is nice to have, but it's probably not the most pressing priority.

@BradSwain BradSwain closed this Aug 1, 2025
@dguido dguido deleted the brad/detect-resources branch August 8, 2025 16:56