
Test #58 (Open)

zhangbo8418 wants to merge 7 commits into Improve-playerbot-text-localization-for-whispers-and-master-messages from master

Conversation

zhangbo8418 (Owner) commented Feb 28, 2026

Pull Request

Describe what this change does and why it is needed...


Design Philosophy

We prioritize stability, performance, and predictability over behavioral realism. Complex player-mimicking logic is intentionally limited due to its negative impact on scalability, maintainability, and long-term robustness.

Excessive processing overhead can lead to server hiccups, increased CPU usage, and degraded performance for all participants. Because every action and decision tree is executed per bot and per trigger, even small increases in logic complexity can scale poorly and negatively affect both players and world (random) bots. Bots are not expected to behave perfectly, and perfect simulation of human decision-making is not a project goal. Increased behavioral realism often introduces disproportionate cost, reduced predictability, and significantly higher maintenance overhead.

Every additional branch of logic increases long-term responsibility. All decision paths must be tested, validated, and maintained continuously as the system evolves. If advanced or AI-intensive behavior is introduced, the default configuration must remain the lightweight decision model. More complex behavior should only be available as an explicit opt-in option, clearly documented as having a measurable performance cost.

Principles:

  • Stability before intelligence
    A stable system is always preferred over a smarter one.

  • Performance is a shared resource
    Any increase in bot cost affects all players and all bots.

  • Simple logic scales better than smart logic
    Predictable behavior under load is more valuable than perfect decisions.

  • Complexity must justify itself
    If a feature cannot clearly explain its cost, it should not exist.

  • Defaults must be cheap
    Expensive behavior must always be optional and clearly communicated.

  • Bots should look reasonable, not perfect
    The goal is believable behavior, not human simulation.

Before submitting, confirm that this change aligns with those principles.


Feature Evaluation

Please answer the following:

  • Describe the minimum logic required to achieve the intended behavior
  • Describe the cheapest implementation that produces an acceptable result
  • Describe the runtime cost when this logic executes across many bots

How to Test the Changes

  • Step-by-step instructions to test the change
  • Any required setup (e.g. multiple players, bots, specific configuration)
  • Expected behavior and how to verify it

Complexity & Impact

Does this change add new decision branches?

    • No
    • Yes (explain below)

Does this change increase per-bot or per-tick processing?

    • No
    • Yes (describe and justify impact)

Could this logic scale poorly under load?

    • No
    • Yes (explain why)

Defaults & Configuration

Does this change modify default bot behavior?

    • No
    • Yes (explain why)

If this introduces more advanced or AI-heavy logic:

    • Lightweight mode remains the default
    • More complex behavior is optional and thereby configurable

AI Assistance

Was AI assistance (e.g. ChatGPT or similar tools) used while working on this change?

    • No
    • Yes (explain below)

If yes, please specify:

  • AI tool or model used (e.g. ChatGPT, GPT-4, Claude, etc.)
  • Purpose of usage (e.g. brainstorming, refactoring, documentation, code generation)
  • Which parts of the change were influenced or generated
  • Whether the result was manually reviewed and adapted

AI assistance is allowed, but all submitted code must be fully understood, reviewed, and owned by the contributor.
Any AI-influenced changes must be verified against existing CORE and PB logic. We expect contributors to be honest
about what they do and do not understand.


Final Checklist

    • Stability is not compromised
    • Performance impact is understood, tested, and acceptable
    • Added logic complexity is justified and explained
    • Documentation updated if needed

Notes for Reviewers

Anything that significantly improves realism at the cost of stability or performance should be carefully discussed
before merging.


Note

Medium Risk: this PR changes playerbot decision logic (new aggressive strategy/target selection) and refactors trainer learning/training gating, which can affect bot behavior and economy-related actions. It also updates timing/proc handling (StatsCollector -> Milliseconds) and widespread distance math, which could introduce subtle gameplay/balance regressions if incorrect.

Overview
Adds a new opt-in non-combat aggressive strategy that automatically picks an aggressive target when the bot has no target, backed by a new AggressiveTargetValue selector and AggressiveTargetAction.

Refactors trainer interactions: replaces AiPlayerbot.AutoTrainSpells with AiPlayerbot.AllowLearnTrainerSpells, rewrites TrainerAction to validate trainer targets, add isUseful/isPossible, and learn spells via casting/learning with cost checks; RpgTrainAction/RpgTrainTrigger are tightened to only run when in range, trainer is valid, and the bot can afford at least one spell (via new can train value).

Moves attunement quest completion to configuration (AiPlayerbot.AttunementQuests) and updates PlayerbotFactory::InitAttunementQuests to use that list.
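Moving the attunement list into configuration means server owners can extend it without code changes. The exact value format is not shown anywhere in this thread, so the following is a guessed sketch only — the key name comes from the overview above, and quest 13431 (The Cudgel of Kardesh) from the SSC test steps later in this PR:

```
# Hypothetical example — verify the expected format against the shipped
# playerbots config template before using.
AiPlayerbot.AttunementQuests = "13431"
```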

Modernizes several systems: switches many manual 2D distance calculations to engine helpers (GetDistance2d/GetExactDist*), updates StatsCollector to use Milliseconds and the newer SpellProcEntry API, and adjusts ResetInstancesAction to use WorldPackets::Instance::ResetInstances. The C++ codestyle GitHub Action is also optimized to only run when src/** changes.

Written by Cursor Bugbot for commit d8c668c.

privatecore and others added 7 commits February 23, 2026 11:00
…ns (mod-playerbots#2104)

# Pull Request

* Fix the rest of the trainer-related functionality: list spells and
learn (cast vs. direct learn) spells.
* Rewrite `TrainerAction`: split the logic between appropriate methods
(`GetTarget`, `isUseful`, `isPossible`) instead of pushing everything
inside a single `Execute` method.
* Change method definitions to remove unnecessary declarations and parameter overhead.
* Move the `Trainer` header into the implementation. Rewrite
`RpgTrainTrigger` to fit the original logic and move all validation to
`RpgTrainAction` (`isUseful` + `isPossible`).
* Implement "can train" context value calculation to use with
`RpgTrainTrigger`.
* Update and optimize "train cost" context value calculation -- it
should be much faster.
* Replace `AiPlayerbot.AutoTrainSpells` with
`AiPlayerbot.AllowLearnTrainerSpells` and remove the "free" value
behavior — please use `AiPlayerbot.BotCheats` if you want bots to learn
trainer's spells for "free".
* Add `nullptr` checks wherever necessary (only inside targeted
methods/functions).
* Make some codestyle changes and corrections based on the AC codestyle
guide.
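The `GetTarget` / `isUseful` / `isPossible` split described in the bullets above can be sketched roughly as follows. All types and values here are invented stand-ins for illustration, not the module's real classes:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch only: the real TrainerAction operates on Player /
// Creature objects; Bot and Trainer here are invented stand-ins.
struct Trainer
{
    uint32_t spellCost; // cost of the one spell left to learn
    bool valid;         // e.g. correct trainer type, in range
};

struct Bot
{
    uint32_t money;
};

class TrainerAction
{
public:
    TrainerAction(Bot* bot, Trainer* trainer) : bot(bot), trainer(trainer) {}

    // Resolve and validate the trainer target once, up front.
    Trainer* GetTarget() const
    {
        return (trainer && trainer->valid) ? trainer : nullptr;
    }

    // Cheap gate: is there a valid trainer to interact with at all?
    bool isUseful() const { return GetTarget() != nullptr; }

    // Cost check: can the bot afford at least one spell?
    bool isPossible() const
    {
        Trainer* t = GetTarget();
        return t && bot && bot->money >= t->spellCost;
    }

    // Execute stays small because both gates run first.
    bool Execute()
    {
        if (!isUseful() || !isPossible())
            return false;
        bot->money -= trainer->spellCost; // "learn" the spell, paying its cost
        return true;
    }

private:
    Bot* bot;
    Trainer* trainer;
};
```

The point of the split is that the cheap validity and affordability gates can be queried by triggers (such as `RpgTrainTrigger`) without ever entering `Execute`.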

---


## How to Test the Changes

Force bots to learn spells from trainers using the chat command `trainer
learn` or `trainer learn <spellId>`. Bots should properly list available
spells (`trainer` command) or learn them (based on configuration and
command).

## Complexity & Impact

- Does this change add new decision branches?
    - [x] No
    - [ ] Yes (**explain below**)

- Does this change increase per-bot or per-tick processing?
    - [x] No
    - [ ] Yes (**describe and justify impact**)

- Could this logic scale poorly under load?
    - [x] No
    - [ ] Yes (**explain why**)

---

## Defaults & Configuration

- Does this change modify default bot behavior?
    - [x] No
    - [ ] Yes (**explain why**)

If this introduces more advanced or AI-heavy logic:

- [x] Lightweight mode remains the default
- [ ] More complex behavior is optional and thereby configurable

---

## AI Assistance

- Was AI assistance (e.g. ChatGPT or similar tools) used while working
on this change?
    - [x] No
    - [ ] Yes (**explain below**)

If yes, please specify:

- AI tool or model used (e.g. ChatGPT, GPT-4, Claude, etc.)
- Purpose of usage (e.g. brainstorming, refactoring, documentation, code
generation)
- Which parts of the change were influenced or generated
- Whether the result was manually reviewed and adapted


---

## Final Checklist

- [x] Stability is not compromised
- [x] Performance impact is understood, tested, and acceptable
- [x] Added logic complexity is justified and explained
- [x] Documentation updated if needed


---------

Co-authored-by: bashermens <31279994+hermensbas@users.noreply.github.com>
# Pull Request

Tired of failing that escort quest because your bots stood and watched while the escort NPC got swarmed and killed? Tired of your bots standing around doing nothing while the NPC you are supposed to be guarding for 5 minutes is getting attacked? Don't want to use the grind strategy because it is too heavy-handed and has too many restrictions?

Look no further! Just do "nc +aggressive" and your bots will pick a fight with anything they can in a 30-yard radius.

The aggressive targeting is a stripped-down version of the grind targeting.
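A minimal sketch of that target selection, assuming the behavior described in this PR (nearest enemy within 30 yd, anchored on the master when one exists); all names here are illustrative, not the module's real API:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch only: the real AggressiveTargetValue works on Unit
// lists from the grid; Pos and the plain index return are stand-ins.
struct Pos
{
    float x, y;
};

static float Dist2d(const Pos& a, const Pos& b)
{
    float dx = a.x - b.x;
    float dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}

// Pick the nearest attackable enemy within 30 yd of an anchor point.
// The anchor is the master when one exists, otherwise the bot itself,
// so a bot following a player cannot chain from enemy to enemy.
static int PickAggressiveTarget(const Pos& bot, const Pos* master,
                                const std::vector<Pos>& enemies)
{
    const float RANGE = 30.0f;
    const Pos& anchor = master ? *master : bot;

    int best = -1;
    float bestDist = RANGE;
    for (std::size_t i = 0; i < enemies.size(); ++i)
    {
        float d = Dist2d(anchor, enemies[i]);
        if (d <= bestDist)
        {
            bestDist = d;
            best = static_cast<int>(i);
        }
    }
    return best; // index of the chosen enemy, or -1 if none in range
}
```

Anchoring on the master rather than the bot is what keeps a following bot from drifting outward one enemy at a time.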

## Feature Evaluation

Please answer the following:

- Describe the **minimum logic** required to achieve the intended behavior
  Add a strategy, action, and targeting that will cause bots to attack nearby enemies when out of combat.

- Describe the **cheapest implementation** that produces an acceptable result
  Hopefully this is the cheapest.

- Describe the **runtime cost** when this logic executes across many bots
  Minimal: the strategy only runs on bots it has been explicitly added to.

---

## How to Test the Changes

- Add a bot to the party, or use selfbot
- Give them the aggressive strategy via "nc +aggressive"
- They should attack anything within 30 yards
- If the bot has a master, the 30 yards should be centered on the master, not the bot (this prevents chaining from enemy to enemy)

## Complexity & Impact

Does this change add new decision branches?
```
[ ] No
[x] Yes (**explain below**)
Only for bots that have the added strategy, adds decision to attack nearby targets when out of combat.
```

Does this change increase per-bot or per-tick processing?
```
[ ] No
[x] Yes (**describe and justify impact**)
Minimal increase to only bots that have this strategy added.
```

Could this logic scale poorly under load?
```
[x] No
[ ] Yes (**explain why**)
```
---

## Defaults & Configuration

Does this change modify default bot behavior?
```
[x] No
[ ] Yes (**explain why**)
```

If this introduces more advanced or AI-heavy logic:
```
[x] Lightweight mode remains the default
[ ] More complex behavior is optional and thereby configurable
```
---

## AI Assistance

Was AI assistance (e.g. ChatGPT or similar tools) used while working on
this change?
```
[ ] No
[x] Yes (**explain below**)
```
Claude was used to explore the codebase and find similar implementations to use as examples.

---

## Final Checklist

- [x] Stability is not compromised
- [x] Performance impact is understood, tested, and acceptable
- [x] Added logic complexity is justified and explained
- [x] Documentation updated if needed

…ots#2136)

# Pull Request

I've been getting ready to test the Serpentshrine Cavern strategy on `test-staging`, but noticed the bots don't currently have the attunement set up.

Added the attunement quest.

---


## Feature Evaluation

Please answer the following:

- Describe the **minimum logic** required to achieve the intended behavior
- Describe the **cheapest implementation** that produces an acceptable result
- Describe the **runtime cost** when this logic executes across many bots

---

## How to Test the Changes

- Add bots and convert to raid
- Make sure you have attunement by completing
[this](https://www.wowhead.com/tbc/quest=13431/the-cudgel-of-kardesh)
quest
- Teleport to SSC and summon bots. The bots should appear in the raid.

## Complexity & Impact

Does this change add new decision branches?
- [x] No
- [ ] Yes (**explain below**)

Does this change increase per-bot or per-tick processing?
- [x] No
- [ ] Yes (**describe and justify impact**)

Could this logic scale poorly under load?
- [x] No
- [ ] Yes (**explain why**)
---

## Defaults & Configuration

Does this change modify default bot behavior?
- [ ] No
- [x] Yes (**explain why**)

This adds the attunement quest for SSC by default.

If this introduces more advanced or AI-heavy logic:
- [x] Lightweight mode remains the default
- [ ] More complex behavior is optional and thereby configurable
---

## AI Assistance

Was AI assistance (e.g. ChatGPT or similar tools) used while working on this change?
- [x] No
- [ ] Yes (**explain below**)

If yes, please specify:

- AI tool or model used (e.g. ChatGPT, GPT-4, Claude, etc.)
- Purpose of usage (e.g. brainstorming, refactoring, documentation, code
generation)
- Which parts of the change were influenced or generated
- Whether the result was manually reviewed and adapted


---

## Final Checklist

- [x] Stability is not compromised
- [x] Performance impact is understood, tested, and acceptable
- [x] Added logic complexity is justified and explained
- [x] Documentation updated if needed

…-playerbots#2127)

# Pull Request

This change replaces a few manual distance calculations in
`WorldPosition` with AzerothCore distance helpers. The goal is to reduce
duplicated math, keep behavior consistent with core utilities, and avoid
reimplementing logic that already exists in the core.

---


## Feature Evaluation

Please answer the following:

- Describe the **minimum logic** required to achieve the intended behavior
  Use existing core distance helpers instead of manual math, keeping the logic localized to `WorldPosition`.

- Describe the **cheapest implementation** that produces an acceptable result
  Directly call `GetExactDist`, `GetExactDist2d`, and `GetExactDist2dSq` where appropriate.

- Describe the **runtime cost** when this logic executes across many bots
  No additional cost; the helper calls replace equivalent math and avoid extra intermediate objects.

---

## How to Test the Changes

- Step-by-step instructions to test the change
  - Build the module and run existing bot scenarios that rely on `WorldPosition` distance checks.
  - Verify no behavioral regressions in travel-related logic.
- Any required setup (e.g. multiple players, bots, specific configuration)
  - Standard server + mod-playerbots setup.
- Expected behavior and how to verify it
  - Distances computed in travel logic remain identical; no gameplay change expected.

## Complexity & Impact

Does this change add new decision branches?
- [x] No
- [ ] Yes (**explain below**)

Does this change increase per-bot or per-tick processing?
- [x] No
- [ ] Yes (**describe and justify impact**)

Could this logic scale poorly under load?
- [x] No
- [ ] Yes (**explain why**)
---

## Defaults & Configuration

Does this change modify default bot behavior?
- [x] No
- [ ] Yes (**explain why**)

If this introduces more advanced or AI-heavy logic:
- [x] Lightweight mode remains the default
- [x] More complex behavior is optional and thereby configurable
---

## AI Assistance

Was AI assistance (e.g. ChatGPT or similar tools) used while working on this change?
- [x] No
- [ ] Yes (**explain below**)

---

## Final Checklist

- [x] Stability is not compromised
- [x] Performance impact is understood, tested, and acceptable
- [x] Added logic complexity is justified and explained
- [ ] Documentation updated if needed

---

## Notes for Reviewers

This is a localized refactor that replaces manual distance math with
core helpers for consistency and maintainability.
No behavioral change is expected.

---------

Co-authored-by: Keleborn <22352763+Celandriel@users.noreply.github.com>
mod-playerbots#2158)

# Pull Request

When integrating the latest changes from https://github.com/azerothcore/azerothcore-wotlk into https://github.com/mod-playerbots/azerothcore-wotlk/tree/Playerbot you will face some compile issues due to refactoring. This PR does not change any logic; it only implements the changes needed to be compatible again.

---


## Feature Evaluation

Please answer the following:

- Describe the **minimum logic** required to achieve the intended behavior
- Describe the **cheapest implementation** that produces an acceptable result
- Describe the **runtime cost** when this logic executes across many bots

---

## How to Test the Changes

- Step-by-step instructions to test the change
- Any required setup (e.g. multiple players, bots, specific
configuration)
- Expected behavior and how to verify it

## Complexity & Impact

Does this change add new decision branches?
- [x] No
- [ ] Yes (**explain below**)

Does this change increase per-bot or per-tick processing?
- [x] No
- [ ] Yes (**describe and justify impact**)

Could this logic scale poorly under load?
- [x] No
- [ ] Yes (**explain why**)
---

## Defaults & Configuration

Does this change modify default bot behavior?
- [x] No
- [ ] Yes (**explain why**)

If this introduces more advanced or AI-heavy logic:
- [x] Lightweight mode remains the default
- [ ] More complex behavior is optional and thereby configurable
---

## AI Assistance

Was AI assistance (e.g. ChatGPT or similar tools) used while working on this change?
- [x] No
- [ ] Yes (**explain below**)

If yes, please specify:

- AI tool or model used (e.g. ChatGPT, GPT-4, Claude, etc.)
- Purpose of usage (e.g. brainstorming, refactoring, documentation, code
generation)
- Which parts of the change were influenced or generated
- Whether the result was manually reviewed and adapted


---

## Final Checklist

- [x] Stability is not compromised
- [x] Performance impact is understood, tested, and acceptable
- [x] Added logic complexity is justified and explained
- [x] Documentation updated if needed

---

## Notes for Reviewers

Please double-check that none of the timing logic (the migration from uint32 to microseconds) has been changed.

---------

Co-authored-by: Keleborn <22352763+Celandriel@users.noreply.github.com>
Co-authored-by: bash <hermensb@gmail.com>
Update master from Test staging and Core Update

cursor (bot) left a comment:

Cursor Bugbot has reviewed your changes and found 1 potential issue.


Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: GetDistance2d subtracts combat reach, changing safety distance calculations
    • Replaced GetDistance2d with GetExactDist2d in all affected safety distance checks to preserve center-to-point distance semantics without subtracting combat reach.

Preview (ed25955c70):
diff --git a/src/Ai/Dungeon/PitOfSaron/Action/PitOfSaronActions.cpp b/src/Ai/Dungeon/PitOfSaron/Action/PitOfSaronActions.cpp
--- a/src/Ai/Dungeon/PitOfSaron/Action/PitOfSaronActions.cpp
+++ b/src/Ai/Dungeon/PitOfSaron/Action/PitOfSaronActions.cpp
@@ -201,7 +201,7 @@
                 continue;
 
             // Check if position is within maximum allowed distance from boss
-            if (boss && boss->GetDistance2d(potentialPos.GetPositionX(), potentialPos.GetPositionY()) > MAX_BOSS_DISTANCE)
+            if (boss && boss->GetExactDist2d(potentialPos.GetPositionX(), potentialPos.GetPositionY()) > MAX_BOSS_DISTANCE)
                 continue;
 
             // Score this position based on:
@@ -214,7 +214,7 @@
             float minOrbDist = std::numeric_limits<float>::max();
             for (Unit* orb : orbs)
             {
-                float orbDist = orb->GetDistance2d(potentialPos.GetPositionX(), potentialPos.GetPositionY());
+                float orbDist = orb->GetExactDist2d(potentialPos.GetPositionX(), potentialPos.GetPositionY());
                 minOrbDist = std::min(minOrbDist, orbDist);
             }
             score += minOrbDist * 2.0f;  // Weight orb distance more heavily
@@ -230,7 +230,7 @@
             // Factor in proximity to boss (closer is better, as long as we're safe from orbs)
             if (boss)
             {
-                float bossDist = boss->GetDistance2d(potentialPos.GetPositionX(), potentialPos.GetPositionY());
+                float bossDist = boss->GetExactDist2d(potentialPos.GetPositionX(), potentialPos.GetPositionY());
                 // Add points for being closer to boss (inverse relationship)
                 // but only if we're safely away from orbs
                 if (minOrbDist > SAFE_DISTANCE)

diff --git a/src/Ai/Raid/Icecrown/Action/RaidIccActions.cpp b/src/Ai/Raid/Icecrown/Action/RaidIccActions.cpp
--- a/src/Ai/Raid/Icecrown/Action/RaidIccActions.cpp
+++ b/src/Ai/Raid/Icecrown/Action/RaidIccActions.cpp
@@ -4482,7 +4482,7 @@
             // Maintain minimum distance from center position (if too close to center, move out)
             float centerX = ICC_BQL_CENTER_POSITION.GetPositionX();
             float centerY = ICC_BQL_CENTER_POSITION.GetPositionY();
-            float centerDist = bot->GetDistance2d(centerX, centerY);
+            float centerDist = bot->GetExactDist2d(centerX, centerY);
             if (centerDist < MIN_CENTER_DISTANCE && !((boss->GetPositionZ() - bot->GetPositionZ()) > 5.0f))
             {
                 float dx = bot->GetPositionX() - centerX;

diff --git a/src/Ai/Raid/Magtheridon/Util/RaidMagtheridonHelpers.cpp b/src/Ai/Raid/Magtheridon/Util/RaidMagtheridonHelpers.cpp
--- a/src/Ai/Raid/Magtheridon/Util/RaidMagtheridonHelpers.cpp
+++ b/src/Ai/Raid/Magtheridon/Util/RaidMagtheridonHelpers.cpp
@@ -132,7 +132,7 @@
         }
         for (Unit* hazard : debrisHazards)
         {
-            float dist = hazard->GetDistance2d(x, y);
+            float dist = hazard->GetExactDist2d(x, y);
             if (dist < 9.0f)
                 return false;
         }
@@ -145,7 +145,7 @@
             if (!go || go->GetEntry() != GO_BLAZE)
                 continue;
 
-            float dist = go->GetDistance2d(x, y);
+            float dist = go->GetExactDist2d(x, y);
             if (dist < 5.0f)
                 return false;
         }

The flagged change (old manual math vs. new helper call):

    // Check if position is within maximum allowed distance from boss
    // Before:
    if (boss && sqrt(pow(potentialPos.GetPositionX() - boss->GetPositionX(), 2) +
                     pow(potentialPos.GetPositionY() - boss->GetPositionY(), 2)) > MAX_BOSS_DISTANCE)
    // After:
    if (boss && boss->GetDistance2d(potentialPos.GetPositionX(), potentialPos.GetPositionY()) > MAX_BOSS_DISTANCE)

GetDistance2d subtracts combat reach, changing safety distance calculations

Medium Severity

Throughout multiple raid/dungeon files, raw Euclidean distance calculations (sqrt(pow(x1-x2, 2) + pow(y1-y2, 2))) were replaced with GetDistance2d(x, y). Unlike GetExactDist2d, GetDistance2d subtracts the calling object's bounding radius (GetObjectSize()), producing a systematically shorter distance. This is especially problematic for > MAX_DISTANCE checks — e.g., boss->GetDistance2d(...) > MAX_BOSS_DISTANCE allows positions further from the boss than intended, since a boss with ~3–5 yard combat reach effectively raises the max range by that amount. GetExactDist2d preserves the original point-to-point semantics.
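The distinction the reviewer describes can be demonstrated with a tiny standalone model. Nothing below is the real engine code — it re-implements the two semantics as described, and the 4-yd combat reach is an arbitrary example value:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Standalone illustration of the two distance semantics described above.
// The method names mirror the engine API, but this is a re-implementation.
struct FakeBoss
{
    float x = 0.0f, y = 0.0f;
    float combatReach = 4.0f; // example bounding radius

    // GetExactDist2d semantics: plain center-to-point 2D distance.
    float ExactDist2d(float px, float py) const
    {
        float dx = px - x, dy = py - y;
        return std::sqrt(dx * dx + dy * dy);
    }

    // GetDistance2d semantics: subtracts the object's size, clamped at 0,
    // which systematically shortens the reported distance.
    float Dist2d(float px, float py) const
    {
        return std::max(ExactDist2d(px, py) - combatReach, 0.0f);
    }
};
```

For a point 33 yd from the boss center with MAX_BOSS_DISTANCE = 30, the exact distance (33) rejects the position while the reach-adjusted distance (29) accepts it, effectively widening the allowed radius by the combat reach — exactly the regression flagged above.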

Additional Locations (2)

