
Conversation

@jacomago (Contributor)

Tests for:

- Removing a channel
- Removing an alias

@jacomago jacomago force-pushed the remove-things-tests branch 4 times, most recently from 82a4f6a to efe0345 Compare November 12, 2025 09:25
@jacomago jacomago self-assigned this Nov 12, 2025
@jacomago jacomago force-pushed the remove-things-tests branch 2 times, most recently from 078c923 to 6ff043e Compare November 12, 2025 12:10
@jacomago jacomago marked this pull request as ready for review November 12, 2025 12:32
@anderslindho (Contributor) left a comment:

Please reconsider the commit structure or expand the commit messages a bit

@jacomago jacomago force-pushed the remove-things-tests branch from 6ff043e to 638a14a Compare November 12, 2025 13:19
Comment on lines 12 to 16
record(longout, "$(P)lo:base_pv2") {
    info("test", "testing")
    info("hello", "world")
    info("archive", "default")
}

Contributor:

This is a somewhat confusing change on its own, because it really doesn't look like you're removing an infotag; but I guess the point is that since you have added the PV to the base .db file, you need it here as well.

It might be nice if there were a better way to manage this, but shrug. Maybe something with dbLoadTemplate, or one file per record?

Contributor:

Agree - I think one record per file might help here.

Could also template the startup scripts, which could then further "automate" getting the record counts (each dbLoadTemplate would add one to the expected count).

But maybe that's too much indirection - not sure.

@anderslindho (Contributor) left a comment:

I think the names - of startup scripts, db files, records - are all a bit confusing. What is "base"? Why do you sometimes use the term "pv" and sometimes "record"? What even is a base record - are you claiming that the collection of "test", "hello", "archive" info-tags with those values is the base of the setup? (Then why is ai:base_pv different?)

I also think a refactoring along the lines of what @simon-ess suggested would make sense, with one record per db file. I would take it further and template the startup scripts in the test file where they are relevant, and I might even define the db files in the same manner (using tmp path/dir fixtures).
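
As a rough sketch of that tmp_path idea (the fixture name, record template, and "P=TEST:" macro below are illustrative placeholders, not the repo's actual code), each test could generate one db file per record plus a matching startup script, so the expected record count falls out of the data instead of being hard-coded:

import pytest

# Hypothetical one-record template; the layout mirrors the existing test db files
# but is not taken verbatim from the repo.
RECORD_TEMPLATE = """record(longout, "$(P)lo:{name}") {{
    info("{tag}", "{value}")
}}
"""

@pytest.fixture
def ioc_files(tmp_path):
    """Write one .db file per record and a startup script that loads each of them."""
    def make(records):
        db_paths = []
        for name, tag, value in records:
            db_file = tmp_path / f"{name}.db"
            db_file.write_text(RECORD_TEMPLATE.format(name=name, tag=tag, value=value))
            db_paths.append(db_file)
        st_cmd = tmp_path / "st.cmd"
        # One dbLoadRecords line per file, so len(records) is also the record count.
        st_cmd.write_text(
            "".join(f'dbLoadRecords("{p.name}", "P=TEST:")\n' for p in db_paths)
        )
        return st_cmd, len(db_paths)
    return make

The sketch uses dbLoadRecords rather than dbLoadTemplate only to stay short; the counting argument is the same either way.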

Comment on lines 90 to 150
# Act
restart_ioc(docker_ioc, cf_client, base_channel_name, "st_remove_infotag")

# Assert

Contributor:

This commit is a nice change, but I'd say it's a matter of factoring out start and restart functions 🔧
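
For context, the kind of shared restart helper this comment points at could end with a polling wait like the one below. This is only a sketch: the restart mechanics are left out, the timeout values are invented, and it assumes cf_client.find(name=...) is available alongside the property search used elsewhere in these tests.

import time

def wait_for_channel(cf_client, channel_name, timeout=60.0, interval=1.0):
    # Poll ChannelFinder until the restarted IOC's channel shows up again.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if cf_client.find(name=channel_name):
            return
        time.sleep(interval)
    raise TimeoutError(f"{channel_name} not found in ChannelFinder within {timeout}s")

A factored-out restart_ioc, like the call quoted above, would be a natural home for this kind of wait.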

def test_number_of_aliases_and_alais_property(self, cf_client: ChannelFinderClient) -> None:
    channels = cf_client.find(property=[("alias", "*")])
    assert len(channels) == 8
    assert len(channels) == IOC_COUNT * BASE_ALIAS_COUNT

Contributor:

Can squash this commit with the previous one.

@jacomago jacomago force-pushed the remove-things-tests branch from 35af100 to 393ef9b Compare November 19, 2025 09:13
@tynanford (Contributor) commented Nov 19, 2025:

We still have the issue with flaky test failures. I assume the 3.9 and 3.12 tests will pass if we re-run them a couple of times. It doesn't have to be part of this PR, but I wonder if we could change the port allocation to a random port in cf-compose.yml:

ports:
  - "0:8080" 

And then, if the get_service_host_and_port call in client_checks.py works like it sounds, it would create cf_client with whatever port Docker assigned?

ERROR    testcontainers.compose.compose:compose.py:467 STDERR:
 recc1  Built
 ioc1-1  Built
 Container test-single-ioc-elasticsearch-1  Running
 Container test-single-ioc-elasticsearch-1  Waiting
 Container test-single-ioc-elasticsearch-1  Healthy
 Container test-single-ioc-cf-1  Starting
Error response from daemon: failed to set up container networking: driver failed programming external connectivity on endpoint test-single-ioc-cf-1 (8fa9cb4b3ba67cc61319d41a149350d1860d683fbab0bdd888cdb9cedeceb083): Bind for 0.0.0.0:8080 failed: port is already allocated

Or maybe the containers just need to be torn down at the end of each test?
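
To make the random-port idea above concrete, here is a sketch of what the client construction could look like with an ephemeral host port. The service name "cf", the fixture wiring, and the BaseURL path are assumptions for illustration, not the repo's actual client_checks.py:

from channelfinder import ChannelFinderClient
from testcontainers.compose import DockerCompose

def make_cf_client(compose: DockerCompose) -> ChannelFinderClient:
    # With ports "0:8080" in cf-compose.yml, Docker picks a free host port;
    # ask testcontainers which one it chose instead of hard-coding 8080.
    host, port = compose.get_service_host_and_port("cf", 8080)
    return ChannelFinderClient(BaseURL=f"http://{host}:{port}/ChannelFinder")

Whether that removes the "port is already allocated" failures in the log above also depends on the teardown timing discussed in the reply below.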

@jacomago (Contributor, Author) replied to @tynanford:

They should be torn down between each test class already. I think the problem is that they are not torn down fast enough. I will try a random port in another PR.

The previous flakiness I used to see was the Docker machine not being ready by the time the tests started, which I'm less sure how to fix, as it comes from the GitHub Actions OS.
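
For reference, the per-class lifecycle being described would look roughly like the sketch below, assuming a testcontainers DockerCompose fixture; the context path and fixture name are placeholders:

import pytest
from testcontainers.compose import DockerCompose

@pytest.fixture(scope="class")
def compose_stack():
    # Started once per test class; leaving the `with` block runs `docker compose down`
    # when the class finishes. If that teardown (and the host-port release) lags behind
    # the next class's startup, a fixed 8080 binding can still collide.
    with DockerCompose(".", compose_file_name="cf-compose.yml") as stack:
        yield stack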

@jacomago jacomago merged commit 466a55d into ChannelFinder:master Nov 27, 2025
18 checks passed
@jacomago (Contributor, Author) commented Dec 4, 2025, again quoting @tynanford's random-port suggestion:

Think I will try this now....
