[FIX] Fix tensor concatenation to handle mixed numpy/torch arrays #37

TL;DR
Fix tensor concatenation to handle mixed numpy arrays
What changed?
Modified the `_concatenate` function in `backend_tensor.py` to properly handle cases where some tensors in the input list are numpy arrays. The previous implementation only checked the type of the first tensor, which could lead to errors when concatenating mixed tensor types. Also added an explicit error for unsupported tensor types.

How to test?
Test concatenating a list of tensors where some elements are numpy arrays and others are not. Verify that the function correctly identifies and handles numpy arrays regardless of their position in the list.
Why make this change?
The previous implementation only checked the type of the first tensor in the list, so concatenation would fail whenever the first tensor was not a numpy array but a later tensor in the list was. This change checks whether any tensor in the list is a numpy array, ensuring proper handling of mixed tensor types regardless of element order.