Conversation

@nnethercote (Collaborator)

- `CoverageInfoBuilderMethods::init_coverage` was removed.
- The `ret_ptr` argument was added to `BuilderMethods::atomic_rmw`, see
  rust-lang/rust#144192.
@nnethercote (Collaborator, Author)

This doesn't work because rust-lang/rust#144192 changed the signature of `atomic_rmw`, and I don't know how to update it appropriately. As written, the code fails with an assertion. @FractalFir, it looks like you wrote the atomics code; do you know what to do here? The GCC and LLVM backends look a bit different to the CUDA backend, so I can't just copy what they are doing.

```rust
// We can exchange *ptr with val, and then discard the result.
self.atomic_rmw(AtomicRmwBinOp::AtomicXchg, ptr, val, order);
// njn: ?
self.atomic_rmw(AtomicRmwBinOp::AtomicXchg, ptr, val, order, /* ret_ptr */ true);
```
Collaborator
The `true` here seems incorrect. From my understanding, `ret_ptr` should be set only if the exchange is operating on a pointer (since LLVM does not support atomics on pointers, and this needs a cast to usize).

Quoting the LLVM LangRef: "For most of these operations, the type of ‘<value>’ must be an integer type whose bit width is a power of two greater than or equal to eight. For xchg, this may also be a floating point or a pointer type with the same size constraints as integers."

So, this should be something like `is_ptr` (I don't remember the exact LLVM API here).

```rust
dst: &'ll Value,
src: &'ll Value,
order: AtomicOrdering,
_ret_ptr: bool, // njn: what to do?
```
Collaborator
Here, if `ret_ptr` is true, we should cast `dst` to `*usize` and `src` to `usize`, and then cast the return value back to a `*T` (by checking the original type of `src`).

```rust
// Since for any A, A | 0 = A, and performing atomics on constant memory is UB
// in Rust, we can abuse `or` to perform atomic reads.
self.atomic_rmw(AtomicRmwBinOp::AtomicOr, ptr, self.const_int(ty, 0), order)
// njn: ?
self.atomic_rmw(AtomicRmwBinOp::AtomicOr, ptr, self.const_int(ty, 0), order, /* ret_ptr */ true)
```
Collaborator
Same concern here as for load_operand.
