Description
get_full_kspace() underestimates the total sequence time because rep_traj[:, 3] spans event_count - 1 intervals instead of the full event_count durations from rep.event_time. This causes a ~50% time shortfall in downstream applications (e.g., plotting).
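To illustrate the off-by-one, here is a minimal sketch (plain numpy, not the library code, with made-up event times): the cumulative sum of N event times yields N points, but the span between the first and last of those points covers only N - 1 intervals unless the starting point is included.

```python
import numpy as np

event_time = np.full(10, 0.001)        # hypothetical: 10 events of 1 ms, sum = 0.010 s
cum = np.cumsum(event_time)            # N points: 0.001 ... 0.010
span_without_start = cum[-1] - cum[0]  # 0.009 s: only N - 1 intervals covered

with_start = np.concatenate(([0.0], cum))  # prepend the starting point
span_with_start = with_start[-1] - with_start[0]  # 0.010 s: full duration
```

This reproduces the per-repetition shortfall in the report: 0.009 s plotted vs. 0.010 s of actual event time.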
Steps to Reproduce
- Create a `Sequence` with `rep.event_time.sum() = 0.010 s` per repetition.
- Call `seq.get_full_kspace()`.
- Check `rep_traj[-1, 3] - rep_traj[0, 3]` vs. `rep.event_time.sum()`.
Expected vs. Actual
- Expected: `rep_traj[-1, 3] - rep_traj[0, 3] = rep.event_time.sum()` (e.g., 0.010 s); the total matches `get_duration()` (e.g., 0.142829 s).
- Actual: `rep_traj[-1, 3] - rep_traj[0, 3] = 0.009 s`; total ≈ 0.072829 s (ratio: 0.51).
Evidence
Diagnostics from a 15-rep sequence:
- Rep 1: `event_time` sum = 0.010000 s, plotted = 0.009000 s.
- Total: `get_duration()` = 0.142829 s, plotted = 0.072829 s.
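A quick arithmetic check of the reported totals (plain Python, values copied from the diagnostics above) confirms the ~50% figure:

```python
expected_total = 0.142829  # get_duration()
plotted_total = 0.072829   # from the plotted trajectory

shortfall = expected_total - plotted_total  # 0.070 s missing
ratio = plotted_total / expected_total      # ~0.51, i.e. the reported ~50% shortfall
```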
Suggested Fix
Modify `get_full_kspace()` to include the starting point, yielding `event_count + 1` points:
```python
def get_full_kspace(self):
    k_pos = torch.zeros(4, device=self.device)
    trajectory = []
    stored = torch.zeros(4, device=self.device)
    for rep in self:
        # Update the current k-space position based on pulse usage
        if rep.pulse.usage == PulseUsage.EXCIT:
            k_pos = stored
        elif rep.pulse.usage == PulseUsage.REFOC:
            k_pos = -k_pos
        elif rep.pulse.usage == PulseUsage.STORE:
            stored = k_pos
        grad_time = torch.cat([rep.gradm, rep.event_time[:, None]], 1)
        cum_traj = torch.cumsum(grad_time, dim=0)
        # Prepend the starting point so rep_traj has event_count + 1 rows and
        # its time column spans the full rep.event_time.sum()
        rep_traj = torch.cat([k_pos.unsqueeze(0), k_pos + cum_traj], dim=0)
        k_pos = rep_traj[-1, :]
        trajectory.append(rep_traj)
    return trajectory
```
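As a self-contained check of the proposed fix (numpy stand-in for the torch code, with hypothetical gradient and timing values), prepending the starting point yields `event_count + 1` trajectory points whose time column spans the full `event_time.sum()`:

```python
import numpy as np

def rep_trajectory(k_start, gradm, event_time):
    # Mirrors the per-repetition part of the suggested fix:
    # columns 0-2 are gradient moments, column 3 is time
    grad_time = np.concatenate([gradm, event_time[:, None]], axis=1)
    cum_traj = np.cumsum(grad_time, axis=0)
    # Include the starting point -> event_count + 1 rows
    return np.concatenate([k_start[None, :], k_start + cum_traj], axis=0)

k_pos = np.zeros(4)
gradm = np.zeros((10, 3))          # hypothetical: 10 events, no gradients
event_time = np.full(10, 0.001)    # 1 ms per event, sum = 0.010 s
traj = rep_trajectory(k_pos, gradm, event_time)
span = traj[-1, 3] - traj[0, 3]    # 0.010 s: matches event_time.sum()
```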
[Attached plot: the resulting time difference]