## Description
While fuzzing `onion_message_target`, @dergoegge found the following (base64-encoded) input, which triggers an infinite loop in the onion parsing logic:

```
AuT//////////52dIwCdnZ2dAgr///////j///8ACAAAAGL/AgBWAf0AgP3/AAD/Av//AAAAYv8CAgCGAQAA/39zAJ0C//8AAAAAAAEABPv//wP///8AAf8A////AAAAAAADAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
```
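For reference, a minimal sketch of feeding this input back into a harness. The `do_test` entry point is a hypothetical stand-in (the real one lives in rust-lightning's fuzz crate and may differ), and the `base64::decode` call assumes the pre-0.21 `base64` crate API:

```rust
// Cargo.toml (assumed): base64 = "0.13"
fn main() {
    let input = "AuT//////////52dIwCdnZ2dAgr///////j///8ACAAAAGL/AgBWAf0AgP3/AAD/Av//AAAAYv8CAgCGAQAA/39zAJ0C//8AAAAAAAEABPv//wP///8AAf8A////AAAAAAADAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
    let data = base64::decode(input).expect("valid base64");
    // Hypothetical call into the fuzz harness; prior to the fix, this would
    // hang in the onion parsing loop described below.
    // onion_message::do_test(&data);
    println!("decoded {} bytes of fuzz input", data.len());
}
```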
From my investigation, the root cause is something like this:
The `hop_data` for the onion packet is wrapped in a `Cursor` and sent off for TLV parsing here (`rust-lightning/lightning/src/ln/onion_utils.rs`, lines 2640 to 2641 at 81495c6):

```rust
let mut chacha_stream = ChaChaReader { chacha: &mut chacha, read: Cursor::new(&hop_data[..]) };
match R::read(&mut chacha_stream, read_args) {
```
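To make the layering concrete, here is a minimal sketch of the `ChaChaReader` pattern (an assumed shape, not LDK's actual type): it decrypts bytes as they are pulled from the inner reader, so the TLV parser downstream only ever sees plaintext. `KeyStream` is a dummy stand-in for the real ChaCha20 cipher state, just so the sketch compiles:

```rust
use std::io::{Cursor, Read};

/// Dummy stand-in for the ChaCha20 stream-cipher state; the real type
/// emits the ChaCha20 keystream.
struct KeyStream(u8);

impl KeyStream {
    fn process_in_place(&mut self, buf: &mut [u8]) {
        for b in buf.iter_mut() {
            *b ^= self.0;
            self.0 = self.0.wrapping_add(1);
        }
    }
}

/// Sketch of the ChaChaReader pattern: XOR the keystream over bytes as
/// they are read from the inner reader (here, the Cursor over hop_data).
struct ChaChaReader<R: Read> {
    chacha: KeyStream,
    read: R,
}

impl<R: Read> Read for ChaChaReader<R> {
    fn read(&mut self, dest: &mut [u8]) -> std::io::Result<usize> {
        let n = self.read.read(dest)?;
        self.chacha.process_in_place(&mut dest[..n]);
        Ok(n)
    }
}

fn main() {
    // "Ciphertext" equal to the dummy keystream decrypts to all zeroes.
    let hop_data = [0x10u8, 0x11, 0x12];
    let mut r = ChaChaReader { chacha: KeyStream(0x10), read: Cursor::new(&hop_data[..]) };
    let mut out = [0u8; 3];
    r.read(&mut out).unwrap();
    assert_eq!(out, [0, 0, 0]);
}
```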
The type and length of the first TLV record are then parsed. In this case, the type is `encrypted_recipient_data` and the specified length is larger than the remaining data in `hop_data`. Nevertheless, the remaining `hop_data` is wrapped in a `FixedLengthReader` of the larger length and passed off for decryption here (`rust-lightning/lightning/src/util/ser_macros.rs`, lines 635 to 639 at 81495c6):

```rust
let length: ser::BigSize = $crate::util::ser::Readable::read(stream_ref)?;
let mut s = ser::FixedLengthReader::new(stream_ref, length.0);
match typ.0 {
    $(_t if $crate::_decode_tlv_stream_match_check!(_t, $type, $fieldty) => {
        $crate::_decode_tlv!($stream, s, $field, $fieldty);
```
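To illustrate the type/length parse, here is a toy BigSize decoder (the standard BOLT varint encoding: one byte below 0xfd, or 0xfd/0xfe/0xff prefixing a big-endian u16/u32/u64; not LDK's implementation) reading a TLV header whose claimed length exceeds the bytes actually left in the stream:

```rust
use std::io::{Cursor, Read};

/// Toy BigSize decoder following the BOLT varint encoding.
fn read_bigsize<R: Read>(r: &mut R) -> std::io::Result<u64> {
    let mut first = [0u8; 1];
    r.read_exact(&mut first)?;
    Ok(match first[0] {
        0xfd => { let mut b = [0u8; 2]; r.read_exact(&mut b)?; u16::from_be_bytes(b) as u64 }
        0xfe => { let mut b = [0u8; 4]; r.read_exact(&mut b)?; u32::from_be_bytes(b) as u64 }
        0xff => { let mut b = [0u8; 8]; r.read_exact(&mut b)?; u64::from_be_bytes(b) }
        n => n as u64,
    })
}

fn main() -> std::io::Result<()> {
    // A TLV record claiming far more payload than the stream holds:
    // type 4 (encrypted_recipient_data in the onion message TLV stream),
    // length 0xfd 0x01 0x00 = 256, followed by only two bytes of data.
    let mut s = Cursor::new(&[0x04, 0xfd, 0x01, 0x00, 0xaa, 0xbb][..]);
    let typ = read_bigsize(&mut s)?;
    let len = read_bigsize(&mut s)?;
    println!("type {typ}, claimed length {len}, bytes actually left: 2");
    Ok(())
}
```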
The `FixedLengthReader` is read and decrypted, and any extra bytes are handled in this loop (`rust-lightning/lightning/src/crypto/streams.rs`, lines 131 to 134 at 81495c6):

```rust
while chacha_stream.read.bytes_remain() {
    let mut buf = [0; 256];
    chacha_stream.read(&mut buf)?;
}
```

Once `hop_data` is exhausted, every read from the `FixedLengthReader` becomes a no-op, since `Cursor::read` returns 0. At the same time, `FixedLengthReader::bytes_remain` always returns true because, from its point of view, there are still bytes remaining. The loop continues forever.
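Putting the pieces together, here is a self-contained sketch of the failure mode, using a simplified `FixedLengthReader` reconstructed from the description above (not LDK's exact code). The spin counter exists only so the example terminates; without it, the loop runs forever:

```rust
use std::io::{Cursor, Read};

/// Simplified reconstruction of FixedLengthReader: tracks how many of the
/// total_bytes promised by the TLV length have been consumed.
struct FixedLengthReader<R: Read> {
    read: R,
    bytes_read: u64,
    total_bytes: u64,
}

impl<R: Read> FixedLengthReader<R> {
    fn bytes_remain(&self) -> bool {
        self.total_bytes > self.bytes_read
    }
}

impl<R: Read> Read for FixedLengthReader<R> {
    fn read(&mut self, dest: &mut [u8]) -> std::io::Result<usize> {
        let cap = (self.total_bytes - self.bytes_read).min(dest.len() as u64) as usize;
        // Once the Cursor is exhausted, this returns Ok(0) forever, so
        // bytes_read stops advancing and bytes_remain() never turns false.
        let n = self.read.read(&mut dest[..cap])?;
        self.bytes_read += n as u64;
        Ok(n)
    }
}

fn main() {
    // The TLV header claimed 100 bytes, but only 4 bytes of hop_data remain.
    let mut r = FixedLengthReader { read: Cursor::new(&[1u8, 2, 3, 4][..]), bytes_read: 0, total_bytes: 100 };
    let mut buf = [0u8; 256];
    let mut spins = 0u32;
    // Mirrors the loop in crypto/streams.rs: reads keep returning Ok(0)
    // while bytes_remain() stays true, so it would never terminate.
    while r.bytes_remain() {
        r.read(&mut buf).unwrap();
        spins += 1;
        if spins > 5 { break; } // guard so this sketch halts
    }
    println!("stuck: bytes_read = {} of {}", r.bytes_read, r.total_bytes);
}
```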
It looks like this bug was introduced here: 8ee02eb. Prior to that commit, `chacha_stream.read.eat_remaining()` would return an error once `hop_data` was exhausted.
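For comparison, a sketch of why the pre-8ee02eb path was safe, again using the simplified `FixedLengthReader` from the sketch above (an assumed shape, not the actual implementation): `std::io::copy` stops as soon as the inner reader hits EOF (`Ok(0)`), so the shortfall can be detected and turned into an error instead of spinning:

```rust
use std::io::{copy, sink, Cursor, Read};

struct FixedLengthReader<R: Read> {
    read: R,
    bytes_read: u64,
    total_bytes: u64,
}

impl<R: Read> Read for FixedLengthReader<R> {
    fn read(&mut self, dest: &mut [u8]) -> std::io::Result<usize> {
        let cap = (self.total_bytes - self.bytes_read).min(dest.len() as u64) as usize;
        let n = self.read.read(&mut dest[..cap])?;
        self.bytes_read += n as u64;
        Ok(n)
    }
}

impl<R: Read> FixedLengthReader<R> {
    /// Sketch of eat_remaining: drain whatever is left and error out if the
    /// inner stream ended before the promised length was reached.
    fn eat_remaining(&mut self) -> std::io::Result<()> {
        copy(self, &mut sink())?; // terminates: copy stops at Ok(0)
        if self.bytes_read != self.total_bytes {
            Err(std::io::Error::new(std::io::ErrorKind::UnexpectedEof, "short read"))
        } else {
            Ok(())
        }
    }
}

fn main() {
    let mut r = FixedLengthReader { read: Cursor::new(&[1u8, 2, 3, 4][..]), bytes_read: 0, total_bytes: 100 };
    // With the malicious oversized length, this fails cleanly instead of
    // looping forever.
    assert!(r.eat_remaining().is_err());
}
```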