<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.6.14 (Ruby 2.6.10) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-cheshire-sbm-01" category="info" submissionType="independent" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="Source Buffer Management">Source Buffer Management</title>

    <author fullname="Stuart Cheshire">
      <organization>Apple Inc.</organization>
      <address>
        <email>cheshire@apple.com</email>
      </address>
    </author>

    <date year="2025" month="March" day="03"/>

    
    
    <keyword>Bufferbloat</keyword> <keyword>Latency</keyword> <keyword>Responsiveness</keyword>

    <abstract>


<t>In the past decade there has been growing awareness about the
harmful effects of bufferbloat in the network, and there has
been good work on developments like L4S to address that problem.
However, bufferbloat on the sender itself remains a significant
additional problem, which has not received similar attention.
This document offers techniques and guidance for host networking
software to avoid network traffic suffering unnecessary delays
caused by excessive buffering at the sender. These improvements
are broadly applicable across all datagram and transport
protocols (UDP, TCP, QUIC, etc.) on all operating systems.</t>



    </abstract>

    <note title="About This Document" removeInRFC="true">
      <t>
        The latest revision of this draft can be found at <eref target="https://StuartCheshire.github.io/draft-cheshire-sbm/draft-cheshire-sbm.html"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-cheshire-sbm/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/StuartCheshire/draft-cheshire-sbm"/>.</t>
    </note>


  </front>

  <middle>


<section anchor="conventions-and-definitions"><name>Conventions and Definitions</name>

<t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

</section>
<section anchor="introduction"><name>Introduction</name>

<t>In 2010 Jim Gettys identified the problem
of how excessive buffering in networks adversely affects
delay-sensitive applications <xref target="Bloat1"/><xref target="Bloat2"/><xref target="Bloat3"/>.
This important work identifying a non-obvious problem
has led to valuable developments to improve this situation,
like fq_codel <xref target="RFC8290"/>, PIE <xref target="RFC8033"/>, Cake <xref target="Cake"/>
and L4S <xref target="RFC9330"/>.</t>

<t>However, excessive buffering at the source
-- in the sending devices themselves --
can equally contribute to degraded performance
for delay-sensitive applications,
and this problem has not yet received
a similar level of attention.</t>

<t>This document describes the source buffering problem,
steps that have been taken so far to address the problem,
shortcomings with those existing solutions,
and new mechanisms that work better.</t>

<t>To explain the problem and the solution,
this document begins with some historical background
about why computers have buffers in the first place,
and why buffers are useful.
This document explains the need for backpressure on
senders that are able to exceed the network capacity,
and separates backpressure mechanisms into
direct backpressure and indirect backpressure.</t>

<t>The document describes
the TCP_REPLENISH_TIME socket option
for TCP connections using BSD Sockets,
and its equivalent for other networking protocols and APIs.</t>

<t>The goal is for application software to be able to
write chunks of data large enough to be efficient,
without writing too many of them too quickly.
This avoids the unfortunate situation where a delay-sensitive
application inadvertently writes many blocks of data
long before they will actually depart the source machine,
such that by the time the enqueued data is actually sent,
the application may have newer data that it would rather send instead.
By deferring generating data until the networking code is
actually ready to send it, the application retains more precise
control over what data will be sent when the opportunity arises.</t>

<t>The document concludes by describing some alternative
solutions that are often proposed, and explains
why we feel they are less effective than simply
implementing effective source buffer management.</t>

</section>
<section anchor="source-buffering"><name>Source Buffering</name>

<t>Starting with the most basic principles,
computers have always had to deal with the situation
where software is able to generate output data
faster than the physical medium can accept it.
The software may be sending data to a paper tape punch,
to an RS232 serial port (UART),
or to a printer connected via a parallel port.
The software may be writing data to a floppy disk
or a spinning hard disk.
It was self-evident to early computer designers that it would
be unacceptable for data to be lost in these cases.</t>

</section>
<section anchor="direct-backpressure"><name>Direct Backpressure</name>

<t>The early solutions were simple.
When an application wrote data to a file on a floppy disk,
the file system “write” API would not return control to the caller
until the data had actually been written to the floppy disk.
This had the natural effect of slowing down
the application so that it could not exceed
the capacity of the medium to accept the data.</t>

<t>Soon it became clear that these simple synchronous APIs
unreasonably limited the performance of the system.
If, instead, the file system “write” API
were to return to the caller immediately
-- even though the actual write to the
spinning hard disk had not yet completed --
then the application could get on with other
useful work while the actual write to the
spinning hard disk proceeded in parallel.</t>

<t>Some systems allowed a single asynchronous write
to the spinning hard disk to proceed while
the application software performed other processing.
Other systems allowed multiple asynchronous writes to be enqueued,
but even these systems generally imposed some upper bound on
the number of outstanding incomplete writes they would support.
At some point, if the application software persisted in
trying to write data faster than the medium could accept it,
then the application would be throttled in some way,
either by making the API call a blocking call
(simply not returning control to the application,
removing its ability to do anything else)
or by returning a Unix EWOULDBLOCK error or similar
(to inform the application that its API call had
been unsuccessful, and that it would need to take
action to write its data again at a later time).</t>
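<t>This direct backpressure can be demonstrated with any bounded
kernel buffer. The sketch below is a standalone illustration (the
function name is made up, and a Unix-domain socket pair stands in
for a slow device): it writes in non-blocking mode until the
kernel refuses further data with EWOULDBLOCK.</t>

<figure><artwork type="c"><![CDATA[
```c
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Write into a bounded kernel buffer until it exerts direct
 * backpressure. Returns the number of bytes the kernel accepted
 * before refusing more, or -1 on setup failure. */
long fill_until_blocked(void)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) != 0) return -1;

    /* Non-blocking mode: a full buffer yields EWOULDBLOCK/EAGAIN
     * instead of blocking the caller. */
    fcntl(fds[0], F_SETFL, fcntl(fds[0], F_GETFL, 0) | O_NONBLOCK);

    char chunk[4096];
    memset(chunk, 'x', sizeof(chunk));

    long total = 0;
    for (;;) {
        ssize_t n = write(fds[0], chunk, sizeof(chunk));
        if (n > 0) { total += n; continue; }
        /* Direct backpressure: the buffer is full, so the writer
         * must now wait until the consumer drains some data. */
        break;
    }
    close(fds[0]);
    close(fds[1]);
    return total;
}
```
]]></artwork></figure>

<t>At that point the writer waits (via select(), poll(), or
kevent()) until the consumer drains the buffer, which is exactly
the throttling behavior described above.</t>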

<t>A comparison with graphics cards is informative.
Most graphics cards support double-buffering.
This allows one frame to be displayed while
the CPU and GPU are working on generating the next frame.
This concurrency allows for greater efficiency,
by enabling two actions to be happening at the same time.
But quintuple-buffering is not better than double-buffering.
Having a pipeline five frames deep, or ten frames,
or fifty frames, is not better than two frames.
For a fast-paced video game, having a display pipeline fifty
frames deep, where every frame is generated, then waits in
the pipeline, and then is displayed fifty frames later,
would not improve performance or efficiency,
but would cause an unacceptable delay between
a player performing an action and
seeing the results of that action on the screen.
It is beneficial for the video game to work on preparing
the next frame while the previous frame is being displayed,
but it is not beneficial for the video game to get multiple
frames ahead of the frame currently being displayed.</t>

<t>Another reason that it is good not to permit an
excessive amount of unsent data to be queued up
is that once data is committed to a buffer,
there are generally limited options for changing it.
Some systems may provide a mechanism to flush the entire
buffer and discard all the data, but mechanisms to
selectively remove or re-order enqueued data
are complicated and rare.
While it would be possible to add such mechanisms,
on balance it is simpler to avoid committing
too much unsent data to the buffer in the first place.
If the backlog of unsent data is kept reasonably low,
that gives the source more flexibility to decide what to
put into the buffer next, when that opportunity arises.</t>

<t>In summary, in order to give applications maximum
flexibility, pending data should be kept as close
to the application as possible, for as long as possible.
Application buffers should be as large as needed
for the application to do its work,
and lower-layer buffers should be no larger than
is necessary to provide efficient use of available
network capacity and other resources like CPU time.</t>

</section>
<section anchor="indirect-backpressure"><name>Indirect Backpressure</name>

<t>All of the situations described above using “direct backpressure”
are one-hop communication where the CPU generating the data
is connected more-or-less directly to the device receiving the data.
In these cases it is relatively simple for the receiving device
to exert backpressure to influence the rate at which the CPU sends data.</t>

<t>When we introduce multi-hop networking,
the situation becomes more complicated.
When a flow of packets travels 30 hops through
a network, the bottleneck hop may be quite distant
from the original source of the data stream.</t>

<t>For example, when a cable modem
with a 35 Mb/s output rate receives
an excessive flow of packets coming in
on its Gb/s Ethernet interface,
the cable modem cannot directly cause
the sending application to block or receive an EWOULDBLOCK error.
The cable modem’s choices are limited to
enqueueing an incoming packet,
discarding an incoming packet,
or enqueueing an incoming packet and
marking it with an ECN CE mark <xref target="RFC3168"/>.</t>

<t>The cable modem’s choices are so limited
because of security and packet size constraints.</t>

<t>Security and trust concerns revolve around preventing a
malicious entity from performing a denial-of-service attack
against a victim device by sending fraudulent messages that
would cause it to reduce its transmission rate.
It is particularly important to guard against an off-path attacker
being able to do this. This concern is addressed if queue size
feedback generated in the network follows the same path already
taken by the data packets and their subsequent acknowledgement
packets. The logic is that any on-path device that is able to
modify data packets (changing the ECN bits in the IP header)
could equally well corrupt packets or discard them entirely.
Thus, trusting ECN information from these devices does not
increase security concerns, since these devices could already
perform more malicious actions anyway. The sender already
trusts the receiver to generate accurate acknowledgement
packets, so also trusting it to report ECN information back
to the sender does not increase the security risk.</t>

<t>A consequence of this security requirement is that it takes a
full round trip time for the source to learn about queue state
in the network. In many common cases this is not a significant
deficiency. For example, if a user is receiving data from a
well-connected server on the Internet, and the network
bottleneck is the last hop on the path (e.g., the Wi-Fi hop to
the user’s smartphone in their home) then the location where
the queue is building up (the Wi-Fi Access Point) is very close
to the receiver, and having the receiver echo the queue state
information back to the sender does not add significant delay.</t>

<t>Packet size constraints, particularly scarce bits available
in the IP header, mean that for pragmatic reasons the ECN
queue size feedback is limited to two states: “The source
may try sending a little faster if desired,” and, “The
source should reduce its sending rate.” Use of these
increase/decrease indications in successive packets allows
the sender to converge on the ideal transmission rate, and
then to oscillate slightly around the ideal transmission
rate as it continues to track changing network conditions.</t>

<t>Discarding or marking an incoming packet
at some point within the network are
what we refer to as indirect backpressure,
with the assumption that these actions will eventually
result in the sending application being throttled
via having a write call blocked,
returning an EWOULDBLOCK error,
or some other form of backpressure that
causes the source application
to temporarily pause sending new data.</t>

</section>
<section anchor="case-study-tcpnotsentlowat"><name>Case Study -- TCP_NOTSENT_LOWAT</name>

<t>In April 2011 the author was investigating
sluggishness with Mac OS Screen Sharing,
which uses the VNC Remote Framebuffer (RFB) protocol <xref target="RFC6143"/>.
Initially it seemed like a classic case of network bufferbloat.
However, deeper investigation revealed that in this case
the network was not responsible for the excessive delay --
the excessive delay was being caused by
excessive buffering on the sending device itself.</t>

<t>In this case the network connection was a relatively slow
DSL line (running at about 500 kb/s) and
the socket send buffer (SO_SNDBUF) was set to 128 kilobytes.
With a 50 ms round-trip time,
about 3 kilobytes (roughly two packets)
was sufficient to fill the bandwidth-delay product of the path.
The remaining 125 kilobytes available in the 128 kB socket send buffer
were simply holding bytes that had not even been sent yet.
At 500 kb/s throughput (62.5 kB/s),
this meant that every byte written by the VNC RFB server
spent two seconds sitting in the socket send buffer
before it even left the source machine.
Clearly, delaying every sent byte by two seconds
resulted in a very sluggish screen sharing experience,
and it did not yield any useful benefit like
higher throughput or lower CPU utilization.</t>

<t>This led to the creation in May 2011
of a new socket option on Mac OS and iOS
called “TCP_NOTSENT_LOWAT”.
This new socket option provided the ability for
sending software (like the VNC RFB server)
to specify a low-water mark threshold for the
minimum amount of <strong>unsent</strong> data it would like
to have waiting in the socket send buffer.
Instead of encouraging the application to
fill the socket send buffer to its maximum capacity,
the socket send buffer would hold just the data
that had been sent but not yet acknowledged
(enough to fully occupy the bandwidth-delay product
of the network path and fully utilize the available capacity)
plus some <strong>small</strong> amount of additional unsent data waiting to go out.
Some <strong>small</strong> amount of unsent data waiting to go out is
beneficial, so that the network stack has data
ready to send when the opportunity arises
(e.g., a TCP ACK arrives signalling
that previous data has now been delivered).
Too much unsent data waiting to go out
-- in excess of what the network stack
might soon be able to send --
is harmful for delay-sensitive applications
because it increases delay without
meaningfully increasing throughput or utilization.</t>

<t>Empirically it was found that setting an
unsent data low-water mark threshold of 16 kilobytes
worked well for VNC RFB screen sharing.
When the amount of unsent data fell below this
low-water mark threshold, kevent() would
wake up the VNC RFB screen sharing application
to begin work on preparing the next frame to send.
Once the VNC RFB screen sharing application
had prepared the next frame and written it
to the socket send buffer,
it would again call kevent() to block and wait
to be notified when it became time to begin work
on the following frame.
This allows the VNC RFB screen sharing server
to stay just one frame ahead of
the frame currently being sent over the network,
and not inadvertently get multiple frames ahead.
This provided enough unsent data waiting to go out
to fully utilize the capacity of the path,
without buffering so much unsent data
that it adversely affected usability.</t>
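<t>As a minimal sketch (the helper name is illustrative, not from
any particular SDK), opting into this behavior with BSD Sockets
looks like the following, using the empirical 16 kilobyte
threshold described above:</t>

<figure><artwork type="c"><![CDATA[
```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Request notification when unsent data in the socket send buffer
 * falls below `unsent_bytes`. Returns 0 on success, -1 on failure.
 * Caution: Apple and Linux give this option name different
 * semantics (see "Platform Differences"). */
int set_notsent_lowat(int sock, int unsent_bytes)
{
    return setsockopt(sock, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                      &unsent_bytes, sizeof(unsent_bytes));
}
```
]]></artwork></figure>

<t>On Apple platforms the application then blocks in kevent() or
select(), and is woken to prepare its next frame only when the
unsent data in the socket send buffer falls below this threshold.</t>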

<t>A live on-stage demo showing the benefits of using TCP_NOTSENT_LOWAT
with VNC RFB screen sharing was shown at the
Apple Worldwide Developer Conference in June 2015 <xref target="Demo"/>.</t>

</section>
<section anchor="shortcomings-of-tcpnotsentlowat"><name>Shortcomings of TCP_NOTSENT_LOWAT</name>

<t>While TCP_NOTSENT_LOWAT achieved its initial intended goal,
later operational experience has revealed some shortcomings.</t>

<section anchor="platform-differences"><name>Platform Differences</name>

<t>The Linux network maintainers implemented a TCP
socket option with the same name, but different behavior.
While the Apple version of TCP_NOTSENT_LOWAT was
focussed on reducing delay,
the Linux version was focussed on reducing kernel memory usage.
The Apple version of TCP_NOTSENT_LOWAT controls
a low-water mark, below which the application is signalled
that it is time to begin working on generating fresh data.
The Linux version determines a high-water mark for unsent data,
above which the application is <strong>prevented</strong> from writing any more,
even if it has data prepared and ready to enqueue.
Setting TCP_NOTSENT_LOWAT to 16 kilobytes works well on Apple
systems, but can severely limit throughput on Linux systems.
This has led to confusion among developers and makes it difficult
to write portable code that works on both platforms.</t>

</section>
<section anchor="time-versus-bytes"><name>Time Versus Bytes</name>

<t>The original thinking on TCP_NOTSENT_LOWAT focussed on
the number of unsent bytes remaining, but it soon became
clear that the relevant quantity was time, not bytes.
The quantity of interest to the sending application
was how much advance notice it would get of impending
data exhaustion, so that it would have enough time
to generate its next logical block of data.
On low-rate paths (e.g., 250 kb/s and less)
16 kilobytes of unsent data could still result
in a fairly significant unnecessary queueing delay.
On high-rate paths (e.g., Gb/s and above)
16 kilobytes of unsent data could be consumed
very quickly, leaving the sending application
insufficient time to generate its next logical block of data
before the unsent backlog ran out
and available network capacity was left unused.
It became clear that it would be more useful for the
sending application to specify how much advance notice
of data exhaustion it required (in milliseconds or microseconds),
depending on how much time the application anticipated
needing to generate its next logical block of data.</t>

<t>The application could perform this calculation itself,
estimating the current data rate and multiplying
that by its desired advance notice time, to compute the number
of outstanding unsent bytes corresponding to that desired time.
However, the application would have to keep adjusting its
TCP_NOTSENT_LOWAT value as the observed data rate changed.
Since the transport protocol already knows the number of
unacknowledged bytes in flight, and the current round-trip delay,
the transport protocol is in a better position
to perform this calculation.</t>
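<t>As a sketch of that per-application calculation (function and
parameter names are illustrative): the estimated rate is bytes in
flight divided by round-trip time, and the byte threshold is that
rate multiplied by the desired advance notice time.</t>

<figure><artwork type="c"><![CDATA[
```c
#include <stdint.h>

/* Convert a desired advance-notice time into an equivalent byte
 * threshold, using rate = bytes_in_flight / rtt.
 * Both times are in microseconds. */
uint64_t notice_time_to_bytes(uint64_t bytes_in_flight,
                              uint64_t rtt_us,
                              uint64_t notice_us)
{
    if (rtt_us == 0) return 0;
    /* (bytes_in_flight / rtt_us) * notice_us, reordered to keep
     * integer precision */
    return bytes_in_flight * notice_us / rtt_us;
}
```
]]></artwork></figure>

<t>For example, with 125000 bytes in flight over a 50 ms round
trip (2.5 MB/s), 20 ms of advance notice corresponds to 50000
unsent bytes, and the application would have to repeat the
computation whenever the observed rate changed.</t>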

<t>In addition, the network stack knows if features like hardware
offload, aggregation, and stretch acks are being used,
which could impact the burstiness of consumption of unsent bytes.</t>

<t>Wi-Fi interfaces perform better when they send
batches of packets aggregated together instead of
sending individual packets one at a time.
The amount of aggregation that is desirable depends
on the current wireless conditions,
so the Wi-Fi interface and its driver
are in the best position to determine that.</t>

<t>If stretch acks are being used, then each ack packet
could acknowledge 8 data segments, or about 12 kilobytes.
If one such ack packet is lost, the following ack packet
will cumulatively acknowledge 24 kilobytes,
instantly consuming the entire 16 kilobyte unsent backlog,
and giving the application no advance notice that
the transport protocol is suddenly out of available data to send,
so some network capacity is wasted.</t>
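<t>The arithmetic behind this example can be checked directly,
assuming a typical 1448-byte TCP segment payload (an illustrative
value: a 1500-byte MTU minus IP and TCP headers):</t>

<figure><artwork type="c"><![CDATA[
```c
/* Illustrate the stretch-ack scenario: bytes newly acknowledged
 * when `acks` consecutive stretch acks collapse into one
 * cumulative acknowledgement. */
enum { SEG_BYTES = 1448, SEGS_PER_STRETCH_ACK = 8 };

int bytes_acked(int acks)
{
    return acks * SEGS_PER_STRETCH_ACK * SEG_BYTES;
}
```
]]></artwork></figure>

<t>One stretch ack covers 11584 bytes (about 12 kilobytes); if it
is lost, the next covers 23168 bytes (about 24 kilobytes), more
than a 16384-byte unsent backlog can absorb.</t>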

<t>Occasional failures to fully utilize the entire
available network capacity are not a disaster, but we
still would like to avoid this being a common occurrence.
Therefore it is better to have the transport protocol,
in cooperation with the other layers of the network stack,
use all the information available to estimate
when it expects to run out of data available to send,
given the current network conditions
and current amount of unsent data.
When the estimated time remaining until exhaustion falls
below the application’s specified threshold, the application
is notified to begin working on generating more data.</t>

</section>
<section anchor="other-transport-protocols"><name>Other Transport Protocols</name>

<t>TCP_NOTSENT_LOWAT was initially defined only for TCP,
and only for the BSD Sockets programming interface.
It would be useful to define equivalent delay management
capabilities for other transport protocols, like QUIC,
and for other network programming APIs.</t>

</section>
</section>
<section anchor="tcpreplenishtime"><name>TCP_REPLENISH_TIME</name>

<t>Because of these lessons learned, this document proposes
a new BSD Socket option for TCP, TCP_REPLENISH_TIME.</t>

<t>The new TCP_REPLENISH_TIME socket option specifies the
threshold for notifying an application of impending data
exhaustion in terms of microseconds, not bytes.
It is the job of the transport protocol to compute its
best estimate of when the amount of remaining unsent data
falls below this threshold.</t>

<t>The new TCP_REPLENISH_TIME socket option
should have the same semantics across all
operating systems and network stack implementations.</t>
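<t>No system implements TCP_REPLENISH_TIME today, so the sketch
below is purely hypothetical: the option number is invented, and
the helper falls back to a byte-based TCP_NOTSENT_LOWAT, computed
from a caller-supplied rate estimate, where the proposed option is
unavailable.</t>

<figure><artwork type="c"><![CDATA[
```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_REPLENISH_TIME
#define TCP_REPLENISH_TIME 0x7f01  /* HYPOTHETICAL option number */
#endif

/* Ask for `notice_us` microseconds of advance notice of impending
 * data exhaustion. If the proposed option is unavailable,
 * approximate it with a byte threshold at the caller's current
 * rate estimate (bytes per second). Returns 0 on success. */
int set_replenish_time(int sock, int notice_us, long bytes_per_sec)
{
    if (setsockopt(sock, IPPROTO_TCP, TCP_REPLENISH_TIME,
                   &notice_us, sizeof(notice_us)) == 0)
        return 0;

    /* Fallback: convert time to bytes at the estimated rate. */
    long long b = (long long)bytes_per_sec * notice_us / 1000000;
    int bytes = (b < 1) ? 1 : (int)b;
    return setsockopt(sock, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                      &bytes, sizeof(bytes));
}
```
]]></artwork></figure>

<t>For example, at an estimated 62500 bytes per second (500 kb/s)
with 250 ms of requested notice, the fallback sets a 15625-byte
threshold.</t>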

<t>Other transport protocols, like QUIC,
and other network APIs not based on BSD Sockets,
should provide equivalent time-based backlog-management
mechanisms, as appropriate to their API design.</t>

<t>The time-based estimate does not need to be perfectly accurate,
either on the part of the transport protocol estimating how much
time remains before the backlog of unsent data is exhausted,
or on the part of the application estimating how much
time it will need to generate its next logical block of data.
If the network data rate increases significantly, or a group of
delayed acknowledgments all arrive together, then the transport
protocol could end up discovering that it has overestimated how
much time remains before the data is exhausted.
If the operating system scheduler is slow to schedule the
application process, or the CPU is busy with other tasks,
then the application may discover that it has
underestimated how much time it will take
to generate its next logical block of data.
These situations are not considered to be serious problems,
especially if they only occur infrequently.
For a delay-sensitive application, having some reasonable
mechanism to avoid an excessive backlog of unsent data is
dramatically better than having no such mechanism at all.
Occasional overestimates or underestimates do not
negate the benefit of this capability.</t>

<section anchor="solicitation-for-name-suggestions"><name>Solicitation for Name Suggestions</name>

<t>Author’s note: The BSD socket option name “TCP_REPLENISH_TIME”
is currently proposed as a working name
for this new option for BSD Sockets.
While the name does not affect the behavior of the code,
the choice of name is important, because people often
form their first impressions of a concept based on its name,
and if they form incorrect first impressions then their
thinking about the concept may be adversely affected.</t>

<t>For example, the BSD socket option could be called
“TCP_REPLENISH_TIME” or “TCP_EXHAUSTION_TIME”.
These are two sides of the same coin.
From the application’s point of view, it is expressing
how much time it will require to replenish the buffer.
From the networking code’s point of view, it is estimating
how much time remains before it will need the buffer replenished.
In an ideal world,
REPLENISH_TIME == EXHAUSTION_TIME, so that the data is
replenished at exactly the moment the networking code needs it.
In a sense, they are two ways of saying the same thing.
Since this API call is made by the application, we feel it
should be expressed in terms of the application’s requirement.</t>

</section>
</section>
<section anchor="applicability"><name>Applicability</name>

<t>This time-based backlog management is applicable anywhere
that a queue of unsent data may build up on the sending device.</t>

<t>Since multi-hop network protocols already implement
indirect backpressure in the form of discarding or marking packets,
it can be tempting to use this mechanism
for the first hop of the path too.
However, this is not an ideal solution because indirect
backpressure from the network is very crude compared to
the much richer direct backpressure
that is available within the sending device itself.
Relying on indirect backpressure by
discarding or marking a packet in the sending device itself
is a crude rate-control signal, because it takes a full network
round-trip time before the effect of that drop or mark is
observed at the receiver and echoed back to the sender, and
it may take multiple such round trips before it finally
results in an appropriate reduction in sending rate.</t>

<t>In contrast to queue buildup in the network,
queue buildup at the sending device has different properties
regarding (i) security, (ii) packet size constraints, and (iii) immediacy.
This means that when it is the source device itself
that is building up a backlog of unsent data,
designers of networking software have more freedom about how to manage this.</t>

<t>(i) When the source of the data and the location of the backlog are
the same physical device, network security and trust concerns do not apply.</t>

<t>(ii) When the mechanism we use to communicate about queue state
is a software API instead of packets sent through a network,
we do not have the constraint of having to work within
limited IP packet header space.</t>

<t>(iii) When flow control is implemented via a local software API,
the delivery of STOP/GO information to the source is immediate.</t>

<t>Direct backpressure can be achieved
simply by making an API call block,
by returning a Unix EWOULDBLOCK error,
or by using equivalent mechanisms in other APIs,
and has the effect of immediately halting the flow of new data.
Similarly, when the system becomes able to accept more data,
unblocking an API call, indicating that a socket
has become writable using select() or kevent(),
or equivalent mechanisms in other APIs,
has the effect of immediately allowing the production of more data.</t>

<t>Where direct backpressure mechanisms are possible they
should be preferred over indirect backpressure mechanisms.</t>

<t>If the outgoing network interface on the source device
is the slowest hop of a multi-hop network path, then this
is where the backlog of unsent data will accumulate.</t>

<t>In addition to physical bottlenecks,
devices also have intentional algorithmic bottlenecks:</t>

<t><list style="symbols">
  <t>If the TCP receive window is full, then the sending TCP
implementation will voluntarily refrain from sending new data,
even though the device’s outgoing first-hop interface is easily
capable of sending those packets.
This is vital to avoid overrunning the receiver with data
faster than it can process it.</t>
  <t>The transport protocol’s rate management (congestion control) algorithm
may determine that it should delay before sending more data, so as
not to overflow a queue at some other bottleneck within the network.
This is vital to avoid overrunning the capacity of the bottleneck
network hop with data faster than it can forward it,
resulting in massive packet loss,
which would equate to a large wastage of resources at the sender,
in the form of battery power and network capacity wasted by
generating packets that will not make it to the receiver.</t>
  <t>When packet pacing is being used, the sending network
implementation may choose voluntarily to moderate the rate at
which it emits packets, so as to smooth the flow of packets into
the network, even though the device’s outgoing first-hop interface
might be easily capable of sending at a much higher rate.</t>
</list></t>

<t>Whether the source application is constrained
by a physical bottleneck on the sending device, or
by an algorithmic bottleneck on the sending device,
the benefits of not overcommitting data to the outgoing buffer are similar.</t>

<t>As described in the introduction,
the goal is for the application software to be able to
write chunks of data large enough to be efficient,
without writing too many of them too quickly,
and causing unwanted self-inflicted delay.</t>

</section>
<section anchor="bulk-transfer-protocols"><name>Bulk Transfer Protocols</name>

<t>It is frequently asserted that latency matters primarily for
interactive applications like video conferencing and on-line games,
and latency is relatively unimportant for most other applications.</t>

<t>We do not agree with this characterization.</t>

<t>Even for large bulk data transfers
-- e.g., downloading a software update or uploading a video --
we believe latency affects performance.</t>

<t>For example, TCP fast retransmit can immediately
recover a single lost packet in a single round-trip time.
TCP generally performs at its absolute best when the
loss rate is no more than one loss per round-trip time.
More than one loss per round-trip time requires more
extensive use of TCP SACK blocks, which consume extra
space in the packet header, and makes the work of the
rate management (congestion control) algorithm harder.
This can result in the transport protocol temporarily
sending too fast, resulting in additional packet loss,
or too slowly, resulting in underutilized network capacity.
For a given fixed loss rate (in packets lost per second)
a higher total network round-trip time
(including the time spent in buffers in the sending network
interface, below the transport protocol layer)
equates to more lost packets per network round-trip time,
causing error recovery to occur less quickly.
A transport protocol cannot make rate adaptation changes
to adjust to varying network conditions in less than one
network round-trip time, so the higher the total network
round-trip time is, the less agile the transport protocol
is at adjusting to varying network conditions.</t>

<t>In short, the client of a transport protocol like TCP
may itself not be a real-time delay-sensitive application,
but the transport protocol itself most definitely is,
responding in real time to changing network conditions.</t>

</section>
<section anchor="alternative-proposals"><name>Alternative Proposals</name>

<section anchor="just-use-udp"><name>Just use UDP</name>

<t>Because much of the discussion about network latency involves
talking about the behavior of transport protocols like TCP,
sometimes people conclude that TCP is the problem,
and think that using UDP will solve the source buffering problem.
It does no such thing.
If an application sends UDP packets faster than the outgoing
network interface can carry them, then a queue of packets
will still build up, causing increasing delay for those packets,
and eventual packet loss when the queue reaches its capacity.</t>
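<t>This queue buildup can be sketched with simple arithmetic (the
rates and buffer size below are hypothetical):</t>

<sourcecode type="python"><![CDATA[
```python
# Minimal sketch: an application writes UDP datagrams at 120 Mb/s into
# an interface that drains at 100 Mb/s.  The excess accumulates as
# queued bits, queuing delay grows, and once the buffer limit is
# reached the remainder is dropped.

LINK_RATE_BPS = 100_000_000    # interface drain rate, bits/second
SEND_RATE_BPS = 120_000_000    # application send rate, bits/second
BUFFER_LIMIT_BITS = 4_000_000  # e.g. a 500 KB socket/driver buffer

queued_bits = 0
dropped_bits = 0
for second in range(5):  # simulate five seconds in one-second steps
    queued_bits += SEND_RATE_BPS - LINK_RATE_BPS
    if queued_bits > BUFFER_LIMIT_BITS:
        dropped_bits += queued_bits - BUFFER_LIMIT_BITS
        queued_bits = BUFFER_LIMIT_BITS
    delay_ms = 1000 * queued_bits / LINK_RATE_BPS
    print(f"t={second + 1}s queue={queued_bits} bits delay={delay_ms:.0f} ms")
```
]]></sourcecode>

<t>In this sketch the queue saturates within the first second: every
later packet sees the full 40 ms standing delay, and the 20 Mb/s
excess is simply discarded. Using UDP changed nothing about the
underlying problem.</t>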

<t>Any protocol that runs over UDP (like QUIC) must end up
re-creating the same rate adaptation behaviors that are
already built into TCP, or it will fail to operate
gracefully over a range of different network conditions.</t>

</section>
<section anchor="packet-expiration"><name>Packet Expiration</name>

<t>One approach that is sometimes used, is to send packets
tagged with an expiration time, and if they have spent
too long waiting in the outgoing queue then they are
automatically discarded without even being sent.
This is counterproductive because the sending application
does all the work to generate data, and then has to do more
work to recover from the self-inflicted data loss caused by
the expiration time.</t>

<t>If the outgoing queue is kept short, then the
amount of unwanted delay is kept correspondingly short.
In addition, if there is only a small amount of data in the
outgoing queue, then the cost of sending a small amount of
data that may arguably have become stale is also small --
usually smaller than the cost of having to recover missing
state caused by intentional discard of that delayed data.</t>
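<t>The cost comparison can be quantified (the link rate and queue
sizes below are illustrative): the delay a packet experiences in the
outgoing queue is simply the data queued ahead of it divided by the
link rate, so keeping the queue short keeps the cost of sending
arguably-stale data small:</t>

<sourcecode type="python"><![CDATA[
```python
# Back-of-envelope sketch: queuing delay = queued bytes / link rate.

def queue_delay_ms(queued_bytes, link_rate_bps):
    """Milliseconds a newly enqueued packet waits behind queued data."""
    return 1000 * queued_bytes * 8 / link_rate_bps

# On a 20 Mb/s uplink:
short = queue_delay_ms(5_000, 20_000_000)    # 5 KB queued: 2 ms
bloated = queue_delay_ms(500_000, 20_000_000)  # 500 KB queued: 200 ms
```
]]></sourcecode>

<t>With only 2 ms of data in the queue there is little to be gained by
expiring it, whereas the 200 ms case is exactly the situation that
keeping queues short avoids in the first place.</t>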

<t>For example, in video conferencing applications it is
frequently thought that if a frame is delayed past the
point where it becomes too late to display it, then it becomes
a waste of network capacity to send that frame at all.
However, the fallacy in that argument is that modern
video compression algorithms make extensive use of
similarity between consecutive frames.
A given video frame is not just encoded as a single frame
in isolation, but as a collection of visual
differences relative to the previous frame.
The previous frame may have arrived too late for the
time it was supposed to be displayed, but the data
contained within it is still needed to decode and
display the current frame.
If the previous frame was intentionally discarded by the
sender, then the subsequent frames are also impacted by
that loss, and the cost of repairing the damage is
frequently much higher than the cost would have been
to simply send the delayed frame.</t>
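<t>A toy model (not any real video codec) illustrates the dependency:
when each frame is encoded as a delta against the previous frame,
discarding one late frame at the sender corrupts every subsequent
frame until the next keyframe:</t>

<sourcecode type="python"><![CDATA[
```python
# Toy delta-coded "video" stream: each frame is either a keyframe
# carrying a value, or a delta applied to the previous decoded frame.

def decode(frames):
    """frames: list of ('key', value) or ('delta', diff) tuples."""
    out, current = [], None
    for kind, v in frames:
        if kind == 'key':
            current = v
        elif current is not None:
            current = current + v   # apply delta to the previous frame
        out.append(current)
    return out

stream = [('key', 100), ('delta', 3), ('delta', -1), ('delta', 2)]
intact = decode(stream)             # [100, 103, 102, 104]

# Sender discards the "late" first delta instead of sending it:
damaged = decode([stream[0]] + stream[2:])  # [100, 99, 101]
```
]]></sourcecode>

<t>The discarded frame was never displayed, yet every frame after it
decodes to the wrong value, so the receiver must request repair that
costs more than simply sending the delayed frame would have.</t>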

</section>
<section anchor="head-of-line-blocking-traffic-priorities"><name>Head of Line Blocking / Traffic Priorities</name>

<t>People are often very concerned about the problem of
head-of-line-blocking, and propose to solve it using
techniques such as packet priorities.
There is an unconscious, unstated assumption baked into
this line of reasoning: that an excessively long queue is
inevitable and unavoidable, and that we therefore have to
devote a lot of our energy to organizing, prioritizing,
and managing that excessively long queue.
In contrast, if we take steps to keep queues short,
the problems of head-of-line blocking largely go away.
When the line is consistently short, being at the back of
the line is no longer the serious problem that it used to be.</t>

</section>
</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>No security concerns are anticipated resulting from reducing
the amount of stale data sitting in buffers at the sender.</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>


    <references title='Normative References'>



<reference anchor='RFC2119' target='https://www.rfc-editor.org/info/rfc2119'>
  <front>
    <title>Key words for use in RFCs to Indicate Requirement Levels</title>
    <author fullname='S. Bradner' initials='S.' surname='Bradner'/>
    <date month='March' year='1997'/>
    <abstract>
      <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
    </abstract>
  </front>
  <seriesInfo name='BCP' value='14'/>
  <seriesInfo name='RFC' value='2119'/>
  <seriesInfo name='DOI' value='10.17487/RFC2119'/>
</reference>

<reference anchor='RFC8174' target='https://www.rfc-editor.org/info/rfc8174'>
  <front>
    <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
    <author fullname='B. Leiba' initials='B.' surname='Leiba'/>
    <date month='May' year='2017'/>
    <abstract>
      <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
    </abstract>
  </front>
  <seriesInfo name='BCP' value='14'/>
  <seriesInfo name='RFC' value='8174'/>
  <seriesInfo name='DOI' value='10.17487/RFC8174'/>
</reference>




    </references>

    <references title='Informative References'>

<reference anchor="Bloat1" target="https://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/">
  <front>
    <title>Whose house is of glasse, must not throw stones at another</title>
    <author initials="J." surname="Gettys">
      <organization></organization>
    </author>
    <date year="2010" month="December"/>
  </front>
</reference>
<reference anchor="Bloat2" target="https://queue.acm.org/detail.cfm?id=2071893">
  <front>
    <title>Bufferbloat: Dark Buffers in the Internet</title>
    <author initials="J." surname="Gettys">
      <organization></organization>
    </author>
    <author initials="K." surname="Nichols">
      <organization></organization>
    </author>
    <date year="2011" month="November"/>
  </front>
  <seriesInfo name="ACM Queue, Volume 9, issue 11" value=""/>
</reference>
<reference anchor="Bloat3" target="https://dl.acm.org/doi/10.1145/2063176.2063196">
  <front>
    <title>Bufferbloat: Dark Buffers in the Internet</title>
    <author initials="J." surname="Gettys">
      <organization></organization>
    </author>
    <author initials="K." surname="Nichols">
      <organization></organization>
    </author>
    <date year="2012" month="January"/>
  </front>
  <seriesInfo name="Communications of the ACM, Volume 55, Number 1" value=""/>
</reference>
<reference anchor="Cake" target="https://ieeexplore.ieee.org/document/8475045">
  <front>
    <title>Piece of CAKE: A Comprehensive Queue Management Solution for Home Gateways</title>
    <author initials="T." surname="Høiland-Jørgensen">
      <organization></organization>
    </author>
    <author initials="D." surname="Taht">
      <organization></organization>
    </author>
    <author initials="J." surname="Morton">
      <organization></organization>
    </author>
    <date year="2018" month="June"/>
  </front>
  <seriesInfo name="2018 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)" value=""/>
</reference>
<reference anchor="Demo" target="https://developer.apple.com/videos/play/wwdc2015/719/?time=2199">
  <front>
    <title>Your App and Next Generation Networks</title>
    <author initials="S." surname="Cheshire">
      <organization></organization>
    </author>
    <date year="2015" month="June"/>
  </front>
  <seriesInfo name="Apple Worldwide Developer Conference" value=""/>
</reference>


<reference anchor='RFC3168' target='https://www.rfc-editor.org/info/rfc3168'>
  <front>
    <title>The Addition of Explicit Congestion Notification (ECN) to IP</title>
    <author fullname='K. Ramakrishnan' initials='K.' surname='Ramakrishnan'/>
    <author fullname='S. Floyd' initials='S.' surname='Floyd'/>
    <author fullname='D. Black' initials='D.' surname='Black'/>
    <date month='September' year='2001'/>
    <abstract>
      <t>This memo specifies the incorporation of ECN (Explicit Congestion Notification) to TCP and IP, including ECN's use of two bits in the IP header. [STANDARDS-TRACK]</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='3168'/>
  <seriesInfo name='DOI' value='10.17487/RFC3168'/>
</reference>

<reference anchor='RFC6143' target='https://www.rfc-editor.org/info/rfc6143'>
  <front>
    <title>The Remote Framebuffer Protocol</title>
    <author fullname='T. Richardson' initials='T.' surname='Richardson'/>
    <author fullname='J. Levine' initials='J.' surname='Levine'/>
    <date month='March' year='2011'/>
    <abstract>
      <t>RFB ("remote framebuffer") is a simple protocol for remote access to graphical user interfaces that allows a client to view and control a window system on another computer. Because it works at the framebuffer level, RFB is applicable to all windowing systems and applications. This document describes the protocol used to communicate between an RFB client and RFB server. RFB is the protocol used in VNC. This document is not an Internet Standards Track specification; it is published for informational purposes.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='6143'/>
  <seriesInfo name='DOI' value='10.17487/RFC6143'/>
</reference>

<reference anchor='RFC8033' target='https://www.rfc-editor.org/info/rfc8033'>
  <front>
    <title>Proportional Integral Controller Enhanced (PIE): A Lightweight Control Scheme to Address the Bufferbloat Problem</title>
    <author fullname='R. Pan' initials='R.' surname='Pan'/>
    <author fullname='P. Natarajan' initials='P.' surname='Natarajan'/>
    <author fullname='F. Baker' initials='F.' surname='Baker'/>
    <author fullname='G. White' initials='G.' surname='White'/>
    <date month='February' year='2017'/>
    <abstract>
      <t>Bufferbloat is a phenomenon in which excess buffers in the network cause high latency and latency variation. As more and more interactive applications (e.g., voice over IP, real-time video streaming, and financial transactions) run in the Internet, high latency and latency variation degrade application performance. There is a pressing need to design intelligent queue management schemes that can control latency and latency variation, and hence provide desirable quality of service to users.</t>
      <t>This document presents a lightweight active queue management design called "PIE" (Proportional Integral controller Enhanced) that can effectively control the average queuing latency to a target value. Simulation results, theoretical analysis, and Linux testbed results have shown that PIE can ensure low latency and achieve high link utilization under various congestion situations. The design does not require per-packet timestamps, so it incurs very little overhead and is simple enough to implement in both hardware and software.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='8033'/>
  <seriesInfo name='DOI' value='10.17487/RFC8033'/>
</reference>

<reference anchor='RFC8290' target='https://www.rfc-editor.org/info/rfc8290'>
  <front>
    <title>The Flow Queue CoDel Packet Scheduler and Active Queue Management Algorithm</title>
    <author fullname='T. Hoeiland-Joergensen' initials='T.' surname='Hoeiland-Joergensen'/>
    <author fullname='P. McKenney' initials='P.' surname='McKenney'/>
    <author fullname='D. Taht' initials='D.' surname='Taht'/>
    <author fullname='J. Gettys' initials='J.' surname='Gettys'/>
    <author fullname='E. Dumazet' initials='E.' surname='Dumazet'/>
    <date month='January' year='2018'/>
    <abstract>
      <t>This memo presents the FQ-CoDel hybrid packet scheduler and Active Queue Management (AQM) algorithm, a powerful tool for fighting bufferbloat and reducing latency.</t>
      <t>FQ-CoDel mixes packets from multiple flows and reduces the impact of head-of-line blocking from bursty traffic. It provides isolation for low-rate traffic such as DNS, web, and videoconferencing traffic. It improves utilisation across the networking fabric, especially for bidirectional traffic, by keeping queue lengths short, and it can be implemented in a memory- and CPU-efficient fashion across a wide range of hardware.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='8290'/>
  <seriesInfo name='DOI' value='10.17487/RFC8290'/>
</reference>

<reference anchor='RFC9330' target='https://www.rfc-editor.org/info/rfc9330'>
  <front>
    <title>Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: Architecture</title>
    <author fullname='B. Briscoe' initials='B.' role='editor' surname='Briscoe'/>
    <author fullname='K. De Schepper' initials='K.' surname='De Schepper'/>
    <author fullname='M. Bagnulo' initials='M.' surname='Bagnulo'/>
    <author fullname='G. White' initials='G.' surname='White'/>
    <date month='January' year='2023'/>
    <abstract>
      <t>This document describes the L4S architecture, which enables Internet applications to achieve low queuing latency, low congestion loss, and scalable throughput control. L4S is based on the insight that the root cause of queuing delay is in the capacity-seeking congestion controllers of senders, not in the queue itself. With the L4S architecture, all Internet applications could (but do not have to) transition away from congestion control algorithms that cause substantial queuing delay and instead adopt a new class of congestion controls that can seek capacity with very little queuing. These are aided by a modified form of Explicit Congestion Notification (ECN) from the network. With this new architecture, applications can have both low latency and high throughput.</t>
      <t>The architecture primarily concerns incremental deployment. It defines mechanisms that allow the new class of L4S congestion controls to coexist with 'Classic' congestion controls in a shared network. The aim is for L4S latency and throughput to be usually much better (and rarely worse) while typically not impacting Classic performance.</t>
    </abstract>
  </front>
  <seriesInfo name='RFC' value='9330'/>
  <seriesInfo name='DOI' value='10.17487/RFC9330'/>
</reference>




    </references>


<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>TODO Acknowledgments.</t>

</section>


  </back>


</rfc>

