p4-dev@lists.p4.org

list for questions/discussion of p4 programs and tools

Increase in deq_timedelta when enq_qdepth increases

Bibek Shrestha
Sun, Jan 3, 2021 11:08 PM

Hi all,

According to my understanding, deq_timedelta should increase whenever
enq_qdepth increases, since the packet starts to experience queueing. When I
tried to see this in action with P4 and BMv2, I found that whenever
enq_qdepth increases, deq_timedelta instead decreases, producing results just
the opposite of my expectation. I tried this multiple times but got the same
result. Is my understanding correct? If so, is there any explanation for why
I am getting such a result?

Thanks
B.
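
For reference, a minimal v1model sketch of how these two values are usually observed: only standard_metadata.enq_qdepth and standard_metadata.deq_timedelta come from v1model (and they are populated only in the egress pipeline); the header, struct, and control names below are invented for illustration, and the parser, deparser (which would have to emit the report header), checksum controls, and V1Switch instantiation are omitted.

    #include <core.p4>
    #include <v1model.p4>

    // Hypothetical per-packet report header (19 + 32 + 13 = 64 bits, byte aligned).
    header queue_report_t {
        bit<19> enq_qdepth;     // queue depth when this packet was enqueued
        bit<32> deq_timedelta;  // time spent in the queue (microseconds on simple_switch)
        bit<13> pad;
    }

    struct headers_t  { queue_report_t queue_report; }
    struct metadata_t { }

    control MyEgress(inout headers_t hdr,
                     inout metadata_t meta,
                     inout standard_metadata_t standard_metadata) {
        apply {
            // Attach this packet's own queueing measurements to the packet.
            hdr.queue_report.setValid();
            hdr.queue_report.enq_qdepth    = standard_metadata.enq_qdepth;
            hdr.queue_report.deq_timedelta = standard_metadata.deq_timedelta;
            hdr.queue_report.pad           = 0;
        }
    }

Exporting both values per packet like this makes it possible to check whether the large enq_qdepth samples and the small deq_timedelta samples actually belong to the same packets, instead of comparing two separately computed aggregates.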

Andy Fingerhut
Mon, Jan 4, 2021 12:57 AM

First, if the very first packet has deq_timedelta 0, or very close to it, I
would be very surprised if it goes down from 0, so I am guessing you are
describing some longer term behavior, not what happens on the second or
third packet.

Second, the packet scheduling behavior of queues in BMv2 is not
necessarily "schedule packets at a constant bit rate for each output port"
the way it would typically be on a physical switch ASIC.  There are options
for configuring the maximum packet rate that a particular queue will be
scheduled at, but that option ignores the lengths of individual packets.

In general, BMv2 is not intended for performance simulations of networks of
physical links and switches.  It could be modified to make it more suitable
for this purpose, but it would be a non-trivial amount of work and testing
to get it to that point.

Andy

Bibek Shrestha
Mon, Jan 4, 2021 1:35 AM

Thank you for the response. In my case, I track the average deq_timedelta
every 100 ms and also the enq_qdepth over the same period. By default the
enq_qdepth is always zero; let's suppose the average deq_timedelta is X (it
is greater than zero). Then I try to overload the switch by initiating
transfers between hosts. At that point, I see enq_qdepth start to take values
greater than zero, which suggests that packets are getting queued. I also
measure the average deq_timedelta over this period and call it Y. I would
assume that the amount of time a packet lives in the queue increases as the
queue becomes more occupied, but that is not what I see: Y < X, which is the
opposite of what I expected.

In my case, I have tweaked the BMv2 source to sync its time with the system
clock rather than starting from 0.

I understand your point that BMv2 is not intended for performance
simulations, but I was still assuming that deq_timedelta should generally
increase if packets are queued more often. From your answer, it seems making
such an assumption would be a mistake in this case. Am I right?

Thank you
B.
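
For reference, one common way to collect such a per-interval average is to accumulate a running sum and a sample count in v1model registers and have a control-plane script poll and reset them every 100 ms. The sketch below only illustrates that idea under assumed names (sum_t, count_t, the register names, MyEgress); it is not the program being discussed, and the parser, deparser, checksum controls, and V1Switch instantiation are again omitted.

    #include <core.p4>
    #include <v1model.p4>

    typedef bit<48> sum_t;    // wide enough to accumulate many 32-bit samples
    typedef bit<32> count_t;

    struct headers_t  { }
    struct metadata_t { }

    control MyEgress(inout headers_t hdr,
                     inout metadata_t meta,
                     inout standard_metadata_t standard_metadata) {
        // Running sum and sample count since the last control-plane poll.
        register<sum_t>(1)   deq_delta_sum;
        register<count_t>(1) deq_delta_count;

        apply {
            sum_t   sum;
            count_t cnt;
            deq_delta_sum.read(sum, 0);
            deq_delta_count.read(cnt, 0);
            deq_delta_sum.write(0, sum + (sum_t)standard_metadata.deq_timedelta);
            deq_delta_count.write(0, cnt + 1);
            // Every 100 ms the controller reads both registers, computes
            // average = sum / count for the interval, and resets both to zero.
        }
    }

With interval averages, the result also depends on which packets happen to fall into each 100 ms window, so a per-packet export of enq_qdepth and deq_timedelta (as sketched after the first message above) can be easier to interpret when the two metrics seem to disagree.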

Andy Fingerhut
Mon, Jan 4, 2021 1:49 AM

I suspect that there is a perfectly reasonable explanation for what you are
seeing, but that it might take several hours, or days, to determine exactly
which aspect of BMv2's behavior is causing it.

I have seen behavior where BMv2 gets into a mode where it schedules packets
from one of its queues to an "output queue" for a particular output port very
quickly, much faster than the link can drain them, and then stops scheduling
packets for that output port for a long time. I believe this was because the
software implementation using veth links effectively allows packets to be
enqueued for that port at nearly arbitrarily fast rates, and then blocks
further writes of packets to the veth interface until some software queue in
the kernel drains below some threshold. That kernel queue is completely
outside of BMv2, and is not visible to it in either the enq_qdepth or
deq_timedelta statistics that you are observing. Maybe that explains what you
are seeing, or maybe it is something else.

The veth links DO NOT behave like constant bit rate links between
switches.  I do not have a good characterization of precisely how they do
behave, but even if I did I know it would be very unlike a constant bit
rate link.

Andy

Bibek Shrestha
Mon, Jan 4, 2021 7:21 PM

Thank you for your explanation. I see that there is no way right now to
correct these results in the existing BMv2 implementation. I am trying to
work with NS3 for the experiments now. I was looking for BMv2/P4-based NS3
simulators but was not able to find a stable one that works. Some repos I
found were not maintained and had no documentation to work with. Can you
point me in the right direction?

Thanks
B.

Andy Fingerhut
Mon, Jan 4, 2021 9:47 PM

If you are trying to do performance simulations of a network that should be
at least semi-accurate with regard to the timing of packets traversing
network links, using realistic packet scheduling algorithms in switches,
etc., then that is what I thought projects like NS3 are for, although I have
not used NS3 myself.

I do not know of any working projects that combine a P4-programmable device
and NS3.  I haven't looked for one, either, so my not knowing about one
doesn't mean it doesn't exist.  Hopefully someone else reading this might
know of one.

Andy

Raj Joshi
Tue, Jan 5, 2021 7:40 AM

Hi Bibek,

This paper is relevant:
https://conferences.sigcomm.org/sosr/2018/sosr18-finals/sosr18-final13.pdf

The source code seems to be here: https://ns-4.github.io/

-- Raj

Bibek Shrestha
Wed, Jan 6, 2021 3:47 AM

I checked that repo and found broken links, so I am not sure whether that
project is still maintained. Thank you for mentioning this resource.

Thanks
B.
