[P4-dev] Priority Queueing in SimpleSwitch

Antonin Bas antonin at barefootnetworks.com
Mon Mar 19 17:50:49 EDT 2018


Hi Edgar,

I wasn't able to reproduce this issue. I ran a little experiment myself,
using tc from Mininet to limit bandwidth to 10 Mbps per interface. The
client with high priority got a bandwidth of 8.41 Mbps, while the client
with low priority got a bandwidth of 1.77 Mbps.

I made a minor change to the 1sw_demo.py script included in the bmv2 repo:
diff --git a/mininet/1sw_demo.py b/mininet/1sw_demo.py
index 6472c19..eec16af 100755
--- a/mininet/1sw_demo.py
+++ b/mininet/1sw_demo.py
@@ -19,6 +19,7 @@ from mininet.net import Mininet
 from mininet.topo import Topo
 from mininet.log import setLogLevel, info
 from mininet.cli import CLI
+from mininet.link import TCLink

 from p4_mininet import P4Switch, P4Host

@@ -57,7 +58,7 @@ class SingleSwitchTopo(Topo):
             host = self.addHost('h%d' % (h + 1),
                                 ip = "10.0.%d.10/24" % h,
                                 mac = '00:04:00:00:00:%02x' %h)
-            self.addLink(host, switch)
+            self.addLink(host, switch, bw=10)

 def main():
     num_hosts = args.num_hosts
@@ -70,6 +71,7 @@ def main():
                             num_hosts)
     net = Mininet(topo = topo,
                   host = P4Host,
+                  link=TCLink,
                   switch = P4Switch,
                   controller = None)
     net.start()

I used the attached P4_14 program and the attached commands.txt file. I'm
also attaching the bmv2 JSON file so that you don't have to compile the
program yourself.
Here are the commands I used next:
- sudo python 1sw_demo.py --behavioral-exe <path to simple_switch>
--num-hosts 3 --json simple_router.json
- simple_switch_CLI < commands.txt

I then started an iperf server on h3 and iperf clients on h1 and h2.
Traffic originating from h1 has priority 0 (low priority) and achieved a
significantly lower bandwidth.
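For intuition, strict priority scheduling can be sketched in a few lines of Python. This is a toy model, not bmv2's actual queueing code: the dequeue side always drains the highest-priority non-empty queue first, so low-priority traffic only gets the leftover bandwidth.

```python
from collections import deque

class StrictPriorityQueues:
    """Toy model of strict priority queueing: a higher numeric
    priority wins (priority 7 outranks 0, as in this experiment)."""

    def __init__(self, num_priorities=8):
        self.queues = [deque() for _ in range(num_priorities)]

    def enqueue(self, pkt, priority):
        self.queues[priority].append(pkt)

    def dequeue(self):
        # Always serve the highest-priority non-empty queue first.
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None  # all queues empty

spq = StrictPriorityQueues()
spq.enqueue("h1-pkt", 0)  # low-priority client
spq.enqueue("h2-pkt", 7)  # high-priority client
print(spq.dequeue())  # -> h2-pkt, despite arriving second
```

Under sustained load from the priority-7 client, the priority-0 queue is only served when the high-priority queue is empty, which matches the bandwidth split observed above.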
I encourage you to try with my files and see if you observe the same
difference in bandwidth. If you don't, it is probably an issue with your P4
program, your CLI commands, or the P4 compiler you are using.

Note that I also compiled bmv2 without the logging macros to ensure that
the bmv2 packet processing throughput would not be an issue:
./configure --disable-elogger --disable-logging-macros 'CFLAGS=-g -O2'
'CXXFLAGS=-g -O2'

I also tried the experiment with 100 Mbps interfaces for 60 seconds for the
thrill of it, and observed 73 Mbps and 23 Mbps. If I increase the bandwidth
to 500 Mbps per link, both clients achieve around 200 Mbps, but I blame that
on my laptop being slow. I don't think the priority queue implementation
will give you great results under heavy load.


On Fri, Mar 16, 2018 at 12:40 PM, Costa Molero Edgar <cedgar at ethz.ch> wrote:

> I am trying to do some priority queueing with simple switch.
>
> I think I did what is needed to enable strict priority queueing in the
> simple switch, but I do not see the behavior I was expecting:
>
> 1) Uncommented in simple_switch.h the line that enables multiple queues
> at compile time (#define SSWITCH_PRIORITY_QUEUEING_ON).
> 2) Added the intrinsic metadata to v1model.p4
>
>     //Priority queueing
>     @alias("queueing_metadata.qid")           bit<8>  qid;
>     @alias("intrinsic_metadata.priority")     bit<3> priority;
>
>
> 3) Testing with a very simple topology: 3 hosts connected to a switch in
> a star topology, with 2 of them sending to the third host. Using traffic
> control I set a rate limit of 20 Mbps on all the interfaces.
>
> In the ingress pipeline I just do this very simple thing (h1 has
> priority 0, and h2 a priority of 7):
>
> if (hdr.ipv4.srcAddr == 0x0a000101){
>     standard_metadata.priority = (bit<3>)0;
> }
> else if (hdr.ipv4.srcAddr == 0x0a000102){
>     standard_metadata.priority = (bit<3>)7;
> }
>
>
> At the egress I set the ToS field of the IP packet to the ID of the queue
> that was used, to verify that priority queueing was indeed happening.
>
> hdr.ipv4.tos = standard_metadata.qid;
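As a quick offline check of that marking (a sketch in Python; the helper name and the synthetic header bytes below are ours, not from bmv2), the qid written into ToS can be read back from the second byte of the IPv4 header of packets captured at h3:

```python
import struct

def tos_from_ipv4_header(raw: bytes) -> int:
    """Extract the ToS byte (offset 1) of a raw IPv4 header, e.g.
    from a capture taken at h3. With `hdr.ipv4.tos =
    standard_metadata.qid;` in the egress, this byte reveals which
    egress queue the packet went through."""
    version_ihl, tos = struct.unpack_from("!BB", raw, 0)
    assert version_ihl >> 4 == 4, "not an IPv4 header"
    return tos

# Minimal 20-byte IPv4 header with tos=7; all other fields are
# zeroed for illustration (a real capture would populate them).
hdr = bytes([0x45, 0x07]) + bytes(18)
print(tos_from_ipv4_header(hdr))  # -> 7
```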
>
> Results:
>
> I send two TCP flows with iperf, h1->h3 and h2->h3, and both get
> ~10 Mbps. Am I doing something wrong?
>
> ——-
>
> I have a second question: I saw that the CLI has a command to set a
> packets-per-second rate and a queue depth. However, they work at the
> port level. Is there a way to set them at the per-queue level? I have
> been checking the source code and the CLI ends up using:
>
> int
> SimpleSwitch::set_egress_queue_rate(int port, const uint64_t rate_pps) {
>   egress_buffers.set_rate(port, rate_pps);
>   return 0;
> }
>
> which uses set_rate from queueing.h.
>
> There you can find a set_rate overload with a priority
> parameter: void set_rate(size_t queue_id, size_t priority, uint64_t pps).
> Would it work if I modified simple_switch.cpp and added a function that
> uses set_rate with priority? I have seen that you use some autogenerated
> Python code; is recompiling the entire thing enough to pick that up?
>
> I think a good way to go would be to add a third, optional argument in
> sswitch_CLI.py in the do_set_queue_rate function. For example:
>
> def do_set_queue_rate(self, line):
>     "Set rate of one / all egress queue(s): set_queue_rate <rate_pps> [<egress_port>] [<priority>]"
>     args = line.split()
>     rate = int(args[0])
>     if len(args) > 2:
>         port = int(args[1])
>         priority = int(args[2])
>         self.sswitch_client.set_egress_queue_rate_priority(port, rate, priority)
>     elif len(args) > 1:
>         port = int(args[1])
>         self.sswitch_client.set_egress_queue_rate(port, rate)
>     else:
>         self.sswitch_client.set_all_egress_queue_rates(rate)
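The dispatch logic above can be factored into a pure function to unit-test the argument handling. Note that set_egress_queue_rate_priority is the hypothetical new Thrift method this change presupposes, not an existing bmv2 API:

```python
def parse_set_queue_rate(line):
    """Parse 'set_queue_rate <rate_pps> [<egress_port>] [<priority>]'
    the same way the proposed do_set_queue_rate would, returning the
    name of the (hypothetical) client call and its arguments."""
    args = line.split()
    rate = int(args[0])
    if len(args) > 2:
        # rate + port + priority -> per-priority-queue rate
        return ("set_egress_queue_rate_priority",
                int(args[1]), rate, int(args[2]))
    elif len(args) > 1:
        # rate + port -> per-port rate (existing behavior)
        return ("set_egress_queue_rate", int(args[1]), rate)
    # rate only -> all ports (existing behavior)
    return ("set_all_egress_queue_rates", rate)

print(parse_set_queue_rate("1000 3 7"))
# -> ('set_egress_queue_rate_priority', 3, 1000, 7)
```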
>
>
> Kind regards,
> Edgar
>
> _______________________________________________
> P4-dev mailing list
> P4-dev at lists.p4.org
> http://lists.p4.org/mailman/listinfo/p4-dev_lists.p4.org
>



-- 
Antonin
-------------- next part: commands.txt --------------
table_set_default send_frame _drop
table_set_default forward _drop
table_set_default ipv4_lpm _drop
table_add send_frame rewrite_mac 1 => 00:aa:bb:00:00:00
table_add send_frame rewrite_mac 2 => 00:aa:bb:00:00:01
table_add send_frame rewrite_mac 3 => 00:aa:bb:00:00:02
table_add forward set_dmac 10.0.0.10 => 00:04:00:00:00:00
table_add forward set_dmac 10.0.1.10 => 00:04:00:00:00:01
table_add forward set_dmac 10.0.2.10 => 00:04:00:00:00:02
table_add ipv4_lpm set_nhop 10.0.0.10/32 => 10.0.0.10 1
table_add ipv4_lpm set_nhop 10.0.1.10/32 => 10.0.1.10 2
table_add ipv4_lpm set_nhop 10.0.2.10/32 => 10.0.2.10 3
table_add set_pri_t set_pri 10.0.0.10 => 0
table_add set_pri_t set_pri 10.0.1.10 => 7
table_add set_pri_t set_pri 10.0.2.10 => 7
-------------- next part --------------
A non-text attachment was scrubbed...
Name: simple_router.p4
Type: application/octet-stream
Size: 3499 bytes
Desc: not available
URL: <http://lists.p4.org/pipermail/p4-dev_lists.p4.org/attachments/20180319/12cb1d08/attachment-0001.p4>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: simple_router.json
Type: application/json
Size: 23695 bytes
Desc: not available
URL: <http://lists.p4.org/pipermail/p4-dev_lists.p4.org/attachments/20180319/12cb1d08/attachment-0001.json>

