From: David Marchand
Subject: Re: DPDK 2.2 roadmap
Date: Tue, 15 Sep 2015 11:16:17 +0200
To: Thomas Monjalon
Cc: "dev@dpdk.org"
In-Reply-To: <1882381.9qlZTmz9zB@xps13>

Hello all,

My turn.
As far as 2.2 is concerned, I have some fixes/changes waiting to go upstream:
- allow default mac removal (to be discussed)
- kvargs api updates / cleanup (no change on abi, I would say)
- vlan filtering api fixes and associated ixgbevf/igbvf fixes (might have an
  impact on abi)
- ethdev fixes wrt the hotplug framework
- minor fixes in testpmd

After this, depending on the schedule (so most likely for 2.3 or later), I
have some ideas on:
- cleanup for hotplug and maybe discussions on pci bind/unbind operations
- a little tool that reports information/capabilities on drivers (à la
  modinfo)
- continued work on hotplug

By the way, I have some questions for the community:

- I noticed that with hotplug support, testpmd has become *really* hungry
for mbufs and memory.
The problem comes from the assumption that we must have enough memory/mbufs
for the maximum number of ports that might become available, even though
they are not present in the most common test setups.
One solution might be to rework the way mbufs are reserved:
  * either we let testpmd start with a limited mbuf count, the way it worked
    before edab33b1 ("app/testpmd: support port hotplug"); starting a port
    can then fail if not enough mbufs are available for it,
  * or we create one mempool per port, populated at port init / close (?)
    (rough sketch at the end of this mail).
Any volunteers to rework this? Other ideas?

- Looking at a patch from Chao
(http://dpdk.org/ml/archives/dev/2015-August/022819.html), I think we need
to rework the way numa nodes are handled in dpdk.
The problem is that we rely on static arrays for some per-socket resources.
I suppose this was designed with the idea that "physical" socket indexes are
contiguous, but this is not true on systems running power8 bare metal, where
numa indexes can be 0, 1, 16, 17 on quad-node servers.
We could go with a mapping array (populated at the same time cpus are
discovered), then use this mapping array everywhere and preserve all apis,
but this might not be that trivial (rough sketch at the end of this mail).
Volunteers? Ideas?

- Finally, looking at the eal, there are still some cleanups to do.
More specifically, are there any users of the ivshmem feature in dpdk?
I see little value in keeping ivshmem in the eal (well, maybe because I
don't use it) as it relies on hacks.
So I see two options:
  * if someone still wants it to work, we need a good rework to get rid of
    those hacks hidden under #ifdef in the eal, and the special configuration
    files can disappear,
  * or, if nobody complains, we can schedule its deprecation and removal.

Thanks.

-- 
David Marchand
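
Rough, untested sketch of the per-port mempool idea above. This is not the
actual testpmd code: the names (sketch_port_pool_create, SKETCH_NB_MBUF, ...)
and the sizing numbers are made up, and it only assumes the existing
rte_pktmbuf_pool_create() / rte_eth_dev_socket_id() apis:

    #include <stdint.h>
    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define SKETCH_NB_MBUF    8192   /* made-up per-port mbuf budget */
    #define SKETCH_CACHE_SIZE 256

    /* one pool slot per possible port instead of one big shared pool
     * sized for RTE_MAX_ETHPORTS up front */
    static struct rte_mempool *port_pools[RTE_MAX_ETHPORTS];

    /* called when a port is attached/started; fails if the pool cannot
     * be allocated, which replaces the current "reserve everything at
     * startup" behaviour */
    static int
    sketch_port_pool_create(uint8_t port_id)
    {
        char name[RTE_MEMPOOL_NAMESIZE];
        int socket = rte_eth_dev_socket_id(port_id);

        if (socket < 0)
            socket = (int)rte_socket_id();

        snprintf(name, sizeof(name), "mbuf_pool_p%u", (unsigned)port_id);
        port_pools[port_id] = rte_pktmbuf_pool_create(name, SKETCH_NB_MBUF,
            SKETCH_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, socket);

        return port_pools[port_id] == NULL ? -1 : 0;
    }

    /* note: mempools cannot be freed in 2.x, so on port close the pool
     * would have to be kept around and reused if the port is re-attached */
    static struct rte_mempool *
    sketch_port_pool_get(uint8_t port_id)
    {
        return port_pools[port_id];
    }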
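
And a rough sketch of the socket mapping array idea. None of these names
exist in the eal today; they are only here to illustrate mapping sparse
physical node ids (0, 1, 16, 17) to a contiguous internal index, so the
existing per-socket arrays and apis can stay as they are:

    #include <stdint.h>

    #define SKETCH_MAX_NUMA_NODES 8   /* placeholder for the real config knob */

    /* physical node id seen during cpu discovery, indexed by the internal,
     * contiguous socket index used by the per-socket resource arrays */
    static unsigned socket_id_map[SKETCH_MAX_NUMA_NODES];
    static unsigned nb_sockets;

    /* called while parsing sysfs at cpu discovery time; returns the
     * internal index for a physical node id, registering it if new */
    static unsigned
    sketch_socket_register(unsigned phys_id)
    {
        unsigned i;

        for (i = 0; i < nb_sockets; i++)
            if (socket_id_map[i] == phys_id)
                return i;              /* already known */

        socket_id_map[nb_sockets] = phys_id;
        return nb_sockets++;           /* new internal index */
    }

    /* everything that indexes per-socket arrays keeps using the internal
     * index, so rte_socket_id() and friends can preserve their api */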