From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stephen Warren
Date: Tue, 22 Sep 2015 21:17:00 -0600
Subject: [U-Boot] [PATCH] FIX: fat: Provide correct return code from disk_{read|write} to upper layers
In-Reply-To: <20150903161825.1d289255@amdc2363>
References: <1440769821-24005-2-git-send-email-l.majewski@samsung.com>
 <1441282899-13569-1-git-send-email-l.majewski@samsung.com>
 <20150903124409.GA26226@bill-the-cat>
 <20150903154051.52436d8a@amdc2363>
 <20150903161825.1d289255@amdc2363>
Message-ID: <560219AC.1070000@nvidia.com>
List-Id:
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: u-boot@lists.denx.de

On 09/03/2015 08:18 AM, Lukasz Majewski wrote:
> Hi Lukasz,
>
>> Hi Tom,
>>
>>> On Thu, Sep 03, 2015 at 02:21:39PM +0200, Lukasz Majewski wrote:
>>>
>>>> It is very common that FAT code is using the following pattern:
>>>>
>>>> 	if (disk_{read|write}() < 0)
>>>> 		return -1;
>>>>
>>>> Up till now the above code was dead, since disk_{read|write}() could
>>>> only return a value >= 0. As a result, some errors from the medium
>>>> layer (i.e. eMMC/SD) were not caught.
>>>>
>>>> The above behavior was caused by block_{read|write|erase}, declared
>>>> in struct block_dev_desc (@part.h). It returns unsigned long, where
>>>> 0 indicates an error and > 0 indicates that the medium operation was
>>>> correct.
>>>>
>>>> This patch regards 0 returned from block_{read|write|erase} as an
>>>> error when nr_blocks is greater than zero. A read/write operation
>>>> with nr_blocks=0 should return 0 and hence is not considered an
>>>> error.
>>>>
>>>> Signed-off-by: Lukasz Majewski
>>>>
>>>> Test HW: Odroid XU3 - Exynos 5433
>>>
>>> Can you pick up Stephen's FAT replacement series and see if it also
>>> fixes this problem? Thanks!
>>
>> Ok, I will test this fat implementation.
>
> I've applied v2 of this patchset on top of
> SHA1: 79c884d7e449a63fa8f07b7495f8f9873355c48f
>
> Unfortunately, the DFU tests fail on the first attempt.

I've found a couple of problems.

First up, file_fat_write() wasn't truncating the file when writing, so
the file size wasn't changing when overwriting a large file with a
small file. With this fixed, I can run the DFU tests just fine for all
the small files (<1M). I've fixed this locally and in the ff branch on
my GitHub; there's a rough sketch of the idea at the end of this mail.

Second, ff is slow.

Some random old build I had in flash on my system:

> Tegra124 (Jetson TK1) # load mmc 1:1 $loadaddr dfu1.bin
> reading dfu1.bin
> 1048576 bytes read in 95 ms (10.5 MiB/s)

With my ff branch:

> Tegra124 (Jetson TK1) # load mmc 1:1 $loadaddr dfu1.bin
> 1048576 bytes read in 5038 ms (203.1 KiB/s)

That's quite the slow-down! I believe this is causing dfu-util to time
out on the larger files (1M+). Just for functional testing, I'll try
and find a way to hack dfu-util to have a much larger timeout for the
final flush operation.

I wonder if the old FAT implementation had a disk cache (e.g. that 32K
buffer in BSS?) and we need the same for ff? There's a sketch of what I
mean at the end of this mail too. I'll try and track down why it's so
slow.

Perhaps there are other issues as yet unfound.
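
For reference, this is roughly the check that the commit message quoted
above describes, against the current block_dev_desc interface where
block_read() returns the number of blocks transferred as an unsigned
long (so it can never be negative). The wrapper name disk_read_checked()
is purely illustrative; it isn't what the patch actually adds:

#include <part.h>	/* block_dev_desc_t, lbaint_t */

static int disk_read_checked(block_dev_desc_t *dev, lbaint_t start,
			     lbaint_t blkcnt, void *buf)
{
	unsigned long n;

	if (blkcnt == 0)
		return 0;	/* a zero-length transfer is not an error */

	n = dev->block_read(dev->dev, start, blkcnt, buf);

	/*
	 * block_read() returns unsigned long, so the old
	 * "if (disk_read() < 0)" test could never fire; a return of 0
	 * with blkcnt > 0 is how the driver reports failure.
	 */
	if (n == 0)
		return -1;

	return (int)n;
}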
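
The truncation issue, in FatFs terms (assuming file_fat_write() ends up
going through f_open()/f_write()): opening with FA_CREATE_ALWAYS, or
calling f_truncate() after the final f_write(), is what guarantees that
overwriting a large file with a small one actually shrinks it. This is
only an illustration of the idea, not the exact change in my branch:

#include <ff.h>

static int fat_overwrite(const char *path, const void *buf, UINT len)
{
	FIL fil;
	UINT written;
	FRESULT res;

	/* FA_CREATE_ALWAYS truncates an existing file to zero length */
	res = f_open(&fil, path, FA_WRITE | FA_CREATE_ALWAYS);
	if (res != FR_OK)
		return -1;

	res = f_write(&fil, buf, len, &written);
	if (res != FR_OK || written != len) {
		f_close(&fil);
		return -1;
	}

	/* f_close() flushes the cached data and directory entry */
	return f_close(&fil) == FR_OK ? 0 : -1;
}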
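
And the sort of caching I have in mind for the diskio glue: a single
32KiB read window, matching the buffer the old code kept in BSS. Here
raw_disk_read() stands in for whatever uncached media access the glue
does today, the cache isn't per-drive, nothing clamps at the end of the
device, and disk_write() would have to invalidate it - so treat this
strictly as a sketch:

#include <diskio.h>	/* FatFs low-level disk API: DRESULT, BYTE, ... */
#include <string.h>

#define CACHE_SECTORS	64	/* 64 * 512B = 32KiB, like the old buffer */
#define SECTOR_SIZE	512

static BYTE cache_buf[CACHE_SECTORS * SECTOR_SIZE];
static DWORD cache_start;
static UINT cache_count;	/* sectors currently cached; 0 = empty */

/* Hypothetical: the existing uncached media read */
DRESULT raw_disk_read(BYTE pdrv, BYTE *buff, DWORD sector, UINT count);

DRESULT disk_read(BYTE pdrv, BYTE *buff, DWORD sector, UINT count)
{
	/* Hit: the request lies entirely inside the cached window */
	if (cache_count && sector >= cache_start &&
	    sector + count <= cache_start + cache_count) {
		memcpy(buff, cache_buf + (sector - cache_start) * SECTOR_SIZE,
		       count * SECTOR_SIZE);
		return RES_OK;
	}

	/* Large requests bypass the cache */
	if (count > CACHE_SECTORS)
		return raw_disk_read(pdrv, buff, sector, count);

	/* Miss: refill the whole window starting at 'sector' */
	if (raw_disk_read(pdrv, cache_buf, sector, CACHE_SECTORS) != RES_OK)
		return RES_ERROR;
	cache_start = sector;
	cache_count = CACHE_SECTORS;
	memcpy(buff, cache_buf, count * SECTOR_SIZE);
	return RES_OK;
}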