* [Xenomai-help] exception 768
@ 2006-11-13 12:13 Daniel Schnell
  2006-11-17 17:32 ` Philippe Gerum
  0 siblings, 1 reply; 15+ messages in thread
From: Daniel Schnell @ 2006-11-13 12:13 UTC (permalink / raw)
  To: xenomai

[-- Attachment #1: Type: text/plain, Size: 799 bytes --]

Hi,
 
I updated to the newest version in svn and applied the Xenomai patch to
my Linux kernel.
 
The first time I ran the application it ran for much longer, and I
almost thought everything was working smoothly now. But after a while it
stopped responding. There was no apparent crash message or Oops on the
console, just some strange exception messages in /var/log/messages.
 
Please find attached the messages and the kernel symbols. It seems that
a kernel exception happens inside __copy_tofrom_user().

Our application's linker map is roughly 1 MB, so I did not attach it
here. Some of the addresses I could not resolve to anything inside the
kernel or our application, so probably some other shared objects like
glibc are involved.
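
For reference, below is a minimal sketch of how the kernel-space fault
addresses can be matched against the attached symbol list (it assumes
the two-column "<address> <symbol>" format of attachment #3; the local
file name ksyms.exception768 is just an example). Both 0xc0011514 and
0xc00115b0 fall between __copy_tofrom_user and __clear_user. If I read
the vector right, #768 is 0x300, i.e. the PowerPC data storage
exception, which would fit a fault taken inside __copy_tofrom_user().

#!/usr/bin/env python
# Sketch: ksymoops-style lookup of addresses against a flat ksyms dump.
# Assumes lines of the form "<hex address> <symbol name>".
import sys

def load_ksyms(path):
    syms = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                syms.append((int(parts[0], 16), parts[1]))
    syms.sort()
    return syms

def resolve(addr, syms):
    # Find the nearest symbol at or below the given address.
    best = None
    for base, name in syms:
        if base <= addr:
            best = (base, name)
        else:
            break
    return "%s+0x%x" % (best[1], addr - best[0]) if best else "<unknown>"

if __name__ == "__main__":
    syms = load_ksyms("ksyms.exception768")   # local copy of attachment #3
    for a in sys.argv[1:] or ["0xc0011514", "0xc00115b0"]:
        print("%s -> %s" % (a, resolve(int(a, 16), syms)))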


Any ideas?


Best regards,

Daniel Schnell.

[-- Attachment #2: messages.exception768 --]
[-- Type: application/octet-stream, Size: 22163 bytes --]

Mar 11 00:18:19 V38B_8 syslogd 1.4.1: restart.
Mar 11 00:18:19 V38B_8 syslog: syslogd startup succeeded
Mar 11 00:18:19 V38B_8 syslog: klogd startup succeeded
Mar 11 00:18:19 V38B_8 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Mar 11 00:18:19 V38B_8 kernel: Memory BAT mapping: BAT2=256Mb, BAT3=0Mb, residual: 0Mb
Mar 11 00:18:19 V38B_8 kernel: Linux version 2.4.25 (root@marel2487.marel.net) (gcc version 4.0.0 (DENX ELDK 4.0 4.0.0)) #1 Mon Nov 13 10:39:20 GMT 2006
Mar 11 00:18:19 V38B_8 kernel: On node 0 totalpages: 65536
Mar 11 00:18:19 V38B_8 kernel: zone(0): 65536 pages.
Mar 11 00:18:19 V38B_8 kernel: zone(1): 0 pages.
Mar 11 00:18:19 V38B_8 kernel: zone(2): 0 pages.
Mar 11 00:18:19 V38B_8 kernel: Kernel command line: root=/dev/nfs rw nfsroot=10.100.11.113:/opt/rootfs ip=10.100.99.8:10.100.11.113:10.100.254.254:255.255.0.0:V38B_8:eth0:off panic=1
Mar 11 00:18:19 V38B_8 kernel: I-pipe 1.2-01: pipeline enabled.
Mar 11 00:18:19 V38B_8 kernel: Calibrating delay loop... 263.78 BogoMIPS
Mar 11 00:18:19 V38B_8 kernel: Memory: 255760k available (1872k kernel code, 836k data, 80k init, 0k highmem)
Mar 11 00:18:19 V38B_8 kernel: Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
Mar 11 00:18:19 V38B_8 kernel: Inode cache hash table entries: 16384 (order: 5, 131072 bytes)
Mar 11 00:18:19 V38B_8 kernel: Mount cache hash table entries: 512 (order: 0, 4096 bytes)
Mar 11 00:18:19 V38B_8 kernel: Buffer cache hash table entries: 16384 (order: 4, 65536 bytes)
Mar 11 00:18:19 V38B_8 kernel: Page-cache hash table entries: 65536 (order: 6, 262144 bytes)
Mar 11 00:18:19 V38B_8 kernel: POSIX conformance testing by UNIFIX
Mar 11 00:18:19 V38B_8 kernel: PCI: Probing PCI hardware
Mar 11 00:18:19 V38B_8 kernel: Linux NET4.0 for Linux 2.4
Mar 11 00:18:19 V38B_8 kernel: Based upon Swansea University Computer Society NET3.039
Mar 11 00:18:19 V38B_8 kernel: Initializing RT netlink socket
Mar 11 00:18:19 V38B_8 kernel: Starting kswapd
Mar 11 00:18:19 V38B_8 kernel: Journalled Block Device driver loaded
Mar 11 00:18:19 V38B_8 kernel: JFFS2 version 2.2. (C) 2001-2003 Red Hat, Inc.
Mar 11 00:18:19 V38B_8 kernel: i2c-core.o: i2c core module version 2.6.1 (20010830)
Mar 11 00:18:19 V38B_8 kernel: i2c-dev.o: i2c /dev entries driver module version 2.6.1 (20010830)
Mar 11 00:18:19 V38B_8 kernel: pty: 256 Unix98 ptys configured
Mar 11 00:18:19 V38B_8 kernel: ttyS0 on PSC1
Mar 11 00:18:19 V38B_8 random: Initializing random number generator:  succeeded
Mar 11 00:18:19 V38B_8 kernel: ttyS1 on PSC2
Mar 11 00:18:19 V38B_8 kernel: ttyS2 on PSC3
Mar 11 00:18:19 V38B_8 kernel: PCF8563 Real-Time Clock Driver $Revision: 1.3 $ wd@denx.de
Mar 11 00:18:19 V38B_8 kernel: RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
Mar 11 00:18:19 V38B_8 kernel: loop: loaded (max 8 devices)
Mar 11 00:18:19 V38B_8 kernel: Uniform Multi-Platform E-IDE driver Revision: 7.00beta4-2.4
Mar 11 00:18:19 V38B_8 kernel: ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
Mar 11 00:18:19 V38B_8 kernel: Port Config is: 0x91050404
Mar 11 00:18:19 V38B_8 kernel: ipb=66MHz, set clock period to 15
Mar 11 00:18:19 V38B_8 kernel: GPIO config: 91050404
Mar 11 00:18:20 V38B_8 kernel: ATA invalid: 00800000
Mar 11 00:18:20 V38B_8 kernel: ATA hostcnf: 03000000
Mar 11 00:18:20 V38B_8 kernel: ATA pio1   : 100a0a00
Mar 11 00:18:20 V38B_8 kernel: ATA pio2   : 02040600
Mar 11 00:18:20 V38B_8 kernel: XLB Arb cnf: 0000a006
Mar 11 00:18:20 V38B_8 kernel: mpc5xxx_ide: Setting up IDE interface ide0...
Mar 11 00:18:20 V38B_8 kernel: Probing IDE interface ide0...
Mar 11 00:18:20 V38B_8 kernel: SCSI subsystem driver Revision: 1.00
Mar 11 00:18:20 V38B_8 kernel: kmod: failed to exec /sbin/modprobe -s -k scsi_hostadapter, errno = 2
Mar 11 00:18:20 V38B_8 kernel: NS4 Bank 0: Found 1 x16 devices at 0x0 in 8-bit bank
Mar 11 00:18:20 V38B_8 kernel:  Amd/Fujitsu Extended Query Table at 0x0040
Mar 11 00:18:20 V38B_8 kernel: NS4 Bank 0: CFI does not contain boot bank location. Assuming top.
Mar 11 00:18:20 V38B_8 kernel: number of CFI chips: 1
Mar 11 00:18:20 V38B_8 kernel: cfi_cmdset_0002: Disabling erase-suspend-program due to code brokenness.
Mar 11 00:18:20 V38B_8 kernel: NS4 flash bank 0: Using static image partition definition
Mar 11 00:18:20 V38B_8 kernel: Creating 5 MTD partitions on "NS4 Bank 0":
Mar 11 00:18:20 V38B_8 kernel: 0x00000000-0x00100000 : "U-Boot"
Mar 11 00:18:20 V38B_8 kernel: 0x00100000-0x00200000 : "kernel"
Mar 11 00:18:20 V38B_8 kernel: 0x00200000-0x00500000 : "initrd"
Mar 11 00:18:20 V38B_8 kernel: 0x00500000-0x00b00000 : "jffs2"
Mar 11 00:18:20 V38B_8 kernel: 0x00b00000-0x01000000 : "Spare"
Mar 11 00:18:20 V38B_8 kernel: rtc: unable to get misc minor
Mar 11 00:18:20 V38B_8 kernel: usb.c: registered new driver usbdevfs
Mar 11 00:18:20 V38B_8 kernel: usb.c: registered new driver hub
Mar 11 00:18:20 V38B_8 kernel: host/usb-ohci.c: USB OHCI at membase 0xf0001000, IRQ 44
Mar 11 00:18:20 V38B_8 kernel: host/usb-ohci.c: usb-0, Built-In ohci
Mar 11 00:18:20 V38B_8 kernel: usb.c: new USB bus registered, assigned bus number 1
Mar 11 00:18:20 V38B_8 kernel: hub.c: USB hub found
Mar 11 00:18:20 V38B_8 kernel: hub.c: 1 port detected
Mar 11 00:18:20 V38B_8 portmap: portmap startup succeeded
Mar 11 00:18:20 V38B_8 portmap[287]: user rpc not found, reverting to user bin
Mar 11 00:18:20 V38B_8 kernel: Initializing USB Mass Storage driver...
Mar 11 00:18:20 V38B_8 kernel: usb.c: registered new driver usb-storage
Mar 11 00:18:20 V38B_8 kernel: USB Mass Storage support registered.
Mar 11 00:18:20 V38B_8 kernel: i2c-pm520.o: I2C module #2 installed
Mar 11 00:18:20 V38B_8 kernel: I-pipe: Domain Xenomai registered.
Mar 11 00:18:21 V38B_8 kernel: Xenomai: hal/powerpc started.
Mar 11 00:18:21 V38B_8 kernel: Xenomai: real-time nucleus v2.3-rc1 (Baroque) loaded.
Mar 11 00:18:21 V38B_8 kernel: Xenomai: starting native API services.
Mar 11 00:18:21 V38B_8 kernel: Xenomai: starting POSIX services.
Mar 11 00:18:21 V38B_8 kernel: Xenomai: starting RTDM services.
Mar 11 00:18:21 V38B_8 kernel: RT-Socket-CAN 0.20.2 - (C) 2006 RT-Socket-CAN Development Team
Mar 11 00:18:21 V38B_8 kernel: MSCAN: CAN 1 routed to I2C1 pins and CAN2 to TMR01 pins
Mar 11 00:18:21 V38B_8 kernel: rtcan: registered rtcan0
Mar 11 00:18:21 V38B_8 kernel: rtcan0: MSCAN driver loaded (port 1, base-addr 0xf0000900 irq 55)
Mar 11 00:18:21 V38B_8 kernel: rtcan: registered rtcan1
Mar 11 00:18:21 V38B_8 kernel: rtcan1: MSCAN driver loaded (port 2, base-addr 0xf0000980 irq 56)
Mar 11 00:18:21 V38B_8 netfs: Mounting NFS filesystems:  succeeded
Mar 11 00:18:21 V38B_8 kernel: NET4: Linux TCP/IP 1.0 for NET4.0
Mar 11 00:18:21 V38B_8 kernel: IP Protocols: ICMP, UDP, TCP, IGMP
Mar 11 00:18:21 V38B_8 kernel: IP: routing cache hash table of 2048 buckets, 16Kbytes
Mar 11 00:18:21 V38B_8 kernel: TCP: Hash tables configured (established 16384 bind 32768)
Mar 11 00:18:21 V38B_8 kernel: eth0: Phy @ 0x0, type GENERIC (0x001cc852)
Mar 10 19:18:10 V38B_8 rc.sysinit: Building the cache succeeded 
Mar 11 00:18:21 V38B_8 kernel: IP-Config: Complete:
Mar 11 00:18:21 V38B_8 netfs: Mounting other filesystems:  succeeded
Mar 10 19:18:10 V38B_8 rc.sysinit: Mounting proc filesystem:  succeeded 
Mar 11 00:18:21 V38B_8 kernel:       device=eth0, addr=10.100.99.8, mask=255.255.0.0, gw=10.100.254.254,
Mar 10 19:18:10 V38B_8 sysctl: net.ipv4.ip_forward = 0 
Mar 11 00:18:21 V38B_8 kernel:      host=V38B_8, domain=, nis-domain=(none),
Mar 10 19:18:10 V38B_8 sysctl: net.ipv4.conf.default.rp_filter = 1 
Mar 11 00:18:21 V38B_8 kernel:      bootserver=10.100.11.113, rootserver=10.100.11.113, rootpath=
Mar 10 19:18:10 V38B_8 sysctl: kernel.core_uses_pid = 1 
Mar 11 00:18:21 V38B_8 kernel: NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
Mar 10 19:18:10 V38B_8 rc.sysinit: Configuring kernel parameters:  succeeded 
Mar 11 00:18:21 V38B_8 kernel: Looking up port of RPC 100003/2 on 10.100.11.113
Mar 11 00:18:12 V38B_8 date: Tue Mar 11 00:18:12 EST 2036 
Mar 11 00:18:21 V38B_8 kernel: Looking up port of RPC 100005/1 on 10.100.11.113
Mar 11 00:18:12 V38B_8 rc.sysinit: Setting clock : Tue Mar 11 00:18:12 EST 2036 succeeded 
Mar 11 00:18:21 V38B_8 kernel: VFS: Mounted root (nfs filesystem).
Mar 11 00:18:12 V38B_8 rc.sysinit: Setting hostname V38B_8:  succeeded 
Mar 11 00:18:21 V38B_8 kernel: Freeing unused kernel memory: 80k init
Mar 11 00:18:12 V38B_8 rc.sysinit: Mounting USB filesystem:  succeeded 
Mar 11 00:18:21 V38B_8 kernel: hub.c: Cannot enable port 1 of hub 1, disabling port.
Mar 11 00:18:12 V38B_8 rc.sysinit: Activating swap partitions:  succeeded 
Mar 11 00:18:21 V38B_8 kernel: hub.c: Maybe the USB cable is bad?
Mar 11 00:18:13 V38B_8 depmod: depmod:  
Mar 11 00:18:13 V38B_8 depmod: Can't open /lib/modules/2.4.25/modules.dep for writing 
Mar 11 00:18:13 V38B_8 rc.sysinit: Finding module dependencies:  failed 
Mar 11 00:18:13 V38B_8 rc.sysinit: Checking filesystems succeeded 
Mar 11 00:18:13 V38B_8 rc.sysinit: Mounting local filesystems:  succeeded 
Mar 11 00:18:14 V38B_8 rc.sysinit: Enabling swap space:  succeeded 
Mar 11 00:18:16 V38B_8 init: Entering runlevel: 3 
Mar 11 00:18:17 V38B_8 sysctl: net.ipv4.ip_forward = 0 
Mar 11 00:18:17 V38B_8 sysctl: net.ipv4.conf.default.rp_filter = 1 
Mar 11 00:18:17 V38B_8 sysctl: kernel.core_uses_pid = 1 
Mar 11 00:18:17 V38B_8 network: Setting network parameters:  succeeded 
Mar 11 00:18:18 V38B_8 network: Bringing up loopback interface:  succeeded 
Mar 11 00:18:22 V38B_8 xinetd[311]: xinetd Version 2.3.11 started with libwrap options compiled in.
Mar 11 00:18:22 V38B_8 xinetd[311]: Started working: 1 available service
Mar 11 00:18:25 V38B_8 xinetd: xinetd startup succeeded
Nov 13 06:12:29 V38B_8 modprobe: modprobe: Can't open dependencies file /lib/modules/2.4.25/modules.dep (No such file or directory)
Nov 13 06:12:31 V38B_8 login(pam_unix)[320]: session opened for user root by LOGIN(uid=0)
Nov 13 06:12:31 V38B_8  -- root[320]: ROOT LOGIN ON console
Nov 13 06:12:47 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb6c608 (pid 350)
Nov 13 06:12:48 V38B_8 kernel: rtcan0: btr0=0x0a btr1=0x6f
Nov 13 06:12:48 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100d8bb4 (pid 350)
Nov 13 06:12:48 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100d8d68 (pid 350)
Nov 13 06:12:48 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100f0c2c (pid 350)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100dde70 (pid 350)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde7fa8 (pid 368)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde7fa8 (pid 361)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc0011514 (pid 372)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde92d0 (pid 374)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1006e1c4 (pid 374)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1007e544 (pid 374)
Nov 13 06:12:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde92d0 (pid 360)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xffb6360 (pid 371)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100e3e40 (pid 371)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb7174c (pid 371)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100a15ac (pid 371)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100a15ac (pid 371)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc0011514 (pid 371)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde7fa8 (pid 361)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde7fa8 (pid 368)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde92d0 (pid 374)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1006e1c4 (pid 374)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1007e544 (pid 374)
Nov 13 06:12:50 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde92d0 (pid 360)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10121338 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1005d0e8 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1031aa98 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10323894 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1005d72c (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100a17bc (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1008eb08 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb6c608 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb78360 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 359)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfde92d0 (pid 371)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb79e18 (pid 350)
Nov 13 06:12:51 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb79e0c (pid 350)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100a0848 (pid 350)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xffb6360 (pid 372)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb6c608 (pid 350)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10156a08 (pid 350)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb79e0c (pid 386)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100a0848 (pid 350)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100a0848 (pid 350)
Nov 13 06:12:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1005d704 (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1030503c (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x101be1a4 (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1005d72c (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1005d8fc (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x1008ed3c (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100c5780 (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 364)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb6c608 (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb6c608 (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x101373ec (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb6c608 (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10323154 (pid 350)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 371)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100692d8 (pid 387)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 362)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 376)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 374)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100dbfb0 (pid 376)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 369)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc0011514 (pid 367)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 in kernel-space at 0xc00115b0 (pid 360)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb79d10 (pid 376)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x100de0c8 (pid 376)
Nov 13 06:12:53 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x3000b358 (pid 350)
Nov 13 06:12:54 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10086d50 (pid 371)
Nov 13 06:12:54 V38B_8 last message repeated 2 times
Nov 13 06:12:54 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb71d94 (pid 371)
Nov 13 06:12:54 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10086d50 (pid 371)
Nov 13 06:12:54 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10086d50 (pid 371)
Nov 13 06:13:02 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10198080 (pid 383)
Nov 13 06:13:52 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0x10151370 (pid 384)
Nov 13 06:14:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xfb71a00 (pid 372)
Nov 13 06:14:49 V38B_8 kernel: Xenomai: Switching ApplicationExe to secondary mode after exception #768 from user-space at 0xff4d11c (pid 372)
Nov 13 06:16:45 V38B_8 kernel: xeno_rtcan: socket buffer overflow (fd=0), message discarded
Nov 13 06:17:16 V38B_8 last message repeated 155 times
Nov 13 06:18:17 V38B_8 last message repeated 309 times
Nov 13 06:19:18 V38B_8 last message repeated 310 times
Nov 13 06:20:19 V38B_8 last message repeated 309 times
Nov 13 06:21:20 V38B_8 last message repeated 310 times
Nov 13 06:22:21 V38B_8 last message repeated 310 times
Nov 13 06:23:22 V38B_8 last message repeated 309 times
Nov 13 06:24:23 V38B_8 last message repeated 310 times
Nov 13 06:25:24 V38B_8 last message repeated 309 times
Nov 13 06:26:25 V38B_8 last message repeated 311 times
Nov 13 06:27:26 V38B_8 last message repeated 309 times
Nov 13 06:28:27 V38B_8 last message repeated 310 times
Nov 13 06:29:28 V38B_8 last message repeated 309 times
Nov 13 06:30:29 V38B_8 last message repeated 310 times
Nov 13 06:31:30 V38B_8 last message repeated 310 times
Nov 13 06:32:31 V38B_8 last message repeated 309 times
Nov 13 06:33:32 V38B_8 last message repeated 310 times

[-- Attachment #3: ksyms.exception768 --]
[-- Type: application/octet-stream, Size: 48124 bytes --]

c023db10 rtc_lock
c023c0a0 last_task_used_math
c0240358 context_map
c0007c98 _switch_to
c000ce94 ipipe_tune_timer
c000cd4c ipipe_get_sysinfo
c000cd88 ipipe_trigger_irq
c000ccf8 ipipe_critical_exit
c000ce84 ipipe_critical_enter
c023c0d0 __ipipe_decr_next
c023c0d8 __ipipe_decr_ticks
c023c0cc cur_cpu_spec
c0004f40 ret_from_intercept
c0208200 intercept_table
c0010a30 flush_hash_page
c023c084 disarm_decr
c0028ae4 handle_mm_fault
c0003624 set_context
c023c14c next_mmu_context
c0004f8c ret_to_user_hook
c000782c atomic_set_mask
c0007818 atomic_clear_mask
c0240190 __res
c0009f34 __down_interruptible
c0009e2c __down
c0009de4 __up
c023c1dc console_drivers
c00084c4 get_wchan
c023c090 tb_ticks_per_jiffy
c0006208 ppc_irq_dispatch_handler
c0224c60 irq_desc
c0000594 do_IRQ_intercept
c00070dc timer_interrupt
c0000990 timer_interrupt_intercept
c00072a0 __delay
c0007a1c abs
c001149c memchr
c001146c memcmp
c01c8e18 memscan
c0011340 memmove
c00111cc memset
c0011348 memcpy
c00079f8 __lshrdi3
c00079d4 __ashldi3
c00079a8 __ashrdi3
c0006bf0 to_tm
c023dd1c ppc_md
c000d624 xchg_u32
c000f980 flush_dcache_page
c0010278 flush_icache_page
c001020c flush_icache_user_range
c00076f4 invalidate_dcache_range
c00076c4 flush_dcache_range
c0007638 flush_icache_range
c0007c20 enable_kernel_fp
c0003300 giveup_fpu
c0007614 flush_instruction_cache
c0027cdc pte_alloc
c000d764 set_pte
c00075e0 _tlbia
c0014000 kernel_thread
c0008158 start_thread
c000e0a4 pci_bus_to_phys
c000e060 pci_phys_to_bus
c000de64 pci_resource_to_bus
c000dd88 pci_bus_to_hose
c000de30 pci_bus_mem_base_phys
c000ddfc pci_bus_io_base_phys
c000ddc8 pci_bus_io_base
c000e3c0 pci_free_consistent
c000e400 pci_alloc_consistent
c023c0e0 pci_dram_offset
c023c0e4 isa_mem_base
c023c0e8 isa_io_base
c023dd10 ppc_ide_md
c0010460 iounmap
c0010664 __ioremap
c00107c8 ioremap
c0010574 mm_ptov
c0010500 iopa
c0007984 _outsl_ns
c0007960 _insl_ns
c000793c _outsw_ns
c0007918 _insw_ns
c00078f4 _outsl
c00078d0 _insl
c00078ac _outsw
c0007888 _insw
c0007864 _outsb
c0007840 _insb
c001178c __strnlen_user
c0011750 __strncpy_from_user
c00116d4 __clear_user
c00114c4 __copy_tofrom_user
c0010e68 csum_tcpudp_magic
c0010e2c ip_fast_csum
c0010f08 csum_partial_copy_generic
c00118d4 __div64_32
c00117d4 strcasecmp
c01c8ad0 strncmp
c00110ec strcmp
c01c8be0 strnlen
c0011110 strlen
c01c8e54 strstr
c01c8cfc strtok
c01c8c98 strpbrk
c01c8b88 strrchr
c01c8b48 strchr
c01c8a68 strncat
c00110c0 strcat
c0011080 strncpy
c0011064 strcpy
c000d72c test_and_change_bit
c000d6f4 test_and_clear_bit
c000d6bc test_and_set_bit
c000d690 change_bit
c000d664 clear_bit
c000d638 set_bit
c023c0c8 DMA_MODE_WRITE
c023c0c4 DMA_MODE_READ
c023c0c0 ISA_DMA_THRESHOLD
c00064bc probe_irq_mask
c0005e70 disable_irq_nosync
c0005f24 disable_irq
c0005f78 enable_irq
c023d210 ppc_lost_interrupts
c023c06c ppc_n_lost_interrupts
c000921c sys_sigreturn
c0005728 SingleStepException
c0005a28 ProgramCheckException
c00058ec AlignmentException
c0005288 MachineCheckException
c00063f0 do_IRQ
c0003000 transfer_to_handler
c000954c syscall_trace
c0008960 do_signal
c0007770 clear_page
c023c1c8 abi_fake_utsname
c023c1c4 abi_traceflg
c021fc90 abi_defhandler_libcso
c021fc94 abi_defhandler_lcall7
c023c1c0 abi_defhandler_elf
c021fc98 abi_defhandler_coff
c0015278 __set_personality
c0015194 unregister_exec_domain
c001512c register_exec_domain
c00165f4 unregister_console
c00166c8 register_console
c0016270 console_unblank
c0016040 console_print
c0016534 acquire_console_sem
c0015f1c printk
c0020d84 unblock_all_signals
c00201ec block_all_signals
c0020540 send_sig_info
c0020a3c send_sig
c0020180 recalc_sigpending
c0020a38 notify_parent
c0020af4 kill_sl_info
c0020b8c kill_sl
c0020a50 kill_proc_info
c0021d1c kill_proc
c0020ba4 kill_pg_info
c0020c40 kill_pg
c0021010 force_sig_info
c00210f8 force_sig
c0020d74 flush_signals
c0020de8 dequeue_signal
c0022650 in_egroup_p
c0022638 in_group_p
c00221dc unregister_reboot_notifier
c00221cc register_reboot_notifier
c0022164 notifier_call_chain
c0022118 notifier_chain_unregister
c00220d0 notifier_chain_register
c002442c request_module
c00242b8 call_usermodehelper
c0024630 exec_usermodehelper
c020cd08 hotplug_path
c0024d14 flush_scheduled_tasks
c0024c70 schedule_task
c023c2a0 __ipipe_virtual_irq_map
c024dd68 __ipipe_pipelock
c023c2a8 __ipipe_pipeline
c0025a2c __ipipe_sync_stage
c0025800 __ipipe_dispatch_wired
c0025c68 __ipipe_dispatch_event
c0025120 __ipipe_test_root
c00259c0 __ipipe_test_and_stall_root
c0026284 __ipipe_restore_root
c0025964 __ipipe_stall_root
c0026224 __ipipe_unstall_root
c0012018 ipipe_reenter_root
c0011fc4 ipipe_setscheduler_root
c020cf60 ipipe_root
c021fd08 ipipe_percpu_domain
c00260e4 __ipipe_restore_pipeline_head
c00261a4 ipipe_unstall_pipeline_head
c00260ac ipipe_test_and_unstall_pipeline_from
c002609c ipipe_restore_pipeline_from
c002600c ipipe_unstall_pipeline_from
c00259e4 ipipe_test_and_stall_pipeline_from
c0025988 ipipe_stall_pipeline_from
c002540c ipipe_alloc_virq
c0025e34 ipipe_suspend_domain
c00256f4 ipipe_control_irq
c002554c ipipe_virtualize_irq
c0026af4 __ipipe_schedule_irq
c0026750 ipipe_send_ipi
c0026748 ipipe_set_irq_affinity
c00267f8 ipipe_get_ptd
c00267d4 ipipe_set_ptd
c0026758 ipipe_free_ptdkey
c0026818 ipipe_alloc_ptdkey
c002648c ipipe_catch_event
c0026460 ipipe_init_attr
c0026418 ipipe_free_virq
c0026364 ipipe_unregister_domain
c00268a0 ipipe_register_domain
c02b8bc8 _end
c01cae90 dump_stack
c0013f68 unshare_files
c0242c8c pidhash
c0226c60 tasklist_lock
c0208d30 init_task_union
c001b7c4 __tasklet_hi_schedule
c001b71c __tasklet_schedule
c001b110 cpu_raise_softirq
c001b67c raise_softirq
c001b4a4 do_softirq
c001b070 __run_task_queue
c001b39c tasklet_kill
c001b284 tasklet_init
c001b454 remove_bh
c001b2a0 init_bh
c024c520 bh_task_vec
c0226ca0 tasklet_vec
c0226c80 tasklet_hi_vec
c01cb498 bitreverse
c01cb38c crc32_be
c01cb530 crc32_le
c01c8dc0 strsep
c01c8c28 strspn
c01c8ee8 strnicmp
c004a214 get_write_access
c00653a4 disk_name
c004e5dc kill_fasync
c004e418 fasync_helper
c021fcf8 fs_overflowgid
c021fcfc fs_overflowuid
c0058764 __inode_dir_notify
c00407a8 brw_page
c023c270 event
c0057b94 is_bad_inode
c0057b5c make_bad_inode
c0040354 buffer_insert_list
c0055890 remove_inode_hash
c0055d60 insert_inode_hash
c0055ce0 new_inode
c003f8a0 get_hash_table
c026d6e0 read_ahead
c003e8c8 init_special_inode
c023c9c4 ___strtok
c005617c clear_inode
c00420cc fsync_buffers_list
c0041a74 file_fsync
c023c204 sys_tz
c000fa5c si_meminfo
c00477a4 open_exec
c0047108 kernel_read
c0047fbc flush_old_exec
c0048804 do_execve
c00487cc copy_strings_kernel
c0047e28 setup_arg_pages
c005af00 seq_release_private
c005aec0 single_release
c005ae10 single_open
c005af48 seq_lseek
c005b2a4 seq_read
c005aba4 seq_release
c005ab30 seq_open
c005ad10 seq_printf
c005abe0 seq_escape
c0010e90 csum_partial
c0012070 daemonize
c0011ea0 reparent_to_init
c021fcb8 cap_bset
c023c190 securebits
c00cfcbc get_random_bytes
c00d02ec secure_tcp_sequence_number
c021e9d8 _ctype
c000a99c machine_power_off
c000a9c8 machine_halt
c000a970 machine_restart
c02088f0 sys_call_table
c020cce8 uts_sem
c0208320 system_utsname
c01c8f94 simple_strtoull
c01c9e34 simple_strtoul
c01c9f1c simple_strtol
c003e83c cdevname
c00457c4 bdevname
c003e7f0 kdevname
c01c9f60 vsscanf
c01c9554 vsnprintf
c01c9dc0 vsprintf
c01ca668 sscanf
c01c9d68 snprintf
c01c9dd8 sprintf
c001557c __out_of_line_bug
c023c1d4 panic_timeout
c023c1d8 panic_notifier_list
c0015664 panic
c023c1bc nr_running
c02413e0 kstat
c021fc6c loops_per_jiffy
c0006e7c do_settimeofday
c0006fec do_gettimeofday
c023c250 xtime
c023c258 jiffies
c00131ac __cond_resched
c00131d4 yield
c00133ec schedule_timeout
c00129dc schedule
c0013584 interruptible_sleep_on_timeout
c00132a4 interruptible_sleep_on
c00134e0 sleep_on_timeout
c0013210 sleep_on
c001397c wake_up_process
c0013628 __wake_up_sync
c001377c __wake_up
c0019be4 complete_and_exit
c020bfd4 iomem_resource
c020bff0 ioport_resource
c001bf30 __release_region
c001bee0 __check_region
c001be30 __request_region
c001bc74 check_resource
c001bcdc allocate_resource
c001bc70 release_resource
c001bc40 request_resource
c0242c8c dma_spin_lock
c0013acc free_dma
c0013b48 request_dma
c0058490 kiobuf_wait_for_io
c0043388 brw_kiovec
c0027a58 unlock_kiovec
c002837c lock_kiovec
c00279c0 unmap_kiobuf
c00295e0 map_user_kiobuf
c00580d4 expand_kiobuf
c00582fc free_kiovec
c00583b4 alloc_kiovec
c021fcd4 tq_immediate
c021fcdc tq_timer
c001f504 mod_timer
c00064b4 probe_irq_off
c00064ac probe_irq_on
c0012318 complete
c0013338 wait_for_completion
c0014350 remove_wait_queue
c001405c add_wait_queue_exclusive
c00140c0 add_wait_queue
c024c7a0 irq_stat
c0005d6c free_irq
c00069a8 request_irq
c001f234 del_timer
c001f39c add_timer
c001d520 proc_doulongvec_minmax
c001d514 proc_doulongvec_ms_jiffies_minmax
c001cc50 proc_dointvec_minmax
c001d9b4 proc_dointvec_jiffies
c001da10 proc_dointvec
c001c88c proc_dostring
c001c748 sysctl_jiffies
c001c658 sysctl_intvec
c001c420 sysctl_string
c001c150 unregister_sysctl_table
c001da1c register_sysctl_table
c004729c set_binfmt
c00478a4 remove_arg_zero
c0047ccc compute_creds
c0047180 prepare_binprm
c0047a9c search_binary_handler
c0046f20 unregister_binfmt
c0046eb8 register_binfmt
c0058b24 may_umount
c0058aa8 __mntput
c0044d0c kern_mount
c0043a98 unregister_filesystem
c0043a14 register_filesystem
c00c603c do_SAK
c00c6078 tty_get_baud_rate
c00c6e30 tty_flip_buffer_push
c00c5f8c tty_hung_up_p
c00c5e6c tty_check_change
c00cbb68 tty_wait_until_sent
c00c5f68 tty_hangup
c0263b7c max_readahead
c0263780 max_sectors
c003f1d4 refile_buffer
c003f208 init_buffer
c021fda8 tq_disk
c0065888 register_disk
c00655c0 grok_partitions
c004570c ioctl_by_bdev
c0045b74 blkdev_put
c0045f68 blkdev_get
c0045f20 blkdev_open
c00653a0 devfs_register_partitions
c0041db4 sync_dev
c005584c bmap
c00d2c60 set_device_ro
c00d2c1c is_read_only
c0264b6c blk_dev
c0264770 blk_size
c0263f78 hardsect_size
c0264374 blksize_size
c025c460 tty_std_termios
c00c6150 tty_unregister_driver
c00c67bc tty_register_driver
c00455d0 unregister_blkdev
c0044f5c register_blkdev
c003e5f8 unregister_chrdev
c003e560 register_chrdev
c002bad4 wakeup_page_waiters
c002b70c unlock_page
c002b5e0 lock_page
c002c3e0 filemap_fdatawait
c002c164 filemap_fdatasync
c002c02c filemap_fdatawrite
c002be58 filemap_sync
c002ec60 filemap_nopage
c003c6a4 dentry_open
c003d8a0 default_llseek
c0210a04 dcache_dir_ops
c004fab0 dcache_readdir
c004f110 dcache_dir_fsync
c004f8f8 dcache_dir_lseek
c004f0e8 dcache_dir_close
c004f09c dcache_dir_open
c0051870 lock_may_write
c00517b0 lock_may_read
c0051368 lease_get_mtime
c005308c __get_lease
c004f5bc vfs_readdir
c003f6c4 block_symlink
c021099c page_symlink_inode_operations
c004c7e4 page_follow_link
c004a374 page_readlink
c004dd54 vfs_follow_link
c0049ae0 vfs_readlink
c002b55c mark_page_accessed
c002bd9c set_page_dirty
c002e178 read_cache_page
c002ff50 grab_cache_page_nowait
c0030054 find_or_create_page
c002d414 find_trylock_page
c002b8b0 __find_lock_page
c002b50c __find_get_page
c023c01c ROOT_DEV
c004fcac poll_freewait
c004fd2c __pollwait
c003d894 no_llseek
c003d7bc generic_file_llseek
c003d7b4 generic_read_dir
c003bb18 vfs_statfs
c004ba3c vfs_rename
c004af40 vfs_unlink
c004ace0 vfs_rmdir
c004a9e4 vfs_link
c004a894 vfs_symlink
c004a5a8 vfs_mknod
c004a744 vfs_mkdir
c004a454 vfs_create
c0026c24 put_unused_fd
c003cbe0 get_unused_fd
c0053d6c is_subdir
c0054ad4 find_inode_number
c0055100 shrink_dcache_parent
c005527c shrink_dcache_sb
c0054ed0 prune_dcache
c0054e08 d_prune_aliases
c0053f94 d_find_alias
c0053e08 have_submounts
c00548f0 dput
c005289c locks_mandatory_area
c0051924 posix_locks_deadlock
c00521bc posix_unblock_lock
c0052274 posix_block_lock
c0051c20 posix_test_lock
c0052a68 posix_lock_file
c0050f18 locks_copy_lock
c0050e9c locks_init_lock
c021fd74 file_lock_list
c023c308 page_hash_table
c023c30c page_hash_bits
c002b9fc generic_buffer_fdatasync
c0210520 generic_ro_fops
c002b1f0 generic_file_mmap
c002df6c generic_file_write
c002d158 do_generic_direct_write
c002c78c do_generic_direct_read
c002dac4 do_generic_file_write
c002f3ac do_generic_file_read
c002fd2c generic_file_read
c003f3b8 generic_block_bmap
c0042420 block_truncate_page
c0040b78 generic_commit_write
c004301c cont_prepare_write
c003ffb4 generic_cont_expand
c0040770 block_sync_page
c0042fb4 block_prepare_write
c0040c5c block_read_full_page
c0042b48 block_write_full_page
c0041370 discard_bh_page
c0043700 generic_direct_IO
c002c2f0 ___wait_on_page
c00404a0 __wait_on_buffer
c003fc38 unlock_buffer
c00d3a44 submit_bh
c00d3be4 ll_rw_block
c00408d8 __bforget
c003fb20 __brelse
c0042384 bread
c0045a8c bdput
c00458a4 bdget
c004631c cdput
c004625c cdget
c004154c getblk
c004617c sb_min_blocksize
c0046110 sb_set_blocksize
c0045fe4 set_blocksize
c005797c notify_change
c0056bb4 write_inode_now
c00577d0 inode_change_ok
c0057698 inode_setattr
c0049e0c vfs_permission
c0049f84 permission
c0041db8 fsync_no_super
c0041b3c fsync_dev
c002ce8c truncate_inode_pages
c002bbf0 invalidate_inode_pages
c00565c0 invalidate_device
c0056510 invalidate_inodes
c0041ed8 invalidate_bdev
c0042088 __invalidate_buffers
c004563c check_disk_change
c0251cd0 files_lock
c003ee0c put_filp
c003c484 filp_close
c003cb68 filp_open
c003e9c0 init_private_file
c003eb28 get_empty_filp
c003bc64 fd_install
c0055a54 __mark_inode_dirty
c0040928 __mark_buffer_dirty
c00401f0 end_buffer_io_async
c003f548 set_buffer_async_io
c0040964 mark_buffer_dirty
c00540d0 __d_path
c00544c4 d_lookup
c0054708 d_alloc
c005423c d_instantiate
c005435c d_move
c00551dc d_invalidate
c00542b8 d_rehash
c0054620 d_validate
c0053f44 dget_locked
c0053e7c d_delete
c0054890 d_alloc_root
c02529cc dcache_lock
c003cdf4 sys_close
c0049fd0 lookup_hash
c004a168 lookup_one_len
c004db1c __user_walk
c004a3e4 path_release
c004cb60 path_lookup
c004c690 path_walk
c004c9c0 path_init
c0058e08 lookup_mnt
c004bb18 follow_down
c004ab6c follow_up
c0055834 force_delete
c00558ac __inode_init_once
c0055964 inode_init_once
c0056e90 iput
c0055718 unlock_new_inode
c0056768 iget4_locked
c0056634 ilookup
c00559ac iunique
c0055ffc igrab
c003ea48 fget
c003ec80 fput
c023c3b8 names_cachep
c0049bb4 getname
c0044398 drop_super
c004440c get_super
c0043b78 get_fs_type
c0055b70 update_atime
c0210690 def_blk_fops
c020b3f0 init_mm
c0029a58 get_unmapped_area
c0029774 find_vma
c00281f8 vmtruncate
c023c2f4 high_memory
c023c300 max_mapnr
c00286bc remap_page_range
c023c2f0 mem_map
c0027b94 vmalloc_to_page
c00320f8 vmap
c0032424 __vmalloc
c0032034 vfree
c0033358 kfree
c0033934 kmalloc
c0032708 kmem_cache_size
c0032b2c kmem_cache_free
c0033460 kmem_cache_alloc
c0033078 kmem_cache_shrink
c00330f8 kmem_cache_destroy
c0033b28 kmem_cache_create
c00326a0 kmem_find_general_cachep
c023c2fc num_physpages
c0035d58 free_pages
c0035d2c __free_pages
c0036718 get_zeroed_page
c0036798 __get_free_pages
c0038c38 alloc_pages_node
c0036150 __alloc_pages
c0036704 _alloc_pages
c0020cd0 exit_sighand
c0019c94 exit_fs
c00195c0 exit_files
c0019c0c exit_mm
c002ad70 do_brk
c002a27c do_munmap
c002a80c do_mmap_pgoff
c0017510 try_inc_mod_count
c001742c inter_module_put
c00175f4 inter_module_get_request
c0017554 inter_module_get
c0017130 inter_module_unregister
c0016fec inter_module_register
c00276b0 rthal_exit
c0027474 rthal_init
c023c2c0 rthal_proc_root
c023c2cc rthal_tunables
c024de00 rthal_domain
c0026ce4 rthal_critical_exit
c0026c5c rthal_critical_enter
c0026ff0 rthal_apc_schedule
c0026fa4 rthal_apc_free
c0026ed0 rthal_apc_alloc
c016a024 rthal_timer_calibrate
c016a4f0 rthal_timer_release
c016a5ac rthal_timer_request
c0026df8 rthal_trap_catch
c0026df0 rthal_irq_affinity
c0026dc4 rthal_irq_host_pend
c016a10c rthal_irq_host_release
c016a03c rthal_irq_host_request
c016a2ac rthal_irq_end
c016a230 rthal_irq_disable
c016a1b8 rthal_irq_enable
c0026d74 rthal_irq_release
c0026d1c rthal_irq_request
c00277d4 kthread_stop
c0027730 kthread_should_stop
c002789c kthread_create
c0029264 get_user_pages
c002b8b8 fail_writepage
c021fd18 vm_min_readahead
c021fd1c vm_max_readahead
c0250de0 zone_table
c003a45c shmem_file_setup
c003c574 generic_file_open
c0041060 try_to_free_buffers
c0042678 waitfor_one_page
c003f60c writeout_one_page
c00406c8 create_empty_buffers
c003f570 set_bh_page
c003f2f4 get_unused_buffer_head
c003fc34 put_unused_buffer_head
c003f2e8 get_buffer_flushtime
c003f2cc set_buffer_flushtime
c003fe88 balance_dirty
c023c3ac bh_cachep
c023c3ec nfsd_linkage
c023c408 proc_root_driver
c023c40c proc_bus
c023c410 proc_net
c023c414 proc_root_fs
c0210c74 proc_root
c0062714 remove_proc_entry
c0062414 create_proc_entry
c0062624 proc_mkdir
c00625b0 proc_mknod
c00624f8 proc_symlink
c023c404 proc_sys_root
c007538c journal_force_commit
c007b7ec journal_bmap
c0074964 journal_try_to_free_buffers
c00742c4 journal_flushpage
c007a8d4 journal_blocks_per_page
c007bb44 journal_wipe
c007a2ec log_start_commit
c007a384 log_wait_commit
c007b570 journal_clear_err
c007a838 journal_ack_err
c007b4d0 journal_errno
c007b758 journal_abort
c007b614 journal_update_superblock
c007862c journal_recover
c007bebc journal_destroy
c007c234 journal_load
c007c30c journal_create
c007a6c8 journal_set_features
c007a684 journal_check_available_features
c007a620 journal_check_used_features
c007c48c journal_update_format
c007b89c journal_init_inode
c007b160 journal_init_dev
c00738c0 journal_callback_set
c0079ba8 journal_revoke
c007bc34 journal_flush
c00764b4 journal_forget
c0076228 journal_dirty_metadata
c0074730 journal_dirty_data
c00760b4 journal_get_undo_access
c00756a8 journal_get_create_access
c0075ff8 journal_get_write_access
c0075544 journal_unlock_updates
c00753ec journal_lock_updates
c00738e0 journal_stop
c0074d60 journal_extend
c007510c journal_restart
c0074ae4 journal_try_start
c007528c journal_start
c00858e8 fat_brelse
c0088824 fat_truncate
c0086ba0 fat_dir_empty
c0086cf8 fat_add_entries
c00884d4 fat_dir_ioctl
c0085db0 fat_get_cluster
c008bc2c unregister_cvf_format
c008bb4c register_cvf_format
c00891c8 fat_write_inode
c0088b98 fat_statfs
c008b45c fat_scan
c008863c fat_readdir
c0086f24 fat_search_long
c0089878 fat_read_super
c0089424 fat_build_inode
c0089190 fat_detach
c0089144 fat_attach
c0088a58 fat_put_super
c0088d7c fat_notify_change
c008591c fat_mark_buffer_dirty
c008aecc fat__get_entry
c00889ac fat_delete_inode
c008ac8c fat_date_unix2dos
c0088a00 fat_clear_inode
c008869c fat_get_block
c0086a08 fat_new_dir
c008bee0 msdos_put_super
c008c6ac msdos_read_super
c008c800 msdos_unlink
c008c704 msdos_rmdir
c008cd80 msdos_rename
c008c9b4 msdos_mkdir
c008c5e0 msdos_lookup
c008cbc8 msdos_create
c008d5c4 vfat_lookup
c008d710 vfat_read_super
c008efb8 vfat_rename
c008de78 vfat_rmdir
c008f270 vfat_mkdir
c008ddd4 vfat_unlink
c008f3fc vfat_create
c023c4a4 nlmsvc_ops
c00a3520 nlmsvc_invalidate_client
c009f518 nlmclnt_proc
c00a0648 lockd_down
c00a07d0 lockd_up
c00a7734 utf8_wcstombs
c00a767c utf8_wctomb
c00a75c4 utf8_mbstowcs
c00a74e8 utf8_mbtowc
c00a7aa0 load_nls_default
c00a7934 load_nls
c00a79ec unload_nls
c00a7860 unregister_nls
c00a77f8 register_nls
c00c614c tty_unregister_devfs
c00c6038 tty_register_devfs
c00c5d8c tty_register_ldisc
c00cbec4 n_tty_ioctl
c00cdcf8 misc_deregister
c00cddf0 misc_register
c00cffb0 generate_random_uuid
c00ce544 batch_entropy_store
c00ce90c add_blkdev_randomness
c00ce874 add_interrupt_randomness
c00ce864 add_mouse_randomness
c00ce83c add_keyboard_randomness
c00d1a08 gs_got_break
c00d18e0 gs_getserial
c00d1794 gs_setserial
c00d1624 gs_init_port
c00d12c0 gs_set_termios
c00d0f94 gs_close
c00d1db0 gs_block_til_ready
c00d0ea8 gs_do_softint
c00d0dd4 gs_hangup
c00d0c2c gs_start
c00d0b54 gs_stop
c00d0a6c gs_flush_chars
c00d0974 gs_flush_buffer
c00d053c gs_chars_in_buffer
c00d04b4 gs_write_room
c00d1a84 gs_write
c00d03cc gs_put_char
c023c574 blk_nohighio
c00d2bb0 blk_seg_merge_ok
c023c57c blk_max_pfn
c023c580 blk_max_low_pfn
c00d2abc blk_queue_bounce_limit
c00d380c generic_unplug_device
c00d2fd0 blkdev_release_request
c00d387c generic_make_request
c00d2ab4 blk_queue_make_request
c00d2aa4 blk_queue_throttle_sectors
c00d2a94 blk_queue_headactive
c00d360c blk_cleanup_queue
c00d2a48 blk_get_queue
c00d340c blk_init_queue
c00d32e4 blk_grow_request_list
c00d3160 end_that_request_last
c00d2e08 end_that_request_first
c0263780 io_request_lock
c00d4e30 blk_ioctl
c00d5384 get_gendisk
c00d5328 del_gendisk
c00d52d4 add_gendisk
c023c588 gendisk_head
c00d6624 loop_unregister_transfer
c00d65e4 loop_register_transfer
c00d831c unregister_netdev
c00d8280 register_netdev
c00d81c4 ether_setup
c00d8148 alloc_etherdev
c00d8024 init_etherdev
c00d7f8c alloc_netdev
c00d8750 generic_mii_ioctl
c00d89a4 mii_check_media
c00d8dc0 mii_check_link
c00d84a4 mii_ethtool_sset
c00d8bbc mii_ethtool_gset
c00d86e4 mii_nway_restart
c00d8688 mii_link_ok
c00d8e9c autoirq_report
c00d8e74 autoirq_setup
c00da92c ide_do_reset
c00d9aec ide_execute_command
c00d9a8c ide_set_handler
c00d99f0 __ide_set_handler
c00da300 ide_config_drive_speed
c00db124 ide_driveid_update
c00d996c ide_auto_reduce_xfer
c00d9914 set_transfer
c00d9878 ide_ata66_check
c00d9840 eighty_ninty_three
c00da108 ide_wait_stat
c00d9ffc wait_for_ready
c00d97e0 drive_is_ready
c00d96e0 ide_fixstring
c00daa64 ide_fix_driveid
c00d9670 atapi_output_bytes
c00d9600 atapi_input_bytes
c00d9528 ata_output_data
c00d9450 ata_input_data
c00d93e8 ata_vlb_sync
c00d93a4 QUIRK_LIST
c00d9370 SELECT_MASK
c00d9314 SELECT_INTERRUPT
c00d92bc SELECT_DRIVE
c00d9240 read_24
c00d920c default_hwif_transport
c00d9184 default_hwif_mmiops
c00d9020 default_hwif_iops
c00d8f18 unplugged_hwif_iops
c00dc32c flagged_taskfile
c00dc218 ide_task_ioctl
c00dc1c8 ide_wait_cmd_task
c00dd7d0 ide_cmd_ioctl
c00dc13c ide_wait_cmd
c00dd1a0 ide_taskfile_ioctl
c00dc084 ide_raw_taskfile
c00dbfd0 ide_diag_taskfile
c00dbf94 ide_init_drive_taskfile
c00dbe80 ide_cmd_type_parser
c00dbe78 ide_post_handler_parser
c00dbdc0 ide_handler_parser
c00dbd20 ide_pre_handler_parser
c00de020 task_mulout_intr
c00dbc4c pre_task_mulout_intr
c00ddea0 task_out_intr
c00dddb0 pre_task_out_intr
c00ddbbc task_mulin_intr
c00dda50 task_in_intr
c00dbb78 task_no_data_intr
c00dbb04 recal_intr
c00dce94 set_geometry_intr
c00dba6c set_multmode_intr
c00de76c taskfile_error
c00db9e0 task_try_to_flush_leftover_data
c00db7b0 ide_end_taskfile
c00de2bc taskfile_dump_status
c00db500 do_rw_taskfile
c00dc090 taskfile_lib_get_identify
c00db470 taskfile_output_data
c00db408 taskfile_input_data
c00db34c task_read_24
c023c5c8 ide_devfs_handle
c023c5e0 ide_probe
c00dffbc ide_geninit
c0216c7c ide_fops
c00dff70 ide_unregister_module
c00dfef8 ide_register_module
c00e03f0 ide_unregister_subdriver
c00dfd5c ide_register_subdriver
c00dfc74 ide_scan_devices
c0233214 ide_register_driver
c00df9cc ide_attach_drive
c00df918 ide_replace_subdriver
c00df904 system_bus_clock
c00df8b8 ide_delay_50ms
c00e0810 GPLONLY_ide_add_generic_settings
c00e0158 ide_write_setting
c00df614 ide_spin_wait_hwgroup
c00e0574 ide_remove_setting
c00e060c ide_add_setting
c0216c94 ide_setting_sem
c00e1a68 ide_register
c00e1348 ide_register_hw
c00df3f0 ide_setup_ports
c00e0ad8 ide_unregister
c00df2d0 hwif_unregister
c00df1e0 ide_driver_module
c00df198 ide_probe_module
c00defb4 ide_revalidate_disk
c00e152c ide_dump_status
c00def70 current_capacity
c023c5cc idescsi
c023c5d0 idetape
c023c5d4 idefloppy
c023c5d8 idecd
c023c5dc idedisk
c0270010 ide_hwifs
c021fdbc noautodma
c00e2fc0 GPLONLY_ide_set_xfer_rate
c00e2f48 ide_toggle_bounce
c00e2d20 GPLONLY_ide_get_best_pio_mode
c01d6554 GPLONLY_ide_pio_timings
c00e2cc8 ide_dma_enable
c00e2cb4 ide_rate_filter
c00e2ae4 ide_dma_speed
c00e29c4 ide_xfer_verbose
c00e413c ide_do_drive_cmd
c00e3c3c ide_init_drive_cmd
c00e3bbc ide_info_ptr
c00e4268 ide_intr
c00e4440 ide_timer_expiry
c00e4770 do_ide_request
c00e3b88 ide_get_queue
c00e3c78 ide_do_request
c00e3b68 ide_stall_queue
c00e395c execute_drive_cmd
c00e38dc do_special
c00e3784 drive_cmd_intr
c00e36f0 ide_cmd
c00e363c ide_abort
c00e3404 ide_error
c00e3368 try_to_flush_leftover_data
c00e3018 ide_end_drive_cmd
c00e477c ide_end_request
c00e5644 proc_ide_destroy
c00e55cc proc_ide_create
c00e554c destroy_proc_ide_interfaces
c00e54b8 create_proc_ide_interfaces
c00e5464 destroy_proc_ide_drives
c00e53e8 destroy_proc_ide_device
c00e5330 recreate_proc_ide_device
c00e5268 create_proc_ide_drives
c00e520c ide_remove_proc_entries
c00e517c ide_add_proc_entries
c00e5088 proc_ide_read_media
c00e5688 proc_ide_write_driver
c00e5008 proc_ide_read_driver
c00e4f5c proc_ide_read_dmodel
c00e4eb0 proc_ide_read_geometry
c00e4e1c proc_ide_read_capacity
c00e5b70 proc_ide_write_settings
c00e5e88 proc_ide_read_settings
c00e4d0c proc_ide_read_identify
c00e4cb4 proc_ide_read_channel
c00e4c04 proc_ide_read_mate
c00e4a9c proc_ide_read_imodel
c00e49e4 proc_ide_read_drivers
c00e498c proc_ide_read_config
c00e6988 export_ide_init_queue
c00e7b7c hwif_init
c00e66c8 init_gendisk
c00e62fc init_irq
c00e6224 save_match
c00e774c probe_hwif
c00eae08 GPLONLY___ide_do_rw_disk
c00f0784 scsi_delete_timer
c00f0710 scsi_add_timer
c01d6718 scsi_device_types
c023c648 scsi_devicelist
c023c644 scsi_hosts
c023c64c scsi_hostlist
c00ed458 scsi_reset_provider
c00f343c scsi_deregister_blocked_host
c00f3438 scsi_register_blocked_host
c00f3e00 scsi_end_request
c00f3e0c scsi_io_completion
c023c628 proc_scsi
c00f0270 proc_print_scsidevice
c00f10b0 scsi_sleep
c00ed3f4 scsi_free_host_dev
c00ed360 scsi_get_host_dev
c00f3734 scsi_unblock_requests
c00f33e4 scsi_block_requests
c00f33f4 scsi_report_bus_reset
c00ee568 scsi_do_req
c00ee62c scsi_wait_req
c00ee6c4 scsi_release_request
c00ec154 scsi_allocate_request
c00eed8c scsi_ioctl_send_command
c00f23e0 scsi_mark_host_reset
c00f07c0 scsi_block_when_processing_errors
c00efa98 print_Scsi_Cmnd
c00ee470 scsi_release_command
c023c660 scsi_need_isa_buffer
c00ef640 kernel_scsi_ioctl
c023c664 scsi_dma_free_sectors
c00ef718 print_status
c00ef9c8 print_msg
c00ef9bc print_req_sense
c00ef9b0 print_sense
c00ef678 print_command
c00ef158 scsi_ioctl
c021fdc0 scsi_command_size
c00ec770 scsi_do_cmd
c00edb58 scsi_allocate_device
c00efb44 scsi_partsize
c00efc5c scsicam_bios_param
c00ee70c scsi_unregister
c00ee7ac scsi_register
c00f5784 scsi_malloc
c00f5894 scsi_free
c00edb30 scsi_unregister_module
c00ede8c scsi_register_module
c00fafa4 pci_pool_free
c00fb824 pci_pool_alloc
c00fbbe4 pci_pool_destroy
c00fad8c pci_pool_create
c023c698 pci_pci_problems
c023c69c isa_dma_bridge_buggy
c00fc0e4 pcibios_find_device
c00fc050 pcibios_find_class
c00fc310 pcibios_write_config_dword
c00fc2c0 pcibios_write_config_word
c00fc270 pcibios_write_config_byte
c00fc220 pcibios_read_config_dword
c00fc1d0 pcibios_read_config_word
c00fc180 pcibios_read_config_byte
c00fc030 pcibios_present
c00fb72c pci_enable_wake
c00f9914 pci_restore_state
c00f97d4 pci_save_state
c00f95b8 pci_set_power_state
c023c6a0 proc_bus_pci_dir
c00fc858 pci_proc_detach_bus
c00fc7e8 pci_proc_attach_bus
c00fc784 pci_proc_detach_device
c00fc6a0 pci_proc_attach_device
c00fa668 pci_read_bridge_bases
c00fac38 pci_scan_device
c00fb6e0 pci_scan_bus
c00fb358 pci_scan_slot
c00fb478 pci_do_scan_bus
c00fb2a0 pci_add_new_bus
c00fa144 pci_announce_device_to_drivers
c00fbb24 pci_remove_device
c00fb238 pci_insert_device
c00faa64 pci_setup_device
c00f94b0 pci_find_parent_resource
c00f9e8c pci_match_device
c00f9fc8 pci_dev_driver
c00fba84 pci_unregister_driver
c00fb1a8 pci_register_driver
c00fd2c4 pci_assign_resource
c00fa38c pci_dac_set_dma_mask
c00fa37c pci_set_dma_mask
c00fa324 pci_clear_mwi
c00fa210 pci_set_mwi
c00fa1b8 pci_set_master
c00f9190 pci_find_subsys
c00f9148 pci_find_slot
c00f9254 pci_find_device
c00f9264 pci_find_class
c00f9bcc pci_request_region
c00f9b04 pci_release_region
c00f9d84 pci_request_regions
c00f9d40 pci_release_regions
c00f93b8 pci_find_capability
c00f9a08 pci_disable_device
c00f9a00 pci_enable_device
c00f99b8 pci_enable_device_bars
c021fdd4 pci_root_buses
c021fdcc pci_devices
c00f9828 pci_write_config_dword
c00f953c pci_write_config_word
c00f98a4 pci_write_config_byte
c00f9758 pci_read_config_dword
c00f92cc pci_read_config_word
c00f9348 pci_read_config_byte
c00fd884 default_mtd_readv
c00fd790 default_mtd_writev
c00fe0f8 unregister_mtd_user
c00fdd14 register_mtd_user
c00fdad0 put_mtd_device
c00fdbd0 get_mtd_device
c00fdfac del_mtd_device
c00fde18 add_mtd_device
c0272360 GPLONLY_mtd_table
c0217ab8 GPLONLY_mtd_table_mutex
c00fef90 GPLONLY_deregister_mtd_parser
c00fef6c GPLONLY_register_mtd_parser
c00ff060 GPLONLY_parse_mtd_partitions
c00ff194 del_mtd_partitions
c00ff238 add_mtd_partitions
c00fec90 GPLONLY_mtd_erase_callback
c01022c0 GPLONLY_del_mtd_blktrans_dev
c0101cd0 GPLONLY_add_mtd_blktrans_dev
c01016fc GPLONLY_deregister_mtd_blktrans
c0101fb8 GPLONLY_register_mtd_blktrans
c0102800 map_destroy
c0102928 do_map_probe
c01027dc unregister_mtd_chip_driver
c01027b8 register_mtd_chip_driver
c0103d98 cfi_varsize_frob
c0103ce0 cfi_fixup
c0104044 cfi_read_pri
c0108bc8 mtd_do_chip_probe
c010905c simple_map_init
c023c6bc usb_devfs_handle
c010b7f8 usb_bulk_msg
c010b928 usb_control_msg
c010a088 usb_unlink_urb
c010a024 usb_submit_urb
c0109ffc usb_free_urb
c0109fa0 usb_alloc_urb
c010a124 usb_get_current_frame_number
c010bdf4 usb_get_status
c010bacc usb_set_configuration
c010c1c8 usb_get_configuration
c010bbd4 usb_set_interface
c010ca2c usb_clear_halt
c010bcac usb_set_idle
c010b9dc usb_set_report
c010ba4c usb_get_report
c010bd18 usb_set_protocol
c010bd7c usb_get_protocol
c010bed4 usb_string
c010be64 usb_get_string
c010c434 usb_get_device_descriptor
c010a268 __usb_get_extra_descriptor
c010c080 usb_get_class_descriptor
c010c100 usb_get_descriptor
c010c4a8 usb_set_address
c01098f0 usb_release_bandwidth
c01098a8 usb_claim_bandwidth
c01097c0 usb_check_bandwidth
c01095b8 usb_calc_bus_time
c010b2ec usb_disconnect
c010cd20 usb_connect
c010e570 usb_reset_device
c010c508 usb_new_device
c010a154 usb_root_hub_string
c0109a1c usb_match_id
c0109444 usb_driver_release_interface
c01099f8 usb_interface_claimed
c01099dc usb_driver_claim_interface
c010a7e8 usb_find_interface_driver_for_ifnum
c0109f88 usb_inc_dev_use
c010a508 usb_free_dev
c010c85c usb_alloc_dev
c010b1e8 usb_deregister_bus
c010cba0 usb_register_bus
c010a4fc usb_free_bus
c0109944 usb_alloc_bus
c010aa4c usb_scan_devices
c010c918 usb_deregister
c010c79c usb_register
c010951c usb_epnum_to_ep_desc
c01094c0 usb_ifnum_to_if
c0109468 usb_ifnum_to_ifpos
c0114c4c usb_hcd_giveback_urb
c01145f0 usb_hcd_pci_remove
c01147f4 usb_hcd_pci_probe
c011e3a0 i2c_check_functionality
c011e360 i2c_get_functionality
c011eaf8 i2c_smbus_write_block_data
c011eba8 i2c_smbus_read_block_data
c011ec2c i2c_smbus_process_call
c011ec80 i2c_smbus_write_word_data
c011ecc4 i2c_smbus_read_word_data
c011ed14 i2c_smbus_write_byte_data
c011ed58 i2c_smbus_read_byte_data
c011eda8 i2c_smbus_write_byte
c011edcc i2c_smbus_read_byte
c011ee1c i2c_smbus_write_quick
c011e724 i2c_smbus_xfer
c011ee40 i2c_probe
c011e328 i2c_adapter_id
c011e628 i2c_transfer
c011e25c i2c_control
c011e3dc i2c_master_recv
c011e518 i2c_master_send
c011d83c i2c_check_addr
c011dd74 i2c_release_client
c011dd0c i2c_use_client
c011db84 i2c_get_client
c011db28 i2c_dec_use_client
c011dacc i2c_inc_use_client
c011d99c i2c_detach_client
c011d874 i2c_attach_client
c011f2b8 i2c_del_driver
c011f594 i2c_add_driver
c011f740 i2c_del_adapter
c011f9f4 i2c_add_adapter
c0120730 i2c_mpc5xxx_del_bus
c0120b94 i2c_mpc5xxx_add_bus
c0272548 kheap
c0121434 xnheap_check_block
c01213c4 xnheap_finalize_free_inner
c0120dc8 xnheap_schedule_free
c01214ec xnheap_init
c01213bc xnheap_free
c0121174 xnheap_test_and_free
c0121810 xnheap_extend
c0121d80 xnheap_destroy
c0121014 xnheap_alloc
c0121f90 xnheap_destroy_mapped
c012225c xnheap_init_mapped
c0121a8c __va_to_kva
c012269c xnintr_init
c012287c xnintr_affinity
c0122834 xnintr_enable
c0122858 xnintr_disable
c01226e0 xnintr_detach
c0122760 xnintr_destroy
c0122784 xnintr_attach
c0123984 xnmod_alloc_glinks
c0272618 xnmod_glink_queue
c023c708 nktickdef
c023c714 nkpod
c02725e0 nkclock
c0125200 xnpod_welcome_thread
c01287c4 xnpod_wait_thread_period
c0129824 xnpod_unblock_thread
c0124f54 xnpod_trap_fault
c01283b8 xnpod_suspend_thread
c01266d4 xnpod_reset_timer
c0124eb0 xnpod_stop_timer
c012640c xnpod_start_timer
c0129a80 xnpod_start_thread
c0127bec xnpod_shutdown
c0126384 xnpod_set_time
c01289c0 xnpod_set_thread_periodic
c0125350 xnpod_set_thread_mode
c012a190 xnpod_schedule_runnable
c0126874 xnpod_schedule
c01295c4 xnpod_rotate_readyq
c0129020 xnpod_resume_thread
c01298d0 xnpod_restart_thread
c012981c xnpod_renice_thread
c0125c64 xnpod_remove_hook
c0129e74 xnpod_migrate_thread
c0128bb4 xnpod_init_thread
c0127d84 xnpod_init
c012616c xnpod_get_time
c0125470 xnpod_fatal_helper
c01275d8 xnpod_delete_thread
c012508c xnpod_deactivate_rr
c0125b64 xnpod_check_context
c0124fb8 xnpod_announce_tick
c0125ecc xnpod_add_hook
c0124fec xnpod_activate_rr
c012dcb4 xnsynch_wakeup_this_sleeper
c012dee0 xnsynch_wakeup_one_sleeper
c012c28c xnsynch_sleep_on
c012b7c4 xnsynch_renice_sleeper
c012e128 xnsynch_release_all_ownerships
c012b76c xnsynch_init
c012d3dc xnsynch_forget_sleeper
c012da6c xnsynch_flush
c012e398 xnthread_get_errno_location
c021fdfc nktimer
c012e698 xntimer_get_timeout
c012e8a0 xntimer_get_date
c012e6ec xntimer_freeze
c012e63c xntimer_destroy
c012e730 xntimer_init
c023c71c nkerrptd
c023c720 nkthrptd
c0130bf4 xnshadow_ppd_get
c0131390 xnshadow_suspend
c013118c xnshadow_wait_barrier
c0130948 xnshadow_unregister_interface
c0131370 xnshadow_send_sig
c0132b58 xnshadow_unmap
c01304c8 xnshadow_signal_completion
c01313c0 xnshadow_start
c013233c xnshadow_relax
c0130fd4 xnshadow_harden
c0130844 xnshadow_register_interface
c0131f78 xnshadow_map
c013335c xncore_detach
c013320c xncore_attach
c01333b0 xnpipe_setup
c013347c xnpipe_inquire
c0134fbc xnpipe_recv
c01333c4 xnpipe_mfixup
c013360c xnpipe_send
c0134aec xnpipe_disconnect
c0134648 xnpipe_connect
c0137348 xnregistry_put
c01371c8 xnregistry_fetch
c0137280 xnregistry_get
c0138b54 xnregistry_remove_safe
c0137f80 xnregistry_remove
c0137018 xnregistry_bind
c0137458 xnregistry_enter
c013afe8 rt_task_reply
c013b1b4 rt_task_receive
c013ae10 rt_task_send
c013ad24 rt_task_slice
c013acdc rt_task_self
c013abf0 rt_task_set_mode
c013aad8 rt_task_notify
c013a9f4 rt_task_catch
c013a05c rt_task_remove_hook
c013a03c rt_task_add_hook
c013a8f0 rt_task_inquire
c0139fc0 rt_task_unblock
c013a7d8 rt_task_sleep_until
c013a714 rt_task_sleep
c013a628 rt_task_set_priority
c013a1bc rt_task_wait_period
c013a534 rt_task_set_periodic
c013a14c rt_task_yield
c013a3c8 rt_task_delete
c0139f24 rt_task_resume
c013a2c0 rt_task_suspend
c013a07c rt_task_start
c013b5b4 rt_task_create
c013ba80 rt_timer_set_mode
c013bb0c rt_timer_spin
c013ba50 rt_timer_tsc
c013ba30 rt_timer_read
c013c1e0 rt_timer_inquire
c013c01c rt_timer_tsc2ns
c013bdc4 rt_timer_ns2tsc
c013ba2c rt_timer_ticks2ns
c013ba28 rt_timer_ns2ticks
c013f104 rt_alarm_handler
c01409d4 rt_pipe_free
c0140aa0 rt_pipe_alloc
c0140c08 rt_pipe_flush
c0140ae8 rt_pipe_stream
c0140f04 rt_pipe_write
c0140e5c rt_pipe_read
c01409f8 rt_pipe_send
c0140d7c rt_pipe_receive
c0140c74 rt_pipe_delete
c0140f98 rt_pipe_create
c01412d8 rt_sem_broadcast
c0141374 rt_sem_inquire
c0141234 rt_sem_v
c014162c rt_sem_p
c0141410 rt_sem_delete
c01414e8 rt_sem_create
c01418fc rt_event_inquire
c014186c rt_event_clear
c0141b90 rt_event_wait
c0141d2c rt_event_signal
c0141998 rt_event_delete
c0141a70 rt_event_create
c01420a8 rt_mutex_inquire
c0142498 rt_mutex_release
c014233c rt_mutex_acquire
c0142144 rt_mutex_delete
c014221c rt_mutex_create
c014274c rt_cond_inquire
c0142640 rt_cond_wait
c01425b8 rt_cond_broadcast
c01429d0 rt_cond_signal
c01427e0 rt_cond_delete
c01428b8 rt_cond_create
c0142e58 rt_queue_inquire
c01438fc rt_queue_read
c01435f8 rt_queue_receive
c0143554 rt_queue_write
c014302c rt_queue_send
c0142ca0 rt_queue_free
c0142ba8 rt_queue_alloc
c0142d44 rt_queue_delete
c0143984 rt_queue_create
c0143bec rt_heap_inquire
c014409c rt_heap_free
c0143dac rt_heap_alloc
c0143c90 rt_heap_delete
c01441d4 rt_heap_create
c014467c rt_alarm_inquire
c01445e8 rt_alarm_stop
c0144540 rt_alarm_start
c0144714 rt_alarm_delete
c01447e4 rt_alarm_create
c0144bc0 sched_yield
c0144a7c pthread_setschedparam
c01449e8 pthread_getschedparam
c0144920 sched_rr_get_interval
c0144c30 sched_get_priority_max
c0144bf8 sched_get_priority_min
c014575c pthread_attr_setaffinity_np
c01456d4 pthread_attr_getaffinity_np
c0145650 pthread_attr_setfp_np
c01455c8 pthread_attr_getfp_np
c01454e8 pthread_attr_setname_np
c0145460 pthread_attr_getname_np
c01453bc pthread_attr_setscope
c0145334 pthread_attr_getscope
c0145280 pthread_attr_setschedparam
c01451f8 pthread_attr_getschedparam
c0145158 pthread_attr_setschedpolicy
c01450d0 pthread_attr_getschedpolicy
c014503c pthread_attr_setinheritsched
c0144fc0 pthread_attr_getinheritsched
c0144f14 pthread_attr_setstacksize
c0144e8c pthread_attr_getstacksize
c0144de0 pthread_attr_setdetachstate
c0144d58 pthread_attr_getdetachstate
c0144cb8 pthread_attr_destroy
c0144c88 pthread_attr_init
c01468a0 pthread_wait_np
c0146404 pthread_make_periodic_np
c0145b78 pthread_self
c01465cc pthread_join
c0145ac0 pthread_exit
c01458e8 pthread_equal
c0145820 pthread_detach
c0145bb4 pthread_create
c0146d6c pthread_mutexattr_setpshared
c0146cd4 pthread_mutexattr_getpshared
c0146c0c pthread_mutexattr_setprotocol
c0146b74 pthread_mutexattr_getprotocol
c0146ad0 pthread_mutexattr_settype
c0146a38 pthread_mutexattr_gettype
c01469a8 pthread_mutexattr_destroy
c0146988 pthread_mutexattr_init
c0146f90 pthread_mutex_unlock
c01479dc pthread_mutex_timedlock
c0147abc pthread_mutex_lock
c0146e54 pthread_mutex_trylock
c014734c pthread_mutex_destroy
c014743c pthread_mutex_init
c0147d70 pthread_condattr_setpshared
c0147cd8 pthread_condattr_getpshared
c0147c38 pthread_condattr_setclock
c0147bac pthread_condattr_getclock
c0147b1c pthread_condattr_destroy
c0147afc pthread_condattr_init
c0147ea4 pthread_cond_broadcast
c0147e14 pthread_cond_signal
c0148bec pthread_cond_timedwait
c0148d60 pthread_cond_wait
c0148204 pthread_cond_destroy
c01482f4 pthread_cond_init
c01491fc sem_unlink
c0149ec4 sem_close
c0149bec sem_open
c0148e40 sem_getvalue
c0149444 sem_timedwait
c014a020 sem_wait
c01492b0 sem_trywait
c014935c sem_post
c01496b4 sem_destroy
c0149d44 pse51_sem_init
c014a284 pthread_testcancel
c014a7e8 pthread_setcanceltype
c014a8e8 pthread_setcancelstate
c014a9e8 pthread_cleanup_pop
c014a344 pthread_cleanup_push
c014a1b4 pthread_cancel
c014ac64 pthread_once
c014c8ac sigtimedwait
c014c7e0 sigwaitinfo
c014c854 sigwait
c014c320 sigpending
c014c1e0 pse51_sigaction
c014d4ec pthread_sigqueue_np
c014b43c pthread_sigmask
c014d5cc pthread_kill
c014c170 pse51_sigismember
c014c0e8 pse51_sigdelset
c014c068 pse51_sigaddset
c014ad04 pse51_sigfillset
c014adc0 pse51_sigemptyset
c014d8d0 pthread_setspecific
c014d6ac pthread_getspecific
c014d9e0 pthread_key_delete
c014de04 pthread_key_create
c014f0b0 nanosleep
c014e9fc clock_nanosleep
c014e6ac clock_settime
c014e770 clock_gettime
c014e714 clock_getres
c014f184 timer_getoverrun
c014fefc timer_gettime
c014ffac timer_settime
c014fd6c timer_delete
c015080c timer_create
c0152f6c mq_unlink
c0152df8 mq_close
c0153648 mq_timedreceive
c01537ac mq_receive
c0153ea0 mq_timedsend
c0154004 mq_send
c0153124 mq_setattr
c0153074 mq_getattr
c01528b0 mq_open
c015866c _rtdm_sendmsg
c0158760 _rtdm_recvmsg
c0158854 _rtdm_write
c0158948 _rtdm_read
c0158a3c _rtdm_ioctl
c0158378 _rtdm_close
c01580a4 _rtdm_socket
c0158210 _rtdm_open
c01585b0 rtdm_context_get
c01592f0 rtdm_dev_unregister
c0158c50 rtdm_dev_register
c01596ec rtdm_munmap
c015a284 rtdm_iomap_to_user
c015a2bc rtdm_mmap_to_user
c0159a34 rtdm_mutex_timedlock
c0159ba8 rtdm_mutex_lock
c0159668 rtdm_sem_up
c0159fa0 rtdm_sem_timeddown
c015a110 rtdm_sem_down
c0159614 rtdm_event_clear
c0159e30 rtdm_event_timedwait
c0159f90 rtdm_event_wait
c01595a8 rtdm_event_signal
c015952c _rtdm_synch_flush
c015a2f4 rtdm_task_busy_sleep
c0159d40 rtdm_task_sleep_until
c0159c84 rtdm_task_sleep
c0159970 rtdm_task_join_nrt
c0159bb8 rtdm_task_init
c015d8f0 GPLONLY_rtcan_dev_get_by_index
c015dc64 GPLONLY_rtcan_dev_get_by_name
c015dab0 GPLONLY_rtcan_dev_unregister
c015dd8c GPLONLY_rtcan_dev_register
c015dd04 GPLONLY_rtcan_dev_alloc_name
c015d988 GPLONLY_rtcan_dev_alloc
c015da70 GPLONLY_rtcan_dev_free
c02b3bb4 GPLONLY_rtcan_recv_list_lock
c02b3bb4 GPLONLY_rtcan_socket_lock
c015faac rtcan_rcv
c016a770 rthal_restore_fpu
c016a6c0 rthal_save_fpu
c016a75c rthal_init_fpu
c016a9b0 rthal_thread_trampoline
c016a880 rthal_thread_switch
c016a460 rthal_arch_cleanup
c016a3fc rthal_arch_init
c023c9ac nlm_debug
c023c9b0 nfsd_debug
c023c9b4 nfs_debug
c023c9b8 rpc_debug
c01c8090 xdr_shift_buf
c01c76c0 xdr_inline_pages
c01c7684 xdr_encode_pages
c01c7440 xdr_encode_netobj
c01c7514 xdr_decode_netobj
c01c7658 xdr_decode_string_inplace
c01c75cc xdr_decode_string
c01c758c xdr_encode_string
c01c7544 xdr_encode_array
c01c8220 svc_proc_read
c01c83dc svc_proc_unregister
c01c84a0 svc_proc_register
c01c8094 rpc_proc_read
c01c83b4 rpc_proc_unregister
c01c8588 rpc_proc_register
c01c4ae8 svc_reserve
c01c5b20 svc_makesock
c01c4420 svc_wake_up
c01c50ec svc_recv
c01c384c svc_process
c01c50a0 svc_drop
c01c3e1c svc_destroy
c01c3f2c svc_exit_thread
c01c3fa8 svc_create_thread
c01c3788 svc_create
c01c25dc put_rpccred
c01c24f0 rpcauth_matchcred
c01c2d2c rpcauth_bindcred
c01c2dc4 rpcauth_lookupcred
c01c29b4 rpcauth_insert_credcache
c01c29e4 rpcauth_free_credcache
c01c24b0 rpcauth_init_credcache
c01c23d8 rpcauth_unregister
c01c239c rpcauth_register
c01bd4c4 xprt_set_timeout
c01be6dc xprt_destroy
c01bdf84 xprt_create_proto
c01bc094 rpc_setbufsize
c01bc0e8 rpc_restart_call
c01c07b8 rpc_delay
c01bbf94 rpc_clnt_sigunmask
c01bd258 rpc_clnt_sigmask
c01bc004 rpc_call_setup
c01bd314 rpc_call_async
c01bd3fc rpc_call_sync
c01c1f10 rpc_killall_tasks
c01bbe10 rpc_shutdown_client
c01bbd84 rpc_destroy_client
c01bbbd0 rpc_create_client
c01c0a08 rpc_release_task
c01c1674 rpc_wake_up_status
c01c0d28 rpc_new_task
c01c0304 rpciod_up
c01bfdf8 rpciod_down
c01c10ac rpc_run_child
c01c0f18 rpc_new_child
c01c1878 rpc_wake_up_task
c01c17a4 rpc_wake_up_next
c01c0744 rpc_sleep_on
c01c07dc rpc_init_task
c01c1e6c rpc_execute
c01c025c rpc_free
c01c010c rpc_allocate
c0175688 ethtool_op_set_sg
c017567c ethtool_op_get_sg
c017564c ethtool_op_set_tx_csum
c0175640 ethtool_op_get_tx_csum
c0176ea4 ethtool_op_get_link
c0226de0 softnet_data
c0172408 register_gifconf
c02b5e18 qdisc_tree_lock
c021c678 noop_qdisc
c017cc3c qdisc_create_dflt
c017ceb0 qdisc_restart
c017c6f0 qdisc_reset
c017c908 qdisc_destroy
c021ff10 sysctl_ip_default_ttl
c021fe7c sysctl_rmem_max
c021fe80 sysctl_wmem_max
c021c020 if_port_text
c004e528 __kill_fasync
c0176f0c dev_mc_upload
c0176f80 dev_mc_delete
c01770d4 dev_mc_add
c0173d50 dev_close
c027000c dev_base_lock
c021fdb8 dev_base
c0174b58 dev_queue_xmit
c0173f50 dev_ioctl
c0173ee8 dev_load
c017cbc0 __netdev_watchdog_up
c017298c dev_alloc_name
c0172a74 dev_alloc
c01728b8 dev_get
c01725dc dev_remove_pack
c017252c dev_add_pack
c0175250 netif_receive_skb
c0174ea0 netif_rx
c01704f0 skb_copy
c016ea08 skb_clone
c016f4c0 __kfree_skb
c0170084 alloc_skb
c017c280 eth_type_trans
c0174594 netdev_set_master
c0172430 netdev_finish_unregister
c0172854 __dev_get_by_name
c01733dc dev_get_by_name
c01728e4 __dev_get_by_index
c0173418 dev_get_by_index
c0172934 __dev_get_by_flags
c0173454 dev_get_by_flags
c01732c8 dev_new_index
c0172afc netdev_state_change
c01746f8 unregister_netdevice
c0173a1c register_netdevice
c021602c loopback_dev
c0172b80 unregister_netdevice_notifier
c0172b54 register_netdevice_notifier
c01a82f8 arp_find
c021ce4c arp_tbl
c01a8e18 arp_rcv
c01846a8 ip_rcv
c01a8fe8 xrlim_allow
c0173c3c dev_open
c02b6d60 ipv4_config
c016aa00 move_addr_to_user
c016b02c move_addr_to_kernel
c017b1e8 rtnl_unlock
c017b26c rtnl_lock
c021c4c0 rtnl_sem
c016e4d4 sklist_remove_socket
c0173078 dev_set_promiscuity
c0173110 dev_set_allmulti
c01784fc neigh_dump_info
c017a3dc neigh_add
c017adb0 neigh_delete
c023c878 rtnl
c017be04 rtnetlink_put_metrics
c017bd70 rtnetlink_dump_ifinfo
c017b744 __rta_fill
c02b5d40 rtnetlink_links
c017af98 rtattr_parse
c017d628 netlink_unregister_notifier
c017d5fc netlink_register_notifier
c017d4c4 netlink_set_nonroot
c017e554 netlink_ack
c017f070 netlink_dump_start
c017de20 netlink_kernel_create
c017e1f8 netlink_unicast
c017e73c netlink_broadcast
c017d420 netlink_set_err
c018cdb0 tcp_read_sock
c02b6020 ip_statistics
c01a9fe4 unregister_inetaddr_notifier
c01a9fb8 register_inetaddr_notifier
c01ab7cc devinet_ioctl
c01b294c ip_rt_ioctl
c0184e1c ip_defrag
c01aa7e8 in_dev_finish_destroy
c01aa79c inetdev_by_index
c01b2f50 ip_dev_find
c01aa4c0 inet_select_addr
c01b3030 inet_addr_type
c0188d50 ip_cmsg_recv
c021d858 inet_dgram_ops
c021d89c inet_stream_ops
c0187608 ip_finish_output
c01afba0 ip_mc_join_group
c01b1d54 ip_mc_dec_group
c01af8d0 ip_mc_inc_group
c017f1bc in_aton
c021d848 inet_family_ops
c01879e4 ip_fragment
c018701c ip_send_check
c01807e4 __ip_select_ident
c021cffc arp_broken_ops
c01a8588 arp_send
c018649c ip_options_undo
c0186770 ip_options_compile
c021d090 icmp_err_convert
c02b67e0 icmp_statistics
c01a9c08 icmp_send
c0182c30 ip_route_input
c0181dd0 ip_route_output_key
c01ac584 inet_unregister_protosw
c01ac620 inet_register_protosw
c01841bc inet_del_protocol
c01840f4 inet_add_protocol
c02b6ac0 inetdev_lock
c01721c4 scm_detach_fds
c016ce94 sklist_insert_socket
c016e5bc sklist_destroy_socket
c0170874 memcpy_toiovec
c02105f8 files_stat
c0171cd0 scm_fp_dup
c0171eec __scm_send
c0171c5c __scm_destroy
c017bfb0 net_srandom
c017bf84 net_random
c017bfc4 net_ratelimit
c01777d0 dst_destroy
c0177544 __dst_free
c01776b0 dst_alloc
c017aaf8 neigh_changeaddr
c0178930 neigh_compat_output
c0177bac neigh_rand_reach_time
c01781f0 neigh_parms_release
c0177f3c neigh_parms_alloc
c0179564 neigh_destroy
c017927c pneigh_enqueue
c0177bec pneigh_lookup
c017803c neigh_sysctl_register
c017a780 neigh_ifdown
c017a6b0 neigh_event_ns
c0178b2c __neigh_event_send
c01782fc neigh_lookup
c0179a90 neigh_create
c0179f44 neigh_update
c0179148 neigh_connected_output
c0178e40 neigh_resolve_output
c017aa40 neigh_table_clear
c01783f8 neigh_table_init
c016b5e4 sockfd_lookup
c016bfa4 sock_map_fd
c016cd80 sock_kfree_s
c016d2c8 sock_kmalloc
c0171d64 put_cmsg
c01710b0 datagram_poll
c01707c4 skb_realloc_headroom
c017062c pskb_copy
c016f6fc pskb_expand_head
c016fc98 __pskb_pull_tail
c016f87c ___pskb_trim
c01702b0 skb_copy_expand
c016eeac skb_copy_and_csum_dev
c016ebd4 skb_copy_and_csum_bits
c016ef98 skb_copy_bits
c0171b28 skb_copy_and_csum_datagram_iovec
c0171428 skb_copy_datagram_iovec
c01716a8 skb_copy_datagram
c0171064 skb_free_datagram
c01711d8 skb_recv_datagram
c0172ce4 skb_checksum_help
c016f214 skb_checksum
c016fa6c skb_linearize
c016d234 sock_rmalloc
c016d508 sock_wmalloc
c016e440 sock_wfree
c016d214 sock_rfree
c016d45c sock_no_sendpage
c016cff0 sock_no_mmap
c016cfe8 sock_no_recvmsg
c016cfe0 sock_no_sendmsg
c016cf68 sock_no_setsockopt
c016cf70 sock_no_getsockopt
c016cf60 sock_no_shutdown
c016cf58 sock_no_listen
c016cf50 sock_no_ioctl
c016cf48 sock_no_poll
c016cf40 sock_no_getname
c016cf38 sock_no_accept
c016cf30 sock_no_socketpair
c016cf28 sock_no_connect
c016cf20 sock_no_bind
c016cf18 sock_no_release
c016d354 sock_init_data
c016e094 sock_alloc_send_pskb
c016e430 sock_alloc_send_skb
c016ad98 sock_wake_async
c016cc28 sk_free
c016ccf8 sk_alloc
c016ac90 sock_recvmsg
c016c440 sock_sendmsg
c016dc48 sock_getsockopt
c016d5b0 sock_setsockopt
c016ac08 sock_release
c016b89c sock_alloc
c016b944 sock_create
c0170984 memcpy_tokerneliovec
c0170b30 memcpy_fromiovec
c016cc90 __release_sock
c016cdc8 __lock_sock
c016af50 sock_unregister
c016aec0 sock_register
c01703b8 skb_pad
c016e73c skb_under_panic
c016e7dc skb_over_panic
c01ca75c get_options
c01ca6c4 get_option
c01ca7f4 memparse
c01cab04 rb_erase
c01ca9dc rb_insert_color
c01caeb8 rwsem_wake
c01cb1c8 rwsem_down_write_failed
c01caffc rwsem_down_read_failed
c01cddac zlib_inflateIncomp
c01cdd5c zlib_inflateSyncPoint
c01cd420 zlib_inflateReset
c01cdc1c zlib_inflateSync
c01cd4a8 zlib_inflateEnd
c01cd514 zlib_inflateInit2_
c01cd618 zlib_inflateInit_
c01cd628 zlib_inflate
c01cd414 zlib_inflate_workspacesize
c01cecd0 zlib_deflateParams
c01cf144 zlib_deflateCopy
c01cedc4 zlib_deflateReset
c01cf0e0 zlib_deflateEnd
c01cef14 zlib_deflateInit2_
c01cf0c4 zlib_deflateInit_
c01ce9a0 zlib_deflate
c01d0198 zlib_deflate_workspacesize

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Xenomai-help] exception 768
  2006-11-13 12:13 [Xenomai-help] exception 768 Daniel Schnell
@ 2006-11-17 17:32 ` Philippe Gerum
  2006-11-17 18:02   ` Daniel Schnell
  2006-11-17 18:41   ` [Xenomai-core] XENO_OPT_DEBUG impact (was: exception 768) Jan Kiszka
  0 siblings, 2 replies; 15+ messages in thread
From: Philippe Gerum @ 2006-11-17 17:32 UTC (permalink / raw
  To: Daniel Schnell; +Cc: xenomai

On Mon, 2006-11-13 at 12:13 +0000, Daniel Schnell wrote:
> Hi,
>  
> I updated to the newest version in svn and applied Xenomai to my Linux
> kernel.
>  
> The first time I ran the application it was running much longer and I
> almost thought everything works smooth now. But after some while it did
> not react any more. No apparant crash message or Oops on the console.
> Instead some strange exception messages in /var/log/messages.
>  
> Please find attached the messages and the kernel symbols. It seems that
> a kernel exception happens inside __copy_tofrom_user().
> 
> Our application linker map was ~roughly 1 mb, so I did not attach it
> here. Some of the addresses I could not resolve to anything inside the
> kernel or our application, so probably some other shared objects like
> glibc are involved here.
> 
> 
> Any ideas ?

Disable CONFIG_XENO_OPT_DEBUG in your kernel configuration; it is the
source of the PTE misses you are now seeing on ppc. This is not to say
that those errors are normal, and the issue still remains to be fixed,
but unless you want to debug the Xenomai nucleus you don't need this
option enabled (additionally, it adds a large overhead which translates
into significantly increased jitter).
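
For reference, here is the kind of construct such a compile-time switch
typically guards (a minimal sketch with hypothetical names, not the
actual nucleus code): with the option off the checks compile away, with
it on every call site pays for them.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical macro, not the actual nucleus code. */
#ifdef CONFIG_XENO_OPT_DEBUG
#define XENO_ASSERT(cond)                                        \
    do {                                                         \
        if (!(cond)) {                                           \
            fprintf(stderr, "assertion '%s' failed at %s:%d\n",  \
                    #cond, __FILE__, __LINE__);                  \
            abort();                                             \
        }                                                        \
    } while (0)
#else
/* Switch off: the condition is only type-checked, no code is emitted. */
#define XENO_ASSERT(cond)  do { (void)(cond); } while (0)
#endif

int main(void)
{
    int nesting = 0;

    XENO_ASSERT(nesting == 0);  /* free when CONFIG_XENO_OPT_DEBUG is off */
    return 0;
}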

-- 
Philippe.




^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [Xenomai-help] exception 768
  2006-11-17 17:32 ` Philippe Gerum
@ 2006-11-17 18:02   ` Daniel Schnell
  2006-11-17 18:41   ` [Xenomai-core] XENO_OPT_DEBUG impact (was: exception 768) Jan Kiszka
  1 sibling, 0 replies; 15+ messages in thread
From: Daniel Schnell @ 2006-11-17 18:02 UTC (permalink / raw
  To: rpm; +Cc: xenomai

Hi,


Philippe Gerum wrote:
> 
> Disable CONFIG_XENO_OPT_DEBUG from your kernel configuration, this is
> the source of the PTE misses you are seeing now on ppc. This is not
> to say that those errors are normal, and this issue still remains to
> be fixed, but unless you want to debug the Xenomai nucleus, you don't
> need this option on (additionally, it adds a large overhead which
> translates in significantly augmented jitter).     

I can confirm that. As soon as we access a file (so far over NFS),
these messages appear if this option is turned on; if the option is
disabled, they go away.

On Monday we will test whether it also fixes the other strange
behaviours we have encountered so far.

Thanks!


Regards,

Daniel Schnell.



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] XENO_OPT_DEBUG impact (was: exception 768)
  2006-11-17 17:32 ` Philippe Gerum
  2006-11-17 18:02   ` Daniel Schnell
@ 2006-11-17 18:41   ` Jan Kiszka
  2006-11-17 19:05     ` [Xenomai-core] " Philippe Gerum
  2006-11-20  9:07     ` Gilles Chanteperdrix
  1 sibling, 2 replies; 15+ messages in thread
From: Jan Kiszka @ 2006-11-17 18:41 UTC (permalink / raw
  To: rpm, Gilles Chanteperdrix; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 932 bytes --]

Philippe Gerum wrote:
> ...
> Disable CONFIG_XENO_OPT_DEBUG from your kernel configuration, this is
> the source of the PTE misses you are seeing now on ppc. This is not to
> say that those errors are normal, and this issue still remains to be
> fixed, but unless you want to debug the Xenomai nucleus, you don't need
> this option on (additionally, it adds a large overhead which translates
> in significantly augmented jitter).

CONFIG_XENO_OPT_DEBUG should not add large jitter - that's what e.g.
CONFIG_XENO_OPT_DEBUG_QUEUES is now for.

I'm currently seeing two potential "misuses" of the common switch:

 - the posix skin (Gilles, how heavyweight are those checks?)
   => CONFIG_XENO_OPT_DEBUG_POSIX

 - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK

Both should be explicitly controllable in Kconfig.
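
For illustration, decoupled switches would simply gate their checks
independently, along these lines (hypothetical macro names, not the
actual nucleus code):

#include <stdio.h>

/* Hypothetical per-subsystem gates, not the actual Xenomai macros;
 * the point is only that each class of check compiles in or out on
 * its own. */
#ifdef CONFIG_XENO_OPT_DEBUG_POSIX
#define PSE51_DEBUG(msg)    printf("posix skin: %s\n", (msg))
#else
#define PSE51_DEBUG(msg)    do { } while (0)
#endif

#ifdef CONFIG_XENO_OPT_DEBUG_SPINLOCK
#define XNLOCK_TRACK(what)  printf("nklock: %s\n", (what))
#else
#define XNLOCK_TRACK(what)  do { } while (0)
#endif

int main(void)
{
    PSE51_DEBUG("invalid mutex magic");  /* costs nothing unless _POSIX is on */
    XNLOCK_TRACK("acquire");             /* costs nothing unless _SPINLOCK is on */
    return 0;
}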

Gilles, is CONFIG_XENO_OPT_DEBUG_BHEAP used in any way? Doesn't seem so.

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact (was: exception 768)
  2006-11-17 18:41   ` [Xenomai-core] XENO_OPT_DEBUG impact (was: exception 768) Jan Kiszka
@ 2006-11-17 19:05     ` Philippe Gerum
  2006-11-17 19:10       ` [Xenomai-core] Re: XENO_OPT_DEBUG impact Jan Kiszka
  2006-11-20  9:07     ` Gilles Chanteperdrix
  1 sibling, 1 reply; 15+ messages in thread
From: Philippe Gerum @ 2006-11-17 19:05 UTC (permalink / raw
  To: Jan Kiszka; +Cc: xenomai-core

On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > ...
> > Disable CONFIG_XENO_OPT_DEBUG from your kernel configuration, this is
> > the source of the PTE misses you are seeing now on ppc. This is not to
> > say that those errors are normal, and this issue still remains to be
> > fixed, but unless you want to debug the Xenomai nucleus, you don't need
> > this option on (additionally, it adds a large overhead which translates
> > in significantly augmented jitter).
> 
> CONFIG_XENO_OPT_DEBUG should not add large jitters - that's what e.g.

CONFIG_SMP + CONFIG_XENO_OPT_DEBUG instruments the nucleus lock macros
with consistency and latency tracking code. This does not come for free,
even if it costs an order of magnitude less than checking the queues.
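
Roughly, the instrumentation amounts to something like this (a sketch
with made-up names and an arbitrary threshold, not the real nucleus
macros):

#include <stdio.h>
#include <time.h>

/* Sketch only: a lock wrapper that records who took the lock and how
 * long it was held, and complains past a threshold. */
struct dbg_lock {
    volatile int locked;
    struct timespec since;      /* when the lock was taken */
    const char *file;           /* consistency info: owning call site */
    int line;
};

static void dbg_lock_get(struct dbg_lock *l, const char *file, int line)
{
    while (__sync_lock_test_and_set(&l->locked, 1))
        ;                       /* spin until the lock is free */
    clock_gettime(CLOCK_MONOTONIC, &l->since);
    l->file = file;
    l->line = line;
}

static void dbg_lock_put(struct dbg_lock *l)
{
    struct timespec now;
    long held_ns;

    clock_gettime(CLOCK_MONOTONIC, &now);
    held_ns = (now.tv_sec - l->since.tv_sec) * 1000000000L
            + (now.tv_nsec - l->since.tv_nsec);
    if (held_ns > 100000)       /* arbitrary 100 us watchdog threshold */
        fprintf(stderr, "lock held for %ld ns, taken at %s:%d\n",
                held_ns, l->file, l->line);
    __sync_lock_release(&l->locked);
}

int main(void)
{
    struct dbg_lock lock = { 0 };

    dbg_lock_get(&lock, __FILE__, __LINE__);
    dbg_lock_put(&lock);        /* every acquire/release pays for the stamping */
    return 0;
}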

> CONFIG_XENO_OPT_DEBUG_QUEUES is now for.
> 
> I'm currently seeing two potential "misuses" of the common switch:
> 
>  - the posix skin (Gilles, how heavy-weighted are those checks?)
>    => CONFIG_XENO_OPT_DEBUG_POSIX
> 
>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
> 
> Both should be explicitly controllable in Kconfig.
> 

Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issues Gilles and
I tracked regarding the domain migration code had side-effects on the
nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
internal state weirdnesses - like those triggered by migration bugs -
implies enabling the spinlock watchdogs too.

> Gilles, is CONFIG_XENO_OPT_DEBUG_BHEAP used in any way? Doesn't seem so.
> 
> Jan
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-17 19:05     ` [Xenomai-core] " Philippe Gerum
@ 2006-11-17 19:10       ` Jan Kiszka
  2006-11-17 21:58         ` Philippe Gerum
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2006-11-17 19:10 UTC (permalink / raw
  To: rpm; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 976 bytes --]

Philippe Gerum wrote:
> On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
>> I'm currently seeing two potential "misuses" of the common switch:
>>
>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
>>    => CONFIG_XENO_OPT_DEBUG_POSIX
>>
>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
>>
>> Both should be explicitly controllable in Kconfig.
>>
> 
> Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
> with Gilles regarding the domain migration code had side-effects on the
> nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
> internal state weirdnesses - like those triggered by migration bugs -
> implies enabling the spinlock watchdogs too.

Ok, if it only makes sense to have both enabled at the same time, then
let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
required.

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-17 19:10       ` [Xenomai-core] Re: XENO_OPT_DEBUG impact Jan Kiszka
@ 2006-11-17 21:58         ` Philippe Gerum
  2006-11-20  9:20           ` Jan Kiszka
  0 siblings, 1 reply; 15+ messages in thread
From: Philippe Gerum @ 2006-11-17 21:58 UTC (permalink / raw
  To: Jan Kiszka; +Cc: xenomai-core

On Fri, 2006-11-17 at 20:10 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
> >> I'm currently seeing two potential "misuses" of the common switch:
> >>
> >>  - the posix skin (Gilles, how heavy-weighted are those checks?)
> >>    => CONFIG_XENO_OPT_DEBUG_POSIX
> >>
> >>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
> >>
> >> Both should be explicitly controllable in Kconfig.
> >>
> > 
> > Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
> > with Gilles regarding the domain migration code had side-effects on the
> > nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
> > internal state weirdnesses - like those triggered by migration bugs -
> > implies enabling the spinlock watchdogs too.
> 
> Ok, if it only makes sense to have both enabled at the same time, then
> let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
> shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
> required.

No objection.

> 
> Jan
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-17 18:41   ` [Xenomai-core] XENO_OPT_DEBUG impact (was: exception 768) Jan Kiszka
  2006-11-17 19:05     ` [Xenomai-core] " Philippe Gerum
@ 2006-11-20  9:07     ` Gilles Chanteperdrix
  2006-11-20  9:14       ` Jan Kiszka
  1 sibling, 1 reply; 15+ messages in thread
From: Gilles Chanteperdrix @ 2006-11-20  9:07 UTC (permalink / raw
  To: Jan Kiszka; +Cc: xenomai-core

Jan Kiszka wrote:
> Philippe Gerum wrote:
> 
>>...
>>Disable CONFIG_XENO_OPT_DEBUG from your kernel configuration, this is
>>the source of the PTE misses you are seeing now on ppc. This is not to
>>say that those errors are normal, and this issue still remains to be
>>fixed, but unless you want to debug the Xenomai nucleus, you don't need
>>this option on (additionally, it adds a large overhead which translates
>>in significantly augmented jitter).
> 
> 
> CONFIG_XENO_OPT_DEBUG should not add large jitters - that's what e.g.
> CONFIG_XENO_OPT_DEBUG_QUEUES is now for.
> 
> I'm currently seeing two potential "misuses" of the common switch:
> 
>  - the posix skin (Gilles, how heavy-weighted are those checks?)
>    => CONFIG_XENO_OPT_DEBUG_POSIX

The posix skin printks are issued by the root thread without holding the
nklock, so they should have no impact on the maximum latency.

> 
>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
> 
> Both should be explicitly controllable in Kconfig.
> 
> Gilles, is CONFIG_XENO_OPT_DEBUG_BHEAP used in any way? Doesn't seem so.

What about using CONFIG_XENO_OPT_DEBUG_QUEUE instead?

-- 
                                                 Gilles Chanteperdrix


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-20  9:07     ` Gilles Chanteperdrix
@ 2006-11-20  9:14       ` Jan Kiszka
  0 siblings, 0 replies; 15+ messages in thread
From: Jan Kiszka @ 2006-11-20  9:14 UTC (permalink / raw
  To: Gilles Chanteperdrix; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 1530 bytes --]

Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>
>>> ...
>>> Disable CONFIG_XENO_OPT_DEBUG from your kernel configuration, this is
>>> the source of the PTE misses you are seeing now on ppc. This is not to
>>> say that those errors are normal, and this issue still remains to be
>>> fixed, but unless you want to debug the Xenomai nucleus, you don't need
>>> this option on (additionally, it adds a large overhead which translates
>>> in significantly augmented jitter).
>>
>> CONFIG_XENO_OPT_DEBUG should not add large jitters - that's what e.g.
>> CONFIG_XENO_OPT_DEBUG_QUEUES is now for.
>>
>> I'm currently seeing two potential "misuses" of the common switch:
>>
>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
>>    => CONFIG_XENO_OPT_DEBUG_POSIX
> 
> The posix skin printks are issued by the root thread without holding the
> nklock, so they should have no impact on the maximum latency.

Still, I think that a separate switch (like we also have for RTDM) is
cleaner. You could then add more checks whenever you feel they could be
useful.

> 
>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
>>
>> Both should be explicitly controllable in Kconfig.
>>
>> Gilles, is CONFIG_XENO_OPT_DEBUG_BHEAP used in any way? Doesn't seem so.
> 
> What about using CONFIG_XENO_OPT_DEBUG_QUEUE instead ?
> 

Either this way, or sort it into the upcoming XENO_OPT_DEBUG_NUCLEUS.
Hmm, the queues option may actually be the better fit.

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-17 21:58         ` Philippe Gerum
@ 2006-11-20  9:20           ` Jan Kiszka
  2006-11-20  9:38             ` Philippe Gerum
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2006-11-20  9:20 UTC (permalink / raw
  To: rpm; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 1522 bytes --]

Philippe Gerum wrote:
> On Fri, 2006-11-17 at 20:10 +0100, Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
>>>> I'm currently seeing two potential "misuses" of the common switch:
>>>>
>>>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
>>>>    => CONFIG_XENO_OPT_DEBUG_POSIX
>>>>
>>>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
>>>>
>>>> Both should be explicitly controllable in Kconfig.
>>>>
>>> Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
>>> with Gilles regarding the domain migration code had side-effects on the
>>> nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
>>> internal state weirdnesses - like those triggered by migration bugs -
>>> implies enabling the spinlock watchdogs too.
>> Ok, if it only makes sense to have both enabled at the same time, then
>> let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
>> shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
>> required.
> 
> No objection.
> 

Looking at the spinlock debugging code: it serves two inseparable
purposes, a watchdog for stuck locks plus lock statistics. The latter
makes this feature pop up when XENO_OPT_STATS is set on an SMP box - a
rather surprising effect. Do we still need the stats? If not, I would
kick them out in favour of using the latency tracer for such analysis,
making spinlock debugging a purely debug-oriented feature.
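
The coupling boils down to a conditional of roughly this shape (a
reconstruction for illustration only, not the literal source):

#include <stdio.h>

/* Reconstruction for illustration; the real condition lives in the
 * nucleus headers and may differ. */
#if defined(CONFIG_SMP) && \
    (defined(CONFIG_XENO_OPT_DEBUG) || defined(CONFIG_XENO_OPT_STATS))
#define XENO_SPINLOCK_INSTRUMENTED 1   /* watchdog + per-lock statistics */
#else
#define XENO_SPINLOCK_INSTRUMENTED 0
#endif

int main(void)
{
    printf("spinlock instrumentation: %s\n",
           XENO_SPINLOCK_INSTRUMENTED ? "enabled" : "disabled");
    return 0;
}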

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-20  9:20           ` Jan Kiszka
@ 2006-11-20  9:38             ` Philippe Gerum
  2006-11-20 10:01               ` Jan Kiszka
  0 siblings, 1 reply; 15+ messages in thread
From: Philippe Gerum @ 2006-11-20  9:38 UTC (permalink / raw
  To: Jan Kiszka; +Cc: xenomai-core

On Mon, 2006-11-20 at 10:20 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Fri, 2006-11-17 at 20:10 +0100, Jan Kiszka wrote:
> >> Philippe Gerum wrote:
> >>> On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
> >>>> I'm currently seeing two potential "misuses" of the common switch:
> >>>>
> >>>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
> >>>>    => CONFIG_XENO_OPT_DEBUG_POSIX
> >>>>
> >>>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
> >>>>
> >>>> Both should be explicitly controllable in Kconfig.
> >>>>
> >>> Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
> >>> with Gilles regarding the domain migration code had side-effects on the
> >>> nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
> >>> internal state weirdnesses - like those triggered by migration bugs -
> >>> implies enabling the spinlock watchdogs too.
> >> Ok, if it only makes sense to have both enabled at the same time, then
> >> let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
> >> shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
> >> required.
> > 
> > No objection.
> > 
> 
> Looking at the spinlock debugging code: it serves two inseparable
> purposes, a watchdog for stuck locks + lock statistics. The latter make
> this feature pop up when XENO_OPT_STATS are set on a SMP box - rather
> surprising effect. Do we still need the stats? If not, I would kick them
> out in favour of using the latency tracer for such analysis, making
> spinlock debugging a real pure debug feature.
> 

The spinlock stats are about uncovering a problem; the latency tracer
is about finding where the problem lies. The two are orthogonal.

> Jan
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-20  9:38             ` Philippe Gerum
@ 2006-11-20 10:01               ` Jan Kiszka
  2006-11-20 10:46                 ` Philippe Gerum
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2006-11-20 10:01 UTC (permalink / raw
  To: rpm; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 2544 bytes --]

Philippe Gerum wrote:
> On Mon, 2006-11-20 at 10:20 +0100, Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> On Fri, 2006-11-17 at 20:10 +0100, Jan Kiszka wrote:
>>>> Philippe Gerum wrote:
>>>>> On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
>>>>>> I'm currently seeing two potential "misuses" of the common switch:
>>>>>>
>>>>>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
>>>>>>    => CONFIG_XENO_OPT_DEBUG_POSIX
>>>>>>
>>>>>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
>>>>>>
>>>>>> Both should be explicitly controllable in Kconfig.
>>>>>>
>>>>> Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
>>>>> with Gilles regarding the domain migration code had side-effects on the
>>>>> nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
>>>>> internal state weirdnesses - like those triggered by migration bugs -
>>>>> implies enabling the spinlock watchdogs too.
>>>> Ok, if it only makes sense to have both enabled at the same time, then
>>>> let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
>>>> shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
>>>> required.
>>> No objection.
>>>
>> Looking at the spinlock debugging code: it serves two inseparable
>> purposes, a watchdog for stuck locks + lock statistics. The latter make
>> this feature pop up when XENO_OPT_STATS are set on a SMP box - rather
>> surprising effect. Do we still need the stats? If not, I would kick them
>> out in favour of using the latency tracer for such analysis, making
>> spinlock debugging a real pure debug feature.
>>
> 
> The spinlock stats are about uncovering a problem, the latency tracer is
> about finding where the problem lies. Both are orthogonal.

Not fully true: the tracer provides the same information when you enable
CONFIG_IPIPE_TRACE_IRQSOFF. When you disable CONFIG_IPIPE_TRACE_MCOUNT,
you even get this at comparable (if not lower) cost. I once played with
the spinlock debug code before I decided to invest time into the tracer.
I think I even posted a patch to enable that code on UP. But I didn't
find the spinlock stats useful enough, even for the "lock length
analysis" scenario.

We now basically have two ways to get the same information (or please
explain what is missing with the tracer). Besides the redundancy, there
is the problem that one of these ways comes in via two different,
orthogonal paths (STATS+SMP || DEBUG). That's not very consistent IMHO.

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-20 10:01               ` Jan Kiszka
@ 2006-11-20 10:46                 ` Philippe Gerum
  2006-11-20 11:39                   ` Jan Kiszka
  0 siblings, 1 reply; 15+ messages in thread
From: Philippe Gerum @ 2006-11-20 10:46 UTC (permalink / raw
  To: Jan Kiszka; +Cc: xenomai-core

On Mon, 2006-11-20 at 11:01 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Mon, 2006-11-20 at 10:20 +0100, Jan Kiszka wrote:
> >> Philippe Gerum wrote:
> >>> On Fri, 2006-11-17 at 20:10 +0100, Jan Kiszka wrote:
> >>>> Philippe Gerum wrote:
> >>>>> On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
> >>>>>> I'm currently seeing two potential "misuses" of the common switch:
> >>>>>>
> >>>>>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
> >>>>>>    => CONFIG_XENO_OPT_DEBUG_POSIX
> >>>>>>
> >>>>>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
> >>>>>>
> >>>>>> Both should be explicitly controllable in Kconfig.
> >>>>>>
> >>>>> Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
> >>>>> with Gilles regarding the domain migration code had side-effects on the
> >>>>> nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
> >>>>> internal state weirdnesses - like those triggered by migration bugs -
> >>>>> implies enabling the spinlock watchdogs too.
> >>>> Ok, if it only makes sense to have both enabled at the same time, then
> >>>> let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
> >>>> shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
> >>>> required.
> >>> No objection.
> >>>
> >> Looking at the spinlock debugging code: it serves two inseparable
> >> purposes, a watchdog for stuck locks + lock statistics. The latter make
> >> this feature pop up when XENO_OPT_STATS are set on a SMP box - rather
> >> surprising effect. Do we still need the stats? If not, I would kick them
> >> out in favour of using the latency tracer for such analysis, making
> >> spinlock debugging a real pure debug feature.
> >>
> > 
> > The spinlock stats are about uncovering a problem, the latency tracer is
> > about finding where the problem lies. Both are orthogonal.
> 
> Not fully true: the tracer provides the same information when you enable
> CONFIG_IPIPE_TRACE_IRQSOFF. When you disable CONFIG_IPIPE_TRACE_MCOUNT,
> you even get this at comparable (if not lower) costs. I once played with
> the spinlock debug code before decided to invest time into the tracer. I
> think I even posted a patch to enable that code on UP. But I didn't find
> the spinlock stats useful enough, even for the scenario "lock length
> analysis".
> 
> We basically have now two ways to get the same information (or please
> explain what is missing with the tracer). Besides the redundancy, there
> is the problem that one of this way comes in via two different,
> orthogonal paths (STATS+SMP || DEBUG). That's not very consistent IMHO.
> 

Nothing is missing in the tracer. The point is that you don't
immediately know that you are having a spinlock issue which would make
you build the tracer support, and having those stats is a cheap,
lightweight way to detect such a problem. Running with the tracer
enabled usually means that you are chasing an issue you have already
detected.

> Jan
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-20 10:46                 ` Philippe Gerum
@ 2006-11-20 11:39                   ` Jan Kiszka
  2006-11-20 13:22                     ` Philippe Gerum
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Kiszka @ 2006-11-20 11:39 UTC (permalink / raw
  To: rpm; +Cc: xenomai-core

[-- Attachment #1: Type: text/plain, Size: 3836 bytes --]

Philippe Gerum wrote:
> On Mon, 2006-11-20 at 11:01 +0100, Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> On Mon, 2006-11-20 at 10:20 +0100, Jan Kiszka wrote:
>>>> Philippe Gerum wrote:
>>>>> On Fri, 2006-11-17 at 20:10 +0100, Jan Kiszka wrote:
>>>>>> Philippe Gerum wrote:
>>>>>>> On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
>>>>>>>> I'm currently seeing two potential "misuses" of the common switch:
>>>>>>>>
>>>>>>>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
>>>>>>>>    => CONFIG_XENO_OPT_DEBUG_POSIX
>>>>>>>>
>>>>>>>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
>>>>>>>>
>>>>>>>> Both should be explicitly controllable in Kconfig.
>>>>>>>>
>>>>>>> Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
>>>>>>> with Gilles regarding the domain migration code had side-effects on the
>>>>>>> nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
>>>>>>> internal state weirdnesses - like those triggered by migration bugs -
>>>>>>> implies enabling the spinlock watchdogs too.
>>>>>> Ok, if it only makes sense to have both enabled at the same time, then
>>>>>> let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
>>>>>> shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
>>>>>> required.
>>>>> No objection.
>>>>>
>>>> Looking at the spinlock debugging code: it serves two inseparable
>>>> purposes, a watchdog for stuck locks + lock statistics. The latter make
>>>> this feature pop up when XENO_OPT_STATS are set on a SMP box - rather
>>>> surprising effect. Do we still need the stats? If not, I would kick them
>>>> out in favour of using the latency tracer for such analysis, making
>>>> spinlock debugging a real pure debug feature.
>>>>
>>> The spinlock stats are about uncovering a problem, the latency tracer is
>>> about finding where the problem lies. Both are orthogonal.
>> Not fully true: the tracer provides the same information when you enable
>> CONFIG_IPIPE_TRACE_IRQSOFF. When you disable CONFIG_IPIPE_TRACE_MCOUNT,
>> you even get this at comparable (if not lower) costs. I once played with
>> the spinlock debug code before decided to invest time into the tracer. I
>> think I even posted a patch to enable that code on UP. But I didn't find
>> the spinlock stats useful enough, even for the scenario "lock length
>> analysis".
>>
>> We basically have now two ways to get the same information (or please
>> explain what is missing with the tracer). Besides the redundancy, there
>> is the problem that one of this way comes in via two different,
>> orthogonal paths (STATS+SMP || DEBUG). That's not very consistent IMHO.
>>
> 
> Nothing is missing in the tracer. The point is that you don't
> immediately know that you are having a spinlock issue which would make
> you build the tracer support, and having those stats is a cheap way to
> detect such problem in a lightweight manner. 

If it were cheap, we wouldn't discuss it here. Actually, due to its
inline nature, this instrumentation is fairly costly. That's ok, as long
as you can explicitly ask for such a feature.

But now we have the situation that the (default y!) XENO_OPT_STAT
feature on UP is far more costly than on SMP. You know that the stats
are very useful already without any spinlock instrumentation, i.e. for
analysing the RT-system load. My feeling is that, for SMP, we currently
have a huge config mess here. And this is what I'm trying to address,
/maybe/ also by removing redundant instrumentation means.

> Running with the tracer
> enabled usually means that you are chasing an issue you have already
> detected.

Again, tracer != mcount. It can be used just like the spinlock stats:
to *detect* long locking periods. Have a look.

Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 250 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Xenomai-core] Re: XENO_OPT_DEBUG impact
  2006-11-20 11:39                   ` Jan Kiszka
@ 2006-11-20 13:22                     ` Philippe Gerum
  0 siblings, 0 replies; 15+ messages in thread
From: Philippe Gerum @ 2006-11-20 13:22 UTC (permalink / raw
  To: Jan Kiszka; +Cc: xenomai-core

On Mon, 2006-11-20 at 12:39 +0100, Jan Kiszka wrote:
> Philippe Gerum wrote:
> > On Mon, 2006-11-20 at 11:01 +0100, Jan Kiszka wrote:
> >> Philippe Gerum wrote:
> >>> On Mon, 2006-11-20 at 10:20 +0100, Jan Kiszka wrote:
> >>>> Philippe Gerum wrote:
> >>>>> On Fri, 2006-11-17 at 20:10 +0100, Jan Kiszka wrote:
> >>>>>> Philippe Gerum wrote:
> >>>>>>> On Fri, 2006-11-17 at 19:41 +0100, Jan Kiszka wrote:
> >>>>>>>> I'm currently seeing two potential "misuses" of the common switch:
> >>>>>>>>
> >>>>>>>>  - the posix skin (Gilles, how heavy-weighted are those checks?)
> >>>>>>>>    => CONFIG_XENO_OPT_DEBUG_POSIX
> >>>>>>>>
> >>>>>>>>  - CONFIG_XENO_SPINLOCK_DEBUG => CONFIG_XENO_OPT_DEBUG_SPINLOCK
> >>>>>>>>
> >>>>>>>> Both should be explicitly controllable in Kconfig.
> >>>>>>>>
> >>>>>>> Nack for CONFIG_XENO_OPT_DEBUG_SPINLOCK. Most of the issue we tracked
> >>>>>>> with Gilles regarding the domain migration code had side-effects on the
> >>>>>>> nucleus lock. So having CONFIG_XENO_OPT_DEBUG enabled for identifying
> >>>>>>> internal state weirdnesses - like those triggered by migration bugs -
> >>>>>>> implies enabling the spinlock watchdogs too.
> >>>>>> Ok, if it only makes sense to have both enabled at the same time, then
> >>>>>> let us create XENO_OPT_DEBUG_NUCLEUS. It should include both, but it
> >>>>>> shall not be automatically on when, say, only XENO_OPT_DEBUG_RTDM is
> >>>>>> required.
> >>>>> No objection.
> >>>>>
> >>>> Looking at the spinlock debugging code: it serves two inseparable
> >>>> purposes, a watchdog for stuck locks + lock statistics. The latter make
> >>>> this feature pop up when XENO_OPT_STATS are set on a SMP box - rather
> >>>> surprising effect. Do we still need the stats? If not, I would kick them
> >>>> out in favour of using the latency tracer for such analysis, making
> >>>> spinlock debugging a real pure debug feature.
> >>>>
> >>> The spinlock stats are about uncovering a problem, the latency tracer is
> >>> about finding where the problem lies. Both are orthogonal.
> >> Not fully true: the tracer provides the same information when you enable
> >> CONFIG_IPIPE_TRACE_IRQSOFF. When you disable CONFIG_IPIPE_TRACE_MCOUNT,
> >> you even get this at comparable (if not lower) costs. I once played with
> >> the spinlock debug code before decided to invest time into the tracer. I
> >> think I even posted a patch to enable that code on UP. But I didn't find
> >> the spinlock stats useful enough, even for the scenario "lock length
> >> analysis".
> >>
> >> We basically have now two ways to get the same information (or please
> >> explain what is missing with the tracer). Besides the redundancy, there
> >> is the problem that one of this way comes in via two different,
> >> orthogonal paths (STATS+SMP || DEBUG). That's not very consistent IMHO.
> >>
> > 
> > Nothing is missing in the tracer. The point is that you don't
> > immediately know that you are having a spinlock issue which would make
> > you build the tracer support, and having those stats is a cheap way to
> > detect such problem in a lightweight manner. 
> 
> If it were cheap, we wouldn't discuss it here. Actually, due to its
> inline nature, this instrumentation is fairly costly. That's ok, as long
> as you can explicitly ask for such a feature.
> 

You are talking about two different issues here:
#1 - having SMP+STAT enable SPINLOCK_DEBUG is suboptimal
#2 - because you don't like #1, we should kill it entirely, and only
rely on the tracer to provide spinlock latency tracing.

I agree with your conclusion regarding #1. I need to be sure that #2 is
not going to kill us too, during SMP debugging sessions.

Fixing #1 is a matter of decoupling config options, but does not require
#2. Going for #2 requires making sure that we are not going to add
temporal perturbations caused by the tracer. (Btw, it would be quite
easy to reduce the impact of SPINLOCK_DEBUG on the I-cache by moving the
stamping code out of line, so this is not bad code "by design", it's
just a suboptimal implementation.)
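
Something along these lines, sketched here with made-up names:

#include <time.h>

/* Sketch of out-of-line stamping; names are made up. Each call site
 * shrinks to a single function call instead of carrying the whole
 * stamping sequence inline, which is friendlier to the I-cache. */
struct lock_stamp {
    struct timespec when;
    const char *file;
    int line;
};

static __attribute__((noinline)) void
xnlock_stamp(struct lock_stamp *s, const char *file, int line)
{
    clock_gettime(CLOCK_MONOTONIC, &s->when);
    s->file = file;
    s->line = line;
}

#define XNLOCK_STAMP(s)  xnlock_stamp((s), __FILE__, __LINE__)

int main(void)
{
    struct lock_stamp s;

    XNLOCK_STAMP(&s);   /* one call instruction per site */
    return 0;
}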

> But now we have the situation that the (default y!) XENO_OPT_STAT
> feature on UP is far more costly than on SMP.

You mean the opposite, I guess.

>  You know that the stats
> are very useful already without any spinlock instrumentation, i.e. for
> analysing the RT-system load. My feeling is that, for SMP, we currently
> have a huge config mess here. And this is what I'm trying to address,
> /maybe/ also by removing redundant instrumentation means.
> 

I would not call a mess something you don't happen to like; it may still
serve legitimate purposes. It's just a feature after all, which has
proven to be quite useful to the people debugging SMP issues. It's not
redundant in my mind, for the reasons already given. This does not
preclude the opportunity to improve the config situation, though.

> > Running with the tracer
> > enabled usually means that you are chasing an issue you have already
> > detected.
> 
> Again, tracer != mcount. It can be used just like that spinlock stats:
> to *detect* long locking periods. Have a look.
> 

Relax, I have already had a look a fair number of times, and I agree
that the tracer provides a very useful set of latency tracing data. But
the point is that I'm worried about the perturbations the tracer adds,
which are real, mcount or not, and I don't want to go chasing wild geese
when tracking SMP latency issues.

On the other hand, only idiots never change their minds, so let's move
on the smart way: please submit your ideal fix for that issue. Since
Gilles and I are usually the ones who bang their heads on SMP issues, we
will experiment with the tracer as an SMP latency tracking tool for
Xenomai. If we actually save some debugging time using the tracer, or at
least don't lose any, then I will merge this patch.

> Jan
> 
-- 
Philippe.




^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2006-11-20 13:22 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-11-13 12:13 [Xenomai-help] exception 768 Daniel Schnell
2006-11-17 17:32 ` Philippe Gerum
2006-11-17 18:02   ` Daniel Schnell
2006-11-17 18:41   ` [Xenomai-core] XENO_OPT_DEBUG impact (was: exception 768) Jan Kiszka
2006-11-17 19:05     ` [Xenomai-core] " Philippe Gerum
2006-11-17 19:10       ` [Xenomai-core] Re: XENO_OPT_DEBUG impact Jan Kiszka
2006-11-17 21:58         ` Philippe Gerum
2006-11-20  9:20           ` Jan Kiszka
2006-11-20  9:38             ` Philippe Gerum
2006-11-20 10:01               ` Jan Kiszka
2006-11-20 10:46                 ` Philippe Gerum
2006-11-20 11:39                   ` Jan Kiszka
2006-11-20 13:22                     ` Philippe Gerum
2006-11-20  9:07     ` Gilles Chanteperdrix
2006-11-20  9:14       ` Jan Kiszka
