locking, lib/atomic64: Annotate atomic64_lock::lock as raw
author	Shan Hai <haishan.bai@gmail.com>
	Thu, 1 Sep 2011 03:32:03 +0000 (11:32 +0800)
committer	Ingo Molnar <mingo@elte.hu>
	Tue, 13 Sep 2011 09:12:22 +0000 (11:12 +0200)
commit	f59ca05871a055a73f8e626f2d868f0da248e22c
tree	dd077b2cbbf92f9dbda925d8bff9e567f5a56b3f
parent	3b8f40481513a7b6123def5a02db4cff96ae2198

The spinlock-protected atomic64 operations must be IRQ-safe, as they
are used in hard interrupt context and cannot be preempted on -rt:

 NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
  LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
 Call Trace:
  [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
  [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
  [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
  [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
  [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
  [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c

So annotate atomic64_lock::lock as raw.
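
A representative excerpt of the resulting code shape, with
atomic64_read() standing in for all the ops (the union padding and
the hashed lock_addr() helper are the file's pre-existing layout):

  static union {
  	raw_spinlock_t lock;		/* was: spinlock_t */
  	char pad[L1_CACHE_BYTES];
  } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;

  long long atomic64_read(const atomic64_t *v)
  {
  	unsigned long flags;
  	raw_spinlock_t *lock = lock_addr(v);	/* was: spinlock_t * */
  	long long val;

  	raw_spin_lock_irqsave(lock, flags);	/* was: spin_lock_irqsave() */
  	val = v->counter;
  	raw_spin_unlock_irqrestore(lock, flags);
  	return val;
  }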

In mainline this change documents the low-level nature of
the lock; otherwise there is no functional difference. Lockdep
and Sparse checking will work as usual.
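
The distinction only bites on -rt, where a plain spinlock_t is
substituted by a sleeping rtmutex-based lock (hence the
rt_spin_lock_slowlock() frames in the trace above), while a
raw_spinlock_t always spins with interrupts disabled. A minimal
sketch of the safe pattern; example_lock and example_read() are
illustration only, not part of this commit:

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(example_lock);

  /*
   * Hypothetical helper: safe to call from hard IRQ context on both
   * mainline and -rt, because a raw_spinlock_t is never converted
   * into a sleeping lock.
   */
  static u64 example_read(const u64 *val)
  {
  	unsigned long flags;
  	u64 ret;

  	raw_spin_lock_irqsave(&example_lock, flags);
  	ret = *val;
  	raw_spin_unlock_irqrestore(&example_lock, flags);
  	return ret;
  }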

Signed-off-by: Shan Hai <haishan.bai@gmail.com>
Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
lib/atomic64.c