* [PATCH V2 0/3] Urgent fixes for Intel CQM/MBM counting
@ 2016-05-06 23:44 Vikas Shivappa
  2016-05-06 23:44 ` [PATCH 1/3] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled Vikas Shivappa
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Vikas Shivappa @ 2016-05-06 23:44 UTC
  To: vikas.shivappa, x86, linux-kernel, hpa, tglx
  Cc: mingo, peterz, ravi.v.shankar, tony.luck, fenghua.yu,
	vikas.shivappa

Sending some urgent fixes for MBM (memory bandwidth monitoring), which
was upstreamed in 4.6-rc1. The patches apply on 4.6-rc6.

CQM and MBM counters reported incorrect counts in several scenarios,
such as interval mode or with multiple perf instances.

A second attempt at sending V2, per Peter's feedback: fixed a few
indentation issues and added better changelogs/comments. Dropped the
patch that returned an error for some broken features, since those have
been broken anyway since 4.1. Removed the confusing rc_count and
st_count and tried to implement something close to the usual perf way.

[PATCH 1/3] perf/x86/cqm,mbm: Store cqm,mbm count for all events when
[PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse
[PATCH 3/3] perf/x86/mbm: Fix mbm counting during RMID recycling


* [PATCH 1/3] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled
  2016-05-06 23:44 [PATCH V2 0/3] Urgent fixes for Intel CQM/MBM counting Vikas Shivappa
@ 2016-05-06 23:44 ` Vikas Shivappa
  2016-05-06 23:44 ` [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse Vikas Shivappa
  2016-05-06 23:44 ` [PATCH 3/3] perf/x86/mbm: Fix mbm counting during RMID recycling Vikas Shivappa
  2 siblings, 0 replies; 7+ messages in thread
From: Vikas Shivappa @ 2016-05-06 23:44 UTC
  To: vikas.shivappa, x86, linux-kernel, hpa, tglx
  Cc: mingo, peterz, ravi.v.shankar, tony.luck, fenghua.yu,
	vikas.shivappa

During RMID recycling, when an event loses its RMID we saved the counter
for the group leader, but the count was not being saved for the other
events in the event group. This could lead to a situation where, if two
perf instances are counting the same PID, one of them would not see the
updated count that the other perf instance sees. Fix this by saving the
count for all the events in the same event group.
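
Schematically, the change amounts to the sketch below (a simplified
illustration, not the hunk that follows; read_rmid() is a hypothetical
placeholder for the IPI-based read done via cqm_mask_call()):

extern u64 read_rmid(u32 rmid, u32 evt_type);	/* hypothetical helper */

/* Sketch: on RMID deallocation, fold the final reading into every
 * event in the group, not just the group leader. */
static void save_group_counts(struct perf_event *group, u32 old_rmid)
{
	struct perf_event *event;

	local64_set(&group->count, read_rmid(old_rmid, group->attr.config));

	list_for_each_entry(event, &group->hw.cqm_group_entry,
			    hw.cqm_group_entry) {
		if (event->hw.is_group_event)
			local64_set(&event->count,
				    read_rmid(old_rmid, event->attr.config));
	}
}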

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
---
 arch/x86/events/intel/cqm.c | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/arch/x86/events/intel/cqm.c b/arch/x86/events/intel/cqm.c
index 7b5fd81..5f2104a 100644
--- a/arch/x86/events/intel/cqm.c
+++ b/arch/x86/events/intel/cqm.c
@@ -14,6 +14,14 @@
 #define MSR_IA32_QM_EVTSEL	0x0c8d
 
 #define MBM_CNTR_WIDTH		24
+
+#define __init_rr(old_rmid, config, val)	\
+((struct rmid_read) {				\
+	.rmid = old_rmid,			\
+	.evt_type = config,			\
+	.value = ATOMIC64_INIT(val),		\
+})
+
 /*
  * Guaranteed time in ms as per SDM where MBM counters will not overflow.
  */
@@ -478,7 +486,8 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
 {
 	struct perf_event *event;
 	struct list_head *head = &group->hw.cqm_group_entry;
-	u32 old_rmid = group->hw.cqm_rmid;
+	u32 old_rmid = group->hw.cqm_rmid, evttype;
+	struct rmid_read rr;
 
 	lockdep_assert_held(&cache_mutex);
 
@@ -486,14 +495,21 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
 	 * If our RMID is being deallocated, perform a read now.
 	 */
 	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
-		struct rmid_read rr = {
-			.rmid = old_rmid,
-			.evt_type = group->attr.config,
-			.value = ATOMIC64_INIT(0),
-		};
 
+		rr = __init_rr(old_rmid, group->attr.config, 0);
 		cqm_mask_call(&rr);
 		local64_set(&group->count, atomic64_read(&rr.value));
+		list_for_each_entry(event, head, hw.cqm_group_entry) {
+			if (event->hw.is_group_event) {
+
+				evttype = event->attr.config;
+				rr = __init_rr(old_rmid, evttype, 0);
+
+				cqm_mask_call(&rr);
+				local64_set(&event->count,
+					    atomic64_read(&rr.value));
+			}
+		}
 	}
 
 	raw_spin_lock_irq(&cache_lock);
@@ -983,11 +999,7 @@ static void __intel_mbm_event_init(void *info)
 
 static void init_mbm_sample(u32 rmid, u32 evt_type)
 {
-	struct rmid_read rr = {
-		.rmid = rmid,
-		.evt_type = evt_type,
-		.value = ATOMIC64_INIT(0),
-	};
+	struct rmid_read rr = __init_rr(rmid, evt_type, 0);
 
 	/* on each socket, init sample */
 	on_each_cpu_mask(&cqm_cpumask, __intel_mbm_event_init, &rr, 1);
@@ -1181,10 +1193,7 @@ static void mbm_hrtimer_init(void)
 static u64 intel_cqm_event_count(struct perf_event *event)
 {
 	unsigned long flags;
-	struct rmid_read rr = {
-		.evt_type = event->attr.config,
-		.value = ATOMIC64_INIT(0),
-	};
+	struct rmid_read rr = __init_rr(-1, event->attr.config, 0);
 
 	/*
 	 * We only need to worry about task events. System-wide events
-- 
1.9.1


* [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse
  2016-05-06 23:44 [PATCH V2 0/3] Urgent fixes for Intel CQM/MBM counting Vikas Shivappa
  2016-05-06 23:44 ` [PATCH 1/3] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled Vikas Shivappa
@ 2016-05-06 23:44 ` Vikas Shivappa
  2016-05-10 12:15   ` Peter Zijlstra
  2016-05-06 23:44 ` [PATCH 3/3] perf/x86/mbm: Fix mbm counting during RMID recycling Vikas Shivappa
  2 siblings, 1 reply; 7+ messages in thread
From: Vikas Shivappa @ 2016-05-06 23:44 UTC
  To: vikas.shivappa, x86, linux-kernel, hpa, tglx
  Cc: mingo, peterz, ravi.v.shankar, tony.luck, fenghua.yu,
	vikas.shivappa

This patch tries to fix the issue where multiple perf instances try to
monitor the same PID.

MBM cannot count directly in the usual perf way of continuously adding
the diff of current h/w counter and the prev count to the event->count
because of some h/w dependencies:

 (1) the mbm h/w counters overflow.

 (2) There are limited h/w RMIDs and hence we recycle the RMIDs due to
     which an event may count from different RMIDs.

 (3) Also we may not want to count at every sched_in and sched_out
     because the MSR reads involve quite a bit of overhead.

However we try to do something similar to the usual perf way in this
patch and mainly handle (1) and (3).

update_sample takes care of the overflow in the hardware counters and
provides an abstraction by returning the total bytes counted as if there
were no overflow. We use this abstraction to count as below:

init:
  event->prev_count = update_sample(rmid) //returns current total_bytes

count: // MBM right now uses count instead of read
  cur_count = update_sample(rmid)
  event->count += cur_count - event->prev_count
  event->prev_count = cur_count
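
In C terms, the scheme above boils down to something like the following
sketch (update_sample() is treated as an external helper returning
overflow-corrected total bytes; struct mbm_counts and the function names
are illustrative, not the driver's actual fields):

extern u64 update_sample(u32 rmid);	/* stand-in: overflow-corrected total bytes */

struct mbm_counts {
	u64 count;		/* bytes reported to perf */
	u64 prev_count;		/* total bytes at the previous read */
};

/* init: remember the current total so only bytes from now on are counted */
static void mbm_count_init(struct mbm_counts *c, u32 rmid)
{
	c->prev_count = update_sample(rmid);
}

/* count: accumulate only the delta since the previous read */
static void mbm_count_read(struct mbm_counts *c, u32 rmid)
{
	u64 cur = update_sample(rmid);

	c->count += cur - c->prev_count;
	c->prev_count = cur;
}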

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
---
 arch/x86/events/intel/cqm.c | 66 ++++++++++++++++++++++++++++++++++++++++++---
 include/linux/perf_event.h  |  1 +
 2 files changed, 63 insertions(+), 4 deletions(-)

diff --git a/arch/x86/events/intel/cqm.c b/arch/x86/events/intel/cqm.c
index 5f2104a..a98d841 100644
--- a/arch/x86/events/intel/cqm.c
+++ b/arch/x86/events/intel/cqm.c
@@ -479,6 +479,14 @@ static void cqm_mask_call(struct rmid_read *rr)
 		on_each_cpu_mask(&cqm_cpumask, __intel_cqm_event_count, rr, 1);
 }
 
+static void update_mbm_count(u64 val, struct perf_event *event)
+{
+	u64 diff = val - local64_read(&event->hw.cqm_prev_count);
+
+	local64_add(diff, &event->count);
+	local64_set(&event->hw.cqm_prev_count, val);
+}
+
 /*
  * Exchange the RMID of a group of events.
  */
@@ -1005,6 +1013,52 @@ static void init_mbm_sample(u32 rmid, u32 evt_type)
 	on_each_cpu_mask(&cqm_cpumask, __intel_mbm_event_init, &rr, 1);
 }
 
+static inline bool first_event_ingroup(struct perf_event *group,
+				    struct perf_event *event)
+{
+	struct list_head *head = &group->hw.cqm_group_entry;
+	u32 evt_type = event->attr.config;
+
+	if (evt_type == group->attr.config)
+		return false;
+	list_for_each_entry(event, head, hw.cqm_group_entry) {
+		if (evt_type == event->attr.config)
+			return false;
+	}
+
+	return true;
+}
+
+/*
+ * mbm_setup_event - Does mbm specific count initialization
+ * when multiple events share RMID.
+ *
+ * If this is the first mbm event then the event prev_count is 0 bytes,
+ * else the current bytes of the RMID is the prev_count.
+*/
+static inline void mbm_setup_event(u32 rmid, struct perf_event *group,
+					  struct perf_event *event)
+{
+	u32 evt_type = event->attr.config;
+	struct rmid_read rr;
+	u64 val;
+
+	if (first_event_ingroup(group, event)) {
+		init_mbm_sample(rmid, evt_type);
+	} else {
+		rr = __init_rr(rmid, evt_type, 0);
+		cqm_mask_call(&rr);
+		val = atomic64_read(&rr.value);
+		local64_set(&event->hw.cqm_prev_count, val);
+	}
+}
+
+static inline void mbm_setup_event_init(struct perf_event *event)
+{
+	event->hw.is_group_event = false;
+	local64_set(&event->hw.cqm_prev_count, 0UL);
+}
+
 /*
  * Find a group and setup RMID.
  *
@@ -1017,7 +1071,7 @@ static void intel_cqm_setup_event(struct perf_event *event,
 	bool conflict = false;
 	u32 rmid;
 
-	event->hw.is_group_event = false;
+	mbm_setup_event_init(event);
 	list_for_each_entry(iter, &cache_groups, hw.cqm_groups_entry) {
 		rmid = iter->hw.cqm_rmid;
 
@@ -1026,7 +1080,7 @@ static void intel_cqm_setup_event(struct perf_event *event,
 			event->hw.cqm_rmid = rmid;
 			*group = iter;
 			if (is_mbm_event(event->attr.config) && __rmid_valid(rmid))
-				init_mbm_sample(rmid, event->attr.config);
+				mbm_setup_event(rmid, iter, event);
 			return;
 		}
 
@@ -1244,8 +1298,12 @@ static u64 intel_cqm_event_count(struct perf_event *event)
 	cqm_mask_call(&rr);
 
 	raw_spin_lock_irqsave(&cache_lock, flags);
-	if (event->hw.cqm_rmid == rr.rmid)
-		local64_set(&event->count, atomic64_read(&rr.value));
+	if (event->hw.cqm_rmid == rr.rmid) {
+		if (is_mbm_event(event->attr.config))
+			update_mbm_count(atomic64_read(&rr.value), event);
+		else
+			local64_set(&event->count, atomic64_read(&rr.value));
+	}
 	raw_spin_unlock_irqrestore(&cache_lock, flags);
 out:
 	return __perf_event_count(event);
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f291275..9298a89 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -122,6 +122,7 @@ struct hw_perf_event {
 			int			cqm_state;
 			u32			cqm_rmid;
 			int			is_group_event;
+			local64_t		cqm_prev_count;
 			struct list_head	cqm_events_entry;
 			struct list_head	cqm_groups_entry;
 			struct list_head	cqm_group_entry;
-- 
1.9.1


* [PATCH 3/3] perf/x86/mbm: Fix mbm counting during RMID recycling
  2016-05-06 23:44 [PATCH V2 0/3] Urgent fixes for Intel CQM/MBM counting Vikas Shivappa
  2016-05-06 23:44 ` [PATCH 1/3] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled Vikas Shivappa
  2016-05-06 23:44 ` [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse Vikas Shivappa
@ 2016-05-06 23:44 ` Vikas Shivappa
  2 siblings, 0 replies; 7+ messages in thread
From: Vikas Shivappa @ 2016-05-06 23:44 UTC
  To: vikas.shivappa, x86, linux-kernel, hpa, tglx
  Cc: mingo, peterz, ravi.v.shankar, tony.luck, fenghua.yu,
	vikas.shivappa

Continuing from the previous patch, this handles the recycling part of
the counting.

When an event loses its RMID we need to update event->count with the
total bytes counted so far, and when an event gets a new RMID we need to
reset event->prev_count to zero bytes as counting starts afresh.
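
Schematically, the two recycling transitions described above look like
the sketch below (a simplified illustration of the hunks that follow;
read_total_bytes() is a hypothetical stand-in for the overflow-corrected
read):

extern u64 read_total_bytes(u32 rmid, u32 evt_type);	/* hypothetical helper */

/* Event is losing its RMID: fold the bytes counted so far into ->count. */
static void mbm_rmid_lost(struct perf_event *event, u32 old_rmid)
{
	u64 total = read_total_bytes(old_rmid, event->attr.config);

	local64_add(total - local64_read(&event->hw.cqm_prev_count),
		    &event->count);
	local64_set(&event->hw.cqm_prev_count, total);
}

/* Event got a fresh RMID: counting starts over from zero bytes. */
static void mbm_rmid_assigned(struct perf_event *event)
{
	local64_set(&event->hw.cqm_prev_count, 0UL);
}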

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
---
 arch/x86/events/intel/cqm.c | 39 ++++++++++++++++++++++++++++++++-------
 1 file changed, 32 insertions(+), 7 deletions(-)

diff --git a/arch/x86/events/intel/cqm.c b/arch/x86/events/intel/cqm.c
index a98d841..f91a172 100644
--- a/arch/x86/events/intel/cqm.c
+++ b/arch/x86/events/intel/cqm.c
@@ -487,6 +487,14 @@ static void update_mbm_count(u64 val, struct perf_event *event)
 	local64_set(&event->hw.cqm_prev_count, val);
 }
 
+static inline void
+mbm_init_rccount(u32 rmid, struct perf_event *event, bool mbm_bytes_init)
+{
+	if (!mbm_bytes_init)
+		init_mbm_sample(rmid, event->attr.config);
+	local64_set(&event->hw.cqm_prev_count, 0UL);
+}
+
 /*
  * Exchange the RMID of a group of events.
  */
@@ -495,18 +503,26 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
 	struct perf_event *event;
 	struct list_head *head = &group->hw.cqm_group_entry;
 	u32 old_rmid = group->hw.cqm_rmid, evttype;
+	bool mbm_bytes_init = false;
 	struct rmid_read rr;
+	u64 val;
 
 	lockdep_assert_held(&cache_mutex);
 
 	/*
 	 * If our RMID is being deallocated, perform a read now.
+	 * For mbm, we need to store the bytes that were counted till now
 	 */
 	if (__rmid_valid(old_rmid) && !__rmid_valid(rmid)) {
 
 		rr = __init_rr(old_rmid, group->attr.config, 0);
 		cqm_mask_call(&rr);
-		local64_set(&group->count, atomic64_read(&rr.value));
+
+		if (is_mbm_event(group->attr.config))
+			update_mbm_count(atomic64_read(&rr.value), group);
+		else
+			local64_set(&group->count, atomic64_read(&rr.value));
+
 		list_for_each_entry(event, head, hw.cqm_group_entry) {
 			if (event->hw.is_group_event) {
 
@@ -514,8 +530,12 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
 				rr = __init_rr(old_rmid, evttype, 0);
 
 				cqm_mask_call(&rr);
-				local64_set(&event->count,
-					    atomic64_read(&rr.value));
+				val = atomic64_read(&rr.value);
+				if (is_mbm_event(event->attr.config))
+					update_mbm_count(val, event);
+				else
+					local64_set(&event->count,
+						    atomic64_read(&rr.value));
 			}
 		}
 	}
@@ -535,12 +555,17 @@ static u32 intel_cqm_xchg_rmid(struct perf_event *group, u32 rmid)
 	 */
 	if (__rmid_valid(rmid)) {
 		event = group;
-		if (is_mbm_event(event->attr.config))
-			init_mbm_sample(rmid, event->attr.config);
+
+		if (is_mbm_event(event->attr.config)) {
+			mbm_init_rccount(rmid, event, mbm_bytes_init);
+			mbm_bytes_init = true;
+		}
 
 		list_for_each_entry(event, head, hw.cqm_group_entry) {
-			if (is_mbm_event(event->attr.config))
-				init_mbm_sample(rmid, event->attr.config);
+			if (is_mbm_event(event->attr.config)) {
+				mbm_init_rccount(rmid, event, mbm_bytes_init);
+				mbm_bytes_init = true;
+			}
 		}
 	}
 
-- 
1.9.1


* Re: [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse
  2016-05-06 23:44 ` [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse Vikas Shivappa
@ 2016-05-10 12:15   ` Peter Zijlstra
  2016-05-10 16:39     ` Luck, Tony
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2016-05-10 12:15 UTC
  To: Vikas Shivappa
  Cc: vikas.shivappa, x86, linux-kernel, hpa, tglx, mingo,
	ravi.v.shankar, tony.luck, fenghua.yu

On Fri, May 06, 2016 at 04:44:14PM -0700, Vikas Shivappa wrote:
> This patch tries to fix the issue where multiple perf instances try to
> monitor the same PID.

> MBM cannot count directly in the usual perf way of continuously adding
> the diff of current h/w counter and the prev count to the event->count
> because of some h/w dependencies:

And yet the patch appears to do exactly that; *confused*.

>  (1) the mbm h/w counters overflow.

As do most other counters.. so your point is? You also have the software
timer < overflow period..

>  (2) There are limited h/w RMIDs and hence we recycle the RMIDs due to
>      which an event may count from different RMIDs.

This fails to explain why this is a problem.

>  (3) Also we may not want to count at every sched_in and sched_out
>      because the MSR reads involve quite a bit of overhead.

Every single other PMU driver just does this; why are you special?

You list 3 issues of why you think you cannot do the regular thing, but
completely fail to explain how these issues are a problem.

> However we try to do something similar to the usual perf way in this
> patch and mainly handle (1) and (3).

> update_sample takes care of the overflow in the hardware counters and
> provides an abstraction by returning the total bytes counted as if there
> were no overflow. We use this abstraction to count as below:
> 
> init:
>   event->prev_count = update_sample(rmid) //returns current total_bytes
> 
> count: // MBM right now uses count instead of read
>   cur_count = update_sample(rmid)
>   event->count += cur_count - event->prev_count
>   event->prev_count = cur_count

So where does cqm_prev_count come from and why do you need it? What's
wrong with event->hw.prev_count ?

In fact, I cannot seem to find any event->hw.prev_count usage in this or
the next patch, so can we simply use that and not add pointless new
members?


* RE: [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse
  2016-05-10 12:15   ` Peter Zijlstra
@ 2016-05-10 16:39     ` Luck, Tony
  2016-05-11  7:23       ` Peter Zijlstra
  0 siblings, 1 reply; 7+ messages in thread
From: Luck, Tony @ 2016-05-10 16:39 UTC
  To: Peter Zijlstra, Vikas Shivappa
  Cc: Shivappa, Vikas, x86@kernel.org, linux-kernel@vger.kernel.org,
	hpa@zytor.com, tglx@linutronix.de, mingo@kernel.org,
	Shankar, Ravi V, Yu, Fenghua, davidcc@google.com,
	Stephane Eranian (eranian@google.com)

>>  (3) Also we may not want to count at every sched_in and sched_out
>>      because the MSR reads involve quite a bit of overhead.
>
> Every single other PMU driver just does this; why are you special?

They just have to read a register.  We have to write the IA32_QM_EVTSEL MSR
and then read from the IA32_QM_CTR MSR ... if we are tracking both local
and total bandwidth, we have to repeat the wrmsr/rdmsr again to get the
other counter.  That seems like it will noticeably affect the system if we
do it on every sched_in and sched_out.
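
(For reference, the per-counter read sequence being described is roughly
the sketch below; a simplified view using the kernel's wrmsr()/rdmsrl()
accessors, not the exact driver code.)

#define MSR_IA32_QM_EVTSEL	0x0c8d
#define MSR_IA32_QM_CTR		0x0c8e

#define RMID_VAL_ERROR		(1ULL << 63)
#define RMID_VAL_UNAVAIL	(1ULL << 62)

/* Select <event id, RMID>, then read the counter back.  One such
 * write+read pair is needed per event type per RMID. */
static u64 qm_counter_read(u32 rmid, u32 evt_id)
{
	u64 val;

	wrmsr(MSR_IA32_QM_EVTSEL, evt_id, rmid);   /* low = event, high = RMID */
	rdmsrl(MSR_IA32_QM_CTR, val);

	return val;	/* caller checks RMID_VAL_ERROR / RMID_VAL_UNAVAIL */
}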

But the more we make this complicated, the more I think that we should not
go through the pain of stealing/recycling RMIDs and just limit the number of
things that can be simultaneously monitored.  If someone tries to monitor one
more thing when all the RMIDs are in use, we should just error out with
-ERUNOUTOFRMIDSTRYAGAINLATER (maybe -EAGAIN???)

-Tony


* Re: [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse
  2016-05-10 16:39     ` Luck, Tony
@ 2016-05-11  7:23       ` Peter Zijlstra
  0 siblings, 0 replies; 7+ messages in thread
From: Peter Zijlstra @ 2016-05-11  7:23 UTC
  To: Luck, Tony
  Cc: Vikas Shivappa, Shivappa, Vikas, x86@kernel.org,
	linux-kernel@vger.kernel.org, hpa@zytor.com, tglx@linutronix.de,
	mingo@kernel.org, Shankar, Ravi V, Yu, Fenghua,
	davidcc@google.com, Stephane Eranian (eranian@google.com)

On Tue, May 10, 2016 at 04:39:39PM +0000, Luck, Tony wrote:
> >>  (3) Also we may not want to count at every sched_in and sched_out
> >>      because the MSR reads involve quite a bit of overhead.
> >
> > Every single other PMU driver just does this; why are you special?
> 
> They just have to read a register.  We have to write the IA32_QM_EVTSEL MSR
> and then read from the IA32_QM_CTR MSR ... if we are tracking both local
> and total bandwidth, we have to repeat the wrmsr/rdmsr again to get the
> other counter.  That seems like it will noticeably affect the system if we
> do it on every sched_in and sched_out.

Right; but Vikas didn't say that, did he ;-), he just mentioned an MSR read.

Also; I don't think you actually have to do it on every sched event,
only when the event<->rmid association changes. As long as the
event<->rmid association doesn't change, you can forgo updates.

> But the more we make this complicated, the more I think that we should not
> go through the pain of stealing/recycling RMIDs and just limit the number of
> things that can be simultaneously monitored.  If someone tries to monitor one
> more thing when all the RMIDs are in use, we should just error out with
> -ERUNOUTOFRMIDSTRYAGAINLATER (maybe -EAGAIN???)

Possibly; but I would like to minimize churn at this point to let the
Google guys get their patches in shape. They seem to have definite ideas
about that as well :-)


Thread overview: 7+ messages
2016-05-06 23:44 [PATCH V2 0/3] Urgent fixes for Intel CQM/MBM counting Vikas Shivappa
2016-05-06 23:44 ` [PATCH 1/3] perf/x86/cqm,mbm: Store cqm,mbm count for all events when RMID is recycled Vikas Shivappa
2016-05-06 23:44 ` [PATCH 2/3] perf/x86/mbm: Fix mbm counting for RMID reuse Vikas Shivappa
2016-05-10 12:15   ` Peter Zijlstra
2016-05-10 16:39     ` Luck, Tony
2016-05-11  7:23       ` Peter Zijlstra
2016-05-06 23:44 ` [PATCH 3/3] perf/x86/mbm: Fix mbm counting during RMID recycling Vikas Shivappa
