
Commit 322a906

paulburton authored and KAGA-KOKO committed
irqchip/mips-gic: Multi-cluster support
The MIPS I6500 CPU & CM (Coherence Manager) 3.5 introduce the concept of multiple clusters to the system. In these systems each cluster contains its own GIC, so the GIC is no longer truly global. Access to registers in the GICs of remote clusters is possible using a redirect register block, much like the redirect register blocks provided by the CM & CPC, and is configured through the same GCR_REDIRECT register that the mips_cm_lock_other() abstraction builds upon.

It is expected that external interrupts are connected identically on all clusters. That is, if there is a device providing an interrupt connected to GIC interrupt pin 0 then it should be connected to pin 0 of every GIC in the system. For the most part the GIC can therefore still be treated as though it is truly global, so long as interrupts in the cluster are configured properly.

Introduce support for such multi-cluster systems in the MIPS GIC irqchip driver. A newly introduced gic_irq_lock_cluster() function allows either:

  1) Configuring access to a GIC in a remote cluster via the redirect
     register block, using mips_cm_lock_other(). Or:

  2) Detecting that the interrupt in question is affine to the local
     cluster, in which case plain old GIC register access to the GIC in
     the local cluster should be used.

It is possible to access the local cluster's GIC registers via the redirect block, but keeping the special case for them is both good for performance (because we avoid the locking & indirection overhead of using the redirect block) and necessary to maintain compatibility with systems using CM revisions prior to 3.5, which don't support the redirect block.

The gic_irq_lock_cluster() function relies upon an IRQ's effective affinity in order to discover which cluster the IRQ is affine to. In order to track this, and to allow it to be updated at an appropriate point during gic_set_affinity(), select the generic support for effective affinity using CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK.

gic_set_affinity() is the one function which gains significant complexity. It now deconfigures routing to any VP(E), ie. CPU, on the old cluster when moving affinity to a new cluster. gic_shared_irq_domain_map() moves its update of the IRQ's effective affinity to before its use of gic_irq_lock_cluster(), to ensure that it operates on the cluster the IRQ is affine to.

The remaining changes are straightforward uses of the gic_irq_lock_cluster() function to select between local-cluster & remote-cluster code paths when configuring interrupts.

Signed-off-by: Paul Burton <[email protected]>
Signed-off-by: Chao-ying Fu <[email protected]>
Signed-off-by: Dragan Mladjenovic <[email protected]>
Signed-off-by: Aleksandar Rikalo <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Tested-by: Serge Semin <[email protected]>
Tested-by: Gregory CLEMENT <[email protected]>
Link: https://lore.kernel.org/all/[email protected]
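The calling convention described above — lock the remote cluster's redirect block, access its registers, unlock; or fall straight through to plain local access — can be sketched as a small userspace simulation. Everything here (the `fake_*` names and the global state) is an illustrative stand-in, not the kernel API; the real driver uses mips_cm_lock_other()/mips_cm_unlock_other() and the GIC redirect-block accessors:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for hardware/driver state; not the kernel's API. */
static unsigned int current_cluster;  /* cluster the calling CPU is in */
static unsigned int redirect_target;  /* cluster the redirect block points at */
static bool redirect_locked;          /* redirect block lock held? */

/* Analogue of mips_cm_lock_other(): aim the redirect register block
 * at a remote cluster and hold a lock across the access. */
static void fake_lock_other(unsigned int cluster)
{
	redirect_target = cluster;
	redirect_locked = true;
}

/* Analogue of mips_cm_unlock_other(). */
static void fake_unlock_other(void)
{
	redirect_locked = false;
}

/* Analogue of gic_irq_lock_cluster(): returns true (and takes the
 * redirect lock) only when the IRQ's cluster is not the local one,
 * so local-cluster accesses skip the indirection entirely. */
static bool fake_irq_lock_cluster(unsigned int irq_cluster)
{
	if (irq_cluster == current_cluster)
		return false;  /* local: plain register access, no lock */

	fake_lock_other(irq_cluster);
	return true;
}
```

A caller mirrors the driver's pattern: when `fake_irq_lock_cluster()` returns true it performs redirect-block accesses and then calls `fake_unlock_other()`; otherwise it accesses the local cluster's registers directly.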
1 parent c7c0d13 commit 322a906

File tree

2 files changed: +143 −19 lines


drivers/irqchip/Kconfig

Lines changed: 1 addition & 0 deletions

```diff
@@ -352,6 +352,7 @@ config KEYSTONE_IRQ
 
 config MIPS_GIC
 	bool
+	select GENERIC_IRQ_EFFECTIVE_AFF_MASK
 	select GENERIC_IRQ_IPI if SMP
 	select IRQ_DOMAIN_HIERARCHY
 	select MIPS_CM
```

drivers/irqchip/irq-mips-gic.c

Lines changed: 142 additions & 19 deletions

```diff
@@ -111,6 +111,41 @@ static inline void gic_unlock_cluster(void)
 	     gic_unlock_cluster(),					\
 	     (cpu) = __gic_with_next_online_cpu(cpu))
 
+/**
+ * gic_irq_lock_cluster() - Lock redirect block access to IRQ's cluster
+ * @d: struct irq_data corresponding to the interrupt we're interested in
+ *
+ * Locks redirect register block access to the global register block of the GIC
+ * within the remote cluster that the IRQ corresponding to @d is affine to,
+ * returning true when this redirect block setup & locking has been performed.
+ *
+ * If @d is affine to the local cluster then no locking is performed and this
+ * function will return false, indicating to the caller that it should access
+ * the local clusters registers without the overhead of indirection through the
+ * redirect block.
+ *
+ * In summary, if this function returns true then the caller should access GIC
+ * registers using redirect register block accessors & then call
+ * mips_cm_unlock_other() when done. If this function returns false then the
+ * caller should trivially access GIC registers in the local cluster.
+ *
+ * Returns true if locking performed, else false.
+ */
+static bool gic_irq_lock_cluster(struct irq_data *d)
+{
+	unsigned int cpu, cl;
+
+	cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
+	BUG_ON(cpu >= NR_CPUS);
+
+	cl = cpu_cluster(&cpu_data[cpu]);
+	if (cl == cpu_cluster(&current_cpu_data))
+		return false;
+
+	mips_cm_lock_other(cl, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+	return true;
+}
+
 static void gic_clear_pcpu_masks(unsigned int intr)
 {
 	unsigned int i;
@@ -157,7 +192,12 @@ static void gic_send_ipi(struct irq_data *d, unsigned int cpu)
 {
 	irq_hw_number_t hwirq = GIC_HWIRQ_TO_SHARED(irqd_to_hwirq(d));
 
-	write_gic_wedge(GIC_WEDGE_RW | hwirq);
+	if (gic_irq_lock_cluster(d)) {
+		write_gic_redir_wedge(GIC_WEDGE_RW | hwirq);
+		mips_cm_unlock_other();
+	} else {
+		write_gic_wedge(GIC_WEDGE_RW | hwirq);
+	}
 }
 
 int gic_get_c0_compare_int(void)
@@ -225,7 +265,13 @@ static void gic_mask_irq(struct irq_data *d)
 {
 	unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);
 
-	write_gic_rmask(intr);
+	if (gic_irq_lock_cluster(d)) {
+		write_gic_redir_rmask(intr);
+		mips_cm_unlock_other();
+	} else {
+		write_gic_rmask(intr);
+	}
+
 	gic_clear_pcpu_masks(intr);
 }
 
@@ -234,7 +280,12 @@ static void gic_unmask_irq(struct irq_data *d)
 	unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq);
 	unsigned int cpu;
 
-	write_gic_smask(intr);
+	if (gic_irq_lock_cluster(d)) {
+		write_gic_redir_smask(intr);
+		mips_cm_unlock_other();
+	} else {
+		write_gic_smask(intr);
+	}
 
 	gic_clear_pcpu_masks(intr);
 	cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
@@ -245,7 +296,12 @@ static void gic_ack_irq(struct irq_data *d)
 {
 	unsigned int irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
 
-	write_gic_wedge(irq);
+	if (gic_irq_lock_cluster(d)) {
+		write_gic_redir_wedge(irq);
+		mips_cm_unlock_other();
+	} else {
+		write_gic_wedge(irq);
+	}
 }
 
 static int gic_set_type(struct irq_data *d, unsigned int type)
@@ -285,9 +341,16 @@ static int gic_set_type(struct irq_data *d, unsigned int type)
 		break;
 	}
 
-	change_gic_pol(irq, pol);
-	change_gic_trig(irq, trig);
-	change_gic_dual(irq, dual);
+	if (gic_irq_lock_cluster(d)) {
+		change_gic_redir_pol(irq, pol);
+		change_gic_redir_trig(irq, trig);
+		change_gic_redir_dual(irq, dual);
+		mips_cm_unlock_other();
+	} else {
+		change_gic_pol(irq, pol);
+		change_gic_trig(irq, trig);
+		change_gic_dual(irq, dual);
+	}
 
 	if (trig == GIC_TRIG_EDGE)
 		irq_set_chip_handler_name_locked(d, &gic_edge_irq_controller,
```
```diff
@@ -305,25 +368,72 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
 			    bool force)
 {
 	unsigned int irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
+	unsigned int cpu, cl, old_cpu, old_cl;
 	unsigned long flags;
-	unsigned int cpu;
 
+	/*
+	 * The GIC specifies that we can only route an interrupt to one VP(E),
+	 * ie. CPU in Linux parlance, at a time. Therefore we always route to
+	 * the first online CPU in the mask.
+	 */
 	cpu = cpumask_first_and(cpumask, cpu_online_mask);
 	if (cpu >= NR_CPUS)
 		return -EINVAL;
 
-	/* Assumption : cpumask refers to a single CPU */
-	raw_spin_lock_irqsave(&gic_lock, flags);
+	old_cpu = cpumask_first(irq_data_get_effective_affinity_mask(d));
+	old_cl = cpu_cluster(&cpu_data[old_cpu]);
+	cl = cpu_cluster(&cpu_data[cpu]);
 
-	/* Re-route this IRQ */
-	write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+	raw_spin_lock_irqsave(&gic_lock, flags);
 
-	/* Update the pcpu_masks */
-	gic_clear_pcpu_masks(irq);
-	if (read_gic_mask(irq))
-		set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+	/*
+	 * If we're moving affinity between clusters, stop routing the
+	 * interrupt to any VP(E) in the old cluster.
+	 */
+	if (cl != old_cl) {
+		if (gic_irq_lock_cluster(d)) {
+			write_gic_redir_map_vp(irq, 0);
+			mips_cm_unlock_other();
+		} else {
+			write_gic_map_vp(irq, 0);
+		}
+	}
 
+	/*
+	 * Update effective affinity - after this gic_irq_lock_cluster() will
+	 * begin operating on the new cluster.
+	 */
 	irq_data_update_effective_affinity(d, cpumask_of(cpu));
+
+	/*
+	 * If we're moving affinity between clusters, configure the interrupt
+	 * trigger type in the new cluster.
+	 */
+	if (cl != old_cl)
+		gic_set_type(d, irqd_get_trigger_type(d));
+
+	/* Route the interrupt to its new VP(E) */
+	if (gic_irq_lock_cluster(d)) {
+		write_gic_redir_map_pin(irq,
+					GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+		write_gic_redir_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+
+		/* Update the pcpu_masks */
+		gic_clear_pcpu_masks(irq);
+		if (read_gic_redir_mask(irq))
+			set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+
+		mips_cm_unlock_other();
+	} else {
+		write_gic_map_pin(irq, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+		write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu)));
+
+		/* Update the pcpu_masks */
+		gic_clear_pcpu_masks(irq);
+		if (read_gic_mask(irq))
+			set_bit(irq, per_cpu_ptr(pcpu_masks, cpu));
+	}
+
 	raw_spin_unlock_irqrestore(&gic_lock, flags);
 
 	return IRQ_SET_MASK_OK;
```
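The ordering this hunk establishes — deconfigure routing in the old cluster, update the effective affinity, then program the new cluster — can be sketched with a toy model. The `map_vp` array and `move_affinity()` below are hypothetical stand-ins for per-cluster GIC_MAP_VP state, not the driver's code:

```c
#include <assert.h>

#define NCLUSTERS 2

/* Toy model: one routing register per cluster (GIC_MAP_VP stand-in). */
static unsigned int map_vp[NCLUSTERS];
/* Tracked effective affinity: which cluster the IRQ is routed in. */
static unsigned int effective_cluster;

/* Mirrors the ordering in gic_set_affinity() above:
 * 1) stop routing in the old cluster (cross-cluster moves only),
 * 2) update effective affinity so that later cluster lookups see the
 *    new cluster,
 * 3) route the interrupt to its new VP(E). */
static void move_affinity(unsigned int new_cl, unsigned int vp_bit)
{
	unsigned int old_cl = effective_cluster;

	if (new_cl != old_cl)
		map_vp[old_cl] = 0;   /* step 1: old cluster deconfigured */

	effective_cluster = new_cl;   /* step 2 */

	map_vp[new_cl] = vp_bit;      /* step 3: new routing in place */
}
```

After a cross-cluster move exactly one cluster routes the interrupt; a same-cluster move simply rewrites the VP bit in place.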
```diff
@@ -471,11 +581,21 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
 	unsigned long flags;
 
 	data = irq_get_irq_data(virq);
+	irq_data_update_effective_affinity(data, cpumask_of(cpu));
 
 	raw_spin_lock_irqsave(&gic_lock, flags);
-	write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
-	write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
-	irq_data_update_effective_affinity(data, cpumask_of(cpu));
+
+	/* Route the interrupt to its VP(E) */
+	if (gic_irq_lock_cluster(data)) {
+		write_gic_redir_map_pin(intr,
+					GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+		write_gic_redir_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
+		mips_cm_unlock_other();
+	} else {
+		write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+		write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
+	}
+
 	raw_spin_unlock_irqrestore(&gic_lock, flags);
 
 	return 0;
@@ -651,6 +771,9 @@ static int gic_ipi_domain_alloc(struct irq_domain *d, unsigned int virq,
 		if (ret)
 			goto error;
 
+		/* Set affinity to cpu. */
+		irq_data_update_effective_affinity(irq_get_irq_data(virq + i),
+						   cpumask_of(cpu));
 		ret = irq_set_irq_type(virq + i, IRQ_TYPE_EDGE_RISING);
 		if (ret)
 			goto error;
```