Commit 6daa755

Merge tag 's390-5.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Heiko Carstens:

 - fix buffer size for in-kernel disassembler for ebpf programs.

 - fix two memory leaks in zcrypt driver.

 - expose PCI device UID as index, including an indicator if the uid is
   unique.

 - remove some oprofile leftovers.

 - improve stack unwinder tests.

 - don't use gcc atomic builtins anymore, just like all other
   architectures. Even though I'm sure the current code is ok, I totally
   dislike that s390 is the only architecture being special here;
   especially considering that there was a lengthy discussion about this
   topic and the outcome was not to use the builtins. Therefore open-code
   atomic ops again with inline assembly and switch to gcc builtins as
   soon as other architectures are doing.

 - couple of other changes to atomic and cmpxchg, and use
   atomic-instrumented.h for KASAN.

 - separate zbus creation, registration, and scanning in our PCI code,
   which allows for cleaner and easier handling.

 - a rather large change to the vfio-ap code to fix circular locking
   dependencies when updating crypto masks.

 - move QAOB handling from qdio layer down to drivers.

 - add CRW inject facility to common I/O layer. This adds debugfs files
   which allow to generate artificial events from user space for testing
   purposes.

 - increase SCLP console line length from 80 to 320 characters to avoid
   odd wrapped lines.

 - add protected virtualization guest and host indication files, which
   indicate either that a guest is running in pv mode or if the
   hypervisor is capable of starting pv guests.

 - various other small fixes and improvements all over the place.

* tag 's390-5.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (53 commits)
  s390/disassembler: increase ebpf disasm buffer size
  s390/archrandom: add parameter check for s390_arch_random_generate
  s390/zcrypt: fix zcard and zqueue hot-unplug memleak
  s390/pci: expose a PCI device's UID as its index
  s390/atomic,cmpxchg: always inline __xchg/__cmpxchg
  s390/smp: fix do_restart() prototype
  s390: get rid of oprofile leftovers
  s390/atomic,cmpxchg: make constraints work with old compilers
  s390/test_unwind: print test suite start/end info
  s390/cmpxchg: use unsigned long values instead of void pointers
  s390/test_unwind: add WARN if tests failed
  s390/test_unwind: unify error handling paths
  s390: update defconfigs
  s390/spinlock: use R constraint in inline assembly
  s390/atomic,cmpxchg: switch to use atomic-instrumented.h
  s390/cmpxchg: get rid of gcc atomic builtins
  s390/atomic: get rid of gcc atomic builtins
  s390/atomic: use proper constraints
  s390/atomic: move remaining inline assemblies to atomic_ops.h
  s390/bitops: make bitops only work on longs
  ...
2 parents: c653667 + 6f3353c

58 files changed (+1458, -964 lines)

Documentation/ABI/testing/sysfs-bus-pci

Lines changed: 7 additions & 4 deletions
@@ -195,10 +195,13 @@ What: /sys/bus/pci/devices/.../index
 Date: July 2010
 Contact: Narendra K <[email protected]>, [email protected]
 Description:
-		Reading this attribute will provide the firmware
-		given instance (SMBIOS type 41 device type instance) of the
-		PCI device. The attribute will be created only if the firmware
-		has given an instance number to the PCI device.
+		Reading this attribute will provide the firmware given instance
+		number of the PCI device. Depending on the platform this can
+		be for example the SMBIOS type 41 device type instance or the
+		user-defined ID (UID) on s390. The attribute will be created
+		only if the firmware has given an instance number to the PCI
+		device and that number is guaranteed to uniquely identify the
+		device in the system.
 Users:
 		Userspace applications interested in knowing the
 		firmware assigned device type instance of the PCI
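For illustration only, a minimal userspace sketch of reading this attribute; the PCI address below is a placeholder, and the file is simply absent when the firmware did not assign a (unique) instance number:

#include <stdio.h>

int main(void)
{
	/* placeholder PCI address; substitute one reported by lspci */
	const char *path = "/sys/bus/pci/devices/0000:00:00.0/index";
	FILE *f = fopen(path, "r");
	unsigned long index;

	if (!f) {
		perror("no index attribute for this device");
		return 1;
	}
	if (fscanf(f, "%lu", &index) == 1)
		printf("firmware-given instance: %lu\n", index);
	fclose(f);
	return 0;
}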

Documentation/s390/pci.rst

Lines changed: 11 additions & 3 deletions
@@ -50,7 +50,8 @@ Entries specific to zPCI functions and entries that hold zPCI information.
 * /sys/bus/pci/slots/XXXXXXXX

 The slot entries are set up using the function identifier (FID) of the
-PCI function.
+PCI function. The format depicted as XXXXXXXX above is 8 hexadecimal digits
+with 0 padding and lower case hexadecimal digitis.

 - /sys/bus/pci/slots/XXXXXXXX/power

@@ -88,8 +89,15 @@ Entries specific to zPCI functions and entries that hold zPCI information.
 is attached to.

 - uid
-The unique identifier (UID) is defined when configuring an LPAR and is
-unique in the LPAR.
+The user identifier (UID) may be defined as part of the machine
+configuration or the z/VM or KVM guest configuration. If the accompanying
+uid_is_unique attribute is 1 the platform guarantees that the UID is unique
+within that instance and no devices with the same UID can be attached
+during the lifetime of the system.
+
+- uid_is_unique
+Indicates whether the user identifier (UID) is guaranteed to be and remain
+unique within this Linux instance.

 - pfip/segmentX
 The segments determine the isolation of a function.
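As an illustration of how the new attribute pair is meant to be consumed, a small sketch (placeholder device address, not part of the commit) that only treats the UID as a stable identifier when uid_is_unique reads 1:

#include <stdio.h>

int main(void)
{
	const char *dev = "/sys/bus/pci/devices/0000:00:00.0"; /* placeholder */
	char path[256], uid[64];
	int unique = 0;
	FILE *f;

	/* check the uniqueness guarantee first */
	snprintf(path, sizeof(path), "%s/uid_is_unique", dev);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%d", &unique) != 1)
			unique = 0;
		fclose(f);
	}

	/* only then treat the UID as a stable identifier */
	snprintf(path, sizeof(path), "%s/uid", dev);
	f = fopen(path, "r");
	if (f && fgets(uid, sizeof(uid), f) && unique == 1)
		printf("stable UID: %s", uid);
	else
		printf("UID absent or not guaranteed unique\n");
	if (f)
		fclose(f);
	return 0;
}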

arch/s390/Kconfig.debug

Lines changed: 8 additions & 0 deletions
@@ -15,3 +15,11 @@ config DEBUG_ENTRY
	  exits or otherwise impact performance.

	  If unsure, say N.
+
+config CIO_INJECT
+	bool "CIO Inject interfaces"
+	depends on DEBUG_KERNEL && DEBUG_FS
+	help
+	  This option provides a debugging facility to inject certain artificial events
+	  and instruction responses to the CIO layer of Linux kernel. The newly created
+	  debugfs user-interfaces will be at /sys/kernel/debug/s390/cio/*

arch/s390/configs/debug_defconfig

Lines changed: 1 addition & 1 deletion
@@ -771,7 +771,6 @@ CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
 CONFIG_GDB_SCRIPTS=y
-CONFIG_FRAME_WARN=1024
 CONFIG_HEADERS_INSTALL=y
 CONFIG_DEBUG_SECTION_MISMATCH=y
 CONFIG_MAGIC_SYSRQ=y
@@ -829,6 +828,7 @@ CONFIG_HIST_TRIGGERS=y
 CONFIG_FTRACE_STARTUP_TEST=y
 # CONFIG_EVENT_TRACE_STARTUP_TEST is not set
 CONFIG_DEBUG_ENTRY=y
+CONFIG_CIO_INJECT=y
 CONFIG_NOTIFIER_ERROR_INJECTION=m
 CONFIG_NETDEV_NOTIFIER_ERROR_INJECT=m
 CONFIG_FAULT_INJECTION=y

arch/s390/configs/defconfig

Lines changed: 0 additions & 1 deletion
@@ -756,7 +756,6 @@ CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
 CONFIG_GDB_SCRIPTS=y
-CONFIG_FRAME_WARN=1024
 CONFIG_DEBUG_SECTION_MISMATCH=y
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_WX=y

arch/s390/crypto/arch_random.c

Lines changed: 4 additions & 0 deletions
@@ -54,6 +54,10 @@ static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);

 bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
 {
+	/* max hunk is ARCH_RNG_BUF_SIZE */
+	if (nbytes > ARCH_RNG_BUF_SIZE)
+		return false;
+
	/* lock rng buffer */
	if (!spin_trylock(&arch_rng_lock))
		return false;
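The added guard simply refuses requests larger than the backing buffer. A self-contained toy sketch of that pattern (not the driver code; the buffer size and names here are made up) shows why such requests must be rejected up front rather than served from a fixed-size refill buffer:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define RNG_BUF_SIZE 32			/* stand-in for ARCH_RNG_BUF_SIZE */

static unsigned char rng_buf[RNG_BUF_SIZE];	/* imagine this is refilled periodically */
static unsigned int rng_buf_idx;

/* Reject oversized requests first: they can never be satisfied from the buffer. */
static bool buf_random_generate(unsigned char *out, unsigned int nbytes)
{
	if (nbytes > RNG_BUF_SIZE)
		return false;
	if (rng_buf_idx + nbytes > RNG_BUF_SIZE)
		return false;		/* not enough fresh bytes left until refill */
	memcpy(out, rng_buf + rng_buf_idx, nbytes);
	rng_buf_idx += nbytes;
	return true;
}

int main(void)
{
	unsigned char out[64];

	printf("33 bytes: %s\n", buf_random_generate(out, 33) ? "ok" : "rejected");
	printf("16 bytes: %s\n", buf_random_generate(out, 16) ? "ok" : "rejected");
	return 0;
}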

arch/s390/crypto/crc32be-vx.S

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@
  * process particular chunks of the input data stream in parallel.
  *
  * For the CRC-32 variants, the constants are precomputed according to
- * these defintions:
+ * these definitions:
  *
  *	R1 = x4*128+64 mod P(x)
  *	R2 = x4*128 mod P(x)
@@ -189,7 +189,7 @@ ENTRY(crc32_be_vgfm_16)
  * Note: To compensate the division by x^32, use the vector unpack
  * instruction to move the leftmost word into the leftmost doubleword
  * of the vector register. The rightmost doubleword is multiplied
- * with zero to not contribute to the intermedate results.
+ * with zero to not contribute to the intermediate results.
  */

	/* T1(x) = floor( R(x) / x^32 )	GF2MUL u */

arch/s390/include/asm/atomic.h

Lines changed: 56 additions & 42 deletions
@@ -15,48 +15,46 @@
 #include <asm/barrier.h>
 #include <asm/cmpxchg.h>

-static inline int atomic_read(const atomic_t *v)
+static inline int arch_atomic_read(const atomic_t *v)
 {
-	int c;
-
-	asm volatile(
-		"	l	%0,%1\n"
-		: "=d" (c) : "Q" (v->counter));
-	return c;
+	return __atomic_read(v);
 }
+#define arch_atomic_read arch_atomic_read

-static inline void atomic_set(atomic_t *v, int i)
+static inline void arch_atomic_set(atomic_t *v, int i)
 {
-	asm volatile(
-		"	st	%1,%0\n"
-		: "=Q" (v->counter) : "d" (i));
+	__atomic_set(v, i);
 }
+#define arch_atomic_set arch_atomic_set

-static inline int atomic_add_return(int i, atomic_t *v)
+static inline int arch_atomic_add_return(int i, atomic_t *v)
 {
	return __atomic_add_barrier(i, &v->counter) + i;
 }
+#define arch_atomic_add_return arch_atomic_add_return

-static inline int atomic_fetch_add(int i, atomic_t *v)
+static inline int arch_atomic_fetch_add(int i, atomic_t *v)
 {
	return __atomic_add_barrier(i, &v->counter);
 }
+#define arch_atomic_fetch_add arch_atomic_fetch_add

-static inline void atomic_add(int i, atomic_t *v)
+static inline void arch_atomic_add(int i, atomic_t *v)
 {
	__atomic_add(i, &v->counter);
 }
+#define arch_atomic_add arch_atomic_add

-#define atomic_sub(_i, _v) atomic_add(-(int)(_i), _v)
-#define atomic_sub_return(_i, _v) atomic_add_return(-(int)(_i), _v)
-#define atomic_fetch_sub(_i, _v) atomic_fetch_add(-(int)(_i), _v)
+#define arch_atomic_sub(_i, _v) arch_atomic_add(-(int)(_i), _v)
+#define arch_atomic_sub_return(_i, _v) arch_atomic_add_return(-(int)(_i), _v)
+#define arch_atomic_fetch_sub(_i, _v) arch_atomic_fetch_add(-(int)(_i), _v)

 #define ATOMIC_OPS(op) \
-static inline void atomic_##op(int i, atomic_t *v) \
+static inline void arch_atomic_##op(int i, atomic_t *v) \
 { \
	__atomic_##op(i, &v->counter); \
 } \
-static inline int atomic_fetch_##op(int i, atomic_t *v) \
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
 { \
	return __atomic_##op##_barrier(i, &v->counter); \
 }
@@ -67,60 +65,67 @@ ATOMIC_OPS(xor)

 #undef ATOMIC_OPS

-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_and arch_atomic_and
+#define arch_atomic_or arch_atomic_or
+#define arch_atomic_xor arch_atomic_xor
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
+#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))

-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
	return __atomic_cmpxchg(&v->counter, old, new);
 }
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg

 #define ATOMIC64_INIT(i) { (i) }

-static inline s64 atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
-	s64 c;
-
-	asm volatile(
-		"	lg	%0,%1\n"
-		: "=d" (c) : "Q" (v->counter));
-	return c;
+	return __atomic64_read(v);
 }
+#define arch_atomic64_read arch_atomic64_read

-static inline void atomic64_set(atomic64_t *v, s64 i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
-	asm volatile(
-		"	stg	%1,%0\n"
-		: "=Q" (v->counter) : "d" (i));
+	__atomic64_set(v, i);
 }
+#define arch_atomic64_set arch_atomic64_set

-static inline s64 atomic64_add_return(s64 i, atomic64_t *v)
+static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
	return __atomic64_add_barrier(i, (long *)&v->counter) + i;
 }
+#define arch_atomic64_add_return arch_atomic64_add_return

-static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
	return __atomic64_add_barrier(i, (long *)&v->counter);
 }
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add

-static inline void atomic64_add(s64 i, atomic64_t *v)
+static inline void arch_atomic64_add(s64 i, atomic64_t *v)
 {
	__atomic64_add(i, (long *)&v->counter);
 }
+#define arch_atomic64_add arch_atomic64_add

-#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))

-static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
	return __atomic64_cmpxchg((long *)&v->counter, old, new);
 }
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg

 #define ATOMIC64_OPS(op) \
-static inline void atomic64_##op(s64 i, atomic64_t *v) \
+static inline void arch_atomic64_##op(s64 i, atomic64_t *v) \
 { \
	__atomic64_##op(i, (long *)&v->counter); \
 } \
-static inline long atomic64_fetch_##op(s64 i, atomic64_t *v) \
+static inline long arch_atomic64_fetch_##op(s64 i, atomic64_t *v) \
 { \
	return __atomic64_##op##_barrier(i, (long *)&v->counter); \
 }
@@ -131,8 +136,17 @@ ATOMIC64_OPS(xor)

 #undef ATOMIC64_OPS

-#define atomic64_sub_return(_i, _v) atomic64_add_return(-(s64)(_i), _v)
-#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(s64)(_i), _v)
-#define atomic64_sub(_i, _v) atomic64_add(-(s64)(_i), _v)
+#define arch_atomic64_and arch_atomic64_and
+#define arch_atomic64_or arch_atomic64_or
+#define arch_atomic64_xor arch_atomic64_xor
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
+#define arch_atomic64_sub_return(_i, _v) arch_atomic64_add_return(-(s64)(_i), _v)
+#define arch_atomic64_fetch_sub(_i, _v) arch_atomic64_fetch_add(-(s64)(_i), _v)
+#define arch_atomic64_sub(_i, _v) arch_atomic64_add(-(s64)(_i), _v)
+
+#define ARCH_ATOMIC

 #endif /* __ARCH_S390_ATOMIC__ */
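For context on the "switch to use atomic-instrumented.h" step from the commit message: with ARCH_ATOMIC defined, the generic layer instruments each access for KASAN/KCSAN and then delegates to these arch_atomic_* primitives. A rough, self-contained userspace sketch of that layering follows; it is not the kernel's actual headers, instrumentation is reduced to a print, and the arch op is reduced to a GCC builtin:

#include <stdio.h>

/* Stand-in for the arch layer: the raw operation, no instrumentation. */
static inline int arch_atomic_add_return(int i, int *counter)
{
	return __atomic_add_fetch(counter, i, __ATOMIC_SEQ_CST);
}

/* Stand-in for instrumentation (KASAN/KCSAN would validate the access). */
static inline void instrument_atomic_read_write(const void *addr, unsigned long size)
{
	printf("instrumented access: %p, %lu bytes\n", addr, size);
}

/* Stand-in for the generated atomic-instrumented.h wrapper: instrument
 * first, then delegate to the arch_ primitive. */
static inline int atomic_add_return(int i, int *counter)
{
	instrument_atomic_read_write(counter, sizeof(*counter));
	return arch_atomic_add_return(i, counter);
}

int main(void)
{
	int counter = 0;

	printf("new value: %d\n", atomic_add_return(5, &counter));
	return 0;
}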
