Slide 1: Test-driven kernel releases

Guillaume Tucker
[email protected]
2022-06-01

Slide 2: Development & Testing

- kernel development
- automated testing
- manual testing

Slide 3: Automated Testing

Slide 4: Open Source Philosophy

- Single mainline code base
- Many contributors
- Many use-cases
- Application changes are sent upstream
- Reduced duplication of efforts

Slide 5: Open Testing Philosophy

- Single mainline code base, including tests
- Many contributors who run tests
- Test results sent upstream
- Test results summary in each release
- Reduced duplication of testing efforts

Slide 6: Hidden Mass of Testing

- Duplicated testing efforts
- No solution for tracking results upstream
- Testing stays hidden as if it were downstream

Slide 7: automated testing

Slide 8: syzbot

https://syzkaller.appspot.com/

- syscall fuzzing
- Automated bisection
- Reproducers
- Web UI

Slide 9: KernelCI

https://linux.kernelci.org/job/

- Tailored CI system
- Web API
- Distributed test labs
- Kubernetes
- Automated bisection
- KCIDB database

Slide 10: Red Hat CKI

https://datawarehouse.cki-project.org/

- Fedora kernels
- Mainline kernels
- Stable kernels
- LTP, kselftest, etc.
- KCIDB integration

Slide 11: regzbot

https://linux-regtracking.leemhuis.info/regzbot/mainline/

- All known regressions
- Essentially manual submissions
- Seamless integration with emails
- Weekly report on LKML for mainline

Slide 12: show me the results

Slide 13: Focusing on the results

- Manual runs
- Maintainer scripts
- Automated systems

Slide 14: Focusing on the results

- Manual runs
- Maintainer scripts
- Automated systems

Results are the least common denominator.

Slide 15: Benefits of results in releases

- Valuable for users in general
- A canonical way to keep track of code quality
- Essentially, avoiding the “works for me” syndrome

Slide 16: Challenges

- Shift in workflow: results are needed before the release
  - Similar to how -rc works for stable and mainline
  - Expect positive results rather than solely looking for regressions
- Additional step for maintainers
  - Keeping it simple and not disruptive
  - Optional: up to each maintainer to decide which results to include

Slide 17: in practice

Slide 18: Where to start?

- Results reproducible on any hardware
- Tests included in the kernel source tree
- Plain builds with reference toolchain binaries, Docker images
- Builds with sparse enabled: make C=1
- coccicheck
- KUnit
- Device tree validation

Slide 19: Where to start?

- Results reproducible on any hardware
- Tests included in the kernel source tree
- Plain builds with reference toolchain binaries, Docker images
- Builds with sparse enabled: make C=1
- coccicheck
- KUnit
- Device tree validation
- Documentation: https://docs.kernel.org/
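The hardware-independent checks listed on this slide all map onto standard kernel build targets. A minimal sketch, intended to be run from the top of a kernel source tree (outside a kernel tree it only prints a message, so it is safe to try):

```shell
# Hardware-independent checks from the slide, as kernel make targets.
# Only runs the heavy commands when the current directory looks like a
# kernel source tree.
checks_ran=no
if [ -f Kbuild ] && [ -d Documentation ]; then
    make defconfig
    make C=1                            # re-check compiled files with sparse
    make coccicheck MODE=report         # Coccinelle semantic checks
    ./tools/testing/kunit/kunit.py run  # KUnit tests
    make dt_binding_check dtbs_check    # device tree schema validation
    checks_ran=yes
else
    echo "not a kernel source tree; nothing to run"
fi
```

Because none of these need target hardware, their results are reproducible by anyone, which is what makes them a good starting point for in-release summaries.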

Slide 20: RFC 1: Test results in-tree

- Similar to linux-next merge logs
- Updated for each release (stable, mainline, -next)
- Rely on Git history for older results

Results
├── kselftest
│   ├── futex
│   └── lkdtm
├── KUnit
│   └── results.json
└── summary

Slide 21: RFC 2: Test-link in commit
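One way RFC 2 could look in practice is a link to test results recorded as a commit message trailer. A throwaway sketch; the `Test-results:` trailer name and the URL are made up for illustration, not an established convention:

```shell
# Hypothetical test-link trailer in a commit, shown in a disposable repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=a commit -q --allow-empty \
    -m "mm: fix something" \
    -m "Test-results: https://example.org/results/abc123"
# The trailer travels with the commit through the normal merge workflow.
trailer=$(git log -1 --format=%b)
echo "$trailer"
```

The appeal of this variant is that it needs no new tooling: the link rides along in the commit object itself.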

Slide 22: RFC 3: Git meta-data

- Tied to Git history
- Separate from the commit merge workflow
- Similar to Git notes
- git results show REVISION
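Until a dedicated `git results` command exists, today's `git notes` can model the idea: metadata attached to a revision, tied to Git history but kept outside the commit merge workflow. A disposable-repo sketch; the result summary text is invented for illustration:

```shell
# Modelling "git results show REVISION" with git notes.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "kernel release"
rev=$(git rev-parse HEAD)
# Attach a result summary to the release commit without changing it.
git -c user.email=a@b -c user.name=a notes add -m "kselftest: 120 pass, 0 fail" "$rev"
# Read the results back for that revision.
result=$(git notes show "$rev")
echo "$result"
```

Notes live in their own ref (refs/notes/commits by default), so results can be added or amended after the release without rewriting the tagged history.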

Slide 23: Some thoughts

- Subsystem-specific results in a separate location?
- Integration results for mainline / stable / linux-next
- Subsystem results could be pulled in alongside code
- Follow the regular email workflow for adding results
- Keep in-tree result summaries in plain text
- Extra data can be hosted on separate systems

Slide 24: RFC: How does the concept sound?

- Has this been tried or discussed before?
- Does it seem worth the effort?
- Time for an RFC on LKML to go through some details?

Slide 25: Thank you!