or
Less Pipelines, More Happy Developers!
ESE Kongress 2025 in Sindelfingen
Note:
Hello and welcome to our talk ...
Today we'd like to take you on a journey that we started 20 years ago.
A journey that has led us through many different projects and companies.
A journey that taught us that it's not just about automation, but also about the joy of developing software.
Note:
Okay, so who are we actually?
Alexandru and I have about 20 years of experience in the automotive industry.
We are both passionate about embedded software, build systems, and Software Product Line Engineering.
Currently we work as Senior Platform Engineers in the Rhine-Main Team at Marquardt GmbH.
Back to 2005
Note:
So where do we actually come from?
click
For that, we need to go back a bit into the past, specifically to the year 2005.
That's when I started as a newcomer in the automotive industry.
--
- Exciting products
- Constantly new requirements
- Well-paid jobs
- Paradise for SW developers
Note:
What was it like back then in the automotive industry?
Actually, pretty much the same as today.
click
Exciting products: brake control units, ESP, ABS, ACC, ...
click
Constantly new requirements, as many customers want to stand out from the competition.
click
The jobs were well paid.
click
Actually paradise for SW developers.
--
- SW development for brake control units
- Embedded C? We had that at university!
- It's your code, but don't you dare change anything!
- Always remember: don't break the build!
Note:
And the job?
click
Sure, we're coding Embedded C for brake control units.
click
No problem, we had that at university.
click
This is where the first letdown came.
You got responsibility for a part of the code, but ideally you shouldn't change it.
click
Why? Don't break the build!
--
- No unit tests
- No CI, only nightly builds
- Many integration tests, mainly vehicle trials
- Code reuse across all projects
- Hundreds of developers worldwide working on one codebase in RCS
Note:
What did the workflow look like back then?
click
Oops, no unit tests.
Not a single line of test code in the repository.
Sure, where do you test brakes? In the car.
click
No CI, only nightly builds to ensure that at least the code compiles.
click
But most of it was tested in driving tests.
So many features were tested at some point, somewhere in some project.
Hence the motto: better not change anything.
click
But how is that supposed to work when the code is shared across all projects,
all customers keep arriving with new requirements,
click
and several hundred developers worldwide are working on one codebase?
--
Continuous "Child in the Well"
Note:
When someone asks me today what Continuous Integration is, I like to remember that time.
About what Continuous Integration is absolutely NOT.
At some point, I came up with a fitting name for the situation back then:
click
Continuous "Child in the Well".
What does that mean exactly?
- Highest quality criterion: SW linkable.
- Some project is always red (compile or link errors)
- No test automation
- No unit tests
- Developers are evil, they build bugs into the code.
But what?
Note:
Well, we need to change something! But what?
click
This is where our CI journey really begins.
Namely with the development of our Automotive Software Factory.
The name came later, but there were plenty of ideas.
--
Changes until noon, then bug fixing and vehicle tests?
No!
Note:
One idea was ...
You can imagine how thrilled the developers were.
click
Especially considering what "noon" even means at an international corporation working across time zones.
This idea didn't really work.
What else can you do?
--
- With our own framework based on CUnit
- Automatic generation of mockups
- XML2Makefile code generation
- Test Driven Development (TDD)
- Nightly tests on Jenkins (and Hudson!)
Note:
Sure, if you don't have unit tests, writing them is always a good start.
However, it wasn't that easy to make our code testable.
click
We built our own framework based on CUnit.
click
The automatic generation of mockups was a big success back then.
Writing mockups manually (especially in the era of Autosar) was simply too time-consuming and a major hurdle for developers.
click
We tried from the beginning to establish Test Driven Development based on requirements.
click
And of course, if you have unit tests, you want to run them automatically.
--
- Gerrit and Jenkins for tools
- Feature-based testing via commit comments
- SW development still on RCS.
- CI with RCS? Yes, we can!
Note:
Well, we have a Jenkins and some unit tests.
Let's do CI!
click
...
--
Note:
What? No Git? You're doing CI with RCS?
click
Sorry, but there's no mercy then!
You're on your own!
And that's how it was. We never got away from nightly builds.
Our CI solution ran in parallel to nightly builds.
--
--
Note:
The monster that came out of it was a Jenkins that could do everything.
The Jenkinstein.
Instead of a unified build system with a single pipeline, we had a multitude of jobs that were all somehow connected.
We abused Jenkins as a build system, as a test system, as a deployment system, as a monitoring system.
Everything our build system couldn't do, we packed into Jenkins pipelines.
- The crux with Jenkins pipelines
- Java developers who just want to program Java
- And then aren't allowed to!
- Many misunderstandings about what runs where
- Nobody understands how the pipeline works anymore.
- Nobody can debug.
- Nobody can follow it.
- Anti-pattern of CI.
- 2 Scrum teams were at least 50% occupied with maintenance.
- The "Service Card" was being passed around.
--
- On-premise Jenkins with 500 static VMs
- Micro services for reporting and artifact storage
- Combination of CI/CD, nightly builds and on-demand builds
Note:
At first, everything went well ...
--
Note:
Sure, when you start with Jenkins, you begin with simple freestyle jobs.
Just building.
- then suddenly a bit more happens
- gradually tools need to be glued together
- Connection to the SCM system
- Reporting
--
--
- A pipeline for target builds (Ninja)
- A pipeline for nightly builds (Eclipse + GNU Make)
- A pipeline for nightly unit tests (Perl + GNU Make)
- Many more pipelines (Matlab, Polyspace, QA-C, ...)
--
Note: Higher, faster, further: One pipeline to rule them all.
--
- Pipeline to orchestrate and integrate tools
- Build logic in pipelines (10,000s of lines of Groovy DSL)
- Sufficient? No! Shared libraries and plugins still exist ...
- Non-reproducible CI results
- Worst case: separate repos for product source code and CI pipeline
Note:
- CI system does different/more things than the build environment.
- With a hammer in hand, the world looks like a pile of nails.
- Birmingham screwdriver
--
--
🎯 Separation of Concerns → Pipeline = Orchestration only
💻 Local-First Development → Same commands everywhere
🚀 Bootstrapping → Scripts handle dependencies
🏗️ Unified Build System → CMake + Ninja for all variants
✅ Quality Gates = Test Selection → Pytest markers drive everything
Note:
These are the core principles we derived after recognizing that CI and local environments differ primarily in orchestration, not in actual build and test execution.
Separation of Concerns: All business logic lives in the build system, not in pipeline DSL.
Local-First: Jenkins executes the exact same commands developers run on their machines.
Bootstrapping: Build scripts handle all dependency resolution and tool installation automatically.
Unified Build System: CMake as meta-build system generates all artifacts for all variants.
Quality Gates: Different test levels are just pytest marker selections, making them transparent and reproducible.
--
🔄 Thin CI Pipeline → Minimal orchestration
🎯 Quality Gate → Selection of quality tests
🐍 Python + Pytest → Quality tests
🔄 Pypeline → CI agnostic build pipeline
📦 Scoop → Windows package manager
🏗️ CMake + Ninja → Fast (meta) build system
📚 Sphinx + Sphinx Needs → X-as-Code
Note:
Our implementation stack is simple but powerful:
Scoop handles all toolchain installation on Windows automatically.
CMake as meta-build system generates ninja build files for performance - building all artifacts of all variants.
Python and Pytest serve as the universal test framework for ALL quality gates.
Jenkins pipeline is thin - just orchestration, calling pytest with appropriate markers.
Quality gates are transparent: quick tests for PRs, full tests for main branch, extended tests for nightly.
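One way to picture that marker-driven selection, as a minimal sketch — the marker expressions and function names here are illustrative assumptions, not the actual project configuration:

```python
# Sketch: map a CI trigger type to a pytest marker expression.
# TRIGGER_MARKERS and pytest_args are hypothetical names for illustration.

TRIGGER_MARKERS = {
    "pr": "build or unittests",                    # quick tests for pull requests
    "main": "build or unittests or integration",   # full tests for main branch
    "nightly": "long",                             # extended nightly tests
}

def pytest_args(trigger: str) -> list[str]:
    """Build the pytest command line for a given trigger type."""
    # Unknown triggers fall back to the quick PR gate.
    marker_expr = TRIGGER_MARKERS.get(trigger, TRIGGER_MARKERS["pr"])
    return ["pytest", "-m", marker_expr, "--junitxml=report.xml"]
```

The pipeline itself then only has to decide which trigger fired; everything else is an ordinary pytest invocation that a developer can run locally with the same `-m` expression.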
--
--
%%{ init: { 'theme': 'dark', 'themeVariables': { 'edgeLabelBackground': 'transparent', 'fontSize': '20px' } } }%%
flowchart LR
subgraph QG["🎯 Quality Gate Selection"]
C1["What to test?"] --> C2{Trigger Type}
C2 -->|PR| C3["⚡ Quick Tests"]
C2 -->|Main Branch| C4["🔍 Full Tests"]
C2 -->|Nightly| C5["🌙 Long Tests"]
end
C3 --> C6["🎭 Start Parallel Execution"]
C4 --> C6
C5 --> C6
subgraph TE["🔄 Test Execution"]
subgraph A1["Agent 1"]
M1A["📥 Checkout Code"] --> M1B["🔧 Installation of Dependencies"]
M1B --> M1C["🧪 Execute Tests"]
M1C --> M1D["📋 Deploy Test Results"]
M1D --> M1E["📦 Deploy Artifacts"]
end
subgraph Ax["..."]
end
subgraph An["Agent N"]
M3A["📥 Checkout Code"] --> M3B["🔧 Installation of Dependencies"]
M3B --> M3C["🧪 Execute Tests"]
M3C --> M3D["📋 Deploy Test Results"]
M3D --> M3E["📦 Deploy Artifacts"]
end
START["▶️ Start"] --> M1A
START --> Ax
START --> M3A
M1E --> END["⏹️ End"]
M3E --> END
end
C6 --> TE
TE --> C7["📊 Wait & Collect<br/>Overall Status"]
%% Style to make an element transparent
classDef transparent fill:transparent,stroke:transparent
class Ax transparent
Note:
This diagram shows our unified SPLE pipeline approach.
The pipeline simply selects a quality gate based on the trigger type - pull request, main branch push, or nightly build.
Then it orchestrates parallel execution across multiple agents.
Each agent runs the same four steps: checkout code, install dependencies, execute tests with selected markers, and deploy results.
This transforms quality gates from opaque pipeline magic into transparent, reproducible test selections.
--
class Test_MyVariant:
    variant = "MyVariant"

    @pytest.mark.build
    def test_build(self):
        spl_build = SplBuild(variant=self.variant,
                             build_kit="prod",
                             target="build")
        result = spl_build.execute()
        assert result == 0, "Building failed"

    @pytest.mark.unittests
    def test_unittests(self):
        spl_build = SplBuild(variant=self.variant,
                             build_kit="test",
                             target="unittests")
        result = spl_build.execute()
        assert result == 0, "Unit tests failed"
Note:
Here's the actual code structure we use.
Each variant gets a pytest class with methods decorated with markers.
The build quality gate is marked with pytest.mark.build.
The unittests quality gate is marked with pytest.mark.unittests.
Each test uses the same SplBuild wrapper that calls CMake targets.
This works identically on developer machines and in CI - no magic, fully reproducible.
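As a rough illustration of what such a wrapper might look like — the real SplBuild differs, and the `build/<variant>/<build_kit>` directory layout is an assumption made here for the sketch:

```python
# Hypothetical sketch of an SplBuild-style wrapper: variant, build kit and
# target are mapped to a plain CMake command line. Not the actual implementation.
import subprocess

class SplBuild:
    def __init__(self, variant: str, build_kit: str, target: str):
        self.variant = variant
        self.build_kit = build_kit
        self.target = target

    def command(self) -> list[str]:
        """The CMake invocation for this variant/kit/target (assumed layout)."""
        return ["cmake", "--build",
                f"build/{self.variant}/{self.build_kit}",
                "--target", self.target]

    def execute(self) -> int:
        """Run the build and return the process exit code."""
        return subprocess.call(self.command())
```

Because the wrapper is just a thin shell around CMake, the same test passes or fails for the same reason on a laptop and on a CI agent.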
--
🎯 Define Once, Run Anywhere → Same pipeline on local/CI
📝 YAML Config → Declarative pipeline definition in pypeline.yaml
🔨 Pypeline → Cross platform pipeline runner (Python)
🐍 Pipeline Steps → Python classes, not CI DSL
Note:
A key enabler of our platform is pypeline - our own CI-agnostic pipeline framework.
The core problem it solves: pipelines become tightly coupled to specific CI systems like Jenkins or GitHub Actions.
Each CI system has its own syntax and limitations, making pipelines non-portable.
Pypeline lets you define build/test/deploy pipelines in YAML once and run them identically everywhere - on local machines, Jenkins, GitHub Actions, anywhere.
The key difference: pipeline steps are Python classes instead of platform-specific scripts.
This ensures reproducible builds and eliminates "works locally, fails in CI" problems completely.
It handles automatic bootstrapping of dependencies, virtual environments, and toolchains.
All business logic lives in testable, maintainable Python code - no more complex Groovy DSL.
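To make "steps are Python classes" concrete, here is a minimal sketch under stated assumptions: the `ScoopInstall` name comes from the YAML below, but the base class, `run()` signature, and `Runner` are illustrative, not the actual pypeline interface:

```python
# Sketch of a pypeline-style step architecture (assumed API, for illustration).
from dataclasses import dataclass, field

@dataclass
class Step:
    """Base class for pipeline steps, configured from pypeline.yaml."""
    name: str
    config: dict = field(default_factory=dict)

    def run(self) -> int:
        raise NotImplementedError

@dataclass
class ScoopInstall(Step):
    """Hypothetical step: install Windows toolchain packages via Scoop."""
    def run(self) -> int:
        packages = self.config.get("packages", [])
        # The real step would shell out to `scoop install ...` here.
        print(f"scoop install {' '.join(packages)}")
        return 0

class Runner:
    """Execute the configured steps in order; stop at the first failure."""
    def __init__(self, steps: list[Step]):
        self.steps = steps

    def run(self) -> int:
        for step in self.steps:
            if step.run() != 0:
                return 1
        return 0
```

Since steps are plain classes, they can be unit-tested like any other Python code — unlike Groovy DSL embedded in a Jenkinsfile.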
--
# pypeline.yaml - runs identically everywhere
pipeline:
  - step: CreateVEnv
    module: pypeline.steps.create_venv
    config:
      python_executable: python311
  - step: ScoopInstall
    module: pypeline.steps.scoop_install
  - step: GenerateEnvSetupScript
    module: pypeline.steps.env_setup_script
  - step: Build
    run: cmake --build build --target all
Note:
Here's a real example from our SPL Demo project.
This is the actual pypeline.yaml that bootstraps our entire build environment.
First step: CreateVEnv creates a Python virtual environment using our bootstrap script.
Second step: ScoopInstall installs all required Windows toolchain components via Scoop package manager.
Third step: GenerateEnvSetupScript creates environment setup for subsequent builds.
Fourth step: Build runs the actual CMake build as a plain shell command.
This exact YAML runs identically on developer laptops, in Jenkins, or in GitHub Actions.
No platform-specific conditionals, no CI system lock-in.
--
Agile Release Train with SAFe
Shared Ownership across all teams
Regular Sprint Reviews with user feedback
Management Support for budget & infrastructure
From Fragmented Tools → Unified Platform
Note:
A major shift was treating the platform itself as a product.
We developed it collaboratively within an Agile Release Train following the Scaled Agile Framework.
This moved us from fragmented, tool-specific automation efforts to a unified, organization-wide initiative.
By coining a clear name and vision, we gave all contributors a shared sense of ownership.
Every team now contributes features, feedback, and improvements through regular sprint reviews.
Management actively supports from a business perspective with dedicated budgets for training, licenses, and infrastructure.
This transforms the platform from an ad-hoc engineering effort into a sustainable, strategic product.
--
--
- Same commands locally & CI
- Easy debugging of failures
- Fast feedback cycles
--
- Maintainable Python code
- Reusable components across SPLs
- Clear separation of concerns
--
- Fast, reliable quality feedback
- Transparent quality criteria
- Always releasable software state
Note:
Let's summarize the benefits for different stakeholders.
For developers: The same commands work locally and in CI, making debugging straightforward with fast feedback.
For platform engineers: We have maintainable Python code instead of complex Groovy DSL, with reusable components across all Software Product Lines.
For management: Fast, reliable feedback on software quality with transparent criteria ensuring always releasable software.
This transformation from Jenkinstein to a clean SPLE Platform has made everyone happier - hence our title: Less Pipelines, More Happy Developers!
https://avengineers.github.io/ESE-2025













