
This Devoxx video addresses the importance of creativity in programming and the challenges that come with enterprise software development. The speaker recalls the days when first programs such as 'Hello World' produced an instant result, capturing the simplicity and joy of creating. As software grows, however, complexity increases, and that early programming experience quickly disappears in a thicket of tests, integrations, and constant troubleshooting. Instead of creative joy comes frustration over waiting for results and dealing with unstable tests.

The speaker points out that in today's industry, much of what used to be merely unreliable is gradually becoming a real burden, which can translate into a further drop in creativity and productivity. The example of Netflix, which overtook Blockbuster thanks to faster processes, is a reminder that speed, not just size, decides success. Netflix achieved this through an innovative work culture that let its engineers solve problems quickly.

In the context of developer productivity engineering (DPE), the speaker stresses the importance of fast data analysis, which enables more effective, informed decision-making. He points out that you cannot act without data, and that proper measurement and analysis are key to the work of a development team. DPE is not a brand-new topic, but rather an evolution of earlier DevOps practices, which focused mainly on development and integration.

The speaker also covers various approaches to speeding up feedback and reducing the time spent waiting for builds and tests, which should let developers focus on writing code rather than on managing long-running tests and builds. Examples of how developers can work more effectively are given in the context of technologies such as build caching and flaky-test analysis. Applying the right tools and approaches, such as monitoring performance with data at scale, can stop performance regressions and make engineers happier with their work and its results.

In closing, the speaker notes that as more organizations recognize the benefits of DPE, the discipline is increasingly in demand, which will likely shape the job market for developers in the future. The approach can also bring significant savings, letting teams focus on delivering high-quality solutions to their customers. At the time of writing, the video had 2,498 views and 40 likes.

Timeline summary

  • 00:00 Welcoming the audience to a special presentation in a movie theater.
  • 00:11 Reflections on past coding experiences and the joy of creating software.
  • 00:58 Discussion of the challenges of enterprise software development compared to writing simple programs.
  • 01:31 Growing complexity in successful software projects leads to less creativity.
  • 01:50 Illustrating the frustration of long feedback cycles in enterprise development.
  • 02:49 Mention of the digitalization of GDP and the reliance on software engineering.
  • 03:38 Using Netflix and Blockbuster as an example of rapid technological change.
  • 04:06 Description of a typical day as a software engineer, filled with challenges.
  • 04:25 Experiencing build failures and the pain of debugging.
  • 06:10 Discussion of engineers' high pain threshold when dealing with build problems.
  • 08:35 Introduction of developer productivity engineering as an answer to current challenges.
  • 09:41 Emphasizing the importance of data-driven decisions in improving software development.
  • 11:03 Asking the audience whether their software builds are efficient.
  • 12:26 Identifying common problems with build stability and debugging.
  • 13:46 Explaining the need for better tooling for developer satisfaction.
  • 14:25 Introduction of three technologies that accelerate feedback cycles.
  • 21:36 Discussion of the potential costs of slow CI pipelines and how to manage them.
  • 22:40 Engaging the audience with a DPE quiz and giving away t-shirts.
  • 23:22 Presentation of various technologies for faster feedback cycles in software builds.
  • 29:02 Highlighting the value of predictive test selection based on code changes.
  • 31:52 Explaining test distribution as a way to speed up testing.
  • 37:37 Summary of the key points on raising productivity in software engineering.
  • 42:26 Final thoughts on effective development and well-wishes for the audience.

Transcription

Hello, miteinander. Welcome, and good afternoon. It's pretty special to look this way in a movie theater and not that way, but I'm sure I'll get used to that. I'm sure you remember those days way in the past when you wrote your System.out.println "Hello World" and you instantly got the hello world on the screen. Or you moved the turtle, move north 100, and the turtle moved north, and you saw it right there. If it went the wrong way, well, you corrected it until you had your nice drawing on the screen. Or maybe you wrote your own clone of something like Space Invaders: you wrote some code, you ran it, and boom, it was there. If it didn't work, you fixed it, and you had this great pleasure, this great experience, which probably was the reason why you stayed in software engineering. There's just something very appealing, very pleasing about being in this creative mode of creating software. It's also needed for many inventions today. Just recently, the human genome was finally fully mapped. It took a lot of creative software to make that happen. Many other inventions in transportation, technology, and so on rely on a very creative process of creating software. It's not a mechanical process. It's this joy, really, that keeps us in this industry. This is why we like to do it. We like to be in the flow. We like to get things done, work on features, deliver them, see them with our customers. But enterprise software development often isn't like that. You don't just move a turtle and see the result. It's more like you move the turtle, and two hours later you will see whether the turtle went the right way. You're dealing with fires. You're dealing with integration. You're dealing with broken tests, and you don't even know whether they're your tests or not. Configuration is a hassle, and so on. The pleasant experience you had when you started is usually, not always, but usually gone in enterprise software development. That's a real pity. And if you're successful with what you're doing, it gets even worse. If your team, your company is successful, you will write even more code, even more features. You will add more platforms, more cross-version testing, and so on. The more complexity you add, the harder it becomes to maintain. You have more and more of this churn where you're not really creating, you're just trying to deal with it, juggling more balls than you actually can. You're becoming a victim of your own success. This is not an esoteric topic, not at all. Around 2021, it was predicted that by this year about two-thirds of GDP would be digitalized. Two-thirds of GDP comes, in the end, from software that is created by people like you. This is very substantial. It's a shame to see that people are not as productive and not as creative as they could be, because the things around them no longer allow for it the way they did when they started to code as kids. Also, it's no longer the big beating the small. It's really the fast beating the slow. Maybe one example, not even from software engineering, or only indirectly: Netflix. Blockbuster was dominating video rental. They didn't take Netflix seriously, even when Netflix offered to be bought by Blockbuster. A few years later, Blockbuster went out of business, because Netflix was just moving much, much faster.
There's actually an interesting book by the people from Netflix about that. If we look at a typical day, I'm not saying every day is like this and every engineer goes exactly through that, but I'm sure you can recognize yourself in some of this. You come to work in the morning, you have your coffee, your tea, you're fully energized. You just get some work done. You're fully creative. You just write this code. A super experience. Then at some point, before you check in, you want to make sure that what you did is actually working. You start building. Now the build fails. Now you start debugging. You have no idea why this Docker container didn't come up or whatever it was. Finally, you figure it out, but now it's already lunch. You're exhausted. Let's take a break. You come back. You have some good salad. You're energized again. You continue coding. You build again. You need to wait, but at least this time it actually passes. But now, next step, you need to integrate this. You push this to CI and you know this is going to take a long time. The queue already says 2,000 builds in the queue. I'm sure at least some of you have seen this. You see four-digit queue times. What do you do? You're smart. You schedule this so that you can go into your sprint meeting. When you come back, hopefully your build is done, if you're lucky. But now what happened? Well, the build is done, yes, but it failed. What happened? A test broke. So now you start investigating, and it's not clear. I mean, this test broke, but you worked on this feature code over here. Why did you break that over there? It's not clear. So you investigate, and at some point you realize, oh, this is not even related. This is a flaky test. It had nothing to do with your changes. And in any case, you now need to run CI again. I'm sure you've gone through this. Did you ever do this? You can be honest. The build's failing, and you just click run again to make it pass. And I think everybody that doesn't raise their hands probably did it as well. Me too. So that's a common thing. So all this red stuff, that wasn't really creating something. Only the blue stuff over there. And the ratio here doesn't look too great. And if we ask people, we talk to many of our customers: what are your biggest pain points? We give them a big list of options. And waiting, waiting for builds and tests to finish, is basically the biggest pain point. It's 92% of everybody that was asked. And we asked a lot of people. Instability is a similar one, also very painful. Things that sometimes work, sometimes don't work. You just try again until it hopefully, at some point, works. So it is a real pain. It's not like we're making this up here. And before we move on, there's one more thing that I find very impressive, also from talking to customers, who are always engineers for us: engineers have a really high pain threshold. I think it's so high that they would be really good world-class rowers. Because rowing is very, very hard work. You need to push really hard for half an hour. And the pain you have to endure, I mean, it's crazy. And you're not alone. You're doing it for three other people or seven other people. That endurance for pain, I think, is pretty impressive. And the way I see this is, we talk to people and say, well, this is taking two hours. And they're like, yeah, that's just what it takes.
Or this is always broken. Yeah, that's just always broken. Once we do a release, we try to get it green, and then we move on. It's so normal in many situations today that people have just adapted to this high level of pain and deal with it. And just an interesting anecdote, not directly related, but indirectly, I would say: there have been scientific studies showing that when rowers row in a group, their pain threshold goes even higher. And I think it's the same in software. You do a release, you do it with your colleagues, you help each other, you fix things, you help out here and there. And in the end, it feels really good. You got this release out. But if you step back a little and look at what happened, why did we even have to go through this pain? Even if we shared it and could better deal with it, it wasn't really necessary in the first place. So if you're looking for another sport to do, that could be one. So there were several big initiatives in the industry. I don't want to go through this whole history. The last one, though, was DevOps, which started in the 2010s. And what we're seeing and saying, and I will have some facts and data for that as well, is that the next one is developer productivity engineering. Because while DevOps covers the develop part as well, the focus that you will see in developer productivity engineering is not really covered by DevOps. So it's kind of a continuation. It moves the whole cycle more to the left. DevOps starts at some point in the cycle, but even further to the left, we can do something with DPE. That's where we are continuing that spectrum. It's not a replacement. It's not something better. It's just an extension of things that have been done before. So DPE, developer productivity engineering, takes an engineering approach. And you will see it: it's all based on data. It's not based on gut feeling, or who screams the loudest, or who's the most popular in the room. It's really based on data to make decisions. And we can do different things. We can accelerate, so we get faster feedback. We can stabilize things. And we can also surface a lot of data about what's happening. Because if we don't have data, how can we decide what to improve and what to keep? Developer productivity engineering is not developer productivity management. So this is not about the whole life cycle and how we should organize our teams and locations and so on. It's focused more narrowly on the process and the outcomes. We want to build faster. We want to have more reliable results. It's really about the building and testing part. And in the end, if we want to look at the ROI, we will have real data. It's interesting for everybody: whether you're an executive and you care about revenue, cost, and brand, or you're a VP of engineering and you care about the middle tier, or you're an engineer. In the end, there is value for everybody in developer productivity engineering and in improving developer productivity. So what are the actual problems we're trying to solve, very concretely? One is that things take too long. Or is there anybody in the room who says, my software builds too fast? There's one person here. Interesting. And second question: does anybody have software that gets faster to build over time? Probably neither. So things take too long. We want to be in the flow. We want to get things done. But we don't want to be slowed down.
And then people say, well, I'll just do something else in the meantime. But that's very inefficient. There are a lot of studies that say this: the inefficiency of context switching. There are even studies now that say context switching is bad for your brain, because it adds distress, which releases hormones that are not good for your brain. The other aspect is that things take too long to fix, because you have no idea what's broken. You just run the build locally, or on CI, and it's broken. And now what? Well, first you try running it again, yes. But if that doesn't work, what do you do? You start looking at logs, print lines, whatever you can, to make it work. We don't really have much to work with there. And the third one is about knowing that something could have been prevented, but we didn't prevent it. You run the build, and it fails because of that test, for example, and you know this test is flaky. You already knew, but you didn't really do anything about it. Maybe you didn't even have the data to be sure that it is flaky. Or once in a while it just fails when it runs on that agent, so let's run it on another agent on CI. But these things are costly. You spend time running it there, knowing it could fail. And if it fails, you run it again. And that adds up. If you multiply this by a team of 100 developers, or however many developers, and by the number of days in the year that you code, it really costs a lot of money, and it reduces the productivity of the entire engineering team. When you do the calculations, it usually adds up to many, many developers that you could additionally employ on your team if that time weren't wasted. And yes, there is another study that showed that if this tool chain of building software locally and on CI, with the tests and everything else, were more efficient, people would be happier at work. In the US, for example, all these companies now offer a fridge with juice, so that's no longer a distinguishing factor when you choose a company. Maybe they all have a ping pong table too; not a distinguishing factor anymore. And the salary, of course, is also important, but they're kind of matching each other there as well. But another aspect is how happy the developers are in the work they do. And if they have to choose between a company where half the day they're just waiting and debugging and things are not working, and another company where they can just code, be in that flow, create things, deliver features, that's where they go. That's why, especially in the US, companies have started to build entire teams to make their own developers happy. Not just with money and juice, but with a tool chain that allows them to work efficiently. This is a scary-looking chart. I'm not going to go into detail. But the key here is that there are certain problems, there are ways to address them, and there is a good outcome. And it can be looked at almost scientifically. So there's a specific approach to doing this. The first step is that we want to make our decisions based on data. We want to stop, like I said, listening to whoever is the loudest or has been on the team the longest. We want to do this based on data. Because it's so easy to deviate and go for the most recent data, for example. One test was flaky the last three days; let's fix that one.
But maybe if you look back 30 days, it was irrelevant; there are other tests that fail much more often. Or maybe they run for much longer before they fail, so they're also much more expensive. So we cannot make these decisions based on anecdotal evidence alone. We need the data. We call this a build scan. You can also use them for free. But it's about the concept here. It is about surfacing what happens in the build, so that you have actionable data that you can base decisions on, improve things, and then measure again: is it now better than before, or worse? But you need the data. And the idea is, without going through all of this in detail, that we capture a lot of data about what happens in your build, on CI or locally, and we visualize it. You can also export the data. But then you can really make a decision and say, okay, here's where we spent the time, or this is flaky, or this is unstable; let's address that. And then let's measure again. It's like a scientific experiment. You have a hypothesis, you try to reproduce the problem, and then you verify whether that hypothesis is true or not. I'll give you one example from XWiki. It's an open source project. Here's one build scan. It doesn't matter whether you can read it in detail or not. Here we see all the dependencies that were resolved. And it's a pretty big project, XWiki. You have probably had this situation: you run on CI, it works; you use exactly the same Git commit ID locally, and it fails. Does that sound familiar? A little bit, okay. I've had this many times. Or the other way around, right? It works locally, works on my machine, but doesn't work on CI. But it's the same Git commit. It seems like everything's the same. So how do you figure that out? Here, for example, when you use dynamic dependencies, or snapshot dependencies, which are also a kind of dynamic dependency, it will tell you exactly which version was resolved. And if you know that, you can compare the build on CI and the one locally, and it will tell you: ah, okay, there's a mismatch in the snapshot version, and maybe some API changed between these versions. And then you can address it. But if you don't have this, good luck finding out that it was a dynamic dependency that was resolved to a different version. You probably can't figure it out, and then you have to rerun with a different log level. And you have to know what you're looking for, right? But if the data is just there and shows you what is different, it's much easier to pinpoint what is going on. So this is just an example of how data will help you find the root cause much faster.
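To make that capture step concrete in Gradle terms: you can publish a build scan for a one-off build by running it with the --scan flag, or wire it up permanently in the settings script. Here is a minimal sketch assuming the Gradle Enterprise plugin; the version number is an assumption, so check the current release before use.

```kotlin
// settings.gradle.kts -- a minimal sketch: publish a build scan for every
// build, locally and on CI, so the data is captured continuously.
// The plugin version below is an assumption; check the current release.
plugins {
    id("com.gradle.enterprise") version "3.16"
}

gradleEnterprise {
    buildScan {
        // Required acceptance of the terms for the public scan service.
        termsOfServiceUrl = "https://gradle.com/terms-of-service"
        termsOfServiceAgree = "yes"
        // Capture every build, not just the ones someone remembers to flag.
        publishAlways()
    }
}
```

Publishing on every build, not just on demand, is what makes the later trend analysis possible.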
And the other area, besides capturing data, is that we want to make these feedback cycles much faster. Then we can stay in the flow. We don't have to switch context. We can do one feature at a time. We can also verify more often whether the change we made is good. Because what happens if you know CI is going to take three hours to run, and locally it's also half an hour? You're just going to accumulate changes, right? You're not going to commit or build. And then if it's red after you build, well, what was the change that caused it? Anything you did the whole day. So it becomes a problem in that area too. Very briefly: if you have faster feedback cycles, you can naturally build more often. That means you're going to have smaller change sets, because you can verify them very quickly. Smaller change sets mean fewer merge conflicts. And if something goes wrong, it's also more efficient to troubleshoot. So the mean time to delivery will actually be much faster, and the quality is higher. Also, when you build faster, you can do the quality checks much earlier. Many people say, oh, it takes so long to run this, so I'm only going to do it once a day or once a week. But that is pretty expensive, because when something breaks and you only find out a week later, going back and finding out what caused it costs much more than if you had gotten the feedback quickly. So your quality checks will move to the left; you will do them earlier. And that also means you will have fewer incidents to deal with downstream. And the other part is, if you have less idle time, you need less context switching. You can be more focused, you will be more productive, and you will also increase your quality. So there are different wins from having faster feedback cycles. Let me give you one example. You have a small team, the build takes one minute, they build 1,000 times. You have another team, almost twice as many people, the build takes four minutes, they build 850 times. Per developer, the first team builds twice as much. Even though it's half the size, it builds twice as much per developer, because the build is just much faster. And that comes back a little to: you don't have to be big; if you're small and fast, you will still beat the big and slow. And then people say, wow, five minutes, nine minutes, who cares? What's the difference? I'll just wait another four minutes. But look at this: if you can get a build from one minute down to 36 seconds, then over a year, if you take the number of builds per week and add it all up, you end up with something like 44 working days. That's quite substantial. That's like two months of saved time. Or you have a bigger team, 100 people, and your build time goes from nine minutes to five minutes because you made your build faster. Then you're going to save something like this. That's a few developers that you can hire. So even though it seems like, nine minutes, who cares about the four minutes difference, it adds up. And especially with these medium build times, not super short, but nine minutes down to five, it's very unproductive to do something else in between. Google did some internal studies, which they published: if something takes just about this long, it's very unproductive to switch to something else and come back. And the last aspect of faster builds is the CI resources. Now, you might say, I don't care about CI. I don't have to pay for it. Somebody else pays. But the CFO at some point will say, well, this is getting expensive. Because if all you do is add more agents, more machines, more power, it does get expensive, to the point where people say, we need to do something else; we cannot just throw more hardware at it. One example: LinkedIn. Every day, by using a build cache, meaning not redoing work that can be avoided, and I'll show this in more detail, they're saving 800 hours of build time. The alternative would be: let's buy enough machines that we can run 800 more hours of builds on them every day. So even for companies like LinkedIn, this gets very expensive. But besides the costs, if you have faster builds, you get faster feedback cycles. So you're going to push more often. You're going to wait less. You're not going to have four digits in your queue, maybe two digits, which, again, makes you more productive. You get your feedback faster. You know faster whether you did a good change or not.
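To sanity-check the "it adds up" arithmetic from a moment ago, here is a small back-of-the-envelope sketch. The builds-per-week figure is an assumption picked for illustration; the talk does not give the exact number behind its 44-day figure.

```kotlin
// A rough re-creation of the savings math: shave a build from 60s to 36s
// and count what that is worth across a whole team over a year.
fun main() {
    val secondsSavedPerBuild = 60 - 36      // build goes from 1:00 to 0:36
    val buildsPerWeek = 1_000               // assumption: whole team combined
    val weeksPerYear = 50

    val secondsSaved = secondsSavedPerBuild.toLong() * buildsPerWeek * weeksPerYear
    val hoursSaved = secondsSaved / 3600.0
    val workingDaysSaved = hoursSaved / 8   // 8-hour working days

    // 24 s * 1,000 * 50 = 1,200,000 s, about 333 h, about 42 working days,
    // in the same ballpark as the ~44 days quoted in the talk.
    println("saved ~%.0f hours, ~%.0f working days per year"
        .format(hoursSaved, workingDaysSaved))
}
```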
So let's make a short break here. I have some t-shirts. But you have to answer a question. I'll keep it easy, though. What does DPE stand for? And I will favor women. Can you say it again? Almost. Yep. Yes. Excellent. Do you want a t-shirt? It has a nice elephant on it, reaching for the stars. You can have one. I have different sizes. You can pick one here. Okay, you'll take one later. Cool. All right. So, build time. I want to show you three technologies. And these are concrete technologies. Some have built their own; we have also built one. But the point is not what we built, it's the concept behind it: how can I make my feedback cycles faster? And I have three technologies for that. The first one is build caching. So build caching, how does it work? It operates on the level of a task in Gradle, a goal in Maven, or a target in Bazel. That's the unit. It's not your entire build. It's not your Git commit. And it's also not a dependency cache: it doesn't store whatever jar file you have in Artifactory. It stores the output of a single unit of work. How it works is that it takes your inputs. For example, when you compile, it will take your compiler version, your compiler settings, your compile classpath, the sources you want to compile, and it creates a hash over all of that. That represents your input. Then it runs, you get some compiled classes, and that's your output. And that output is then stored in the cache under that input hash. It works the same with tests: the tests have an input, your test classpath, your test sources, and so on. Code generation, Javadoc generation, whatever it is, it's the same concept. Something goes into this unit of work, and something comes out of it. The input is hashed, and the output is stored in the cache under that hash. So next time you run, if the inputs are exactly the same, well, let's not run it; let's just take the output from the cache. And the cache is not only on your machine, you have a local copy of that as well, but it's also stored somewhere else, so it can be used by many developers and by CI. The idea behind this is that we don't want to do work that has been done before. Imagine you have a team of 100 people. CI built your project overnight. It compiled your code and did many other things, generated things, tested them, and now you come to work in the morning, and you build your project. Why should your build do anything? You're checking out exactly what was already built on CI. So instead of building all this stuff, it will just take all these things from the cache. And if you make a change, usually your change is very local. It doesn't change the whole project. So why should it run the whole project when only certain areas are affected? That's where the caching comes into play again. You make one change here, you say, build me the whole project, and it will only build those things that have been affected by the change. How does it know? You don't have to tell it. It will just look at the inputs. That's probably something you might do differently today. You might be smart about it in CI and say, oh, for this kind of change, I'm going to run this pipeline; for that kind of change, that pipeline. At Gradle, we just run everything for every change, because we know the build cache will say, this didn't change, and so on, and it will just skip over all those things. So the logic on CI is super simple, instead of having build logic in your build and then also on CI. But that's a different aspect. The key one here is the fast feedback cycle: let's not do work that was already done somewhere else before, for exactly the same configuration.
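For reference, wiring up such a shared cache is mostly configuration. A minimal sketch for Gradle follows; the cache URL is a placeholder, and the push rule (only CI populates the cache) is a common convention, not a requirement.

```kotlin
// settings.gradle.kts -- a minimal sketch of a local plus shared remote
// build cache. Caching itself also needs to be on, e.g. via
// org.gradle.caching=true in gradle.properties. The URL is a placeholder.
buildCache {
    local {
        // Reuses task outputs across builds on this machine.
        isEnabled = true
    }
    remote<HttpBuildCache> {
        // Shared with every developer and with CI.
        url = uri("https://build-cache.example.com/cache/")
        // Only CI pushes entries; developer machines just pull.
        isPush = System.getenv("CI") != null
    }
}
```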
And that's how LinkedIn saves those 800 hours. And they have something like 15,000 projects, I believe, is what they wrote. So I'm not sharing secrets; this is all in a public blog post. 15,000 projects, so there's a lot of reuse of libraries and components, and so it's very impactful. Let me give you one example from Spring Boot. I'm sure you all know Spring Boot. Here I just filtered by the Spring Boot project, because all of the Spring projects, or most of them, are benefiting from build caching. On average, the build takes 60 minutes; you can see that over there. And on average, they're saving 75 minutes, an hour and 15 minutes, per build. So without build caching, the total execution time would be 75 minutes more in task execution time. Very significant, and that's on each build. And they write a lot of code, they push a lot of code, so having these quick feedback cycles of 15 minutes instead of an hour is, of course, super impactful. It makes a huge difference. That's why they're also super happy with it and even wrote a blog post about how build caching helps them be very productive. And by the way, this is at ge.spring.io, so it's public. Everybody can look at exactly the data I'm showing here. Nothing fake, nothing secret; it's all public. Another idea: so we can avoid work that has been done before, but sometimes, of course, you make changes, and then you need to run certain things. And what usually takes the most time is the tests. So what can we do there? Well, the idea is: instead of running all the tests, let's run only those tests that are very likely affected by the changes you made, and avoid running the other tests. Depending on which inputs changed, it will run different tests. The goal here is not to say, let's not run tests anymore. That would be very fast, yes, but at the expense of quality. We still want to run tests, and we definitely want to run all the tests at some point as well. But for the quick feedback cycle, we want to run exactly those tests that are most likely affected by the changes we made. You might even do this already today: you have a multi-module project, and you know that if you make a change here, you only run the build or the tests in that subproject. This is basically taking that idea, but doing it in a way where you no longer have to know which tests are affected, because in some cases you won't even know. The way we do it is based on machine learning. It takes the history of the data, and based on that, it will predict which tests are likely to change their outcome. The other ones it will just skip. That is again a very powerful concept. Facebook has published scientific papers about this; they're doing it internally with their own tooling. I think Google does this kind of thing as well. So in that sense, it's not a super new thing. These are details; I'll just skip over them.
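To give a feel for the underlying idea, here is a toy sketch of the simplest possible change-based selection: map changed source files to test classes in the same package and run only those first. This is deliberately naive, not the ML-based approach the talk describes, and all the names in it are made up.

```kotlin
// A toy sketch of change-based test selection. Real predictive test
// selection learns from historical build data; this just uses a naive
// "same package" heuristic to show the shape of the idea.
fun selectTests(changedFiles: List<String>, allTestClasses: List<String>): List<String> {
    val changedPackages = changedFiles
        .filter { it.endsWith(".kt") || it.endsWith(".java") }
        .map { path ->
            path.substringAfter("src/main/kotlin/")
                .substringAfter("src/main/java/")
                .substringBeforeLast('/')
                .replace('/', '.')
        }
        .toSet()

    // Keep only tests living in a package that saw a change.
    return allTestClasses.filter { test ->
        changedPackages.any { pkg -> test.startsWith("$pkg.") }
    }
}

fun main() {
    val changed = listOf("src/main/kotlin/com/example/billing/Invoice.kt")
    val tests = listOf("com.example.billing.InvoiceTest", "com.example.ui.MenuTest")
    println(selectTests(changed, tests))  // [com.example.billing.InvoiceTest]
}
```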
I want to give you another example for this: Micronaut, from the Micronaut Foundation. It's a bit like Spring Boot, with similar goals, I think. Here we can see that within, I want to make sure I don't fall, within 28 days, they are saving 18 and a half full days by not running tests that didn't need to run. At some point in their pipeline, they do run all the tests; it's not like the quality goes down. But for all these PRs where you just want quick feedback and to catch 99%, or however many it is here, 99.9% of all the possible regressions, they only run a subset. And later they run all the tests. And if we take the weekends away: 28 days minus eight weekend days is 20 days. So within 20 working days, they save about 18 full 24-hour days. Basically, for every working day they're saving about a full day, and we don't work 24 hours, so measured against our actual work hours, they're saving more than a full day of test execution. That is super impactful for being more productive and getting your feedback much quicker. Was that a good change? If it's good, let's go on to the next PR, make the next change and the next change, instead of accumulating all the changes into one PR, or making one PR and waiting a long time and doing something else in the meantime. So that's predictive test selection. And then we have the third technology that we advocate for making feedback cycles faster, and that is test distribution. So let's say we avoid some work because it has been done before. We avoid some more tests because they're likely not affected, though we will run all of them later on. But some tests, okay, we need to run them for these changes. And those we can parallelize. And we can parallelize them across machines. And not just on CI; some CI providers offer this as well, but ideally also for our local builds. Because if it runs in parallel, I can do something else in the meantime, but of course the feedback also comes back much faster and I can stay in my flow. And wherever the build runs, it doesn't matter, I still get the benefit of test distribution. So these tests are taken and run distributed across many machines. To give you an example here as well, you don't have to read all the details: we took the Jetty project. You might have heard of Jetty; it's an HTTP server. The build was 51 minutes. We applied a bit of build caching, but they primarily do a lot of testing, so we used test distribution, put a few agents on it, and we got the build time down to 15 minutes. So now imagine again: you're making a change, you want to be sure it's a good change, and instead of waiting 51 minutes, you're actually waiting 15 minutes. Interestingly, though, and this gets back to the rowers and the threshold of pain: when we approached them and showed them these results, they said, we're not interested; we don't see the value of that. But I think that's changing. I think in a few years, or hopefully a few months, people in general will realize that it actually is a big difference whether it takes 15 minutes or 51 minutes.
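For a feel of what test distribution is doing underneath, the scheduling problem looks roughly like this: given historical durations, spread test classes over N agents so the slowest agent finishes as early as possible. A toy sketch with the classic greedy heuristic; the test names and timings are invented.

```kotlin
// A toy sketch of test distribution scheduling: longest test first, each
// assigned to the currently least-loaded agent (the classic LPT heuristic).
fun partition(durationsSeconds: Map<String, Int>, agents: Int): List<List<String>> {
    val buckets = List(agents) { mutableListOf<String>() }
    val load = IntArray(agents)
    for ((test, seconds) in durationsSeconds.entries.sortedByDescending { it.value }) {
        val i = load.indices.minByOrNull { load[it] }!!  // least-loaded agent
        buckets[i].add(test)
        load[i] += seconds
    }
    return buckets
}

fun main() {
    val durations = mapOf(
        "HttpServerTest" to 300, "ConnectorTest" to 180,
        "SessionTest" to 120, "HandlerTest" to 60,
    )
    // With 2 agents, 660s of tests finish in ~360s of wall-clock time.
    partition(durations, 2).forEachIndexed { i, tests ->
        println("agent $i -> $tests")
    }
}
```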
Now, this is not a one-time effort. I cannot just say, okay, we'll do all these things once, we'll optimize, and we'll have the insights forever. We need to keep watching it, because otherwise it will regress. It's a bit like at home: if you don't clean up from time to time, your house gets messier and messier. It's not going to clean itself up. And it's the same here. Because what happens? You're going to have more developers, more projects. You're going to change the compiler, or the Java version, or the build tool version. You're going to have new locations, and so on. All these things have an impact on how you build your software, and on how effectively you build it. So you need to keep track of that. You need to have a pulse, to see when things are regressing, or whether they're performing equally well across all the different locations, or whatever dimension you're looking at. And that's where failure analytics and build insights come into play. We want to see trends: oh, our build is now slower than last week, maybe by 10%, but you didn't notice it day by day. One example: a few years ago, we went to a customer and optimized their build; we got it down to half the time. But when we went back half a year later, the build time was back where it had been before, or even worse. And they hadn't noticed for half a year, because it just got a little worse every day. So you have to keep investing in it, and you have to keep watching it so you can react, because sometimes it regresses fast. And then the last point. So those were insights and trends; we need to keep watching those trends, making sure they stay stable or even improve, but don't regress. The other one is that we want to be very effective at analyzing our failures and improving those situations. Because every time you build something and it fails, you have to rerun. That's very expensive; that run was basically for nothing. Of course, there are failures that make sense. If you have a compiler failure, that's good to know: okay, I made a mistake in my code. You want that feedback. That's not a bad thing. But if something that is not a verification failure makes the build fail, that was wasted time. And this gets us to flaky tests. Does anybody have flaky tests? No? Just a few hands. But I think pretty much everybody has flaky tests. It could hardly be any other way. And that's exactly where this comes in: okay, if it fails, I just run it again. But every time this happens, like I said, you ran this build basically for nothing. So let me give you a concrete example again. We interact a lot with the Spring team, especially with the Spring Boot team. And we asked them, do you have flaky tests? And they said, no, we don't have flaky tests. I'm like, are you sure? No, we don't have flaky tests, I'm totally certain. But let's capture the data. Let's see. And, not surprising to us, a bit surprising to them, they had flaky tests. Because everybody has them. And they were super happy to see that. They didn't think they would have any, but once they had the data, it was like, oh, wow, now we see which ones they are. And now we can address them. Because before, if you don't even think you have flaky tests, how can you make them stable? You're not even going to think about investing time into it. But once you know, you can address it. And usually people have more flaky tests than time to fix them. So which ones do you fix? You want to know the impact of those flaky tests. And once you know that, based on data, you can say: we have a hundred flaky tests, but these over here are much more impactful, on build time and on the number of failures, than the ones over there. So let's fix these first.
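To show what "impact, based on data" could mean in practice, here is a toy sketch that ranks flaky tests by how much build time their spurious failures have burned, given some captured history. The data model and the numbers are invented; in a real setup, the history would come from your build and test reporting, for example build scans combined with a retry mechanism such as the Gradle test-retry plugin.

```kotlin
// A toy sketch: a test counts as flaky here if the captured history shows
// it both failing and passing; its impact is the build time wasted by its
// failures. Real data would come from your build/test reporting.
data class TestRun(val test: String, val failed: Boolean, val buildMinutes: Int)

fun rankFlakyByImpact(history: List<TestRun>): List<Pair<String, Int>> =
    history.groupBy { it.test }
        .filterValues { runs -> runs.any { it.failed } && runs.any { !it.failed } }
        .mapValues { (_, runs) -> runs.filter { it.failed }.sumOf { it.buildMinutes } }
        .toList()
        .sortedByDescending { (_, wastedMinutes) -> wastedMinutes }

fun main() {
    val history = listOf(
        TestRun("DnsLookupTest", failed = true, buildMinutes = 45),
        TestRun("DnsLookupTest", failed = false, buildMinutes = 45),
        TestRun("ClockSkewTest", failed = true, buildMinutes = 5),
        TestRun("ClockSkewTest", failed = true, buildMinutes = 5),
        TestRun("ClockSkewTest", failed = false, buildMinutes = 5),
    )
    // DnsLookupTest fails less often but wastes far more time per failure.
    println(rankFlakyByImpact(history))  // [(DnsLookupTest, 45), (ClockSkewTest, 10)]
}
```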
Okay. All right. So, to finish up soon, one more question here; somebody wants a t-shirt. What are the three acceleration technologies I mentioned that we believe are key to improving feedback cycles? Can anybody enumerate them? Three? Yeah. Awesome. Feel free to take a t-shirt after. Cool. All right. So, to make this concrete: what could you do if you now want to do something about this? Because you realize, okay, yes, we can do better. Even if you think you're in a good state, you can always do better. You should never be satisfied. It's a bit like in sports: maybe you're fast, but you know you can be a bit faster. There's always room for improvement. And it's the same here. Or maybe you're not doing anything yet; then you have a lot to do. So first, capture and measure how your software is built, locally and on CI. If you don't have the data, there's nothing you can do. It's a bit like Formula One: how could you make your car faster without all the metrics they're capturing? Without the data, it wouldn't be possible. Once you have the data, reason about it. Look at the data and see where there are opportunities to improve the situation. Also make sure you don't do this just sporadically, but that you have the trends, so you see how certain factors change over time. Then make your software builds stable, or more stable. I would do this before accelerating your build. To come back to the car racing analogy: if you have a super fast car, but it breaks down in nine out of ten races, it's not worth much. It's much better to have a car that's not quite as fast but is very stable and gets over the finish line every time. So invest in stability. It also greatly reduces the frustration of engineers who build something, see it break, and can't even fix it themselves. And then apply these acceleration technologies. We have seen people invent them themselves. That's also an option, and it's not trivial, but you can, of course. One: avoid doing work that has been done before; that's what I mentioned with build caching. Two: avoid doing work that is not really related to the changes you made. And three: parallelize the work that is left, to complete it faster. And this applies to many things: it could be compilation, it could be testing, whatever. So if you want to take the next steps, because you're thinking, yes, we want to do something about this in our company: I would start with raising awareness that DPE is a thing. I would start capturing what happens. Some people probably already do this; they at least capture build times and put them in some online database, so they have something. And then start accelerating and stabilizing your builds. Because one thing is sure: this will become a standard practice. And we already see this happening. Maybe Switzerland is not always at the forefront of these things, but it will come here too. If you look at the US, it's already happening quite a bit, but we also see it all over Europe. Just to give you a few examples, look at the job openings at some of these companies that have really, really good engineering teams. And we know, because we work with them pretty much every day.
Elastic, Twitter, Netflix, LinkedIn, they're all looking for productivity engineers. This is a discipline. They have their own teams. And they're looking for people; it's really hard to find them. And another aspect, to finish off here: it can also be a good career path. Maybe you're an engineer. Maybe you missed the DevOps opportunity: I should have jumped on it, I could have been the star in the company. Here's another chance. And I saw this more than once. Somebody says, I'm interested in DPE. The company puts that person on it, and half a year later they've adopted all the techniques, optimized the builds, and they're the stars of the team. Everybody goes to them, saying, hey, we're so happy, we can be more productive, we get our features out faster, and so on. And then they start building an entire team around it: let's not have just one person do this, let's build a team. So it can be quite a good opportunity for that as well. Or you could join Gradle; we also work in that space. If you like to read, there's a book on this topic from the CEO of Gradle. It's very generic; it's not about Gradle Enterprise specifically. So, that was it. I wish you a good rest of the conference and effective development in your future. Thank you.