diff --git a/content/sessions/2024/mini-summits/Dec/Threat-Modeling/Threat-Modeling-Kata-VI.transcript.md b/content/sessions/2024/mini-summits/Dec/Threat-Modeling/Threat-Modeling-Kata-VI.transcript.md
new file mode 100644
index 00000000000..6a56de70a57
--- /dev/null
+++ b/content/sessions/2024/mini-summits/Dec/Threat-Modeling/Threat-Modeling-Kata-VI.transcript.md
@@ -0,0 +1,100 @@
---
title : Threat Modeling Kata VI
track : DevSecOps
project : DevSecOps
type : working-session
event : mini-summit
when_year : 2024
when_month : Dec
when_day : Tue
youtube_link : https://youtu.be/2kbRBaLYjgM
---
00:02
Dinis Cruz
Hi, welcome to this Open Security Summit session in September 2024. I'm here with Luis, a long-time contributor with a lot of amazing sessions, and he has a plan: he's going to do a session and continue his threat modeling kata. But before that, we thought we would change it a little bit and have a couple of interesting conversations about where we see threat modeling and where we see AI in this. So first of all, welcome Luis, thank you very much. So I guess my first question to you: I think there's more and more energy around threat modeling and Gen AI, right? In fact, I even see a number of startups around threat modeling. So since we last spoke a couple of months ago, where do you see the landscape at the moment? +
00:54
Luis Servin
This is a fundamental question. I think I remember back in the mid-2000s, 2010... right, so 2015, '17, so around that time, before COVID. +
01:09
Luis Servin
I was discussing with a colleague, because I was doing a lot of training on threat modeling for an automotive company, and I traveled around their locations in the world giving trainings. I happened to be in India, and one of my colleagues in India was like: well, it must be possible to codify this thing and just dump it somewhere, make a couple of remarks, and then everything will fill in the blanks. Right? And in the end an LLM is just a predictor of the next token, right? With a certain probability. So how much is it different from this brute-force approach? The biggest challenge I see is one of shit in, shit out. +
01:50
Luis Servin
If you don't give it a very good description, if you're not careful in what you're describing, if you don't provide adequate diagrams, it will not be able to give you anything more than very superficial things that are maybe not even related to your topic. And based on the training data sets they might be using, it might start hallucinating. So the biggest challenge we have as an industry of practitioners is that there is no dictionary, no encyclopedia of threat models out there that you can consume and benchmark against. Right? And I think that's the biggest challenge, because how do you train in threat modeling without threat models? +
02:35
Dinis Cruz
Yeah, yeah. I think you're raising a couple of interesting points. Right. The first one you raise is that, crazily, in 2024 we still don't have a good library of threat models. +
02:49
Luis Servin
Well, it's just that everyone is doing it on private systems, so no one wants to spill the beans. +
02:53
Dinis Cruz
Exactly. +
02:54
Luis Servin
That's one of the reasons I have this kata project: just to have a reference out there somewhere of things that are, I mean, not perfect, but at least something to compare against. +
03:04
Dinis Cruz
Yeah. But the other thing you mentioned there is, I think, basically a misconception. When you say the models predict the next word, I actually think that's doing them a massive disservice. Not because that's not what they're doing, but the bit that I don't think enough people are talking about is that the reason they work is that they create a really good model of reality. And once they have the model, then yes, they are predicting what is the best way, statistically, to explain this model. So in a way, I think the focus we should have is: how can we make sure that the model the LLMs create is good? +
03:54
Luis Servin
That's exactly the point, right? How do you give them enough context that they can create this worldview, or "Terravision" if you want to call it that, which represents what you have in your mind and what you're actually building? And I think that's the biggest challenge, right? Because, I mean, you have been CTO of I don't know how many companies, and you have been building diagrams. And one thing I know about diagrams is that they're always outdated. I have yet to find a single project I join to threat model where they have an up-to-date diagram; no one has. And even if they have one, what is it telling you? Is it a deployment diagram, a development diagram, a class diagram? Everyone does it a bit differently. +
04:39
Luis Servin
So there's also no architectural correspondence that allows you to create this world vision. Because many problems are not problems of development. It's not that the developers didn't understand what they were building, but that they were blindly implementing what the business wanted, and the business was not aware of what they were asking for. Think of retail, right? Think of gift cards. How many big companies have had problems with gift cards? And it's nothing new, right? I mean, gift cards have existed for years. Companies love them: you pay the money upfront, it might never be cashed in, they expire after two years. They just love it, people giving them money for free. And they have problems. +
05:26
Luis Servin
I was talking to some CISO from some retailer, and he was telling me: yeah, we had someone come in with a gift card worth 20,000 euros, and when he presented it, the cashier gave it back to him, because there was no instruction on what to do with it. And immediately I was like: well, there are like five levels of failure right there. +
05:48
Dinis Cruz
Yeah, see, that thing you describe... First of all, I remember doing that; gift cards was one of my projects. I said this is one of the most dangerous things we can do, because we're literally making money... we're literally creating a monetization, a way to make money with exploits. So, just for reference, what usually happens in a lot of companies is: you have this level of fraud, and as soon as you introduce gift cards you have that level of fraud. There's literally a one-to-one correlation. Right. But you're right, those are the patterns we need to map. And I think you were almost saying something fundamental, which is that most security problems are not just technology problems, they are business problems. And we don't solve security problems +
06:25
Dinis Cruz
if we don't solve the technology problems and the business problems. +
06:29
Luis Servin
Right. We don't... we cannot solve the technology problems if it's the business. So if the idea is incomplete to begin with, and no one realized it was incomplete, and we just went in blindly because we had a sprint and we had to implement it and something else, the whip just hits us. That's where things start to get strange, right? +
06:48
Dinis Cruz
Yeah. So in a way what we need to do is solve the business problem, right? I think. And that means in threat modeling we also have to threat model not just the security but the business, and then the technology, and then the actual business behind it, because that can be the biggest threat of them all. Right? And even the CI pipeline, right? Think about it: the CI pipeline can have a crazy amount of impact on the ability to write secure stuff. Right? +
07:18
Luis Servin
I mean, honestly, the CI pipeline is my favorite attack vector. You can change the code and no one will ever see it. You piggyback something into the container that no one is aware of. +
07:36
Dinis Cruz
Yeah. So the other question I want to ask you is: where do you see Gen AI in this? Because up until now I could never see how you could scale this. And I think now, with the Gen AI stuff, I feel that we finally have a way to scale it. +
08:00
Luis Servin
I think we can scale it a lot, because it takes the paperwork away from us. I mean, the terrible thing about threat modeling is writing the report; the fun part is doing the thing. And then for you, in your mind, it makes sense to describe it in two words, three words, a very short sentence without context, because you know what you're talking about. But your consumers, the developers, the business, cannot understand it. So giving that away to an LLM that can, given this very short context, create specific things like a vulnerability or a threat scenario, and complement your countermeasures, and think of something you might have forgotten in the countermeasures, is a great help. And that really allows you to scale. The problem is thinking that you can conjure a threat model out of thin air; that is really complicated. +
08:57
Luis Servin
You can try to say: okay, we connect it into Jira, look at the user stories, and try to derive security requirements or abuse cases out of them. That could help for trivial things; I'm not sure how it works for more complex things. +
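The Jira idea Luis sketches here can be reduced to a simple prompt builder. This is an illustrative sketch only: the function name, prompt wording, and example story are assumptions, not tooling from the session, and the actual LLM call is deliberately left out.

```python
# Hypothetical sketch: wrap a Jira-style user story in a prompt that asks an
# LLM for abuse cases and matching security requirements. All names and
# wording here are illustrative assumptions.

def build_abuse_case_prompt(user_story: str, context: str = "") -> str:
    """Build a prompt asking an LLM for abuse cases derived from a user story."""
    return (
        "You are a threat modeling assistant.\n"
        f"System context: {context or 'unknown'}\n"
        f"User story: {user_story}\n"
        "List plausible abuse cases for this story and, for each one, "
        "propose a security requirement (a testable countermeasure)."
    )

# Example with a story echoing the gift-card discussion above:
prompt = build_abuse_case_prompt(
    "As a shopper I can redeem a gift card at checkout.",
    context="retail web shop; gift cards are prepaid balances",
)
print(prompt)
```

The model's answer would then be peer-reviewed by a human, which matches the review point made later in the session.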
09:21
Dinis Cruz
Oh, I think you can scale quite spectacularly, right? But I think the way it scales is the way I think we should be doing it anyway, which is lots of little bits, right? It doesn't scale if you say: here's the whole system. What scales is to start to... it's like a graph, right? You do this bit, then you do that bit, then you do that bit. And I actually think this is how we process information. And this takes me to the next question, which is: why do you think so many Gen AI projects are not working, or maybe even failing, at the moment? Because look, Gen AI is the first tech revolution that is also coming from the top down. So nobody can say they don't have business support; if anything, people are complaining they have too much business attention. +
10:09
Luis Servin
Well, I mean, it's the blockchain of this decade, right? Blockchain was last decade; this is the blockchain of this decade. But it actually works. Blockchain had no use besides crypto money and whatever, right? +
10:23
Dinis Cruz
Yeah. +
10:23
Luis Servin
So this one has uses, and it's really powerful. It saves you lots of hours of work in many contexts. So I mean, I think there's a lot of potential to scale now. You need to have good training data; this worldview cannot come out of a common LLM. So you need to either do a lot of RAG or some specific training for it to be of any value. And for that you need specialists, who might not be on the market. +
10:55
Dinis Cruz
But here's what I think is the interesting part, right? More and more I think that the last thing anybody should be doing is building a model. In fact, we had a session yesterday about using custom models in incident response, right? And one of the topics we talked about is: what do you do when the model is compromised? What do you do when your model has been poisoned? Because if you think about it, it's like having a magic database. That's how I think we should describe LLMs: a magic database. We don't know how it works, we don't even know how SQL injection works in that thing, but we put data in and we get data out. +
11:29
Dinis Cruz
So I think anybody who creates a custom model is basically putting themselves in a position where, as soon as something goes wrong, they can't really do anything about it. And then the RAGs are another set of black boxes, right? So my theory is that the reason most LLM projects are failing is that, like you said, people say: we want to do Gen AI projects. And what do they do? They say: let's hire experts. But who are the experts they're hiring? It's all the machine learning crowd; they've been doing AI for decades, which is great, but A, they want to build models, B, that's what they understand, and C, I think a lot of them haven't made the paradigm shift into prompts and the power of graphs around the data. +
12:20
Dinis Cruz
So I think you have this self-fulfilling prophecy, where you have a whole bunch of people on these teams who get hired to do a Gen AI project, and the first thing they want to do is build a model. +
12:34
Luis Servin
Well, I mean, the biggest challenge here is that if you have been doing things one way, it's very difficult to do them a different way. Right? I mean, we can transpose that to the automotive industry. You have a bunch of people in Europe who have been building engines which are top of the line, and have been top of the line for 20, 30, 50 years. But with the electric revolution those skills are lost, because you don't need them at all. You need to do chemistry and... +
13:03
Dinis Cruz
You need a variation of that, right? +
13:05
Luis Servin
And then it's basically the same paradigm shift. You have been building models, and now you don't have to build a model. You can take one and, through RAG and prompting, bring it to a certain place. You can give it more context, you can give it a vector database with things that allow it to go outside of the common base. You take it, if you're doing Llama or whatever, into a specific domain, security, or AppSec if you want, and then on top of that you can do your RAG for the specific activities that you want. But then you need to be able to flush and relearn how to do things. And I think that's one of the hardest things to do as a human being, right? +
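The retrieve-then-prompt loop Luis describes (RAG instead of retraining) can be sketched with a toy retriever. The word-overlap scoring below is a deliberate stand-in for a real vector database and embeddings; document texts and names are illustrative.

```python
# Toy sketch of the RAG pattern: retrieve domain documents, prepend them as
# context, and let the base model answer. Word overlap stands in for
# embedding similarity; everything here is illustrative.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Assemble the retrieved context plus the question into one prompt."""
    context = "\n".join(retrieve(query, documents, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "S3 bucket versioning preserves every object version for rollback.",
    "CloudFront caches responses at edge locations.",
]
print(build_rag_prompt("how do I roll back an overwritten S3 object?", docs))
```

A production system would swap the overlap score for vector search, but the shape of the loop, retrieve then prompt, is the same.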
13:54
Dinis Cruz
I think it takes a lot of. +
13:55
Luis Servin
Energy to relearn something. +
13:58
Dinis Cruz
I think you nailed it. And I actually think that what you just said about the automotive industry is a good analogy. Because think about it: why didn't those guys copy Tesla fast, right? Why did it take them so many years to copy Tesla? And I think the reason was exactly that: as soon as they said, let's build a new electric car, they ended up hiring everybody who came from the combustion industry, and they couldn't get their heads around it. +
14:29
Luis Servin
Well, Tesla has a very different paradigm to automotive as well. Because, I mean, there was... right, so there's a lot of giving out tasks to Bosch and whoever else, Continental and all the tier ones, and then the tier twos, tier threes, and then collecting everything and basically Lego-puzzling it together. And Tesla went very far from that and said: we have much more vertical integration, we have very few suppliers, we do a lot more in software, and we stack it all together so we control things and can make changes a lot faster. So even if the big names want to push a software update, they can't. They have to go to their vendors, the vendors have to do it, and then they have to receive it, vet it, and then push it. Whereas... +
15:15
Dinis Cruz
And they have to change the model, right? +
15:17
Luis Servin
Like. +
15:17
Dinis Cruz
And they have to become more like a software house, right? So, cool. Okay, now let's shift to our kata, right? So the question I have, and I would like us to maybe do a couple more sessions on this: one of the topics that you and I have talked a lot about is this idea that security drives good engineering, right? A lot of security decisions, a lot of things that we want to do from a security point of view, actually make the applications better from a performance point of view and make the applications more solid. Because ultimately we want simple solutions, we want fewer moving parts. You know, there's no SQL injection if there's no database, right? +
15:59
Dinis Cruz
So let me walk you through this scenario I've created, which I think is a really nice public example of that. And in a way I arrived there through this combination of security and engineering, right? So let me share my screen. Right, so you should be able to see this part here, which is basically... you should see these data feeds here. Right? +
16:28
Luis Servin
All right. So before we go into the details, and this is always the place I like to start: what are your high-level requirements? What's the business case around this? And then let's go into the nitty-gritty details. Because sometimes we as technology people are so in love with the nitty-gritty details that we forget the big picture. +
16:44
Dinis Cruz
Right, cool. So the business case is exactly this, right? We build a system that collects, processes, and stores cybersecurity feeds, making them available for real-time LLM analysis. So that's the solution, right? The solution is: take RSS feeds from other companies, in this case from Hacker News, right? So take this feed here. Oops, sorry. And make it available. Take this, which is the news feed, right, and make it easily available for an LLM to consume. +
17:32
Luis Servin
All right. +
17:35
Dinis Cruz
So that's fundamentally the objective. In fact, the fundamental objective is to be able to create, in this case, something that looks like this. Right, so I'll just put it here. This is the objective: to create, at scale, something that gives me this. And there are going to be a few more bits, but this is the first, let's say, MVP, right? The first MVP is: how can I have this up to date, right? In real time. If you can see, this is literally real time, right? This will be updated as often as the other feed is: the other feed updates once an hour, this updates once an hour. And how to make this available, you know, in that space. So that's the business requirement. +
18:22
Luis Servin
All right. So: collect, process, and store cybersecurity news feeds, or any kind of news feed; this is just one type, but it could be any kind of news feed from different sources. And make it available for an LLM to consume. That's what we want to achieve, right? +
18:39
Dinis Cruz
Correct. All right. So, in fact, I think I have a diagram of that here, which is... here. There you go. Can you see this? This is a high-level diagram of it, where you've got the RSS feeds, we have a feed crawler, we parse it, we store it, in this case in S3, and in the first version we have a FastAPI service that provides that to the consumers. +
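The parse step of the pipeline described here can be sketched with the standard library alone. The sample feed and field names below are illustrative assumptions; the real crawler fetches the live RSS feed once an hour.

```python
# Minimal sketch of the parse step: RSS XML in, a JSON-friendly list of
# articles out. The sample feed is inline; field names are illustrative.
import json
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>The Hacker News</title>
  <item><title>Example breach disclosed</title>
        <link>https://example.com/a</link>
        <pubDate>Tue, 03 Dec 2024 10:00:00 GMT</pubDate></item>
</channel></rss>"""

def parse_feed(rss_xml: str) -> list[dict]:
    """Extract title/link/date for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    return [
        {"title": item.findtext("title"),
         "link": item.findtext("link"),
         "published": item.findtext("pubDate")}
        for item in root.iter("item")
    ]

articles = parse_feed(SAMPLE_RSS)
print(json.dumps(articles, indent=2))
```

The resulting JSON is what gets written to S3 and served to the LLM consumers.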
19:17
Luis Servin
All right. That's the high level view, right? +
19:20
Dinis Cruz
Yep. In fact, here you can see the FastAPI flow. This is where I started, right? So if you look here... there you go. Luis, you say you haven't seen up-to-date diagrams; look, here it is, right? This is literally the functionality of the application, right? And this was created by an LLM, by the way; this was created by Claude. I literally gave it the source code, gave it the explanation, and I said: give me some flow diagrams. And it created this, which was spot on, right? And now I'm maintaining it. That's really nice. It's really cool, right? +
19:57
Luis Servin
I know, I know. It saves so much time, right? +
19:59
Dinis Cruz
I know. And this is where I think a lot of people miss the trick: I don't need this to be 100% perfect. I just need it to be good enough for me to make some tweaks, and then I maintain it, because I'm, in this case, the architect. I know how it works, right? So I can look at a diagram and say whether it's correct or not. The same way that when we provide a threat model, it always gets peer-reviewed by the architects, right? They always turn around and say: look, you got this wrong, you got that right. Or they go: shit, this is not supposed to be like this. And we go: well, it is, because we found that. Okay, so you can see here, this is the typical architecture, right? You've got the Hacker News data feed, right? +
20:38
Dinis Cruz
You ask for it, the request goes to CloudFront, it hits the Lambda function with FastAPI, that hits the S3 bucket, gets the data, does some transformation, and sends it back. All right? So that's the model, right? Now, the thing I find very interesting is how I went from here... +
20:59
Luis Servin
So the client just... is this open? Does it require authentication, or is it... +
21:08
Dinis Cruz
But wait, before I do that bit, because the next bit is the interesting bit, right? All right, so when you go here, so if I show you this, look, see: Swagger, right? You can open it, right? And I come here and I say, hey, I want the latest feed. I come here, I click there, and I get the data, right? And I go like this, and look, see? Really nice and fast: 70 milliseconds. That's pretty cool, right? It depends on the network, of course. But you saw that. Now, when you look at this, you think: well, this is a database, right? This is a full-grown system, right? This is, you know, Swagger and APIs, et cetera. What's cool, even here: oh, you want the data feed for a particular hour, right? +
21:56
Dinis Cruz
You come here, you put in the time, you go back, and you get to see it, right? I guess I need to add the thing, but yeah, you get the idea. Now, what's cool about that is the architecture for it. That server you just got, this bit here, let's see: public data, I can use latest.json. That architecture is this one here: you hit CloudFront, CloudFront hits S3, S3 returns the data, and it goes back. +
22:32
Luis Servin
So you skip the Lambda because it's "latest", and you don't care how fresh "latest" is, so you just fetch latest every so often, right? +
22:40
Dinis Cruz
So this is a good example of refactoring. I kept taking parts out; I kept going: how can I make it simpler? How can I make it faster? And I was like, dude, it's a GET request, right? And the data doesn't change; the data changes once an hour. So I just need to update literally that file, right? And then I get it all the way down. So you get a really nice design. And so here's the bit, right? Then I went to Claude and I said: let's do a STRIDE and OWASP Top 10 analysis. So this is the bit where I want to see if you agree, right? So in this article I explain the architecture, the changes, and then I say: what's the difference between one and the other, right? And I love this part. I didn't write it alone; I was working with Claude. +
23:33
Dinis Cruz
Claude 3 is amazing because you can iterate. So I was working on a document, making lots of changes, but some of it, it wrote itself. I love this phrase: "security performance". It's such a cool phrase. +
23:46
Luis Servin
I know, I know. I mean, security is many things, right? I have come to the realization that security is ultimately a quality attribute. So every single security problem is a quality problem, but not the other way around: not every quality problem is a security problem. And in that sense, performance is part of quality, right? +
24:09
Dinis Cruz
Yeah. So now here's the thing, see if you agree with this, right? I did a STRIDE analysis; in fact, I got Claude to do a STRIDE analysis, which it did really well. Right, so let's go in bits. So here's the STRIDE for this application. Let's take it one by one and see if you agree. So on spoofing, where usually it's high, I put none, because there's no user, there's no authentication, there's no session or form validation, and there are no impersonation vectors. Would you agree? +
24:46
Luis Servin
Sure, you can get rid of that level of... I mean, the challenge comes later, when you make the system more flexible. All right, so right now, when you're consuming Hacker News, it's no problem, because that's public data. But if you want to consume Bloomberg, that data is not public, right? So you would be in contract infringement if you don't have identities consuming the data. +
25:14
Dinis Cruz
I agree, but you just expanded my brief, right? +
25:18
Luis Servin
Yeah, I mean, just from the principle of always considering the future: it could limit you, and it would need a heavy re-architecture in the future if you don't have identities. I mean, having identities is not a bad thing. Of course you can lose them and spoof them, but in this simple case, as long as you are consuming public data and no confidential data... it has a lot to do with the value of the data that you're storing, right? The CIA. So the I is taken care of; the A is taken care of, because you have read-only, so no one can try to overwrite it. If your system misbehaves, you enable S3 versioning, so even if something is overwritten, you can always go back to a version. So you have integrity and availability right there. But C is hard to keep. +
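The two controls Luis lists here, read-only public access and S3 versioning, correspond to small, well-known S3 configurations. A sketch of the payloads you would hand to the S3 API (the bucket name and policy details are placeholders, not the real setup):

```python
# Sketch of the controls discussed: a public read-only bucket policy plus
# versioning, expressed as the request payloads for the S3 API. The bucket
# name and statement are placeholders.
import json

BUCKET = "example-news-feed"  # placeholder name

# Payload for S3's PutBucketVersioning operation.
versioning_config = {"Status": "Enabled"}

# Bucket policy granting read access only; no write action is ever allowed.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],          # read, never write
        "Resource": [f"arn:aws:s3:::{BUCKET}/*"],
    }],
}
print(json.dumps(read_only_policy, indent=2))
```

Read-only covers integrity from the outside; versioning covers integrity and availability against the system's own misbehavior, which is the scenario raised next.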
26:18
Dinis Cruz
We'll get to those. Right. But you see, what I like about this is that even in the example you're describing, I would argue that's a separate thing. You're saying: okay, there's going to be some data that needs to be protected. Cool, I will have to do a threat model on that extra bit. Right? The power is being able to break it into little bits. So look, there are a lot of systems we look at where the data is public, right? +
26:40
Luis Servin
I mean, you could choose the endpoints, right? So you could have different endpoints, and if the request goes to the public endpoint, you serve from the S3 bucket; if the request goes to a non-public endpoint, then you need a Lambda to check the authorization of the claim. +
26:57
Dinis Cruz
Exactly. But, you know... or you can... so there are different patterns you can use, right? Like, if you think about it, I could have given you a token to go and get the data, right? Still served by Lambda. Right. I think the point is to architect out complexity. +
27:13
Luis Servin
Yeah, yeah, no, I'm completely with you. +
27:15
Dinis Cruz
Right. So if you look at tampering, it's another good example, right? It's an immutable data store: S3 with versioning enabled, content integrity, and no runtime modification. So there's no tampering. +
27:28
Luis Servin
I mean, you'd need to introduce something new... so the latest feed is always overwritten, right? +
27:33
Dinis Cruz
So the latest feed, the way it works, is that whenever I fetch a new feed... I can actually show you: all the files exist on the server, right in this area of the S3 bucket. So can you see here what happens? Every time, once an hour (I actually have a schedule now), I make a request, because it only needs to ask once an hour; Hacker News in this case only updates once an hour, right? So you make a request, you save it as the XML, then you do an immediate transformation here, and then you overwrite these two. So every time you go and fetch a new article, the latest data, you always do two writes. It's really cool. +
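The fetch-and-overwrite pattern described here, one immutable timestamped write plus overwriting the fixed "latest" keys, can be sketched as a key-naming function. The key layout below is an assumption for illustration, not the real bucket structure.

```python
# Sketch of the two-writes pattern: every hourly fetch stores an immutable
# timestamped copy and overwrites the fixed "latest" keys. Key names are
# illustrative, not the actual bucket layout.
from datetime import datetime, timezone

def keys_for_fetch(fetched_at: datetime) -> dict:
    """Return the S3 keys written for one hourly fetch."""
    hour = fetched_at.strftime("%Y/%m/%d/%H")
    return {
        "history_xml": f"feed/{hour}/feed.xml",  # immutable hourly record
        "latest_xml": "feed/latest/feed.xml",    # overwritten each fetch
        "latest_json": "feed/latest/feed.json",  # overwritten each fetch
    }

keys = keys_for_fetch(datetime(2024, 12, 3, 14, tzinfo=timezone.utc))
print(keys)
```

The fixed "latest" keys are what let CloudFront serve straight from S3 with no Lambda, while the timestamped keys preserve the full history per hour.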
28:21
Dinis Cruz
So I have a whole historical record of all the data, you know, one per hour, and then you can see one per day, etc. And then this one... And again, why did I write this? Because I want to remove complexity. Like: why do I make you think about what is the latest, if I can just give it to you in one URL? And then I was like, if I'm going to give you one... Yeah. +
28:42
Luis Servin
From a tampering perspective, the biggest challenge, the biggest enemy here, is your thumb. It's not an attacker, it's not someone outside, it's your thumb. You hit enter too fast, you push without noticing something bad. I know you're doing a lot of checks, right? But you have a misbehavior in the Lambda, you hit something too fast, and it overwrites data. +
29:06
Dinis Cruz
It should. +
29:07
Luis Servin
And even then S3 has your back, because you can always revert, in the version history of "latest", to something that makes sense. Right. +
29:16
Dinis Cruz
And here's one thing I think is also important in threat modeling: I think it's interesting to do the threat modeling from a particular angle. So this is from the angle of outside in, right? I'm not looking at... And that's why, if you look here: repudiation, same thing, right? And I love this: information disclosure is closed by design. Literally, it's by design, if you keep everything away. Denial of service is the only one I put in, because there's a little bit, but you're still protected by CloudFront, and I can still enable more stuff. And then elevation of privilege: there's none. So it's really cool, because in STRIDE I get four nones and one by design, so five nones and one low. And then I also did the Top 10; it's kind of the same thing, right? +
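The STRIDE outcome read out here, four explicit nones, information disclosure none by design, and one low for denial of service, can be captured as data for later comparison with the evolved versions of the system. The wording of the values is paraphrased from the discussion.

```python
# The STRIDE result for the static S3/CloudFront architecture, as read out
# in the session (values paraphrased).
stride = {
    "Spoofing": "none",
    "Tampering": "none",
    "Repudiation": "none",
    "Information disclosure": "none (by design)",
    "Denial of service": "low (mitigated by CloudFront)",
    "Elevation of privilege": "none",
}

nones = [k for k, v in stride.items() if v.startswith("none")]
print(f"{len(nones)} nones and {len(stride) - len(nones)} low")
```

Encoding the result like this makes the "five nones and one low" summary checkable, and gives a baseline to diff against when features such as authentication are added.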
30:01
Dinis Cruz
And you see here I put a little asterisk, because I was basically saying: assuming AWS. So I said, look, this assumes you don't have a zero-day in AWS or a leaked API key, which is a different threat model. Right. So again, almost all of these just about don't exist. Maybe security misconfiguration a little bit, like you said, the fat-finger scenario or internal logging problems. But it's cool that I was able to design out all of this. +
30:31
Luis Servin
I mean, more than misconfiguration, what I would be worried about... I mean, I was developing for so long that I sometimes created things that blocked the database for ages, because of a query that was not well thought through, or which worked in low-load scenarios, but when there was a lot of load it just broke the database. Right, yeah. So these kinds of things are more... it's not a misconfiguration, it's just bad code in the end. Yeah, but yeah, I mean, this is definitely a good way to optimize whole categories of threats out of the scenario. Right? You just remove everything that doesn't make sense, and then you're right there at the top of the hour doing the writes, doing whatever you want, and you don't worry about any attacker. +
31:29
Dinis Cruz
But to me the key part here is that the big benefits, I would argue, are performance, business, security, scalability. This thing scales up to S3 scale. And think about it from a business point of view, right? I don't have a problem if this thing is ridiculously successful; it's not like my Lambda function will suddenly stop working. It's literally S3, right? So the cost is minimal. Right. And I could always add a billing layer. +
31:53
Luis Servin
I mean, the only cost here, and you get a bit of DDoS protection on that, is data: exporting data from the cloud is usually expensive. +
32:03
Dinis Cruz
Yeah, but no. +
32:04
Luis Servin
So depending on the volume it could get expensive, but only up to a point, because you have DDoS protection in front of it. +
32:10
Dinis Cruz
But it's always a minor percentage compared to when it runs on FastAPI or on other services. So if you think about it: yes, this amount of data is going to be expensive, but that amount of data on a normal server deployment is going to be crazy. So, cool, man. Look, I think we are on the hour. Let's do another one of these. But I'll take a look at it more, because I think it's a really cool example. I would love to do some more threat models on it; especially, I would love to do the threat modeling comparison between the two. And even more interestingly, what I was just mentioning: what is the evolution as you want to add more features, right? Where you said: okay, now an authentication layer, how... +
32:49
Dinis Cruz
How do you then evolve? But I like the idea that we can have these threat models that evolve with it. +
32:55
Luis Servin
And that's the whole point of threat modeling, right? Threat modeling is something that should happen as you get new ideas: you threat model the idea, and you keep the scope of the threat model very limited, rather than doing one big shebang, which takes days, and no one is happy about it. +
33:10
Dinis Cruz
Yeah, I get it. Cool, man. All right, that's another cool session on threat modeling. +
33:14
Luis Servin
I know, it was really fun. +
33:16
Dinis Cruz
So you didn't do your kata, but we'll do it in the next one. +
33:20
Luis Servin
That's okay. It's okay. Thank you very much. +
33:23
Dinis Cruz
See you. +
33:24
Luis Servin
Take care. +
33:24
Dinis Cruz
Bye.