Hi @VilemP, great question! I'm with you on this: I also focus on testing behavior rather than implementation details, and I avoid brittle tests because I've run into exactly these situations. We need to revise the validation model's instructions to properly handle refactoring that doesn't change behavior; the current approach creates unnecessary friction. I'll draft some proposed changes to address this, and I'd love your input on making them even better. :)
I'd like to hear your thoughts on an approach that builds on the tool's existing pattern of customization. The idea is to let users write their own TDD validation rules in a markdown file. What do you think? Does this feel like a natural extension of how the tool already works? Would it provide enough flexibility? I can introduce this feature in the next day or two if it sounds good. I'm currently leaning toward going ahead with it, but I'd love to get some input first.
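As a rough illustration of what such a user-supplied rules file could contain (the headings and rule wording below are entirely hypothetical, just a sketch of the idea rather than a committed format):

```markdown
# Custom TDD Validation Rules

- New observable behavior requires a failing test first (red → green).
- Refactoring is exempt: while all tests are green and no observable
  behavior changes, new private helper methods do NOT each require
  their own failing test.
- Tests must assert observable outputs for given inputs, never that a
  specific internal method was called.
```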
Hi. This is finally something that actually gets Claude Code to follow TDD properly. Thanks.
However, I am struggling with one thing specifically, and that is refactoring.
I like to write very "behavioral" tests: tests that do not check the mechanics of HOW some observable behavior is achieved, but rather WHAT observable behavior is expected.
I often run into situations where all my tests are green, I want to refactor (no intention to change the behavior, just to achieve it differently), and I am blocked with messages like: "In TDD, every new method requires a failing test first. Write a test that fails because `preprocessText` doesn't exist, run it to see the specific failure, then implement only what's needed to address that failure."
My understanding is different: I do not think every method requires a failing test. If I am just restructuring my code without changing the behavior I care about (which is captured by the tests), it should be fine to do so. Or am I missing something here?
Just as an example: I have a regexp that extracts data from a complex text. The text contains pipe characters, non-standard whitespace characters, etc., which makes my regexp quite complex. I have green tests showing that, despite all that, the regexp works. But I want to simplify the regexp by pre-processing the text first, e.g. replacing non-standard whitespace with plain spaces.
How would you go about this? I still want the same output for the same input. I do not want to write a test that checks "the preprocessor was called", because that tests the how and is brittle. I want to be able to restructure my code and make it better without failing my tests.
Please advise.
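To make the refactor concrete, here is a minimal sketch (the names `extract_fields` and `preprocess_text` are hypothetical, not from any actual codebase). The test pins down WHAT comes out for a given input and never mentions the preprocessor, so extracting `preprocess_text` is a pure refactor that keeps the test green without requiring a new failing test:

```python
import re

def preprocess_text(text: str) -> str:
    # Normalize non-standard whitespace (NBSP, thin space, narrow NBSP,
    # tabs) to plain spaces so the extraction regexp can stay simple.
    return re.sub(r"[\u00A0\u2009\u202F\t]+", " ", text)

def extract_fields(text: str) -> list[str]:
    # Split on pipes and trim; the whitespace handling moved into
    # preprocess_text, so this step no longer has to enumerate
    # exotic whitespace characters itself.
    return [part.strip() for part in preprocess_text(text).split("|")]

def test_extracts_fields_despite_odd_whitespace():
    # Behavioral test: same input -> same output, regardless of
    # whether preprocessing happens inline or in a helper.
    raw = "name:\u00A0Alice | age:\u200942"
    assert extract_fields(raw) == ["name: Alice", "age: 42"]
```

The key design point: because the assertion only relates input to output, you could later inline `preprocess_text` again, or replace it with a translation table, and the test would not need to change.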