From 48f368a5f0147522554046890fd8ba31ff3aaad8 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 21:32:57 -0500 Subject: [PATCH 01/22] bring over contents so far --- docs/articles/book/Index.md | 41 ++++ docs/articles/book/concepts/TestTradeoffs.md | 49 +++++ .../articles/book/concepts/TestingConcepts.md | 84 ++++++++ docs/articles/book/concepts/TheWhy.md | 39 ++++ docs/articles/book/concepts/TypesOfTests.md | 41 ++++ docs/articles/book/concepts/toc.yml | 8 + docs/articles/book/getting-started/index.md | 198 ++++++++++++++++++ docs/articles/book/getting-started/toc.yml | 2 + docs/articles/book/toc.yml | 8 + .../Converter.Tests/Converter.Tests.csproj | 29 +++ .../TemperatureConverterTests.cs | 61 ++++++ .../Converter.Tests/UnitTest1.cs | 27 +++ .../getting-started/Converter.Tests/Usings.cs | 1 + .../book/getting-started/Converter/Class1.cs | 5 + .../Converter/Converter.csproj | 9 + .../Converter/TemperatureConverter.cs | 9 + 16 files changed, 611 insertions(+) create mode 100644 docs/articles/book/Index.md create mode 100644 docs/articles/book/concepts/TestTradeoffs.md create mode 100644 docs/articles/book/concepts/TestingConcepts.md create mode 100644 docs/articles/book/concepts/TheWhy.md create mode 100644 docs/articles/book/concepts/TypesOfTests.md create mode 100644 docs/articles/book/concepts/toc.yml create mode 100644 docs/articles/book/getting-started/index.md create mode 100644 docs/articles/book/getting-started/toc.yml create mode 100644 docs/articles/book/toc.yml create mode 100644 docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj create mode 100644 docs/snippets/book/getting-started/Converter.Tests/TemperatureConverterTests.cs create mode 100644 docs/snippets/book/getting-started/Converter.Tests/UnitTest1.cs create mode 100644 docs/snippets/book/getting-started/Converter.Tests/Usings.cs create mode 100644 docs/snippets/book/getting-started/Converter/Class1.cs create mode 100644 
docs/snippets/book/getting-started/Converter/Converter.csproj create mode 100644 docs/snippets/book/getting-started/Converter/TemperatureConverter.cs diff --git a/docs/articles/book/Index.md b/docs/articles/book/Index.md new file mode 100644 index 000000000..ad366843c --- /dev/null +++ b/docs/articles/book/Index.md @@ -0,0 +1,41 @@ +# Automated Testing & TDD with NUnit: An On-Ramp + +## About This Series + +### Who is it for? + +This series aims to be for everyone -- from people who've never written a unit test to people who have used NUnit but would like to brush up on some of the theory and practices. + +We'll try to split up the articles so that you can dive in and focus on the parts that you care about. And we'll try to use real-world examples along the way. + +We'll also try to make it as succinct as possible, because we're not getting paid by the word -- or indeed, at all :) -- for this. + +### Strong Opinions, Loosely Held + +This guide is naturally going to reflect the opinions of the primary author. However, along the way, we'll try to point out where another school of thought might approach something differently. + +One thing for sure that we want to be clear about: there are several ways to do testing well, and there is no "one true way" to do it right, especially because the context and trade-offs of each project and team are unique. We'll do our best to not present opinion as fact, and we'll work toward including more adjacent insight as we build out the guide. + +Similarly, we're not trying to "sell" you on TDD. We find value in it in many cases, so we'll talk about it. Similarly, NUnit is a great library for testing -- but it's by no means the only one, and alternatives like xUnit are quite popular (even with us!). To each their own; we hope if nothing else, some of the theory and practical tips here will be useful no matter which library you choose. + +### What Tech Stack Are You Using? 
+ +We're writing this primarily from the perspective of .NET Core and onward, because with .NET 5 this is the path forward that the .NET team has chosen for the technology. With that said, we'll absolutely augment this guide with tips and explainers for those who are on the classic .NET Framework, and if any of what we say doesn't work for you, please let us know. + +## This is a Living Thing. Have Feedback or Improvements? + +No improvement to this will happen without you. If you have a question, chances are someone else will too -- please ask! If you have an improvement, we'd love to hear about it. [Create an issue in the docs repository](https://www.notion.so/seankilleen/TBD) to start a conversation. + +## Possible Future Directions + +It's possible this could expand to the point where it makes sense to stand it up on its own. If that happens, maybe it will move out of the NUnit docs and over to somewhere else. + +## Credit Where It's Due + +We've read a lot about testing over the years from a lot of places. Wherever we are aware (or are made aware) of credit being owed for a particular contribution, we'll be sure to cite it. Much of the knowledge here is considered general, mainstream knowledge in the industry. If you are reading this and think someone needs to be cited to receive credit for something, by all means -- let us know! + +## About the Author + +This series is originally by [Sean Killeen](https://SeanKilleen.com) ([Mastodon](https://mastodon.social/@sjkilleen), [GitHub](https://github.com/SeanKilleen)) with additional contributions from the NUnit team and our community. + +Sean is a Principal Technical Fellow for Modern Software Delivery at [Excella](https://excella.com). He has taught courses in modern testing and test automation as part of a ScrumAlliance CSD-certified course and an ICAgile ICP-TST/ICP-ATA certified course. He is active in the .NET community including OSS contributions, and is a member of the NUnit Core Team. 
diff --git a/docs/articles/book/concepts/TestTradeoffs.md b/docs/articles/book/concepts/TestTradeoffs.md new file mode 100644 index 000000000..32cb84057 --- /dev/null +++ b/docs/articles/book/concepts/TestTradeoffs.md @@ -0,0 +1,49 @@ +# Test Trade-offs + +Different types of tests have different trade-offs in their usage. + +Typically, automated tests are thought of as a pyramid or a funnel. + +* In a pyramid visualization, unit tests comprise the base of the pyramid (the largest part). On top of them are integration tests, then acceptance/functional tests, then UI tests. +* In a funnel visualization, the pyramid is inverted, and we think about unit tests as catching a majority of potential issues, followed by integration tests and acceptance/functional tests. + +The thinking behind both of these visualizations is that you want most of the tests in your project to be unit tests, followed by integration tests and acceptance/functional tests because of the trade-offs we're about to get into. + +While the approaches above offer a generalized view, the reasoning behind them is the important part to consider. + +## The Journey Away from Fine-Grained Tests + +As we move further away from unit tests toward coarser-grained tests, a few things happen: + +* **Tests require more setup**. The further away you go from unit tests, typically the more setup those tests require. Instantiating a new database for each test run is much more setup than using a fake dependency. +* **Tests can fail for more than one reason**. When a unit test fails, you can almost always pinpoint exactly what's happening. But an integration test may comprise many units; when it fails, how do you know which unit is responsible for the failure, unless you also have a failing unit test? +* **Tests can fail for unrelated reasons**. 
For example, if you have a UI test that refers to a certain element, and the name or position of that element changes, your UI test may fail even though the UI is actually in perfectly working order. +* **Tests take (much) longer to run**. As a rule of thumb, I can run approximately 500-1,000 unit tests per second. However, if I have to instantiate a real database and make round trips of data to that database, a given test will take substantially longer. I once worked on a project where a few thousand unit tests took a few seconds to execute, but a few hundred integration tests took a few hours. +* **Tests take longer to fix**. Because of the longer execution time, multiple possible failure points, and potential for flakiness, troubleshooting these tests is often more difficult, which can lead to them being left flaky for a long time or (worse) ignored or deleted altogether. + +With that said, having a project that completely neglects a layer is likely to suffer as well: + +* The team may make assumptions about how units of code work together, only to find that real components behave differently in practice. +* The team may miss important considerations of coarser-grained tests, such as the contracts for an API. +* If there are no UI tests at all, it could be possible to deploy an application that passes its tests and yet has a UI that is completely inoperable. + +## Finding the Right Trade-offs for Your Team + +Each codebase has a different context and set of trade-offs that might inform which test types to use. 
Examples: + +* Teams with the ability to minimize the setup and execution time of their tests may benefit from more integration tests. +* Legacy projects with little test coverage often start with UI tests to establish a baseline of confidence, and then take some of those tests and "push them down" into several API tests or integration tests to alleviate some of the trade-offs. +* Teams with a high degree of non-coder collaboration may write a higher number of acceptance/functional tests because the language of the tests is closer to the language they use with their stakeholders. + +Keep some of the below in mind and you may avoid some pitfalls: + +* **Actively talk about and re-evaluate test types**. For example: + * If a number of UI tests have built up confidence and you've seen no failures, and those tests are appropriately covered by finer-grained tests, it may make sense to retire them. + * If you keep getting caught off-guard by integration issues, it may make sense to invest more time in integration or acceptance tests. + * If you've discovered a way to reduce the execution time and maintenance burden of a given layer of tests, it may make sense to invest more in that layer. +* **Remember: The goal is _confidence_**. + * If a test fails, it should be treated as an issue until it can be proven otherwise. + * Don't settle for flaky tests if you can at all avoid doing so. + * If a test no longer serves to improve confidence in the system (and doesn't meaningfully play into the living documentation of the system), consider removing it or pushing it into finer-grained tests. + * If the maintenance of a set of unit tests is costly and things are well-covered by integration tests that provide a high degree of confidence, perhaps some of those unit tests can be retired. +* **Keep execution times as fast as possible**. The goal is to run as many tests as possible as often as possible. 
If a set of tests takes 6 hours to run, how will you be able to get confidence in pushing a branch of code prior to merging it in? More often than not, those tests will be skipped. \ No newline at end of file diff --git a/docs/articles/book/concepts/TestingConcepts.md b/docs/articles/book/concepts/TestingConcepts.md new file mode 100644 index 000000000..3b7a877e0 --- /dev/null +++ b/docs/articles/book/concepts/TestingConcepts.md @@ -0,0 +1,84 @@ +# Testing Concepts + +Lots of concepts are thrown around when we talk about automated testing. We'll briefly describe some of these here and then we'll work on applying them in the rest of the series. + +## SUT / CUT (Situation/Class Under Test) + +You might hear the term `SUT` or `CUT` or see these variables in tests you come across. Typically they mean the situation or class under test. + +## AAA: Arrange / Act / Assert + +Sometimes called the `AAA` approach, this refers to the way we might write a given automated test: + +* **Arrange**: Here, we set up the test for the action that will follow. This typically involves setting up the situation/class under test so that we can take a particular action on it. Sometimes some or all of this arrangement is done by a `SetUp` method. + +* **Act**: In this step, we take an action. We call some part of our situation/class under test that either returns a value or that we expect to throw an exception. + * Typically, we try to keep the actions to a minimum -- preferably one. If you are testing different actions, that often means you're testing a different path through your code. In that case, we recommend creating two tests -- one for each path. +* **Assert**: In this section of the test, we assert that a value is what we expect, or that an exception has been thrown as we expect -- anything that indicates our expectations about the production code are met. + * Similar to the _action_, we try to keep assertions to a minimum or create additional tests to capture each assertion. 
That's because if a piece of code fails, seeing which tests fail at the same time can help triangulate the issue and give insight as to what the problem is. Sometimes you'll hear this phrased as "one _logical_ assertion", because multiple assertions may make sense as part of an overall concept (e.g. checking three properties are what you expect when all three relate to a particular concept.) When in doubt, create multiple tests -- you can always consolidate them later. + * Note that if you are making more than one assertion for a test, NUnit has a particular format for that called `Assert.Multiple`. If you don't use that convention, NUnit will stop at the first assertion that fails -- which doesn't provide you any information about the other assertions you make in that test. This could lead to a situation in which you fix one part of a failing test, only to see another assertion in the same test fail. + +Sometimes you'll see these actual statements in comments such as `// Arrange`, `// Act`, and `// Assert` in a given test method. This can be helpful for some folks as a mental marker. While we would never begrudge anyone this style, typically these comments can be removed and the different sections can be separated by a blank line. + +## Red / Green / Refactor + +The TDD lifecycle is often known as "Red, Green, Refactor": + +* In the **Red** phase, you write a failing test. + * "Failing" can also mean "not compiling". Since we sometimes write tests before the production code exists, a compilation failure counts as a failing test here too. +* In the **Green** phase, you write just enough production code to make the tests pass. + * "Just enough" is crucial. If you can trick your tests into passing with the simplest possible version of the code, it's a good indicator that you need additional tests. +* Lastly, you look for opportunities to **Refactor** -- to change the design of your production code and tests while ensuring that all tests pass. 
+ * You are able to make changes because the tests provide a safety harness that proves you haven't introduced an issue. + * Attending to cleanup in small cycles like this is good hygiene for your codebase. Many cycles of red / green / refactor provide opportunities to clean up as you go and make this cleanup a part of your muscle memory. + +## The FIRST Acronym in Automated Tests + +The `FIRST` acronym is often used to describe the goals of automated tests. Tests aim to be: + +* **F**ast: We want our tests to run as fast as possible. For example, unit tests should typically execute in no more than a second. Tests at higher levels might take slightly longer, but we want to prioritize speed. + * Fast tests mean as many tests as possible can be run as often as possible -- this is the source of confidence that automated tests provide. +* **I**solated: Tests should not depend on the output of another test or on tests or production code running in a specific order. We should (theoretically) be able to run all tests at the same time. +* **R**epeatable: The same test, when run many times, should always produce the same result. +* **S**elf-Documenting: The test itself should tell you whether it passed or failed. + * By this, we mean that a test shouldn't, for example, write out to a file that a human then needs to examine. When we run the test suite, we should gain immediate confidence in whether our code passes those tests or not. + * Assertions are how we accomplish this in automated test libraries. +* **T**imely: Tests should be written around the same time as production code, so that you are not just documenting what the code does, but _what you intended_ the code to do. + * In the case of TDD, the tests are guaranteed to be timely, because they're written _before_ the production code. + +## Tests Are Production Code, Too + +Just because you have tests in a test project doesn't mean they should be programmed in a sloppy manner. 
Active maintenance and refactoring of tests can pay huge dividends over the life of a project. + +## "Flaky" Tests + +Sometimes you'll hear tests referred to as "flaky" or experiencing "flakiness". Typically this means a test sometimes fails when we don't expect it to, for reasons we don't fully understand (violating the `R` in `FIRST`, which states that tests should be repeatable). + +Typically, we want to fix these tests so that we can trust our test suite and don't start to ignore failures as "probably just a test being flaky". We need to be able to trust what the test results tell us. + +## DRY vs. DAMP + +Because tests are production code, the general programming concept of `DRY` (Don't Repeat Yourself) often comes into play with tests. + +However, tests have an important consideration -- they should be understandable quickly, with as little sleuthing as possible. If it serves the readability and understandability of the test, it's perfectly fine to repeat certain logic within a test rather than attempting to abstract it all away. In this way, we often say we prefer `DAMP` (Descriptive and Meaningful Phrasing) to `DRY`. + +Like many things, this is a balance that a team should attempt to find and actively maintain. Some parts of tests can be extracted into common methods; other abstractions might make tests too hard to understand. Be willing to re-evaluate these choices as projects evolve. + +## Test Coverage + +In the realm of automated tests we'll often hear talk of "test coverage" -- sometimes coupled with a percentage, such as "every project should have 90% test coverage". Let's talk about what this means, and why it might be problematic. + +"Test coverage" means the percentage of your production code that is called from -- sometimes we say "exercised by" -- tests. There are different methods to measure this. Some tools do a basic calculation of the number of lines of code covered by tests divided by the total lines of production code. 
Others may use more complex methods that calculate test coverage on a per-method or per-statement level. But the general idea is that of seeing what amount of your production code tests actually "touch". + +Understanding test coverage is a good thing, and improving it where it's lacking is typically a good thing. + +However, there's an important adage known as [Goodhart's Law](https://en.wikipedia.org/wiki/Goodhart%27s_law), which states: + +> "When a measure becomes a target, it ceases to be a good measure." + +In other words, when a measure is used as the basis of judgment or reward, incentives will always exist to manipulate the measure in order to avoid the harsh judgment or receive the reward. + +Thinking about this in terms of test coverage, specifying an arbitrary coverage target (of say, 90%) can lead to some self-defeating practices. Tests may be written for their own sake rather than because they add value and confidence; those tests still carry the same maintenance burden as any other code written in the system. And in many cases, those metrics are easy to game -- one could write a meta-test that visits every method in the system and then claim a high test coverage number while adding no value. More often than not, teams end up testing libraries and pieces of the frameworks they use in ways that have a high overlap with what those projects already test themselves -- leading to increased test burden and complexity without much benefit. + +My advice: Write as many tests as possible that increase confidence and add value, recognize when ROI may be limited, know your test coverage across your application, and avoid test coverage targets except as a thought exercise. + +NOTE: It is still helpful to measure test coverage for the purpose of these conversations, and there is nothing wrong with saying that test coverage should not trend downward without a very good reason. 
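To tie a few of these concepts together -- the AAA structure, "one _logical_ assertion", and `Assert.Multiple` -- here is a minimal sketch of an NUnit test. The `Order` class and its members are hypothetical, invented purely for illustration:

```csharp
using NUnit.Framework;

public class OrderTests
{
    [Test]
    public void AddItem_SingleItem_UpdatesCountAndTotal()
    {
        // Arrange: set up the situation under test
        var sut = new Order();

        // Act: take a single action
        sut.AddItem("widget", 9.99m);

        // Assert: one *logical* assertion about the resulting state.
        // Assert.Multiple ensures both checks run even if the first fails.
        Assert.Multiple(() =>
        {
            Assert.That(sut.ItemCount, Is.EqualTo(1));
            Assert.That(sut.Total, Is.EqualTo(9.99m));
        });
    }
}
```

Both checks relate to the same concept (the order's state after adding one item), which is why they can reasonably live in one test.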
diff --git a/docs/articles/book/concepts/TheWhy.md b/docs/articles/book/concepts/TheWhy.md new file mode 100644 index 000000000..7d28ff028 --- /dev/null +++ b/docs/articles/book/concepts/TheWhy.md @@ -0,0 +1,39 @@ +# The "Why" + +Documentation often focuses on the "what", but with test automation and TDD it's also very important to focus on why we might apply a certain concept or approach. + +## Why Write Automated Tests? + +* **Living Documentation**. Tests describe what your code does, in a way that is continually verified. You can read the tests and understand how the code will behave, without worrying if the documentation is out of sync. +* **Reduced Fear**. So, you think your code could be better, and you'd like to change it. "No!" you hear someone say. "We can't touch that code; we don't know if it'll break anything." Tests help solve that problem. +* **Better than reduced fear -- confidence**. We want to move faster _without_ breaking things. You can think of each automated test as a strand in a safety harness that helps you and your team move faster. +* **Lower Total Cost of Delivery**. We lower the cost of future change in our code, which means we lower the total cost of delivery. We have more options open to us, and future changes take less time because we are guided by the tests we've created. +* **We are human. (At least, last time we checked)**. Humans make mistakes. It's great if we can prove that the code works how we think it'll work. +* **Triangulating issues -- and proving they're fixed**. Something has gone wrong. How do you pinpoint where that's happened? With tests in place, you can express the issue in terms of one or more failing tests. When all tests pass once more, you can prove that the bug is gone. + +And there are the longer-term considerations: + +* **Tested code is testable code**. Is our code straightforward and modular enough to be tested? One sure way to prove this is by testing it. +* **Thinking about how code will be consumed**. 
Writing tests forces you to think about calling the code you're writing, which helps you see the surface area and usage of the code. This perspective shift can lead to a better experience for someone consuming your code -- even if that someone is you. + +## Why TDD? + +TDD -- Test-Driven Development (or sometimes Test-Driven _Design_) -- is the art of writing a test before you write the corresponding piece of production code. Why do that? + +* **Timeliness**. Returning to code after we've written it to write tests proves how the program works now. But what about our intent? How can we prove that the program works how we intend it to work? This is where TDD comes in; by writing tests at the time we write the code, we gain additional confidence from testing from our intention all the way through to production code. +* **Design Drivers**. As we mentioned earlier, some benefits of tests are that they often keep code simpler and more modular, and that they force the developer to test-drive the usage of their code. TDD takes this to the next level -- from early on, you are engaging with the shape of your code. As you approach the design of your code with TDD, it often becomes easier to spot dependencies as they emerge and make them explicit. By performing these cycles early and often, you gain many opportunities for insight into your code and its usage, and your design will benefit from that. + +## Why NUnit? + +NUnit is a testing framework for .NET. 
Like other frameworks -- xUnit, MSTest -- it provides a few core components that compose such a framework: + +* Ways to define tests +* A runner to run the tests (though you can hook into the Visual Studio test runner, the ReSharper test runner, NCrunch, or any other runner of your choice) +* An assertion library to use during testing (though you can substitute FluentAssertions or another library of your choice) + +We believe the concepts we'll discuss here can be applied to any major test framework in most languages. If you couple the concepts in this guide with some documentation in any major test framework, we hope you'll be able to get where you need to be. + +So, the question becomes -- who will benefit from NUnit's particular _flavor_ of test framework? + +* NUnit benefits in particular from its longevity -- our first NuGet package was published in 2011, 13 years prior to this article being written. In that time, the library has amassed 200+ million downloads and has seen a ton of support from the community. To borrow the Farmers Insurance ad campaign, "We know a thing or two because we've seen a thing or two." +* We think that beginners and newcomers benefit from the _explicitness_ of NUnit and the way it approaches tests with attributes such as `[Test]`, `[SetUp]`, and `[TearDown]`. In some cases, xUnit relies on C# conventions in ways that might not be familiar. diff --git a/docs/articles/book/concepts/TypesOfTests.md b/docs/articles/book/concepts/TypesOfTests.md new file mode 100644 index 000000000..d912d5e21 --- /dev/null +++ b/docs/articles/book/concepts/TypesOfTests.md @@ -0,0 +1,41 @@ +# Types of Automated Tests + +## Before We Begin: A Note on Opinions on Test Types + +The below is a summary of a lot of different schools of thought. 
It is by no means the only way of thinking about test types; we intend only to ground you in some general terms. We also don't cover every single type of test here; there are many specialized kinds of tests (security, smoke tests, sanity tests, verification tests, etc.). + +If you place a group of experienced developers in a room, they'll likely have several overlapping terms and would have to figure out how to use that language as a team. We recommend any team have these conversations early and often to ensure you're using the same terms in the same way. + +With that said, we do recommend that you try to use each test type correctly as it pertains to your team, because different types of tests come with different trade-offs and considerations, and it's important to consider those. + +## Unit Tests + +These are typically meant as the "lowest level" of automated tests. They aim to test a specific, isolated class. Any dependencies that class has on other classes would be faked in unit tests, so that they can execute quickly and with a clear understanding of how the class will behave. There will be more on this later when we discuss [test doubles](TODO). + +## Integration Tests + +Integration tests cover a wide variety of tests. It's often fair to define an integration test as "any test that tests two or more classes together". You may have a test that instantiates two different classes and makes assertions about them, or it may be a much larger test that tests 10 classes together but where one of them is faked (e.g. a database). It could be that an integration test uses all real components. + +## Acceptance/Functional Tests + +Unit and integration tests are often thought of as a developer "proving the code works the way I intend it to". Acceptance tests -- sometimes called "functional tests" -- shift that focus a little bit, and instead think about "proving the code is working the way an end-user expects it to." 
+ +Typically in these tests, _no_ (or as few as possible) fake components are used, because we are trying to verify that things work when the entire system is put together. + +These kinds of tests, when not directly related to UI, can often be accomplished at a slightly lower level, such as an API level. + +## End-to-End / UI-Based Tests + +Some tests require interaction with the UI in order to prove things out. These are typically called "end-to-end" or UI tests. + +These tests spin up an actual GUI or browser window and execute commands against a running system in order to assert their results. + +## The Good News + +...You can use one test framework for all of these! + +All of the automated test types above still run in the standard unit test framework mindset -- set up a test, arrange the test, take an action, make an assertion, and tear down the test. Other tools can be used during these steps in conjunction with NUnit -- for example, during an NUnit test, you might manipulate a browser using Selenium or Playwright. But you can rely on a standard test framework to accomplish all of these tests. + +## Which are Right for You? + +The types of tests you employ and their mixture will depend highly on your team's context and the trade-offs and optimizations you're making. But it's important to understand the trade-offs between the different tests, which we'll talk about next. 
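As a rough illustration of how one framework can host several of these test types, here's a hedged sketch. `Pricer`, `IPriceFeed`, `FakePriceFeed`, and `SqlPriceFeed` are invented names for this example, not part of NUnit or any real library:

```csharp
using NUnit.Framework;

public class PricerTests
{
    // A unit test: the price feed dependency is faked, so the test
    // runs fast and in isolation.
    [Test]
    public void Quote_ReturnsPriceFromFeed()
    {
        var sut = new Pricer(new FakePriceFeed(10.00m));

        Assert.That(sut.Quote("ABC"), Is.EqualTo(10.00m));
    }

    // An integration-style test: a real component is used, so setup is
    // heavier and execution is slower. NUnit's [Category] attribute lets
    // runners include or exclude these tests separately.
    [Test]
    [Category("Integration")]
    public void Quote_ReturnsPositivePriceFromRealFeed()
    {
        var sut = new Pricer(new SqlPriceFeed("<connection string>"));

        Assert.That(sut.Quote("ABC"), Is.GreaterThan(0m));
    }
}
```

The shape of both tests is identical -- arrange, act, assert -- which is what lets a single framework cover the whole spectrum.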
diff --git a/docs/articles/book/concepts/toc.yml b/docs/articles/book/concepts/toc.yml new file mode 100644 index 000000000..2e12b8e6a --- /dev/null +++ b/docs/articles/book/concepts/toc.yml @@ -0,0 +1,8 @@ +- name: "The Why" + href: TheWhy.md +- name: "Testing Concepts" + href: TestingConcepts.md +- name: "Types of Tests" + href: TypesOfTests.md +- name: "Test Trade-offs" + href: TestTradeoffs.md diff --git a/docs/articles/book/getting-started/index.md b/docs/articles/book/getting-started/index.md new file mode 100644 index 000000000..0e98bf605 --- /dev/null +++ b/docs/articles/book/getting-started/index.md @@ -0,0 +1,198 @@ +# Beginning Our NUnit TDD Journey + +In this exercise, we'll introduce you to NUnit and some NUnit concepts by doing some introductory test-driven development at the same time. + +## Introducing the Exercise: A Temperature Converter + +_This exercise comes to us via [Fadi Stephan]() of [Kaizenko]() who first taught it to the author. Used here with his permission. Thanks, Fadi!_ + +Let's imagine a scenario in which we have to convert Celsius temperatures to Fahrenheit. + +Before jumping in, we think a little about the problem space and some things we might know about the conversion: + +* We'll probably have something that converts temperatures, probably called `TemperatureConverter`. +* We're converting Celsius to Fahrenheit, so we'll probably have a method in that called `ConvertCToF`, or similar. +* That method will probably take in a decimal and return a decimal, e.g. `public decimal ConvertCToF(decimal celsius)`. +* We also probably know at least some of these common conversions: + +| Celsius Temperature | Fahrenheit Temperature | +| ------------------- | ---------------------- | +| 0 | 32 | +| 100 | 212 | +| 37 | 98.6 | +| -40 | -40 | + +## Creating the Project Structure + +* Create a new, empty folder somewhere that you want to practice this exercise. +* Open a command prompt and go to that folder. 
+* Run the following commands: + +```cmd +dotnet new sln --name Conversions +dotnet new classlib --name Conversions +dotnet new nunit --name Conversions.Tests +dotnet sln add .\Conversions\ +dotnet sln add .\Conversions.Tests\ +cd .\Conversions.Tests\ +dotnet add reference ..\Conversions\Conversions.csproj +``` + +Let's break down what these commands do. + +* Adds a new solution named `Conversions.sln` +* Adds a new class library called `Conversions.csproj` (this will be our production code project) +* Adds a new test project using the `nunit` template, called `Conversions.Tests.csproj` (this will be our unit test project) +* Adds a reference from the test project to the production code project so that we can see its contents. + +As for the `nunit` template, what does that do for us "out of the box"? In the `Conversions.Tests.csproj` file, we can see that a few libraries have been added for us: + +[!code-xml[PackageListing](~/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj#L11-L23)] + +* `Microsoft.NET.Test.Sdk` brings the test platform along and allows tests to be run using the `dotnet test` command. It's a good idea to add this to any .NET automated test project. +* `NUnit` brings in the core NUnit library. +* `NUnit3TestAdapter` surfaces NUnit3 tests for the Visual Studio / `dotnet test` runner. It essentially allows NUnit tests to be discovered. +* `NUnit.Analyzers` are Roslyn analyzers that provide helpful tips and surface warnings at development time. They catch common errors and guide you toward an idiomatic NUnit experience. +* `coverlet.collector` is a popular .NET tool for collecting test coverage metrics. + +## A First Test to Ensure Everything is in Order + +Before writing our actual tests, we'll write one demo test to ensure the test runner is working. + +* Open the test project +* Find the `UnitTest1.cs` file and open it. 
+* You'll see a test exists there for the out of the box template: + +[!code-csharp[OutOfTheBoxTest](~/snippets/book/getting-started/Converter.Tests/UnitTest1.cs#OutOfTheBoxTest)] + +Below that test, add a new test consisting of a basic assertion: + +[!code-csharp[FirstTest](~/snippets/book/getting-started/Converter.Tests/UnitTest1.cs#FirstTest)] + +Run the tests using your runner of choice (Visual Studio's test runner, `dotnet test` from the console, etc.). You should see two tests that both pass. + +With those tests passing, we're ready to tackle our first test case. + +## Test Case 1: 0 Degrees Celsius == 32 Degrees Fahrenheit + +With this first test case, we're ready to create our first test that fails, before we've created any production code -- the "red" in our "red, green, refactor" cycle. + +Since we said earlier that we'll end up creating a class called `TemperatureConverter`, we'll start by creating a class in our test project called `TemperatureConverterTests.cs`. + +Next, let's think about how we might name our test. There are two common approaches we could take: + +* Put the method name we're testing in the name of the test. Our test method names would start with `ConvertCToF_`. +* Create a sub-class for each method name. We'd create a class within `TemperatureConverterTests` called `ConvertCToF` and put tests under that. + +Since we'll only have a handful of tests, I'm going to choose the first option for this example. + +Next, we want to think about the circumstances of the test and what the expected output is. We can put this right in the test name. A common format is `[MethodName]_[Situation]_[ExpectedResult]`. In our case, we expect to pass `0` into `ConvertCToF` and have it return `32`. So a safe bet for our test name is `ConvertCToF_Zero_Returns32`.
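As a quick illustration of that second option -- purely a sketch, since we won't use it in this exercise -- a nested class per method under test might look like the below. NUnit discovers tests inside nested classes, and we're assuming the same global `using NUnit.Framework;` that the `nunit` template provides:

```csharp
namespace Converter.Tests;

public class TemperatureConverterTests
{
    // One nested class per method under test; the method name
    // moves out of the individual test names and into the class.
    public class ConvertCToF
    {
        [Test]
        public void Zero_Returns32()
        {
            var sut = new TemperatureConverter();

            var result = sut.ConvertCToF(0);

            Assert.That(result, Is.EqualTo(32));
        }
    }
}
```

This style tends to pay off when a class has many methods with several tests each, because runners group the results under the nested class name.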
+ +So, in our test project we will: + +* Create a class named `TemperatureConverterTests.cs` +* Set it up to look like the below: + +[!code-csharp[ConvertCToF_Zero_Returns32](~/snippets/book/getting-started/Converter.Tests/TemperatureConverterTests.cs#ConvertCToF_Zero_Returns32)] + +Let's break this down line by line. + +* We define a test by putting the `[Test]` attribute above the method. This lets NUnit know we have a test. If you don't include this attribute, your test won't be run. + * (This is another way that TDD is helpful -- by seeing the test fail first, we know it's running.) +* We define the test name `public void ConvertCToF_Zero_Returns32()` +* We define the `sut`, or "system under test": `var sut = new TemperatureConverter();` + * This is a common shorthand name. You don't have to use it, but we'll use it throughout this guide for consistency. + * This definition is sometimes considered to be part of the "Arrange" in [Arrange / Act / Assert](~/articles/book/concepts/TestingConcepts.md#aaa-arrange--act--assert). +* We take an action on the SUT, calling a method and getting a result: `var result = sut.ConvertCToF(0);` +* We then make an assertion: `Assert.That(result, Is.EqualTo(32));` + +At this point, we've defined our test in terms of how we want our production code (which does not yet exist) to behave. When we try to run our test for the first time now, it will fail with a compiler error, because that class doesn't yet exist. This is a valid "red" state, and so we move on to the next step: writing just enough production code to get the test to a "green" (passing) state. + +In our case, this means that in the production code project we: + +* Create a `TemperatureConverter.cs` class +* Define a `public decimal ConvertCToF(decimal celsius)` method +* Hard-code that method to return `32` + * This might seem silly at first, but by doing the simplest thing possible, it forces us to write more tests which we might otherwise skip over.
This step saves us headaches all the time. + +```csharp +public class TemperatureConverter +{ + public decimal ConvertCToF(decimal celsius) + { + return 32; + } +} +``` + +We run our tests and -- they pass! With that in mind, we look to see if there's anything we can refactor at this point. Nothing comes to mind, which means we've completed our first cycle of "Red, Green, Refactor" in TDD. + +Clearly, there's more to do. + +## Adding our Next Test Case + +The next well-known example in our list was that a Celsius temperature of 100 (boiling water) is the equivalent of 212 degrees Fahrenheit. + +So we add our next test to the `TemperatureConverterTests.cs` file, keeping our same naming convention as before: + +[!code-csharp[ConvertCToF_100_Returns212](~/snippets/book/getting-started/Converter.Tests/TemperatureConverterTests.cs#ConvertCToF_100_Returns212)] + +When we run our test, it fails, because the production code is hard-coded to always return `32`. + +What's the simplest thing we can do to make the test code pass? We add an `if` statement: + +```csharp +public decimal ConvertCToF(decimal celsius) +{ + if (celsius == 100) { return 212; } + return 32; +} +``` + +This production code still doesn't look nearly ready for production. We should continue on to our other examples. + +## The Remaining Examples + +We continue this cycle of "Red, Green, Refactor" for our two remaining examples: + +[!code-csharp[RemainingExamples](~/snippets/book/getting-started/Converter.Tests/TemperatureConverterTests.cs#RemainingExamples)] + +And we wind up with production code that looks like: + +```csharp +public decimal ConvertCToF(decimal celsius) +{ + if (celsius == 100) { return 212; } + if (celsius == 37) { return 98.6m; } + if (celsius == -40) { return -40; } + return 32; +} +``` + +The tests cover our examples, but we know the production code is not ready for prime-time. We can't think of any more examples, so it's time to put in our algorithm.
We replace the if statements in our production code with the correct algorithm: + +```csharp +public decimal ConvertCToF(decimal celsius) +{ + return celsius * 5 / 9 + 32; +} +``` + +And we run our tests. They pass! + +...Except, wait. They don't. + +We know this is the right formula. But we can see the specific tests that are failing and the actual values definitely look off. + +We check everything we can over and over again until it's apparent: the formula we _knew_ was correct...isn't. Our tests have given us the confidence to realize that we must have copied the formula incorrectly. Our refactoring was not successful, because not all of the existing tests passed. + +When we update our production code to the correct formula: + +```csharp +public decimal ConvertCToF(decimal celsius) +{ + return celsius * 9 / 5 + 32; +} +``` + +The tests now pass. Our refactoring is successful; we have changed the inner workings of the code with a high degree of confidence that we didn't change the code's result. When changing from the `if` statements to the algorithmic example, we were prevented from making a mistake that's all too easy to make.
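As an aside (not part of the original exercise): once all four example tests exist, NUnit's `[TestCase]` attribute can express them as a single parameterized test. Here's a sketch, assuming the same `TemperatureConverter` class as above. Note that C# attributes can't take `decimal` literals, so we pass `int`/`double` arguments and rely on NUnit to convert them to the method's `decimal` parameters:

```csharp
namespace Converter.Tests;

public class TemperatureConverterTestCases
{
    // Each [TestCase] is one (input, expected) pair;
    // NUnit runs the method once per attribute.
    [TestCase(0, 32)]
    [TestCase(100, 212)]
    [TestCase(37, 98.6)]
    [TestCase(-40, -40)]
    public void ConvertCToF_ReturnsExpectedFahrenheit(decimal celsius, decimal expected)
    {
        var sut = new TemperatureConverter();

        var result = sut.ConvertCToF(celsius);

        Assert.That(result, Is.EqualTo(expected));
    }
}
```

For the exercise itself, the separate tests were deliberate: writing them one at a time is what drove each small step of the red/green cycle.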
diff --git a/docs/articles/book/getting-started/toc.yml b/docs/articles/book/getting-started/toc.yml new file mode 100644 index 000000000..68055c7a8 --- /dev/null +++ b/docs/articles/book/getting-started/toc.yml @@ -0,0 +1,2 @@ +- name: "Beginning Our NUnit TDD Journey" + href: index.md \ No newline at end of file diff --git a/docs/articles/book/toc.yml b/docs/articles/book/toc.yml new file mode 100644 index 000000000..bbfd3a205 --- /dev/null +++ b/docs/articles/book/toc.yml @@ -0,0 +1,8 @@ +- name: "Intro & Outline" + href: Index.md +- name: "Concepts" + href: concepts/toc.yml + topicHref: concepts/TheWhy.md +- name: "Getting Started" + href: getting-started/toc.yml + topicHref: getting-started/index.md diff --git a/docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj b/docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj new file mode 100644 index 000000000..4aa3d9151 --- /dev/null +++ b/docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj @@ -0,0 +1,29 @@ + + + + net6.0 + enable + enable + + false + + + + + + + + all + runtime; build; native; contentfiles; analyzers; buildtransitive + + + all + runtime; build; native; contentfiles; analyzers; buildtransitive + + + + + + + + diff --git a/docs/snippets/book/getting-started/Converter.Tests/TemperatureConverterTests.cs b/docs/snippets/book/getting-started/Converter.Tests/TemperatureConverterTests.cs new file mode 100644 index 000000000..c3616b6ea --- /dev/null +++ b/docs/snippets/book/getting-started/Converter.Tests/TemperatureConverterTests.cs @@ -0,0 +1,61 @@ +namespace Converter.Tests; + +public class TemperatureConverterTests +{ + #region ConvertCToF_Zero_Returns32 + [Test] + public void ConvertCToF_Zero_Returns32() + { + // Arrange + var sut = new TemperatureConverter(); + + // Act + var result = sut.ConvertCToF(0); + + // Assert + Assert.That(result, Is.EqualTo(32)); + } + #endregion + #region ConvertCToF_100_Returns212 + [Test] + public void 
ConvertCToF_100_Returns212() + { + // Arrange + var sut = new TemperatureConverter(); + + // Act + var result = sut.ConvertCToF(100); + + // Assert + Assert.That(result, Is.EqualTo(212)); + } + #endregion + #region RemainingExamples + [Test] + public void ConvertCToF_37_Returns98point6() + { + // Arrange + var sut = new TemperatureConverter(); + + // Act + var result = sut.ConvertCToF(37); + + // Assert + Assert.That(result, Is.EqualTo(98.6)); + } + + [Test] + public void ConvertCToF_Negative40_ReturnsNegative40() + { + // Arrange + var sut = new TemperatureConverter(); + + // Act + var result = sut.ConvertCToF(-40); + + // Assert + Assert.That(result, Is.EqualTo(-40)); + } + + #endregion +} diff --git a/docs/snippets/book/getting-started/Converter.Tests/UnitTest1.cs b/docs/snippets/book/getting-started/Converter.Tests/UnitTest1.cs new file mode 100644 index 000000000..c89287c77 --- /dev/null +++ b/docs/snippets/book/getting-started/Converter.Tests/UnitTest1.cs @@ -0,0 +1,27 @@ +namespace Converter.Tests; + +public class Tests +{ + [SetUp] + public void Setup() + { + } + + #region OutOfTheBoxTest + [Test] + public void Test1() + { + Assert.Pass(); + } + #endregion + + #region FirstTest + [Test] + public void MathWorksAsExpected() + { + var result = 2 + 2; + + Assert.That(result, Is.EqualTo(4)); + } + #endregion +} \ No newline at end of file diff --git a/docs/snippets/book/getting-started/Converter.Tests/Usings.cs b/docs/snippets/book/getting-started/Converter.Tests/Usings.cs new file mode 100644 index 000000000..cefced496 --- /dev/null +++ b/docs/snippets/book/getting-started/Converter.Tests/Usings.cs @@ -0,0 +1 @@ +global using NUnit.Framework; \ No newline at end of file diff --git a/docs/snippets/book/getting-started/Converter/Class1.cs b/docs/snippets/book/getting-started/Converter/Class1.cs new file mode 100644 index 000000000..9b872441a --- /dev/null +++ b/docs/snippets/book/getting-started/Converter/Class1.cs @@ -0,0 +1,5 @@ +namespace Converter; +public 
class Class1 +{ + +} diff --git a/docs/snippets/book/getting-started/Converter/Converter.csproj b/docs/snippets/book/getting-started/Converter/Converter.csproj new file mode 100644 index 000000000..132c02c59 --- /dev/null +++ b/docs/snippets/book/getting-started/Converter/Converter.csproj @@ -0,0 +1,9 @@ + + + + net6.0 + enable + enable + + + diff --git a/docs/snippets/book/getting-started/Converter/TemperatureConverter.cs b/docs/snippets/book/getting-started/Converter/TemperatureConverter.cs new file mode 100644 index 000000000..a12ebf8c7 --- /dev/null +++ b/docs/snippets/book/getting-started/Converter/TemperatureConverter.cs @@ -0,0 +1,9 @@ +namespace Converter; + +public class TemperatureConverter +{ + public decimal ConvertCToF(decimal celsius) + { + return celsius * 9 / 5 + 32; + } +} \ No newline at end of file From f4bcd6fcba7a6fb25c4b1a958041f5b4691ea922 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 21:35:19 -0500 Subject: [PATCH 02/22] Add projects to solution --- docs/snippets/Snippets.sln | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/docs/snippets/Snippets.sln b/docs/snippets/Snippets.sln index 89cd29e33..8ca03d3e6 100644 --- a/docs/snippets/Snippets.sln +++ b/docs/snippets/Snippets.sln @@ -5,6 +5,14 @@ VisualStudioVersion = 17.0.31903.59 MinimumVisualStudioVersion = 10.0.40219.1 Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Snippets.NUnit", "Snippets.NUnit\Snippets.NUnit.csproj", "{759AE765-B66A-4585-886C-4A6F35143C92}" EndProject +Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "book", "book", "{D97B13D1-BCB2-4073-BB28-B66B9875A1B8}" +EndProject +Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "getting-started", "getting-started", "{3B15F67C-E96E-4DA9-A60A-F25104D96C7E}" +EndProject +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Converter", "book\getting-started\Converter\Converter.csproj", "{85F83B5F-89F1-48E1-87DF-11A007532D95}" +EndProject 
+Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Converter.Tests", "book\getting-started\Converter.Tests\Converter.Tests.csproj", "{88A94E93-0624-400C-BFE3-C289FC155581}" +EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Any CPU = Debug|Any CPU @@ -18,5 +26,18 @@ Global {759AE765-B66A-4585-886C-4A6F35143C92}.Debug|Any CPU.Build.0 = Debug|Any CPU {759AE765-B66A-4585-886C-4A6F35143C92}.Release|Any CPU.ActiveCfg = Release|Any CPU {759AE765-B66A-4585-886C-4A6F35143C92}.Release|Any CPU.Build.0 = Release|Any CPU + {85F83B5F-89F1-48E1-87DF-11A007532D95}.Debug|Any CPU.ActiveCfg = Debug|Any CPU + {85F83B5F-89F1-48E1-87DF-11A007532D95}.Debug|Any CPU.Build.0 = Debug|Any CPU + {85F83B5F-89F1-48E1-87DF-11A007532D95}.Release|Any CPU.ActiveCfg = Release|Any CPU + {85F83B5F-89F1-48E1-87DF-11A007532D95}.Release|Any CPU.Build.0 = Release|Any CPU + {88A94E93-0624-400C-BFE3-C289FC155581}.Debug|Any CPU.ActiveCfg = Debug|Any CPU + {88A94E93-0624-400C-BFE3-C289FC155581}.Debug|Any CPU.Build.0 = Debug|Any CPU + {88A94E93-0624-400C-BFE3-C289FC155581}.Release|Any CPU.ActiveCfg = Release|Any CPU + {88A94E93-0624-400C-BFE3-C289FC155581}.Release|Any CPU.Build.0 = Release|Any CPU + EndGlobalSection + GlobalSection(NestedProjects) = preSolution + {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} = {D97B13D1-BCB2-4073-BB28-B66B9875A1B8} + {85F83B5F-89F1-48E1-87DF-11A007532D95} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} + {88A94E93-0624-400C-BFE3-C289FC155581} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} EndGlobalSection EndGlobal From 6e669367926a36b8db9649e6cd50f23e268b8794 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 02:38:22 +0000 Subject: [PATCH 03/22] smile emoji --- docs/articles/book/Index.md | 82 ++++++++++++++++++------------------- 1 file changed, 41 insertions(+), 41 deletions(-) diff --git a/docs/articles/book/Index.md b/docs/articles/book/Index.md index ad366843c..74443e6c5 100644 --- a/docs/articles/book/Index.md +++ 
b/docs/articles/book/Index.md @@ -1,41 +1,41 @@ -# Automated Testing & TDD with NUnit: An On-Ramp - -## About This Series - -### Who is it for? - -This series aims to be for everyone -- from people who've never written a unit test to people who have used NUnit but would like to brush up on some of the theory and practices. - -We'll try to split up the articles so that you can dive in and focus on the parts that you care about. And we'll try to use real-world examples along the way. - -We'll also try to make it as succinct as possible, because we're not getting paid by the word -- or indeed, at all :) -- for this. - -### Strong Opinions, Loosely Held - -This guide is naturally going to reflect the opinions of the primary author. However, along the way, we'll try to point out where another school of thought might approach something differently. - -One thing for sure that we want to be clear about: there are several ways to do testing well, and there is no "one true way" to do it right, especially because the context and trade-offs of each project and team are unique. We'll do our best to not present opinion as fact, and we'll work toward including more adjacent insight as we build out the guide. - -Similarly, we're not trying to "sell" you on TDD. We find value in it in many cases, so we'll talk about it. Similarly, NUnit is a great library for testing -- but it's by no means the only one, and alternatives like xUnit are quite popular (even with us!). To each their own; we hope if nothing else, some of the theory and practical tips here will be useful no matter which library you choose. - -### What Tech Stack Are You Using? - -We're writing this primarily from the perspective of .NET Core and onward, because with .NET 5 this is the path forward that the .NET team has chosen for the technology. 
With that said, we'll absolutely augment this guide with tips and explainers for those who are on the classic .NET Framework, and if any of what we say doesn't work for you, - -## This is a Living Thing. Have Feedback or Improvements? - -No improvement to this will happen without you. If you have a question, chances are someone else will too -- please ask! If you have an improvement, we'd love to hear about it. [Create an issue in the docs repository](https://www.notion.so/seankilleen/TBD) to start a conversation. - -## Possible Future Directions - -It's possible this could expand to the point where it makes sense to stand it up on its own. If that happens, maybe it will move out of the NUnit docs and over to somewhere else. - -## Credit Where It's Due - -We've read a lot about testing over the years from a lot of places. Wherever we are aware (or are made aware) of credit being owed for a particular contribution, we'll be sure to cite it. Much of the knowledge here is considered general, mainstream knowledge in the industry. If you are reading this and think someone needs to be cited to receive credit for something, by all means -- let us know! - -## About the Author - -This series is originally by [Sean Killeen](https://SeanKilleen.com) ([Mastodon](https://mastodon.social/@sjkilleen), [GitHub](https://github.com/SeanKilleen)) with additional contributions from the NUnit team and our community. - -Sean is a Principal Technical Fellow for Modern Software Delivery at [Excella](https://excella.com). He has taught courses in modern testing and test automation as part of a ScrumAlliance CSD-certified course and an ICAgile ICP-TST/ICP-ATA certified course. He is active in the .NET community including OSS contributions, and is a member of the NUnit Core Team. +# Automated Testing & TDD with NUnit: An On-Ramp + +## About This Series + +### Who is it for? 
+ +This series aims to be for everyone -- from people who've never written a unit test to people who have used NUnit but would like to brush up on some of the theory and practices. + +We'll try to split up the articles so that you can dive in and focus on the parts that you care about. And we'll try to use real-world examples along the way. + +We'll also try to make it as succinct as possible, because we're not getting paid by the word -- or indeed, at all :smile: -- for this. + +### Strong Opinions, Loosely Held + +This guide is naturally going to reflect the opinions of the primary author. However, along the way, we'll try to point out where another school of thought might approach something differently. + +One thing for sure that we want to be clear about: there are several ways to do testing well, and there is no "one true way" to do it right, especially because the context and trade-offs of each project and team are unique. We'll do our best to not present opinion as fact, and we'll work toward including more adjacent insight as we build out the guide. + +Similarly, we're not trying to "sell" you on TDD. We find value in it in many cases, so we'll talk about it. Likewise, NUnit is a great library for testing -- but it's by no means the only one, and alternatives like xUnit are quite popular (even with us!). To each their own; we hope if nothing else, some of the theory and practical tips here will be useful no matter which library you choose. + +### What Tech Stack Are You Using? + +We're writing this primarily from the perspective of .NET Core and onward, because with .NET 5 this is the path forward that the .NET team has chosen for the technology. With that said, we'll absolutely augment this guide with tips and explainers for those who are on the classic .NET Framework, and if any of what we say doesn't work for you, please let us know. + +## This is a Living Thing. Have Feedback or Improvements? + +No improvement to this will happen without you.
If you have a question, chances are someone else will too -- please ask! If you have an improvement, we'd love to hear about it. [Create an issue in the docs repository](https://www.notion.so/seankilleen/TBD) to start a conversation. + +## Possible Future Directions + +It's possible this could expand to the point where it makes sense to stand it up on its own. If that happens, maybe it will move out of the NUnit docs and over to somewhere else. + +## Credit Where It's Due + +We've read a lot about testing over the years from a lot of places. Wherever we are aware (or are made aware) of credit being owed for a particular contribution, we'll be sure to cite it. Much of the knowledge here is considered general, mainstream knowledge in the industry. If you are reading this and think someone needs to be cited to receive credit for something, by all means -- let us know! + +## About the Author + +This series is originally by [Sean Killeen](https://SeanKilleen.com) ([Mastodon](https://mastodon.social/@sjkilleen), [GitHub](https://github.com/SeanKilleen)) with additional contributions from the NUnit team and our community. + +Sean is a Principal Technical Fellow for Modern Software Delivery at [Excella](https://excella.com). He has taught courses in modern testing and test automation as part of a ScrumAlliance CSD-certified course and an ICAgile ICP-TST/ICP-ATA certified course. He is active in the .NET community including OSS contributions, and is a member of the NUnit Core Team. 
From 26ff70b1740b66aad788b020ec27cb7af3ede92f Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 02:42:43 +0000 Subject: [PATCH 04/22] Add book to TOC --- docs/articles/toc.yml | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/articles/toc.yml b/docs/articles/toc.yml index 1421a0cc1..ae98f3a73 100644 --- a/docs/articles/toc.yml +++ b/docs/articles/toc.yml @@ -1,6 +1,8 @@ - name: NUnit href: nunit/toc.yml topicHref: nunit/intro.md +- name: "On-Ramp Guide" + href: book/toc.yml - name: VS Test Adapter href: vs-test-adapter/toc.yml topicHref: vs-test-adapter/Index.md From 477b025d7abb93fb02cf964c4a3122f12c71c6ea Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 02:49:27 +0000 Subject: [PATCH 05/22] Last sentence + Fadi credit --- docs/articles/book/getting-started/index.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/articles/book/getting-started/index.md b/docs/articles/book/getting-started/index.md index 0e98bf605..0dd118bc5 100644 --- a/docs/articles/book/getting-started/index.md +++ b/docs/articles/book/getting-started/index.md @@ -4,7 +4,7 @@ In this exercise, we'll introduce you to NUnit and some NUnit concepts by doing ## Introducing the Exercise: A Temperature Converter -_This exercise comes to us via [Fadi Stephan]() of [Kaizenko]() who first taught it to the author. Used here with his permission. Thanks, Fadi!_ +_This exercise comes to us via [Fadi Stephan](https://www.linkedin.com/in/fadistephan/) of [Kaizenko](https://www.kaizenko.com/) who first taught it to the author. Used here with his permission. Thanks, Fadi!_ Let's imagine a scenario in which we have to convert Celsius temperatures to Fahrenheit. @@ -195,4 +195,4 @@ public decimal ConvertCToF(decimal celsius) } ``` -The tests now pass. Our refactoring is successful; we have changed the inner workings of the code with a high degree of confidence that we didn't change the code's result. 
When changing from the `if` statements to the algorithmic example, we were prevented from making a mistake that's all too easy to make. +The tests now pass. Our refactoring is successful; we have changed the inner workings of the code with a high degree of confidence that we didn't change the code's result, and we've prevented a pretty severe bug. When changing from the `if` statements to the algorithmic example, we were prevented from making a mistake that's all too easy to make. From 590364b1492da1dea62f97a63ac1d9076316d900 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 02:52:56 +0000 Subject: [PATCH 06/22] add some terms to dictionary --- cSpell.json | 55 ++++++++++++++++++++++++++++------------------------- 1 file changed, 29 insertions(+), 26 deletions(-) diff --git a/cSpell.json b/cSpell.json index 66ff70ce2..db5bfe7b2 100644 --- a/cSpell.json +++ b/cSpell.json @@ -13,6 +13,9 @@ "Dogfood", "DWORD", "Enumerables", + "Excella", + "Fadi", + "Goodhart's", "Guid", "Guids", "Hashtable", @@ -143,50 +146,50 @@ ], "patterns": [ { - "name": "Markdown links", - "pattern": "\\((.*)\\)", - "description": "" + "name": "Markdown links", + "pattern": "\\((.*)\\)", + "description": "" }, { - "name": "Markdown code blocks", - "pattern": "/^(\\s*`{3,}).*[\\s\\S]*?^\\1/gmx", - "description": "Taken from the cSpell example at https://cspell.org/configuration/patterns/#verbose-regular-expressions" + "name": "Markdown code blocks", + "pattern": "/^(\\s*`{3,}).*[\\s\\S]*?^\\1/gmx", + "description": "Taken from the cSpell example at https://cspell.org/configuration/patterns/#verbose-regular-expressions" }, { - "name": "Inline code blocks", - "pattern": "\\`([^\\`\\r\\n]+?)\\`", - "description": "https://stackoverflow.com/questions/41274241/how-to-capture-inline-markdown-code-but-not-a-markdown-code-fence-with-regex" + "name": "Inline code blocks", + "pattern": "\\`([^\\`\\r\\n]+?)\\`", + "description": 
"https://stackoverflow.com/questions/41274241/how-to-capture-inline-markdown-code-but-not-a-markdown-code-fence-with-regex" }, { - "name": "Link contents", - "pattern": "\\", - "description": "" + "name": "Link contents", + "pattern": "\\", + "description": "" }, { - "name": "Snippet references", - "pattern": "-- snippet:(.*)", - "description": "" + "name": "Snippet references", + "pattern": "-- snippet:(.*)", + "description": "" }, { - "name": "Snippet references 2", - "pattern": "\\<\\[sample:(.*)", - "description": "another kind of snippet reference" + "name": "Snippet references 2", + "pattern": "\\<\\[sample:(.*)", + "description": "another kind of snippet reference" }, { - "name": "Multi-line code blocks", - "pattern": "/^\\s*```[\\s\\S]*?^\\s*```/gm" + "name": "Multi-line code blocks", + "pattern": "/^\\s*```[\\s\\S]*?^\\s*```/gm" }, { - "name": "HTML Tags", - "pattern": "<[^>]*>", - "description": "Reference: https://stackoverflow.com/questions/11229831/regular-expression-to-remove-html-tags-from-a-string" + "name": "HTML Tags", + "pattern": "<[^>]*>", + "description": "Reference: https://stackoverflow.com/questions/11229831/regular-expression-to-remove-html-tags-from-a-string" }, { "name": "UID Lines", "pattern": "uid: (.*)" } - ], - "ignoreRegExpList": [ + ], + "ignoreRegExpList": [ "Markdown links", "Markdown code blocks", "Inline code blocks", @@ -196,5 +199,5 @@ "Multi-line code blocks", "HTML Tags", "UID Lines" - ] + ] } From 7e6343929603402cbc8a91ba7c87474ee96784a2 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 02:54:38 +0000 Subject: [PATCH 07/22] some ignore words --- cSpell.json | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/cSpell.json b/cSpell.json index db5bfe7b2..75cdac5f4 100644 --- a/cSpell.json +++ b/cSpell.json @@ -142,7 +142,11 @@ "osokin", "lahma", "unsortable", - "Dalsbø" + "Dalsbø", + "solated", + "epeatable", + "imely" + ], "patterns": [ { From 36102a5628bc49d4907729be7c22718d88f9a433 Mon 
Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 02:55:36 +0000 Subject: [PATCH 08/22] update cSpell file version to remove warning --- cSpell.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cSpell.json b/cSpell.json index 75cdac5f4..2e67ee290 100644 --- a/cSpell.json +++ b/cSpell.json @@ -1,5 +1,5 @@ { - "version": "0.1", + "version": "0.2", "language": "en", "words": [ "buildable", From d86c724cad493e5acffd9a1e7d8010ead0da5b64 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 03:03:39 +0000 Subject: [PATCH 09/22] some markdown fixes --- docs/articles/book/concepts/TestTradeoffs.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/articles/book/concepts/TestTradeoffs.md b/docs/articles/book/concepts/TestTradeoffs.md index 32cb84057..b8f5e1688 100644 --- a/docs/articles/book/concepts/TestTradeoffs.md +++ b/docs/articles/book/concepts/TestTradeoffs.md @@ -4,7 +4,7 @@ Different types of tests have different trade-offs in their usage. Typically, automated tests are thought of as a pyramid or a funnel. -* In a pyramid visualization, unit tests comprise the base of the pyramid (the largest part). On top of them are integration tests, then acceptance/functional tests, then UI tests. +* In a pyramid visualization, unit tests comprise the base of the pyramid (the largest part). On top of them are integration tests, then acceptance/functional tests, then UI tests. * In a funnel visualization, the pyramid is inverted, and we think about unit tests as catching a majority of potential issues, followed by integration tests and acceptance/functional tests. The thinking behind both of these visualizations is that you want most of the tests in your project to be unit tests, followed by integration tests and acceptance/functional tests because of the trade-offs we're about to get into.
@@ -38,12 +38,12 @@ Each codebase has a different context and set of trade-offs that might inform te Keep some of the below in mind and you may avoid some pitfalls: * **Actively talk about and re-evaluate test types**. For example: - * If a number of UI tests have built up confidence and you've seen no failures, and those tests are appropriately covered by finer-grained tests, it may make sense to retire them. + * If a number of UI tests have built up confidence and you've seen no failures, and those tests are appropriately covered by finer-grained tests, it may make sense to retire them. * If you keep getting caught off-guard by integration issues, it may make sense to invest more time in integration or acceptance tests. * If you've discovered a way to reduce the execution time and maintenance burden of a given layer of tests, it may make sense to invest more in that layer. -* **Remember: The goal is _confidence_**. - * If a test fails, it should be treated as an issue until it can be proven otherwise. - * Don't settle for flaky tests if you can at all avoid doing so. +* **Remember: The goal is _confidence_**. + * If a test fails, it should be treated as an issue until it can be proven otherwise. + * Don't settle for flaky tests if you can at all avoid doing so. * If a test no longer serves to improve confidence in the system (and doesn't meaningfully play into the living documentation of the system), consider removing it or pushing it into finer grained tests. * If the maintenance of a set of unit tests is costly and things are well-covered by integration tests that provide a high degree of confidence, perhaps some of those unit tests can be retired. -* **Keep execution times as fast as possible**. The goal is to run as many tests as possible as often as possible. If a set of tests takes 6 hours to run, how will you be able to get confidence in pushing a branch of code prior to merging it in? More often than not, those tests will be skipped. 
\ No newline at end of file +* **Keep execution times as fast as possible**. The goal is to run as many tests as possible as often as possible. If a set of tests takes 6 hours to run, how will you be able to get confidence in pushing a branch of code prior to merging it in? More often than not, those tests will be skipped. From a3e46652b7f4e7888eadeb13026542c6b82a2339 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 22:13:17 -0500 Subject: [PATCH 10/22] para on commercial products --- docs/articles/book/concepts/TestTradeoffs.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/articles/book/concepts/TestTradeoffs.md b/docs/articles/book/concepts/TestTradeoffs.md index b8f5e1688..11a33f854 100644 --- a/docs/articles/book/concepts/TestTradeoffs.md +++ b/docs/articles/book/concepts/TestTradeoffs.md @@ -47,3 +47,9 @@ Keep some of the below in mind and you may avoid some pitfalls: * If a test no longer serves to improve confidence in the system (and doesn't meaningfully play into the living documentation of the system), consider removing it or pushing it into finer grained tests. * If the maintenance of a set of unit tests is costly and things are well-covered by integration tests that provide a high degree of confidence, perhaps some of those unit tests can be retired. * **Keep execution times as fast as possible**. The goal is to run as many tests as possible as often as possible. If a set of tests takes 6 hours to run, how will you be able to get confidence in pushing a branch of code prior to merging it in? More often than not, those tests will be skipped. + +## What About Commercial Testing Products? + +Because this guide is intended for NUnit itself, we won't delve into that topic too much. However, these products tend to _lengthen_ the feedback loops around testing & results, when typically what we want is _as many tests as possible_ running _as often as possible_.
Commercial tools tend to take what should be a continuous process/mindset and extract it into a separate role or separate team. We're pretty dedicated to the idea of agility these days and would prefer that testing happen alongside the work in close collaboration within cross-functional teams. + +That's not to say we'd never recommend using a commercial testing product -- it certainly may be better than having no tests at all or an entirely manual process. But, teams and organizations should be extremely careful of the lagging test feedback, high maintenance burden, and fragility of such endeavors. When in doubt, keep tests close to the work. From 33086ba24c43e53b2bc1b7e91170642553374653 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 22:22:54 -0500 Subject: [PATCH 11/22] note on other assertion libs --- docs/articles/book/concepts/TestingConcepts.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/docs/articles/book/concepts/TestingConcepts.md b/docs/articles/book/concepts/TestingConcepts.md index 3b7a877e0..6283f97d2 100644 --- a/docs/articles/book/concepts/TestingConcepts.md +++ b/docs/articles/book/concepts/TestingConcepts.md @@ -14,11 +14,15 @@ Sometimes called the `AAA` approach, this refers to the way we might write a giv * **Act**: In this step, we take an action. We call some part of our situation/class under test that either returns a value or that we expect to throw an exception. * Typically, we try to keep the actions to a minimum -- preferably, 1. If you are testing different actions, that often means you're testing a different path through your code. In that case, we recommend creating two tests -- one for each path. * **Assert**: In this section of the test, we assert that a value is what we expect, or an exception has been thrown as we expect -- anything that indicates our expectations about the production code are met.
- * Similar to the _action_, we try to keep assertions to a minimum or create additional tests to capture each assertion. That's because if a piece of code fails, seeing which tests fail at the same time can help triangulate the issue and give insight as to what the problem is. Sometimes you'll hear this phrased as "one _logical_ assertion", because multiple assertions may make sense as part of an overall concept (e.g. checking three proprties are what you expect when all three relate to a particular concept.) When in doubt, create multiple tests -- you can always consolidate them later. + * Similar to the _action_, we try to keep assertions to a minimum or create additional tests to capture each assertion. That's because if a piece of code fails, seeing which tests fail at the same time can help triangulate the issue and give insight as to what the problem is. Sometimes you'll hear this phrased as "one _logical_ assertion", because multiple assertions may make sense as part of an overall concept (e.g. checking three properties are what you expect when all three relate to a particular concept.) When in doubt, create multiple tests -- you can always consolidate them later. Oftentimes, we might consolidate assertions in situations where the actions are expensive to run or maintain. * Note that if you are making more than one assertion for a test, NUnit has a particular format for that called `Assert.Multiple`. If you don't use that convention, NUnit will only try the first assertion and will fail the test if it fails -- which doesn't provide you any information about the other assertions you make in that test. This could lead to a situation in which you fix one part of a failing test, only to see another assertion in the same test fail. Sometimes you'll see these actual statements in comments such as `// Arrange`, `// Act`, and `// Assert` in a given test method. This can be helpful for some folks as a mental marker.
While we would never begrudge anyone this style, typically these comments can be removed and the different sections can be separated by a blank line. +> [!NOTE] +> This guide will use NUnit's `Assert.That` syntax because it comes out of the box. But, you should know that there are several libraries that can be used with NUnit to craft assertions and return helpful messages upon failure, such as [FluentAssertions](https://fluentassertions.com/) or [Shouldly](https://docs.shouldly.org/) + + ## Red / Green / Refactor TDD lifecycle is often known as "Red, Green, Refactor": From a02b21989a4f261a8d13e981d4a2cd488f0d6c21 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 22:24:36 -0500 Subject: [PATCH 12/22] Note formatting --- docs/articles/book/concepts/TestingConcepts.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/docs/articles/book/concepts/TestingConcepts.md b/docs/articles/book/concepts/TestingConcepts.md index 6283f97d2..d62f4f0a9 100644 --- a/docs/articles/book/concepts/TestingConcepts.md +++ b/docs/articles/book/concepts/TestingConcepts.md @@ -85,4 +85,5 @@ Thinking about this in terms of test coverage, specifying an arbitrary coverage My advice: Write as many tests as possible that increase confidence and add value, recognize when ROI may be limited, know your test coverage across your application, and avoid test coverage targets except as a thought exercise. -NOTE: It is still helpful to measure test coverage for the purpose of these conversations, and there is nothing wrong with saying that test coverage should not trend downward without a very good reason. +> [!NOTE] +> It is still helpful to measure test coverage for the purpose of these conversations, and there is nothing wrong with saying that test coverage should not trend downward without a very good reason. 
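The AAA layout and the `Assert.Multiple` behavior described in the patches above can be sketched as a short NUnit test. The `Order` class here is a hypothetical stand-in so the sketch is self-contained; it is not part of the guide's sample code:

```csharp
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public class OrderTests
{
    [Test]
    public void AddItem_SingleItem_UpdatesCountAndTotal()
    {
        // Arrange: set up the situation under test
        var order = new Order();

        // Act: one action per test
        order.AddItem("apple", 0.50m);

        // Assert: Assert.Multiple evaluates every assertion and reports all
        // failures together, rather than stopping at the first failed one
        Assert.Multiple(() =>
        {
            Assert.That(order.ItemCount, Is.EqualTo(1));
            Assert.That(order.Total, Is.EqualTo(0.50m));
        });
    }
}

// Hypothetical stand-in class, included only so the sketch compiles
public class Order
{
    private readonly List<decimal> _prices = new List<decimal>();
    public void AddItem(string name, decimal price) => _prices.Add(price);
    public int ItemCount => _prices.Count;
    public decimal Total => _prices.Sum();
}
```

The two assertions form one _logical_ assertion -- both describe the effect of adding a single item -- which is exactly the situation where grouping them under `Assert.Multiple` makes sense.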
From f79b085a74189bcdf249bac527817a74b1278633 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 22:26:06 -0500 Subject: [PATCH 13/22] table formatting --- docs/articles/book/getting-started/index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/articles/book/getting-started/index.md b/docs/articles/book/getting-started/index.md index 0dd118bc5..1bf8b9834 100644 --- a/docs/articles/book/getting-started/index.md +++ b/docs/articles/book/getting-started/index.md @@ -17,10 +17,10 @@ Before jumping in, we think a little about the problem space and some things we | Celsius Temperature | Fahrenheit Temperature | | ------------------- | ---------------------- | -| 0 | 32 | -| 100 | 212 | -| 37 | 98.6 | -| -40 | -40 | +| 0 | 32 | +| 100 | 212 | +| 37 | 98.6 | +| -40 | -40 | ## Creating the Project Structure From 7d1d9841fa0d8ff389fda0659174668900c010d8 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 22:34:08 -0500 Subject: [PATCH 14/22] minor wording --- docs/articles/book/concepts/TypesOfTests.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/articles/book/concepts/TypesOfTests.md b/docs/articles/book/concepts/TypesOfTests.md index d912d5e21..be20ab016 100644 --- a/docs/articles/book/concepts/TypesOfTests.md +++ b/docs/articles/book/concepts/TypesOfTests.md @@ -10,7 +10,7 @@ With that said, we do recommend that you try to use each test type correctly as ## Unit Tests -These are typically meant as the "lowest level" of automated tests. They aim to test a specific, isolated class. Any dependencies that class has on other classes would be faked in unit tests, so that they can execute quickly and with a clear understanding of how the class will behave. There will be more on this later when we discuss [test doubles](TODO). +These are typically meant as the "lowest level" of automated tests. They aim to test a specific, isolated class. 
Any dependencies that class has on other classes would be faked in unit tests, so that they can execute quickly and with a clear understanding of how the class will behave. There will be more on this later when we discuss [test doubles and mocks](TODO). ## Integration Tests From d3a4ef8a2d0726f695dd086f356ef3ed337f72c8 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 03:41:38 +0000 Subject: [PATCH 15/22] finish a sentence --- docs/articles/book/Index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/articles/book/Index.md b/docs/articles/book/Index.md index 74443e6c5..a7f4208a9 100644 --- a/docs/articles/book/Index.md +++ b/docs/articles/book/Index.md @@ -20,7 +20,7 @@ Similarly, we're not trying to "sell" you on TDD. We find value in it in many ca ### What Tech Stack Are You Using? -We're writing this primarily from the perspective of .NET Core and onward, because with .NET 5 this is the path forward that the .NET team has chosen for the technology. With that said, we'll absolutely augment this guide with tips and explainers for those who are on the classic .NET Framework, and if any of what we say doesn't work for you, +We're writing this primarily from the perspective of .NET Core and onward, because with .NET 5 this is the path forward that the .NET team has chosen for the technology. With that said, we'll absolutely augment this guide with tips and explainers for those who are on the classic .NET Framework, and if any of what we say doesn't work for you, let us know! ## This is a Living Thing. Have Feedback or Improvements? From bd9825fa2b7a663ff2358b5b9f9feb16c1f3df8a Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 03:45:56 +0000 Subject: [PATCH 16/22] Note that this doesn't replace the docs. 
--- docs/articles/book/Index.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/docs/articles/book/Index.md b/docs/articles/book/Index.md index a7f4208a9..bba13697f 100644 --- a/docs/articles/book/Index.md +++ b/docs/articles/book/Index.md @@ -22,6 +22,10 @@ Similarly, we're not trying to "sell" you on TDD. We find value in it in many ca We're writing this primarily from the perspective of .NET Core and onward, because with .NET 5 this is the path forward that the .NET team has chosen for the technology. With that said, we'll absolutely augment this guide with tips and explainers for those who are on the classic .NET Framework, and if any of what we say doesn't work for you, let us know! +### This Guide Isn't Intended as a Docs Replacement + +This guide is going to delve into on-boarding, concepts, and thoughts on how to approach automated testing in general. But while we're going to give examples of syntax & features, we're not going to cover _every_ bit of syntax & features. If you'd like more on a certain topic, absolutely suggest it, but please try to make sure that it would bring something unique to the guide. + ## This is a Living Thing. Have Feedback or Improvements? No improvement to this will happen without you. If you have a question, chances are someone else will too -- please ask! If you have an improvement, we'd love to hear about it. [Create an issue in the docs repository](https://www.notion.so/seankilleen/TBD) to start a conversation. 
From c1a990909b82f9fc6759e0e5f9e173ee7e2b1e3b Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 04:03:33 +0000 Subject: [PATCH 17/22] outline including some blank placeholder docs --- docs/articles/book/getting-started/assertions-tour.md | 1 + .../book/getting-started/controlling-test-state.md | 1 + .../book/getting-started/test-cases-and-data.md | 1 + .../book/getting-started/test-doubles-and-mocks.md | 1 + docs/articles/book/getting-started/toc.yml | 11 ++++++++++- 5 files changed, 14 insertions(+), 1 deletion(-) create mode 100644 docs/articles/book/getting-started/assertions-tour.md create mode 100644 docs/articles/book/getting-started/controlling-test-state.md create mode 100644 docs/articles/book/getting-started/test-cases-and-data.md create mode 100644 docs/articles/book/getting-started/test-doubles-and-mocks.md diff --git a/docs/articles/book/getting-started/assertions-tour.md b/docs/articles/book/getting-started/assertions-tour.md new file mode 100644 index 000000000..b2296a9cd --- /dev/null +++ b/docs/articles/book/getting-started/assertions-tour.md @@ -0,0 +1 @@ +# A Quick Tour of Assertions diff --git a/docs/articles/book/getting-started/controlling-test-state.md b/docs/articles/book/getting-started/controlling-test-state.md new file mode 100644 index 000000000..1ec338957 --- /dev/null +++ b/docs/articles/book/getting-started/controlling-test-state.md @@ -0,0 +1 @@ +# Controlling Test State diff --git a/docs/articles/book/getting-started/test-cases-and-data.md b/docs/articles/book/getting-started/test-cases-and-data.md new file mode 100644 index 000000000..32760b3e4 --- /dev/null +++ b/docs/articles/book/getting-started/test-cases-and-data.md @@ -0,0 +1 @@ +# Test Cases and Data diff --git a/docs/articles/book/getting-started/test-doubles-and-mocks.md b/docs/articles/book/getting-started/test-doubles-and-mocks.md new file mode 100644 index 000000000..bc5c4f746 --- /dev/null +++ 
b/docs/articles/book/getting-started/test-doubles-and-mocks.md @@ -0,0 +1 @@ +# Test Doubles and Mocks diff --git a/docs/articles/book/getting-started/toc.yml b/docs/articles/book/getting-started/toc.yml index 68055c7a8..bbf95bab7 100644 --- a/docs/articles/book/getting-started/toc.yml +++ b/docs/articles/book/getting-started/toc.yml @@ -1,2 +1,11 @@ - name: "Beginning Our NUnit TDD Journey" - href: index.md \ No newline at end of file + href: index.md +- name: "A Quick Tour of Assertions" + href: assertions-tour.md +- name: "Controlling Test State" + href: controlling-test-state.md +- name: "Test Cases and Data" + href: test-cases-and-data.md +- name: "Test Doubles and Mocks" + href: test-doubles-and-mocks.md + \ No newline at end of file From 811ed625a11aaf67f1623ca5ac56809658fd3b58 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 04:06:18 +0000 Subject: [PATCH 18/22] minor updates --- docs/articles/book/concepts/TestingConcepts.md | 3 +-- docs/articles/book/concepts/TheWhy.md | 2 +- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/docs/articles/book/concepts/TestingConcepts.md b/docs/articles/book/concepts/TestingConcepts.md index d62f4f0a9..0c8d0f30b 100644 --- a/docs/articles/book/concepts/TestingConcepts.md +++ b/docs/articles/book/concepts/TestingConcepts.md @@ -20,8 +20,7 @@ Sometimes called the `AAA` approach, this refers to the way we might write a giv Sometimes you'll see these actual statements in comments such as `// Arrange`, `// Act`, and `// Assert` in a given test method. This can be helpful for some folks as a mental marker. While we would never begrudge anyone this style, typically these comments can be removed and the different sections can be separated by a blank line. > [!NOTE] -> This guide will use NUnit's `Assert.That` syntax because it comes out of the box. 
But, you should know that there are several libraries that can be used with NUnit to craft assertions and return helpful messages upon failure, such as [FluentAssertions](https://fluentassertions.com/) or [Shouldly](https://docs.shouldly.org/) - +> This guide will use NUnit's `Assert.That` syntax because it comes out of the box. But, you should know that there are several libraries that can be used with NUnit to craft assertions and return helpful messages upon failure, such as [FluentAssertions](https://fluentassertions.com/) or [Shouldly](https://docs.shouldly.org/). ## Red / Green / Refactor diff --git a/docs/articles/book/concepts/TheWhy.md b/docs/articles/book/concepts/TheWhy.md index 7d28ff028..7e82bba98 100644 --- a/docs/articles/book/concepts/TheWhy.md +++ b/docs/articles/book/concepts/TheWhy.md @@ -29,7 +29,7 @@ NUnit is a testing framework for .NET. Like other frameworks -- xUnit, MSTest -- * Ways to define tests * A runner to run the tests (though you can hook into the Visual Studio test runner, the ReSharper test runner, NCrunch, or any other runner of your choice) -* An assertion library to use during testing (though you can substitute for FluentAssertions or the library of your choice) +* An assertion library to use during testing (though you can substitute for [FluentAssertions](https://fluentassertions.com/) or the library of your choice) We believe the concepts we'll discuss here will be able to be applied to any major test framework in most languages. If you couple the concepts in this guide with some documentation in any major test framework, we hope you'll be able to get where you need to be. 
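The pieces the patch above lists -- ways to define tests, a runner, and an assertion library -- all show up in even a tiny NUnit test. A minimal sketch, using illustrative temperature-conversion logic rather than the guide's actual sample code:

```csharp
using NUnit.Framework;

public class TemperatureTests
{
    // [Test] marks a test that any runner (dotnet test, Visual Studio,
    // ReSharper, NCrunch, ...) can discover and execute
    [Test]
    public void FreezingPoint_IsThirtyTwoFahrenheit()
    {
        // The built-in Assert.That constraint syntax
        Assert.That(CelsiusToFahrenheit(0), Is.EqualTo(32));
    }

    // [TestCase] defines several data-driven runs of the same test method
    [TestCase(100, 212)]
    [TestCase(-40, -40)]
    public void KnownConversions_ReturnExpectedValues(double celsius, double expectedFahrenheit)
    {
        Assert.That(CelsiusToFahrenheit(celsius), Is.EqualTo(expectedFahrenheit));
    }

    // Illustrative conversion logic, standing in for real production code
    private static double CelsiusToFahrenheit(double celsius) => celsius * 9 / 5 + 32;
}
```

Swapping the `Assert.That` lines for a library like FluentAssertions would change only the assertion step; the definitions and the runner hookup stay the same.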
From 638ad53ad6cf922cca98e2b7b9520b17ab85f711 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 23:13:23 -0500 Subject: [PATCH 19/22] Add GettingStarted project for the upcoming general samples --- docs/snippets/Snippets.sln | 19 +++++++++----- .../GettingStarted.Tests.csproj | 25 +++++++++++++++++++ .../book/GettingStarted.Tests/UnitTest1.cs | 6 +++++ .../book/GettingStarted.Tests/Usings.cs | 1 + 4 files changed, 45 insertions(+), 6 deletions(-) create mode 100644 docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj create mode 100644 docs/snippets/book/GettingStarted.Tests/UnitTest1.cs create mode 100644 docs/snippets/book/GettingStarted.Tests/Usings.cs diff --git a/docs/snippets/Snippets.sln b/docs/snippets/Snippets.sln index 8ca03d3e6..40277aa7e 100644 --- a/docs/snippets/Snippets.sln +++ b/docs/snippets/Snippets.sln @@ -3,24 +3,23 @@ Microsoft Visual Studio Solution File, Format Version 12.00 # Visual Studio Version 17 VisualStudioVersion = 17.0.31903.59 MinimumVisualStudioVersion = 10.0.40219.1 -Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Snippets.NUnit", "Snippets.NUnit\Snippets.NUnit.csproj", "{759AE765-B66A-4585-886C-4A6F35143C92}" +Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Snippets.NUnit", "Snippets.NUnit\Snippets.NUnit.csproj", "{759AE765-B66A-4585-886C-4A6F35143C92}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "book", "book", "{D97B13D1-BCB2-4073-BB28-B66B9875A1B8}" EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "getting-started", "getting-started", "{3B15F67C-E96E-4DA9-A60A-F25104D96C7E}" EndProject -Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Converter", "book\getting-started\Converter\Converter.csproj", "{85F83B5F-89F1-48E1-87DF-11A007532D95}" +Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Converter", "book\getting-started\Converter\Converter.csproj", "{85F83B5F-89F1-48E1-87DF-11A007532D95}" EndProject 
-Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Converter.Tests", "book\getting-started\Converter.Tests\Converter.Tests.csproj", "{88A94E93-0624-400C-BFE3-C289FC155581}" +Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Converter.Tests", "book\getting-started\Converter.Tests\Converter.Tests.csproj", "{88A94E93-0624-400C-BFE3-C289FC155581}" +EndProject +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "GettingStarted.Tests", "book\GettingStarted.Tests\GettingStarted.Tests.csproj", "{F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Any CPU = Debug|Any CPU Release|Any CPU = Release|Any CPU EndGlobalSection - GlobalSection(SolutionProperties) = preSolution - HideSolutionNode = FALSE - EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {759AE765-B66A-4585-886C-4A6F35143C92}.Debug|Any CPU.ActiveCfg = Debug|Any CPU {759AE765-B66A-4585-886C-4A6F35143C92}.Debug|Any CPU.Build.0 = Debug|Any CPU @@ -34,10 +33,18 @@ Global {88A94E93-0624-400C-BFE3-C289FC155581}.Debug|Any CPU.Build.0 = Debug|Any CPU {88A94E93-0624-400C-BFE3-C289FC155581}.Release|Any CPU.ActiveCfg = Release|Any CPU {88A94E93-0624-400C-BFE3-C289FC155581}.Release|Any CPU.Build.0 = Release|Any CPU + {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}.Debug|Any CPU.ActiveCfg = Debug|Any CPU + {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}.Debug|Any CPU.Build.0 = Debug|Any CPU + {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}.Release|Any CPU.ActiveCfg = Release|Any CPU + {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}.Release|Any CPU.Build.0 = Release|Any CPU + EndGlobalSection + GlobalSection(SolutionProperties) = preSolution + HideSolutionNode = FALSE EndGlobalSection GlobalSection(NestedProjects) = preSolution {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} = {D97B13D1-BCB2-4073-BB28-B66B9875A1B8} {85F83B5F-89F1-48E1-87DF-11A007532D95} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} {88A94E93-0624-400C-BFE3-C289FC155581} = 
{3B15F67C-E96E-4DA9-A60A-F25104D96C7E} + {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} EndGlobalSection EndGlobal diff --git a/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj b/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj new file mode 100644 index 000000000..71ee8c728 --- /dev/null +++ b/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj @@ -0,0 +1,25 @@ + + + + net6.0 + enable + enable + + false + + + + + + + + all + runtime; build; native; contentfiles; analyzers; buildtransitive + + + all + runtime; build; native; contentfiles; analyzers; buildtransitive + + + + diff --git a/docs/snippets/book/GettingStarted.Tests/UnitTest1.cs b/docs/snippets/book/GettingStarted.Tests/UnitTest1.cs new file mode 100644 index 000000000..849fba495 --- /dev/null +++ b/docs/snippets/book/GettingStarted.Tests/UnitTest1.cs @@ -0,0 +1,6 @@ +namespace GettingStarted.Tests +{ + public class Tests + { + } +} \ No newline at end of file diff --git a/docs/snippets/book/GettingStarted.Tests/Usings.cs b/docs/snippets/book/GettingStarted.Tests/Usings.cs new file mode 100644 index 000000000..cefced496 --- /dev/null +++ b/docs/snippets/book/GettingStarted.Tests/Usings.cs @@ -0,0 +1 @@ +global using NUnit.Framework; \ No newline at end of file From a18e314ec705fc9820d42084efb2431957b18ecf Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 23:16:05 -0500 Subject: [PATCH 20/22] update packages --- docs/snippets/Snippets.NUnit/Snippets.NUnit.csproj | 2 +- .../book/GettingStarted.Tests/GettingStarted.Tests.csproj | 6 +++--- .../getting-started/Converter.Tests/Converter.Tests.csproj | 6 +++--- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/snippets/Snippets.NUnit/Snippets.NUnit.csproj b/docs/snippets/Snippets.NUnit/Snippets.NUnit.csproj index 4752fa1b2..8958a58fb 100644 --- a/docs/snippets/Snippets.NUnit/Snippets.NUnit.csproj +++ 
b/docs/snippets/Snippets.NUnit/Snippets.NUnit.csproj @@ -12,7 +12,7 @@ - + all runtime; build; native; contentfiles; analyzers; buildtransitive diff --git a/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj b/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj index 71ee8c728..86ad92a87 100644 --- a/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj +++ b/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj @@ -9,10 +9,10 @@ - + - - + + all runtime; build; native; contentfiles; analyzers; buildtransitive diff --git a/docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj b/docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj index 4aa3d9151..b1b94617b 100644 --- a/docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj +++ b/docs/snippets/book/getting-started/Converter.Tests/Converter.Tests.csproj @@ -9,10 +9,10 @@ - + - - + + all runtime; build; native; contentfiles; analyzers; buildtransitive From d429c60b7a0fe7405ee83f96415eed1f8fdba561 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Fri, 10 Mar 2023 23:19:34 -0500 Subject: [PATCH 21/22] Add a classlib project for getting started --- docs/snippets/Snippets.sln | 10 ++++++++++ .../GettingStarted.Tests/GettingStarted.Tests.csproj | 8 ++++++++ .../book/getting-started/GettingStarted/Class1.cs | 7 +++++++ .../GettingStarted/GettingStarted.csproj | 9 +++++++++ 4 files changed, 34 insertions(+) create mode 100644 docs/snippets/book/getting-started/GettingStarted/Class1.cs create mode 100644 docs/snippets/book/getting-started/GettingStarted/GettingStarted.csproj diff --git a/docs/snippets/Snippets.sln b/docs/snippets/Snippets.sln index 40277aa7e..eedf9e6c1 100644 --- a/docs/snippets/Snippets.sln +++ b/docs/snippets/Snippets.sln @@ -15,6 +15,8 @@ Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Converter.Tests", "book\get EndProject Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = 
"GettingStarted.Tests", "book\GettingStarted.Tests\GettingStarted.Tests.csproj", "{F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}" EndProject +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "GettingStarted", "book\getting-started\GettingStarted\GettingStarted.csproj", "{6F59F44D-AAA7-44BB-9361-939F770E5F2A}" +EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Any CPU = Debug|Any CPU @@ -37,6 +39,10 @@ Global {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}.Debug|Any CPU.Build.0 = Debug|Any CPU {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}.Release|Any CPU.ActiveCfg = Release|Any CPU {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF}.Release|Any CPU.Build.0 = Release|Any CPU + {6F59F44D-AAA7-44BB-9361-939F770E5F2A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU + {6F59F44D-AAA7-44BB-9361-939F770E5F2A}.Debug|Any CPU.Build.0 = Debug|Any CPU + {6F59F44D-AAA7-44BB-9361-939F770E5F2A}.Release|Any CPU.ActiveCfg = Release|Any CPU + {6F59F44D-AAA7-44BB-9361-939F770E5F2A}.Release|Any CPU.Build.0 = Release|Any CPU EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE @@ -46,5 +52,9 @@ Global {85F83B5F-89F1-48E1-87DF-11A007532D95} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} {88A94E93-0624-400C-BFE3-C289FC155581} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} {F43FF87B-2366-4DA9-B6FD-AA80CBFB95FF} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} + {6F59F44D-AAA7-44BB-9361-939F770E5F2A} = {3B15F67C-E96E-4DA9-A60A-F25104D96C7E} + EndGlobalSection + GlobalSection(ExtensibilityGlobals) = postSolution + SolutionGuid = {53A75FBC-BDEE-4A17-8DF7-8EF34ADAA8A0} EndGlobalSection EndGlobal diff --git a/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj b/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj index 86ad92a87..d4cd4cc3d 100644 --- a/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj +++ b/docs/snippets/book/GettingStarted.Tests/GettingStarted.Tests.csproj @@ -22,4 +22,12 @@ + + + + + + + + diff --git 
a/docs/snippets/book/getting-started/GettingStarted/Class1.cs b/docs/snippets/book/getting-started/GettingStarted/Class1.cs new file mode 100644 index 000000000..3e1e07c54 --- /dev/null +++ b/docs/snippets/book/getting-started/GettingStarted/Class1.cs @@ -0,0 +1,7 @@ +namespace GettingStarted +{ + public class Class1 + { + + } +} \ No newline at end of file diff --git a/docs/snippets/book/getting-started/GettingStarted/GettingStarted.csproj b/docs/snippets/book/getting-started/GettingStarted/GettingStarted.csproj new file mode 100644 index 000000000..132c02c59 --- /dev/null +++ b/docs/snippets/book/getting-started/GettingStarted/GettingStarted.csproj @@ -0,0 +1,9 @@ + + + + net6.0 + enable + enable + + + From ba23d44abcf5686f203d5494e8ce18098eae7bf7 Mon Sep 17 00:00:00 2001 From: Sean Killeen Date: Sat, 11 Mar 2023 00:16:54 -0500 Subject: [PATCH 22/22] some content & next thoughts --- .../book/getting-started/assertions-tour.md | 51 +++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/docs/articles/book/getting-started/assertions-tour.md b/docs/articles/book/getting-started/assertions-tour.md index b2296a9cd..a49818dc3 100644 --- a/docs/articles/book/getting-started/assertions-tour.md +++ b/docs/articles/book/getting-started/assertions-tour.md @@ -1 +1,52 @@ # A Quick Tour of Assertions + +In this guide, we'll begin a new sample application in order to work through some TDD and explore some of NUnit's assertions in tests. + +We're going to build on this sample in the coming guides to explore NUnit concepts. + +## Creating the Project Structure for a "Checkout Machine" + +* Create a new, empty folder somewhere that you want to practice this exercise +* Open a command prompt and go to that folder. 
+ Run the following commands: + +```cmd +dotnet new sln --name Supermarket +dotnet new classlib --name Checkout +dotnet new nunit --name Checkout.Tests +dotnet sln add .\Checkout\ +dotnet sln add .\Checkout.Tests\ +cd .\Checkout.Tests\ +dotnet add reference ..\Checkout\Checkout.csproj +``` + +Let's break down what these commands do. + +* Adds a new solution named `Supermarket.sln` +* Adds a new class library called `Checkout.csproj` (this will be our production code project) +* Adds a new test project using the `nunit` template, called `Checkout.Tests.csproj` (this will be our unit test project) +* Adds a reference from the test project to the production code project so that we can see its contents. + +## Creating our First Test + +* In your `Checkout.Tests` project, create a `CheckoutMachineTests.cs` file. +* Replace the code in that file with the below: + +```csharp +public class CheckoutMachineTests +{ + [Test] + public void Total_NoItemsScanned_Returns0(){} +} +``` + +Let's pause for a moment before proceeding. What does this test name tell us? + +We use a standard convention in this case, and with good reason -- the name conveys a lot of information at a glance. This follows the format `[MethodUnderTest]_[Scenario]_[ExpectedResult]`. So from this name, we know our checkout machine will have a method called `Total`, and when no items are scanned, that method should return `0`. + +TODO: Finish Assertion & Prod Code +TODO: Show scanning an item & Prod Code (easiest possible) +TODO: Show scanning a different item & Prod Code (update scan method to include a number of some kind) +TODO: First refactoring - extracting the idea of a SKU + +TODO: idea of TotalItems so we can explore a GreaterThan assertion. Or maybe something like capturing the time a checkout starts and then asserting items were added after the start? \ No newline at end of file
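The TODOs in the patch above stop before the assertion and production code are written. One way the first red/green cycle might play out -- the `CheckoutMachine` shape here is an assumption, since the guide hasn't defined it yet:

```csharp
using NUnit.Framework;

namespace Checkout.Tests
{
    public class CheckoutMachineTests
    {
        [Test]
        public void Total_NoItemsScanned_Returns0()
        {
            // Arrange: a hypothetical CheckoutMachine class
            var machine = new CheckoutMachine();

            // Act + Assert: with no items scanned, the total should be zero
            Assert.That(machine.Total(), Is.EqualTo(0));
        }
    }

    // The simplest production code that turns the test green. In the real
    // exercise this would live in the Checkout class library, not the test file.
    public class CheckoutMachine
    {
        public decimal Total() => 0;
    }
}
```

With something like this in place, `dotnet test` run from the solution folder would execute the test via the NUnit adapter, completing one red/green cycle before the refactoring steps the TODOs describe.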