This is part 4 of a multi-part series:
In Part 1, we discussed setup and best practices and answered the question: “Will AI replace software developers?”.
In Part 2, we designed a solution and created a design document.
In Part 3, we generated a sequence diagram and an OpenAPI specification.
In Part 4, we’ll generate tests.
Recap: URL Shortening Service
We’re writing a solution for the URL Shortener Challenge.
We’ve got a great design.
AI won't replace developers; developers that use AI will replace those who don't.
Test First
I like Test-Driven Development. I usually write tests during coding, interwoven with the implementation.
In this case, I'm generating the tests first and then using a different tool to write the application code, to demonstrate using AI in multiple ways.
Good tests describe the business use cases of our software. So, as with the other prompts, let's use our design document as input.
You and I are software engineers. We will use pair-development. Our goal is to craft the best possible application to satisfy the design.
We will follow this process:
1. You will write tests that fit the design
2. We'll discuss the tests until they satisfy the design
{complete design document}
It’s important to note a critical part of the design document: at the end is a style guide, and it drives how ChatGPT will generate the tests.
## Style Guide
Our organization follows these principles for development.
- DRY
- SOLID
- TDD
Our company follows these coding styles and best practices:
- Use Python `black` formatting.
- Use Python type hints.
- Pytest for testing.
- Testing:
  - All business use cases should be covered in the tests.
  - Focus testing on business case functional testing to test at API boundaries such as REST APIs -- not class-level tests.
  - For fake data use "example" in the name. "test" must only be used for the test names themselves.
  - If monkey-patching or other dependency injection is necessary, only do so in pytest fixtures -- no dependency injection in tests themselves.
  - Use testing analogs for external dependencies like databases. Do not use test analogs for our own code.
  - Testing analogs should function the same as the libraries and services they mimic. Only implement as much analog functionality as needed for the test.
  - For test analogs, use "fake" in the name.
ChatGPT takes this into account when generating the tests:
```python
import pytest
from datetime import timedelta
from freezegun import freeze_time
from urllib.parse import urlparse

from your_app import create_app
from your_app.shortener import url_hash, generate_short_code


# create a fixture for test client
@pytest.fixture
def client():
    app = create_app()
    with app.test_client() as client:
        yield client


def test_shorten_valid_url(client):
    url = "https://www.very-long-url.com/"
    response = client.post('/shorten', json={'url': url})
    assert response.status_code == 200
    assert 'url' in response.get_json()
    short_code = urlparse(response.get_json()['url']).path.strip('/')
    assert short_code == generate_short_code(url_hash(url))


def test_redirect_to_long_url(client):
    url = "https://www.very-long-url.com/"
    # first we need to generate the short URL
    response = client.post('/shorten', json={'url': url})
    assert response.status_code == 200
    short_code = urlparse(response.get_json()['url']).path.strip('/')
    # then we use the short URL
    response = client.get('/' + short_code)
    assert response.status_code == 302
    assert response.headers['Location'] == url


@freeze_time("2023-06-10")
def test_expired_short_url(client):
    url = "https://www.very-long-url.com/"
    # first we generate the short URL
    response = client.post('/shorten', json={'url': url})
    assert response.status_code == 200
    short_code = urlparse(response.get_json()['url']).path.strip('/')
    # then we time travel 24 hours into the future
    with freeze_time("2023-06-11"):
        response = client.get('/' + short_code)
        assert response.status_code == 404

# [truncated]
```
This is close. It covered most of the use cases in the first pass and styled the code based on the style guide.
Like with most of the other design and development phases with ChatGPT, this will be a conversation to generate optimal results.
- generate_short_code is not idempotent so we can not use it this way in our tests. To verify the URL is shortened, validate the output URL matches the requirements, then call it to verify we are redirected.
- Call the app "shorty"
- Remember to use black formatting
- Do we have any external dependencies we need to fake?
You're right about the idempotence, my mistake. Instead, we'll validate the shortened URL by checking its length and calling it to verify the redirection.
As for external dependencies, we are using Redis as our backend storage. When testing, we don't want to rely on an actual Redis instance as it might not be available or it might have unexpected data. Therefore, we'll use a fake Redis server for our tests. Python has a library for that called `fakeredis`.

Now, let's modify our tests to use "shorty" as the app name and to use `fakeredis`. We'll also format the tests using `black`:

```python
from shorty import create_app
from fakeredis import FakeStrictRedis


@pytest.fixture
def client():
    app = create_app()
    app.config["REDIS"] = FakeStrictRedis()
    with app.test_client() as client:
        yield client
```
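The transcript doesn't show the reworked test itself, but a revised `test_shorten_valid_url` along the lines discussed might look like the sketch below. It reuses the `client` fixture and `urlparse` import from the earlier listing; the 7-character short-code length is my assumption for illustration, not something taken from the design document.

```python
def test_shorten_valid_url(client):
    url = "https://www.very-long-url.com/"
    response = client.post("/shorten", json={"url": url})
    assert response.status_code == 200
    assert "url" in response.get_json()
    short_code = urlparse(response.get_json()["url"]).path.strip("/")
    # validate the shape of the short code instead of recomputing it
    assert len(short_code) == 7  # assumed code length, for illustration only
    # follow the short URL and confirm it redirects to the original
    response = client.get("/" + short_code)
    assert response.status_code == 302
    assert response.headers["Location"] == url
```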
More revisions, please.
- test_redirect_to_long_url is now redundant.
- Are there any other use cases we should cover?
You're correct. The `test_shorten_valid_url` test already covers the redirection to the long URL, so `test_redirect_to_long_url` is indeed redundant.

As for other use cases, we could add a test for the case where the user tries to access a non-existent short URL. This could happen if a user makes a typo when entering the URL manually, or if there's an issue with the code generating or storing the short URLs.
Here's a test for this use case:
```python
def test_nonexistent_short_url(client):
    response = client.get("/nonexistent")
    assert response.status_code == 404
    assert "error" in response.get_data(as_text=True)
```
Additionally, we should consider adding a test to ensure that the shortened URLs are unique for different long URLs within the 24-hour period. Here is a test case for that:
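The transcript omits the code ChatGPT generated here, but a uniqueness test along those lines might look like the sketch below. The URLs are example data and the assertions are my reconstruction, reusing the `client` fixture and `urlparse` import from above.

```python
def test_unique_short_urls_for_different_long_urls(client):
    example_url_one = "https://www.example-first-long-url.com/"
    example_url_two = "https://www.example-second-long-url.com/"
    # shorten two different long URLs
    response_one = client.post("/shorten", json={"url": example_url_one})
    response_two = client.post("/shorten", json={"url": example_url_two})
    assert response_one.status_code == 200
    assert response_two.status_code == 200
    # the resulting short codes should differ
    code_one = urlparse(response_one.get_json()["url"]).path.strip("/")
    code_two = urlparse(response_two.get_json()["url"]).path.strip("/")
    assert code_one != code_two
```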
We continue:
looks good. One revision, for test_nonexistent_short_url, validate the error string contains a human friendly message. Generate the new test file.
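For that revision, the updated assertion might look roughly like this; the JSON error body and the exact wording are assumptions on my part, not ChatGPT's output.

```python
def test_nonexistent_short_url(client):
    response = client.get("/nonexistent")
    assert response.status_code == 404
    body = response.get_json()
    assert "error" in body
    # the exact wording is assumed; the point is a human-friendly message
    assert "not found" in body["error"].lower()
```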
Then ChatGPT forgets what it’s doing:
Some test cases are now missing such as expired URL. What happened to fakeredis?
My apologies for the confusion. Let's reintroduce the missing test case for an expired URL and include the fakeredis as a fixture for testing Redis-related functionality. Below are the changes:
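The changes themselves aren't shown in the transcript; here is a sketch of what the reintroduced pieces would look like, assuming the app picks up the fake Redis client via `app.config["REDIS"]` as in the earlier fixture.

```python
@pytest.fixture
def client():
    app = create_app()
    # swap the real Redis backend for an in-memory fake (assumed config key)
    app.config["REDIS"] = FakeStrictRedis()
    with app.test_client() as client:
        yield client


@freeze_time("2023-06-10")
def test_expired_short_url(client):
    url = "https://www.very-long-url.com/"
    response = client.post("/shorten", json={"url": url})
    assert response.status_code == 200
    short_code = urlparse(response.get_json()["url"]).path.strip("/")
    # 24 hours later the short URL should have expired
    with freeze_time("2023-06-11"):
        response = client.get("/" + short_code)
        assert response.status_code == 404
```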
At least we can get it back on track easily.
After a few iterations, we have a complete set of tests.
ChatGPT has understood our testing requirements. Had I not specified them, it would have generated less useful unit tests, used poor naming conventions, and littered the test code with dependency injection.
The final results needed a few small tweaks to get the boilerplate setup done correctly. The tests themselves were spot on.
You can view the completed test file.
Now we can take these tests, drop them into our IDE and write code to satisfy them.
Next
Head to Part 5 to continue the journey.