Add track Contributing Guide and Generator documentation (#1900)

* Add Track Contributing Guide

* Add test generator documentation

* fix heading format

* fix table HTML

* incorporate review suggestions

Suggestions: https://github.com/exercism/python/pull/1900#pullrequestreview-272542702

Co-authored-by: Michael Morehouse <640167+yawpitch@users.noreply.github.com>
Authored by Corey McCandless on 2019-08-08 12:49:02 -04:00; committed by Michael Morehouse
parent ede3016eae
commit dd698d18ef
4 changed files with 309 additions and 4 deletions

CONTRIBUTING.md (new file, 164 lines added)

@@ -0,0 +1,164 @@
# Contributing Guide
This document supplements the [Exercism contributing guide]; all contributors should read that document before proceeding.
## Table of Contents
- [Contributing Guide](#contributing-guide)
  * [Architecture](#architecture)
  * [Implementing an exercise](#implementing-an-exercise)
    + [Exercise structure](#exercise-structure)
    + [Generating Exercise READMEs](#generating-exercise-readmes)
      - [Requirements](#requirements)
      - [Generating all READMEs](#generating-all-readmes)
      - [Generating a single README](#generating-a-single-readme)
    + [Implementing tests](#implementing-tests)
    + [Example solutions](#example-solutions)
    + [config.json](#configjson)
  * [Implementing Track-specific Exercises](#implementing-track-specific-exercises)
  * [Pull Request Tips](#pull-request-tips)
## Architecture
Exercism tracks inherit exercise definitions from the [problem-specifications] repository in the form of description files
(from which exercise READMEs are [generated](#generating-exercise-readmes)).
## Implementing an exercise
### Exercise structure
```Bash
exercises/[EXERCISE]/
├── [EXERCISE].py
├── [EXERCISE]_test.py
├── example.py
├── .meta
│ ├── template.j2
│ ├── additional_tests.json
│ └── hints.md
└── README.md
```
Files:
| File | Description | Source |
|:--- |:--- |:--- |
| [[EXERCISE].py](exercises/two-fer/two_fer.py) | Solution stub | Manually created by the implementer |
| [[EXERCISE]_test.py](exercises/two-fer/two_fer_test.py) | Exercise test suite | Automatically generated if `.meta/template.j2` is present, otherwise manually created by the implementer |
| [example.py](exercises/two-fer/example.py) | Example solution used to automatically verify the `[EXERCISE]_test.py` suite | Manually created by the implementer |
| [.meta/template.j2](exercises/two-fer/.meta/template.j2) | Test generation template; if present, used to automatically generate `[EXERCISE]_test.py` (see the [generator documentation](docs/GENERATOR.md)) | Manually created by the implementer |
| [.meta/additional_tests.json](exercises/word-count/.meta/additional_tests.json) | Defines additional track-specific test cases; if `.meta/template.j2` is also present, these tests will be incorporated into the automatically generated `[EXERCISE]_test.py` | Manually created by the implementer |
| [.meta/hints.md](exercises/high-scores/.meta/hints.md) | Contains track-specific hints that are automatically included in the generated `README.md` file | Manually created by the implementer |
| [README.md](exercises/two-fer/README.md) | Exercise README | [Generated by the `configlet` tool](#generating-exercise-readmes) |
### Generating Exercise READMEs
#### Requirements
- A local clone of the [problem-specifications] repository.
- [configlet]: may be obtained either by
  - (**Recommended**) following the installation instructions at the above link
  - running `bin/fetch-configlet` (the `configlet` binary will be downloaded to the repository's `bin/`)
#### Generating all READMEs
```Bash
configlet generate <path/to/track> --spec-path <path/to/problem-specifications>
```
#### Generating a single README
```Bash
configlet generate <path/to/track> --spec-path <path/to/problem-specifications> --only example-exercise
```
### Implementing tests
If an unimplemented exercise has a `canonical-data.json` file in the [problem-specifications] repository, a generation template must be created. See the [test generator documentation](docs/GENERATOR.md) for more information.
If an unimplemented exercise does not have a `canonical-data.json` file, the test file must be written manually (use existing test files for examples).
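For reference, a hand-written test file is a plain `unittest` suite. A minimal sketch for the `leap` exercise (function name assumed from the track's usual stub conventions) might look like:
```Python
# Minimal hand-written test file sketch; derive real cases and names
# from the exercise's description and existing test files in the track.
import unittest

from leap import leap_year


class LeapTest(unittest.TestCase):
    def test_year_not_divisible_by_4_is_common_year(self):
        self.assertIs(leap_year(2015), False)

    def test_year_divisible_by_4_not_divisible_by_100_is_leap_year(self):
        self.assertIs(leap_year(1996), True)


if __name__ == '__main__':
    unittest.main()
```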
### Example solutions
Example solution files serve two purposes:
1. Verification of the tests
2. Example implementation for mentor/student reference
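For instance, a minimal `example.py` for `two-fer` might be as simple as the sketch below; an example only needs to pass the test suite while remaining readable for students.
```Python
# Sketch of an example solution for two-fer; clarity matters more than
# cleverness, since mentors and students may read it as a reference.
def two_fer(name="you"):
    return "One for {}, one for me.".format(name)
```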
### config.json
[`config.json`](config.json) is used by the website to determine which exercises to load and in what order. It also contains exercise metadata, such as difficulty, topics, and whether the exercise is a core exercise. New entries should be placed just before the first exercise that is marked `"deprecated": true`:
```JSON
{
"slug": "current-exercise",
"uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
"core": false,
"unlocked_by": null,
"difficulty": 1,
"topics": [
"strings"
]
},
<<< HERE
{
"slug": "old-exercise",
"uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
"core": false,
"unlocked_by": null,
"difficulty": 2,
"topics": null,
"deprecated": true
},
```
Fields:
<table>
<tr>
<th>Field</th>
<th>Description</th>
</tr>
<tr>
<td>slug</td>
<td>Hyphenated lowercase exercise name</td>
</tr>
<tr>
<td>uuid</td>
<td>Generate using <code>configlet uuid</code></td>
</tr>
<tr>
<td>core</td>
<td>Set to <code>false</code>; core exercises are decided by track maintainers</td>
</tr>
<tr>
<td>unlocked_by</td>
<td>Slug for the core exercise that unlocks the new one</td>
</tr>
<tr>
<td>difficulty</td>
<td><code>1</code> through <code>10</code>. Discuss with reviewer if uncertain.</td>
</tr>
<tr>
<td>topics</td>
<td>Array of relevant topics from the <a href="https://github.com/exercism/problem-specifications/blob/master/TOPICS.txt">topics list</a> </td>
</tr>
</table>
## Implementing Track-specific Exercises
This is similar to implementing a canonical exercise that has no `canonical-data.json`, except that the exercise README must also be written manually. Carefully follow the structure of generated exercise READMEs.
## Pull Request Tips
Before committing:
- Run `configlet fmt` and `configlet lint` if [`config.json`](config.json) has been modified
- Run [flake8] to ensure all Python code conforms to style standards
- Run `test/check-exercises.py [EXERCISE]` to check if your test changes function correctly
- If you modified or created a `hints.md` file, [regenerate the README](#generating-exercise-readmes)
- If your changes affect multiple exercises, try to break them up into a separate PR for each exercise.
[configlet]: https://github.com/exercism/configlet
[Exercism contributing guide]: https://github.com/exercism/docs/blob/master/contributing-to-language-tracks/README.md
[problem-specifications]: https://github.com/exercism/problem-specifications
[topics list]: https://github.com/exercism/problem-specifications/blob/master/TOPICS.txt
[flake8]: http://flake8.pycqa.org/

README.md

@@ -8,7 +8,8 @@ Exercism exercises in Python
 ## Contributing Guide
-Please see the [contributing guide](https://github.com/exercism/docs/blob/master/contributing-to-language-tracks/README.md)
+Please see the [Exercism contributing guide](https://github.com/exercism/docs/blob/master/contributing-to-language-tracks/README.md)
+and the [Python track contributing guide](CONTRIBUTING.md)
 ## Working on the Exercises

bin/generate_tests.py

@@ -218,9 +218,23 @@ if __name__ == '__main__':
         )
     )
     parser.add_argument('-v', '--verbose', action='store_true')
-    parser.add_argument('-p', '--spec-path', default=DEFAULT_SPEC_LOCATION)
-    parser.add_argument('--stop-on-failure', action='store_true')
-    parser.add_argument('--check', action='store_true')
+    parser.add_argument(
+        '-p', '--spec-path',
+        default=DEFAULT_SPEC_LOCATION,
+        help=(
+            'path to clone of exercism/problem-specifications '
+            '(default: %(default)s)'
+        )
+    )
+    parser.add_argument(
+        '--stop-on-failure',
+        action='store_true'
+    )
+    parser.add_argument(
+        '--check',
+        action='store_true',
+        help='check if tests are up-to-date, but do not modify test files'
+    )
     opts = parser.parse_args()
     if opts.verbose:
         logger.setLevel(logging.DEBUG)

docs/GENERATOR.md (new file, 126 lines added)

@@ -0,0 +1,126 @@
# Exercism Python Track Test Generator
The Python track uses a generator script and [Jinja2] templates for
creating test files from the canonical data.
## Table of Contents
- [Exercism Python Track Test Generator](#exercism-python-track-test-generator)
  * [Script Usage](#script-usage)
  * [Test Templates](#test-templates)
    + [Conventions](#conventions)
    + [Layout](#layout)
    + [Overriding Imports](#overriding-imports)
    + [Ignoring Properties](#ignoring-properties)
  * [Learning Jinja](#learning-jinja)
  * [Creating a template](#creating-a-template)
## Script Usage
Test generation requires a local copy of the [problem-specifications] repository.
Run `bin/generate_tests.py --help` for usage information.
## Test Templates
Test templates support [Jinja2] syntax, and have the following context
variables available from the canonical data:
- `exercise`: The hyphenated name of the exercise (ex: `two-fer`)
- `version`: The canonical data version (ex: `1.2.0`)
- `cases`: A list of case objects or a list of `cases` lists; for the exact
  structure for the exercise you're working on, consult its
  `canonical-data.json`
- `has_error_case`: Indicates whether any test case expects an
  error to be raised (ex: `False`)
- `additional_cases`: Similar in structure to `cases`, but populated from the exercise's `.meta/additional_tests.json` file if one exists (for an example, see `exercises/word-count/.meta/additional_tests.json`)
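As an illustration, a single entry in `cases` is a mapping whose keys mirror the exercise's `canonical-data.json`; the values below are hypothetical and shown only to convey the shape:
```Python
# Hypothetical shape of one entry in `cases` as seen by a template;
# consult the exercise's canonical-data.json for the real structure.
case = {
    "description": "no name given",
    "property": "twoFer",
    "input": {"name": None},
    "expected": "One for you, one for me.",
}
```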
### Conventions
- General-use macros for highly repeated template structures are defined in `config/generator_macros.j2`.
  - These may be imported with the following:
    `{%- import "generator_macros.j2" as macros with context -%}`
- All test templates should end with `{{ macros.footer() }}`.
- All Python class names should be in CamelCase (ex: `TwoFer`).
  - Convert using `{{ "two-fer" | camel_case }}`
- All Python module and function names should be in snake_case
  (ex: `high_scores`, `personal_best`).
  - Convert using `{{ "personalBest" | to_snake }}`
- Track-specific tests are defined in the optional file `.meta/additional_tests.json`. The JSON object defined in this file must
  have a single key, `cases`, which has the same structure as `cases` in
  `canonical-data.json`.
  - Track-specific tests should be placed after canonical tests in test
    files.
  - Track-specific tests should be marked in the test file with the following comment:
```Python
# Additional tests for this track
```
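Putting those conventions together, a generated test file ends up shaped roughly like this sketch (test names are illustrative, not taken from a real exercise):
```Python
# Sketch of where track-specific tests land in a generated test file.
import unittest


class WordCountTest(unittest.TestCase):
    def test_count_one_word(self):
        ...  # canonical tests come first

    # Additional tests for this track

    def test_handles_unicode_words(self):
        ...  # cases from .meta/additional_tests.json follow the marker


if __name__ == '__main__':
    unittest.main()
```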
### Layout
Most templates will look something like this:
```Jinja2
{%- import "generator_macros.j2" as macros with context -%}
{{ macros.header() }}
class {{ exercise | camel_case }}Test(unittest.TestCase):
    {% for case in cases -%}
    def test_{{ case["description"] | to_snake }}(self):
        value = {{ case["input"]["value"] }}
        expected = {{ case["expected"] }}
        self.assertEqual({{ case["property"] }}(value), expected)
    {% endfor %}
{{ macros.footer() }}
```
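Rendered against a hypothetical single-case exercise, that layout would produce a test file along these lines (sketch; the real header and footer text comes from `generator_macros.j2` and is not reproduced verbatim here):
```Python
# Hypothetical rendered output of the layout template above.
import unittest

from example_exercise import some_property


class ExampleExerciseTest(unittest.TestCase):
    def test_some_description(self):
        value = "input value"
        expected = "expected value"
        self.assertEqual(some_property(value), expected)


if __name__ == '__main__':
    unittest.main()
```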
### Overriding Imports
The names imported in `macros.header()` may be overridden by adding
a list of alternate names to import (ex: `clock`):
```Jinja2
{{ macros.header(["Clock"]) }}
```
### Ignoring Properties
On rare occasion, it may be necessary to filter out properties that
are not tested in this track. The `header` macro also accepts an
`ignore` argument (ex: `high-scores`):
```Jinja2
{{ macros.header(ignore=["scores"]) }}
```
## Learning Jinja
Starting with the [Jinja Documentation] is highly recommended, but a complete reading is not strictly necessary.
Additional Resources:
- [Primer on Jinja Templating]
- [Python Jinja tutorial]
## Creating a template
1. Create `.meta/template.j2` for the exercise you are implementing,
and open it in your editor.
2. Copy and paste the [example layout](#layout) in the template file
and save.
3. Make the appropriate changes to the template file until it produces
valid test code, referencing the exercise's `canonical-data.json` for
input names and case structure.
   - Use the [available macros](config/generator_macros.j2) to avoid rewriting standardized sections.
   - If you are implementing a template for an existing exercise,
     matching the exact structure of the existing test file is not a
     requirement, but minimizing differences will make PR review a much smoother process for everyone involved.
[Jinja2]: https://jinja.pocoo.org/
[Jinja Documentation]: https://jinja.palletsprojects.com/en/2.10.x/
[Primer on Jinja Templating]: https://realpython.com/primer-on-jinja-templating/
[Python Jinja tutorial]: http://zetcode.com/python/jinja/
[problem-specifications]: https://github.com/exercism/problem-specifications