`exercism/python` is one of the many tracks on [exercism](https://exercism.org/). This repo holds all the instructions, tests, code, & support files for Python *exercises* currently under development or implemented & available for students.
Exercises are grouped into **concept** exercises, which teach the [Python syllabus](https://exercism.org/tracks/python/concepts), and **practice** exercises, which are unlocked by progressing in the syllabus tree. Practice exercises are open-ended, and can be used to practice concepts learned, try out new techniques, and _play_. These two exercise groupings can be found in the track [`config.json`](https://github.com/exercism/python/blob/main/config.json), and under this repository's `exercises` directory.
It's not uncommon for people to discover typos, confusing directions, or incorrect implementations of certain tests or code examples. You might have a great suggestion for a hint to aid students (💙), see optimizations for exemplar or test code, find missing test cases to add, or want to correct factual and/or logical errors. Or maybe you have a great idea for an exercise or feature.
2️⃣ Medium changes that have been agreed/discussed via a filed issue,
🌟🌟 If you have not already done so, please take a moment to read our [Code of Conduct](https://exercism.org/docs/using/legal/code-of-conduct) & [Being a Good Community Member](https://github.com/exercism/docs/tree/main/community/good-member). It might also be helpful to take a look at [The words that we use](https://github.com/exercism/docs/blob/main/community/good-member/words.md).
Some defined roles in our community: [Community Member](https://github.com/exercism/docs/tree/main/community/good-member) | [Contributors](https://github.com/exercism/docs/blob/main/community/contributors.md) | [Mentors](https://github.com/exercism/docs/tree/main/mentoring) | [Maintainers](https://github.com/exercism/docs/blob/main/community/maintainers.md) | [Admins](https://github.com/exercism/docs/blob/main/community/administrators.md)
- Maintainers are happy to review your work and help you out. 💛 💙 But they may be in a different timezone, or tied up 🧶 with other tasks. **Please wait at least 72 hours before pinging them.** They will review your request as soon as they are able to.
- If you'd like in-progress feedback or discussion, please mark your PR as a `[draft]`.
- Pull Request titles and descriptions should make clear **what** has changed and **why**. Please link 🔗 to any related issues the PR addresses.
- 📛 [An issue should be opened](https://github.com/exercism/python/issues/new/choose) 📛 _**before**_ creating a Pull Request that makes significant or breaking changes to an existing exercise.
- The same holds true for changes across multiple exercises.
- It is best to discuss changes with 🧰 maintainers before doing a lot of work.
- Follow the coding standards found in [PEP8](https://www.python.org/dev/peps/pep-0008/) (["For Humans" version here](https://pep8.org/)). We do have some more specific requirements; more on that a little later. A brief style sketch follows this list.
- All files should have a proper [EOL](https://en.wikipedia.org/wiki/Newline) at the end. This means a single newline character after the final line of text in a file.
- Otherwise, watch out ⚠️ for trailing spaces, extra blank lines, extra spaces, and spaces in blank lines.
- The CI is going to run **a lot** of checks on your PR. Pay attention to the failures, and try to understand and fix them. If you need help, comment in the PR or issue. 🙋🏽‍♀️ The maintainers are happy to help troubleshoot.
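To illustrate the style points above, here is a small, purely hypothetical sketch of the kind of PEP8-conforming code we aim for (the module and function names are made up for illustration):

```python
"""A small, illustrative module showing a few PEP 8 conventions."""


def score_word(word, bonus=1):
    """Return a simple score for `word` (snake_case names, 4-space indents)."""
    # No trailing whitespace, no spaces on blank lines, and the file ends
    # with a single newline.
    vowels = {"a", "e", "i", "o", "u"}
    return sum(2 if letter in vowels else 1 for letter in word.lower()) * bonus
```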
Non-code content (_exercise introductions & instructions, hints, concept write-ups, documentation etc._) should be written in [American English](https://github.com/exercism/docs/blob/main/building/markdown/style-guide.md). We strive to watch [the words we use](https://github.com/exercism/docs/blob/main/community/good-member/words.md).
When a word or phrase usage is contested or ambiguous, we default to what is best understood by our international community of learners, even if it "sounds a little weird" to a "native" American English speaker.
Our documents use [Markdown](https://guides.github.com/pdfs/markdown-cheatsheet-online.pdf), with certain [alterations](https://github.com/exercism/docs/blob/main/building/markdown/widgets.md) & [additions](https://github.com/exercism/docs/blob/main/building/markdown/internal-linking.md). Here is our full [Markdown Specification](https://github.com/exercism/docs/blob/main/building/markdown/markdown.md). We format/lint our Markdown with [Prettier](https://prettier.io/).
- Each exercise must be self-contained. Please do not use or reference files that reside outside the given exercise directory. "Outside" files will not be included if a student fetches the exercise via the CLI.
- Each exercise/problem should include a complete test suite, an example/exemplar solution, and a stub file ready for student implementation.
- See [Concept Exercise Anatomy](https://github.com/exercism/docs/blob/main/building/tracks/concept-exercises.md), or [Practice Exercise Anatomy](https://github.com/exercism/docs/blob/main/building/tracks/practice-exercises.md) depending on which type of exercise you are contributing to.
- For **practice exercises**, descriptions and instructions come from a centralized, cross-track [problem specifications](https://github.com/exercism/problem-specifications) repository.
- Any updates or changes need to be proposed/approved in `problem-specifications` first.
- If Python-specific changes become necessary, they need to be appended to the canonical instructions by creating an `instructions.append.md` file in this (`exercism/python`) repository.
- **Practice Exercise Test Suites** for many practice exercises are similarly [auto-generated](#auto-generated-test-files-and-test-templates) from data in [problem specifications](https://github.com/exercism/problem-specifications).
- Any changes to them need to be proposed/discussed in the `problem-specifications` repository and approved by **3 track maintainers**, since changes could potentially affect many (or all) exercism language tracks.
- If Python-specific test changes become necessary, they can be appended to the exercise `tests.toml` file. 📛 [**Please file an issue**](https://github.com/exercism/python/issues/new/choose) 📛 and check with maintainers before adding any Python-specific tests.
Exercises on this track officially support Python >= `3.8`. Track tooling (`test runners`, `analyzers`, and `representers`) all run on Python `3.9`.
* All exercises going forward should be written for compatibility with Python >= `3.8`.
* Version backward _incompatibility_ (_e.g._ an exercise using a `3.8`-only or `3.9`-only feature) should be clearly noted in any exercise introduction or notes. A brief illustration follows this list.
* _Most_ exercises will work with Python 3.6+, and _many_ are compatible with Python 2.7+. Please do not change existing exercises to add new `3.6`+ features without consulting with a maintainer first.
- All test suites and example solutions must work in all Python versions that we currently support. When in doubt about a feature, please check with maintainers.
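As an illustration of the kind of feature worth flagging, the hypothetical snippet below uses an assignment expression (the "walrus" operator), which only exists in Python `3.8` and later; the function name and logic are made up for this example:

```python
def first_long_word(words, minimum=8):
    """Return the first word of at least `minimum` characters, or None."""
    for word in words:
        # The assignment expression below requires Python >= 3.8, so an
        # exercise relying on it should say so in its introduction or notes.
        if (length := len(word)) >= minimum:
            return f"{word} ({length} letters)"
    return None
```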
Practice exercises inherit their definitions from the [problem-specifications](https://github.com/exercism/problem-specifications) repository in the form of _description files_. Exercise introductions, instructions, and (_in the case of **many**, but not **all**_) test files are machine-generated.
Changes to practice exercise _specifications_ should be raised/PR'd in [problem-specifications](https://github.com/exercism/problem-specifications) and approved by **3 track maintainers**. After an exercise change has gone through that process, related documents and tests for the Python track will need to be re-generated via [configlet](https://github.com/exercism/docs/blob/main/building/configlet/generating-documents.md). Configlet is also used as part of the track CI, essential track and exercise linting, and other verification tasks.
If a practice exercise has an auto-generated `<exercise>_test.py` file, there will be a `.meta/template.j2` and a `.meta/tests.toml` file in the exercise directory. If an exercise implements Python track-specific tests, there may be a `.meta/additional_tests.json` to define them. These `additional_tests.json` files will automatically be included in test generation.
Practice exercise `<exercise>_test.py` files are generated/regenerated via the [Python Track Test Generator](https://github.com/exercism/python/blob/main/docs/GENERATOR.md). Please reach out to a maintainer if you need any help with the process.
Exercism tracks inherit exercise definitions from the [problem-specifications] repository in the form of description files (from which exercise READMEs are [generated](#generating-exercise-readmes)).
## Implementing an exercise
### Exercise structure
```Bash
exercises/[EXERCISE]/
├── [EXERCISE].py
├── [EXERCISE]_test.py
├── example.py
├── .meta
│   ├── template.j2
│   ├── additional_tests.json
│   └── hints.md
└── README.md
```
Files:
| File | Description | Source |
|:--- |:--- |:--- |
| [[EXERCISE].py](exercises/two-fer/two_fer.py) | Solution stub | Manually created by the implementer |
| [[EXERCISE]_test.py](exercises/two-fer/two_fer_test.py) | Exercise test suite | Automatically generated if `.meta/template.j2` is present, otherwise manually created by the implementer |
| [example.py](exercises/two-fer/example.py) | Example solution used to automatically verify the `[EXERCISE]_test.py` suite | Manually created by the implementer |
| [.meta/template.j2](exercises/two-fer/.meta/template.j2) | Test generation template; if present used to automatically generate `[EXERCISE]_test.py` (See [generator documentation](docs/GENERATOR.md)) | Manually created by implementer |
| [.meta/additional_tests.json](exercises/word-count/.meta/additional_tests.json) | Defines additional track-specific test cases; if `.meta/template.j2` is also present these tests will be incorporated into the automatically generated `[EXERCISE]_test.py` | Manually created by the implementer |
| [.meta/hints.md](exercises/high-scores/.meta/hints.md) | Contains track-specific hints that are automatically included in the generated `README.md` file | Manually created by the implementer |
If an unimplemented exercise has a `canonical-data.json` file in the [problem-specifications] repository, a generation template must be created. See the [test generator documentation](docs/GENERATOR.md) for more information.
If an unimplemented exercise does not have a `canonical-data.json` file, the test file must be written manually (use existing test files for examples).
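For orientation, manually written test files on this track generally follow a `unittest` pattern along these lines; the exercise, module, and test cases below are hypothetical, so model the real details on an existing exercise:

```python
import unittest

from word_score import score_word  # `word_score` is a made-up exercise module


class WordScoreTest(unittest.TestCase):
    def test_empty_word_scores_zero(self):
        self.assertEqual(score_word(""), 0)

    def test_word_with_vowels(self):
        self.assertEqual(score_word("area"), 7)


if __name__ == "__main__":
    unittest.main()
```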
### Example solutions
Example solution files serve two purposes:
1. Verification of the tests
2. Example implementation for mentor/student reference
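To make the distinction concrete, here is a hedged sketch (reusing the hypothetical `score_word` exercise from above) of how a solution stub and its matching `example.py` might differ; real files vary per exercise:

```python
# [EXERCISE].py -- the solution stub the student starts from: signatures only.
def score_word(word, bonus=1):
    pass


# example.py -- a working implementation used only to verify the test suite.
def score_word(word, bonus=1):
    vowels = {"a", "e", "i", "o", "u"}
    return sum(2 if letter in vowels else 1 for letter in word.lower()) * bonus
```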
### config.json
[`config.json`](config.json) is used by the website to determine which exercises to load and in what order. It also contains some exercise metadata, such as difficulty, labels, and whether the exercise is a core exercise. New entries should be placed just before the first exercise that is marked `"deprecated": true`:
```JSON
{
"slug": "current-exercise",
"uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
"core": false,
"unlocked_by": null,
"difficulty": 1,
"topics": [
"strings"
]
},
<<<HERE
{
"slug": "old-exercise",
"uuid": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
"core": false,
"unlocked_by": null,
"difficulty": 2,
"topics": null,
"status": "deprecated"
},
```
Fields:
<table>
<tr>
<td>slug</td>
<td>Hyphenated lowercase exercise name</td>
</tr>
<tr>
<td>uuid</td>
<td>Generate using <code>configlet uuid</code></td>
</tr>
<tr>
<td>core</td>
<td>Set to <code>false</code>; core exercises are decided by track maintainers</td>
</tr>
<tr>
<td>unlocked_by</td>
<td>Slug for the core exercise that unlocks the new one</td>
</tr>
<tr>
<td>difficulty</td>
<td><code>1</code> through <code>10</code>. Discuss with reviewer if uncertain.</td>
</tr>
<tr>
<td>topics</td>
<td>Array of relevant topics from the <a href="https://github.com/exercism/problem-specifications/blob/master/TOPICS.txt">topics list</a></td>
</tr>
</table>
## Implementing Track-specific Exercises
Implementing a track-specific exercise is similar to implementing a canonical exercise that has no `canonical-data.json`, except that the exercise README will also need to be written manually. Carefully follow the structure of generated exercise READMEs.