# API Reference

## 📌 Grader Functions

### jupygrader.grade_notebooks()
```python
from jupygrader import grade_notebooks, GradingItem

item1 = {
    "notebook_path": "path/to/notebook1.ipynb",
    "output_path": "path/to/output1",
    "copy_files": ["data1.csv"],
}

item2 = {
    "notebook_path": "path/to/notebook2.ipynb",
    "output_path": None,  # Will default to the notebook's parent directory
    "copy_files": {
        "data/population.csv": "another/path/population.csv",
    },
}

graded_results = grade_notebooks(
    [item1, item2],
    execution_timeout=300,  # Set execution timeout to 300 seconds (5 minutes)
)
```
Grade multiple Jupyter notebooks with test cases.
Processes a list of notebook grading items, executes each notebook in a clean environment, evaluates test cases, and produces graded outputs. Can handle both simple file paths and complex grading configurations.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `grading_items` | `List[Union[FilePath, GradingItem, dict]]` | List of items to grade. Each item can be a string or `Path` pointing to a notebook file, a `GradingItem` object with detailed grading configuration, or a dictionary that can be converted to a `GradingItem`. | *required* |
| `base_files` | `Optional[Union[FilePath, List[FilePath], FileDict]]` | Optional files to include in all grading environments. Can be a single file path (string or `Path`), a list of file paths, or a dictionary mapping source paths to destination paths. | `None` |
| `verbose` | `bool` | Whether to print progress and diagnostic information. | `True` |
| `export_csv` | `bool` | Whether to export results to a CSV file. | `True` |
| `csv_output_path` | `Optional[FilePath]` | Optional path for the CSV export. If `None`, uses the notebook output directories. | `None` |
| `regrade_existing` | `bool` | Whether to regrade notebooks even if results already exist. | `False` |
| `execution_timeout` | `Optional[int]` | Maximum time (in seconds) allowed for notebook execution. Set to `None` to disable the timeout. | `600` |
Returns:

| Type | Description |
|---|---|
| `List[GradedResult]` | List of `GradedResult` objects containing detailed results for each notebook. |
Raises:

| Type | Description |
|---|---|
| `TypeError` | If an element in `grading_items` has an unsupported type. |
| `ValueError` | If a required path doesn't exist or has an invalid configuration. |
Examples:

```python
>>> # Grade multiple notebooks with default settings
>>> results = grade_notebooks(["student1.ipynb", "student2.ipynb"])
>>>
>>> # With custom configurations
>>> results = grade_notebooks([
...     GradingItem(notebook_path="student1.ipynb", output_path="results"),
...     GradingItem(notebook_path="student2.ipynb", output_path="results"),
... ], base_files=["data.csv", "helpers.py"], export_csv=True)
```
Source code in src/jupygrader/grader.py
### jupygrader.grade_single_notebook()
Grade a single Jupyter notebook with test cases.
Executes a notebook in a clean environment, evaluates test cases, and produces graded outputs. A convenience wrapper around grade_notebooks() for single notebook grading.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `grading_item` | `Union[FilePath, GradingItem, dict]` | The notebook to grade. Can be a string or `Path` pointing to the notebook file, a `GradingItem` object with detailed grading configuration, or a dictionary that can be converted to a `GradingItem`. | *required* |
| `**kwargs` | | Additional keyword arguments passed to `grade_notebooks()`: `base_files` (files to include in the grading environment), `verbose` (whether to print progress information), `regrade_existing` (whether to regrade if results exist), `csv_output_path` (path for CSV output, if needed), and `execution_timeout` (maximum time in seconds allowed for notebook execution; set to `None` to disable the timeout). | `{}` |
Returns:

| Type | Description |
|---|---|
| `Optional[GradedResult]` | A `GradedResult` object containing detailed results, or `None` if grading failed. |
Raises:

| Type | Description |
|---|---|
| `TypeError` | If `grading_item` has an unsupported type. |
| `ValueError` | If a required path doesn't exist or has an invalid configuration. |
Examples:

```python
>>> # Grade a notebook with default settings
>>> result = grade_single_notebook("student1.ipynb")
>>> print(f"Score: {result.learner_autograded_score}/{result.max_total_score}")
>>>
>>> # With custom configuration
>>> result = grade_single_notebook(
...     GradingItem(
...         notebook_path="student1.ipynb",
...         output_path="results",
...         copy_files=["data.csv"]
...     ),
...     verbose=True
... )
```
Source code in src/jupygrader/grader.py
## 📦 @dataclasses

### jupygrader.GradedResult
Complete results of grading a Jupyter notebook.
This comprehensive class stores all information related to grading a notebook, including scores, test case results, execution environment details, and file paths for generated outputs.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filename` | `str` | Name of the graded notebook file. | `''` |
| `learner_autograded_score` | `Union[int, float]` | Points earned from automatically graded test cases. | `0` |
| `max_autograded_score` | `Union[int, float]` | Maximum possible points from automatically graded test cases. | `0` |
| `max_manually_graded_score` | `Union[int, float]` | Maximum possible points from manually graded test cases. | `0` |
| `max_total_score` | `Union[int, float]` | Total maximum possible points across all test cases. | `0` |
| `num_autograded_cases` | `int` | Number of automatically graded test cases. | `0` |
| `num_passed_cases` | `int` | Number of passed test cases. | `0` |
| `num_failed_cases` | `int` | Number of failed test cases. | `0` |
| `num_manually_graded_cases` | `int` | Number of test cases requiring manual grading. | `0` |
| `num_total_test_cases` | `int` | Total number of test cases in the notebook. | `0` |
| `grading_finished_at` | `str` | Timestamp when grading completed. | `''` |
| `grading_duration_in_seconds` | `float` | Time taken to complete grading. | `0.0` |
| `test_case_results` | `List[TestCaseResult]` | Detailed results for each individual test case. | `list()` |
| `submission_notebook_hash` | `str` | MD5 hash of the submitted notebook file. | `''` |
| `test_cases_hash` | `str` | MD5 hash of the test case code in the notebook. | `''` |
| `grader_python_version` | `str` | Python version used for grading. | `''` |
| `grader_platform` | `str` | Platform information where grading occurred. | `''` |
| `jupygrader_version` | `str` | Version of Jupygrader used. | `''` |
| `extracted_user_code_file` | `Optional[str]` | Path to the file containing extracted user code. | `None` |
| `graded_html_file` | `Optional[str]` | Path to the HTML output of the graded notebook. | `None` |
| `text_summary_file` | `Optional[str]` | Path to the text summary file. | `None` |
| `graded_result_json_file` | `Optional[str]` | Path to the JSON file containing the graded results. | `None` |
Source code in src/jupygrader/models/grading_dataclasses.py
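For orientation, here is a minimal sketch of how these fields are typically consumed. A plain dictionary stands in for a `GradedResult` so the snippet runs without jupygrader installed; the field names match the table above, and the values are invented for illustration.

```python
# Stand-in for a GradedResult (field names per the table above); in real
# use this would come from grade_single_notebook() or grade_notebooks().
result = {
    "filename": "student1.ipynb",
    "learner_autograded_score": 8,
    "max_autograded_score": 10,
    "num_passed_cases": 4,
    "num_autograded_cases": 5,
}

# Compute the autograded percentage and print a one-line summary.
pct = 100 * result["learner_autograded_score"] / result["max_autograded_score"]
print(
    f"{result['filename']}: {pct:.0f}% "
    f"({result['num_passed_cases']}/{result['num_autograded_cases']} autograded cases passed)"
)
```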
### jupygrader.TestCaseResult
Result of an individual test case execution in a notebook.
This class stores the outcome of executing a test case during grading, including the points awarded, whether the test passed, and any output messages generated during execution.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `test_case_name` | `str` | Unique identifier for the test case. | `''` |
| `points` | `Union[int, float]` | Points awarded for this test case (`0` if failed). | `0` |
| `available_points` | `Union[int, float]` | Maximum possible points for this test case. | `0` |
| `did_pass` | `Optional[bool]` | Whether the test case passed (`True`), failed (`False`), or requires manual grading (`None`). | `None` |
| `grade_manually` | `bool` | Whether this test case should be graded manually. | `False` |
| `message` | `str` | Output message from the test execution; typically contains error information if the test failed. | `''` |
Source code in src/jupygrader/models/grading_dataclasses.py
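Note that `did_pass` is a tri-state value, not a plain boolean. A small illustrative helper (not part of jupygrader) shows how you might map it to a display label:

```python
def status_label(did_pass, grade_manually):
    # did_pass is True (passed), False (failed), or None (manual grading).
    if grade_manually or did_pass is None:
        return "NEEDS MANUAL GRADING"
    return "PASS" if did_pass else "FAIL"

print(status_label(True, False))   # PASS
print(status_label(False, False))  # FAIL
print(status_label(None, True))    # NEEDS MANUAL GRADING
```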
## 📌 Notebook Operations

### jupygrader.extract_test_case_metadata_from_code()
Extract test case metadata from a code cell string.
Parses a code string to extract test case metadata, including the test case name, points value, and whether it requires manual grading. The function looks for the following markers in the code:

- `_test_case = 'name'` (required)
- `_points = value` (optional, defaults to `0`)
- `_grade_manually = True/False` (optional, defaults to `False`)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `code_str` | `str` | The source code string to parse for test case metadata. | *required* |
Returns:

| Type | Description |
|---|---|
| `Optional[TestCaseMetadata]` | A `TestCaseMetadata` object with the extracted values if a test case is found, or `None` if no test case is found. |
Source code in src/jupygrader/notebook_operations.py
### jupygrader.extract_test_cases_metadata_from_notebook()
Extract metadata from all test cases in a notebook.
Iterates through all code cells in the notebook and identifies test case cells
by looking for specific pattern markers. For each test case found, extracts
the metadata into a TestCaseMetadata object.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `nb` | `NotebookNode` | The notebook to extract test case metadata from. | *required* |

Returns:

| Type | Description |
|---|---|
| `List[TestCaseMetadata]` | A list of `TestCaseMetadata` objects for all test cases found in the notebook. |
Source code in src/jupygrader/notebook_operations.py
### jupygrader.does_cell_contain_test_case()
Determine if a notebook cell contains a test case.
A cell is considered a test case if it contains the pattern '_test_case = "name"'. This function uses a regular expression to check for this pattern.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cell` | `NotebookNode` | The notebook cell to check. | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the cell contains a test case pattern, `False` otherwise. |
Source code in src/jupygrader/notebook_operations.py
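The check described above can be sketched with a regular expression like the following; this is an illustration of the pattern, and the exact regex used inside jupygrader may differ:

```python
import re

# Matches `_test_case = "name"` or `_test_case = 'name'`, with optional
# whitespace around the equals sign.
TEST_CASE_PATTERN = re.compile(r"_test_case\s*=\s*['\"].+?['\"]")

print(bool(TEST_CASE_PATTERN.search("_test_case = 'q1'\nassert x == 1")))  # True
print(bool(TEST_CASE_PATTERN.search("x = compute(1)")))                    # False
```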
### jupygrader.is_manually_graded_test_case()
Determine if a notebook cell contains a manually graded test case.
A test case is considered manually graded if it contains the pattern '_grade_manually = True'. This function checks for this specific pattern in the cell's source code.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cell` | `NotebookNode` | The notebook cell to check. | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the cell is a manually graded test case, `False` otherwise. |
Source code in src/jupygrader/notebook_operations.py
### jupygrader.extract_user_code_from_notebook()
Extract user code from a notebook.
Collects all code from non-test-case code cells in the notebook.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `nb` | `NotebookNode` | The notebook to extract code from. | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | String containing all user code concatenated with newlines. |
Source code in src/jupygrader/notebook_operations.py
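The idea can be sketched with plain dictionaries standing in for notebook cells, so the snippet runs without `nbformat`; the real function operates on a `NotebookNode`:

```python
# Keep source from code cells that are NOT test-case cells; skip markdown.
cells = [
    {"cell_type": "code", "source": "x = 1"},
    {"cell_type": "markdown", "source": "# Analysis notes"},
    {"cell_type": "code", "source": "_test_case = 'q1'\nassert x == 1"},
    {"cell_type": "code", "source": "y = x + 1"},
]

user_code = "\n".join(
    c["source"]
    for c in cells
    if c["cell_type"] == "code" and "_test_case" not in c["source"]
)
print(user_code)
```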
### jupygrader.remove_code_cells_that_contain()
Source code in src/jupygrader/notebook_operations.py
### jupygrader.remove_comments()
Remove comments from Python source code.
Removes both single-line comments (starting with `#`) and multi-line comments (`/* ... */`), while preserving strings.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `source` | `str` | Python source code as a string. | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | Source code with comments removed. |
Source code in src/jupygrader/notebook_operations.py
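One safe way to strip `#` comments without touching string literals is the standard-library `tokenize` module. The helper below is an illustrative sketch of that approach, not jupygrader's actual implementation:

```python
import io
import tokenize

def strip_hash_comments(source: str) -> str:
    # Drop COMMENT tokens; untokenize rebuilds everything else, including
    # '#' characters that live inside string literals.
    tokens = [
        tok
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT
    ]
    return tokenize.untokenize(tokens)

code = "x = 1  # set x\ns = '# not a comment'\n"
cleaned = strip_hash_comments(code)
print(cleaned)
```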
### jupygrader.get_test_cases_hash()
Generate a hash of all test cases in a notebook.
Creates a standardized representation of all test case cells by removing comments and formatting with Black, then generates an MD5 hash.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `nb` | `NotebookNode` | The notebook to generate a hash for. | *required* |

Returns:

| Type | Description |
|---|---|
| `str` | MD5 hash string representing the test cases. |
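The hashing step can be sketched as follows. Note this simplified version only strips whitespace as a stand-in for the comment removal and Black formatting described above, and the cell contents are invented for illustration:

```python
import hashlib

# Toy test-case cell sources; real ones come from the notebook's code cells.
test_case_cells = [
    "_test_case = 'q1'\nassert x == 1",
    "_test_case = 'q2'\nassert y == 2",
]

# Canonicalize (here: just strip whitespace) and hash with MD5.
canonical = "\n".join(cell.strip() for cell in test_case_cells)
digest = hashlib.md5(canonical.encode("utf-8")).hexdigest()
print(digest)  # 32-character hex string
```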