# optimisation-data-structures-algorithms.md
CPython, for example, uses `newsize + (newsize >> 3) + 6` to calculate how much memory to allocate when a list grows.
This has two implications:
* If you are creating large static lists, they will use up to 12.5% excess memory.
* If you are growing a list with `append()`, there will be large amounts of redundant allocations and copies as the list grows.
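The over-allocation can be observed directly with `sys.getsizeof()`. Below is a minimal sketch (exact sizes vary between CPython versions and platforms, so treat the numbers as illustrative):

```python
import sys

# A list grown with append() carries spare capacity from over-allocation,
# whereas a list built from an iterable of known length can be sized exactly.
grown = []
for i in range(1000):
    grown.append(i)

static = list(range(1000))

print(sys.getsizeof(grown), sys.getsizeof(static))  # grown is typically larger
```

Both lists hold the same items; the difference in size is purely reserved capacity.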
### List Comprehension
To retrieve or check for the existence of a key within a hashing data structure, the key is hashed to identify where the corresponding item would be stored.
### Keys
Keys will typically be a core Python type such as a number or string. However, multiple of these can be combined as a tuple to form a compound key, or a custom class can be used if the methods `__hash__()` and `__eq__()` have been implemented.
You can implement `__hash__()` by utilising the ability for Python to hash tuples, avoiding the need to implement a bespoke hash function.
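For example, a hypothetical coordinate class could delegate to tuple hashing like this (a sketch; the class name and fields are illustrative, not from the lesson):

```python
# Hypothetical compound key: __hash__() delegates to Python's built-in
# tuple hashing, and __eq__() compares the same fields, keeping both
# methods consistent -- a requirement for correct use as a dict/set key.
class GridCell:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __hash__(self):
        return hash((self.x, self.y))

    def __eq__(self, other):
        return isinstance(other, GridCell) and (self.x, self.y) == (other.x, other.y)

populations = {GridCell(0, 0): 12, GridCell(3, 4): 7}
print(populations[GridCell(3, 4)])  # → 7, found via a newly constructed, equal key
```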
Constructing a set with a loop and `add()` (equivalent to a list's `append()`) can be compared against the alternative approaches below.
The naive list approach is 2200x slower than the fastest approach, because of how many times the list is searched. This gap will only grow as the number of items increases.
Sorting the input list reduces the cost of searching the output list significantly; however, it is still 8x slower than the fastest approach, in part because around half of its runtime is now spent sorting the list.
```output
uniqueSet: 0.30ms
uniqueListSort: 2.67ms
```
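A benchmark of this shape could be reproduced with `timeit`. The sketch below is illustrative: function names, the input data, and the absolute timings are all assumptions, and the figures will differ by machine.

```python
import random
import timeit

random.seed(12)
data = [random.randint(0, 999) for _ in range(2000)]

def unique_set():
    # Hashing: amortised O(1) per insert.
    return set(data)

def unique_list():
    # Naive approach: a linear search of the output list per item, O(n^2) overall.
    out = []
    for item in data:
        if item not in out:
            out.append(item)
    return out

t_set = timeit.timeit(unique_set, number=5)
t_list = timeit.timeit(unique_list, number=5)
print(f"set: {t_set:.4f}s, list: {t_list:.4f}s")  # the set approach wins comfortably
```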
Independent of the cost of constructing a unique set (as covered in the previous section), it's worth measuring the cost of searching the data-structure to retrieve an item or check whether it exists.
The performance of a hashing data structure is subject to the load factor and the number of collisions. An item that hashes with no collision can be checked almost directly, whereas one with collisions will probe until it finds the correct item or an empty slot. In the worst possible case, in which all inserted items have collided, this would mean checking every single item. In practice, hashing data-structures are designed to minimise the chances of this happening and most items should be found, or identified as missing, with a single access.
In contrast, when searching a list or array, the default approach is to start at the first item and check each subsequent item until the correct item has been found. If the correct item is not present, the entire list must be checked. The worst case is therefore similar to that of the hashing data-structure, but it is guaranteed whenever the item is missing. On average we would expect an item to be found halfway through the list, meaning that an average search will require checking half of the items.
If, however, the list or array is sorted, a binary search can be used. A binary search divides the list in half and checks which half the target item would be found in; this continues recursively until the item is found or the search space is exhausted. This is significantly faster than performing a linear search of the list, checking at most `log N` items per search.
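The standard library's `bisect` module provides the halving step; a minimal membership check over a sorted list might look like this (the helper name is illustrative):

```python
from bisect import bisect_left

def binary_contains(sorted_items, target):
    """Membership test on a sorted list in O(log N) comparisons."""
    i = bisect_left(sorted_items, target)  # first index whose item is >= target
    return i < len(sorted_items) and sorted_items[i] == target

data = sorted([8, 3, 5, 13, 1, 21, 2])
print(binary_contains(data, 13))  # → True
print(binary_contains(data, 4))   # → False
```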
Searching the set is the fastest, performing 25,000 searches in 0.04ms. This is followed by the binary search of the (sorted) list, which is 145x slower, although the list has been filtered for duplicates. A list still containing duplicates would be longer, leading to a more expensive search. The linear search of the list is more than 56,600x slower than searching the set; it really shouldn't be used!
# optimisation-introduction.md
## Introduction
<!-- Changing the narrative: you'd better learn how to write good code and what the good practices are -->
Think about optimisation as the first step on your journey to writing high-performance code.
It’s like a race: the faster you can go without taking unnecessary detours, the better.
Code optimisation is all about understanding the principles of efficiency in Python and being conscious of how small changes can yield massive improvements.
<!-- Necessary to understand how code executes (to a degree) -->
These are the first steps in code optimisation: making better choices as you write your code and having an understanding of what a computer is doing to execute it.
<!-- Goal is to give you a high level understanding of how your code executes. You don't need to be an expert, even a vague general understanding will leave you in a stronger position. -->
A high-level understanding of how your code executes, such as how Python and the most common data-structures and algorithms are implemented, can help you identify suboptimal approaches when programming. If you have learned to write code informally out of necessity, to get something to work, it's not uncommon to have collected some bad habits along the way.
The remaining content is often abstract knowledge that is transferable to the vast majority of programming languages.
## Optimising code from scratch: trade-off between performance and maintainability
> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: **premature optimisation is the root of all evil**. Yet we should not pass up our opportunities in that critical 3%. - Donald Knuth
This classic quote among computer scientists states that, when considering optimisation, it is important to focus on the potential impact on both the performance and the maintainability of the code. Advanced optimisations, mostly outside the scope of this course, can increase the cost of maintenance by obfuscating what code is doing. Even if you are a solo developer working on private code, your future self should be able to easily comprehend your implementation. Therefore, the balance between the impact on both performance and maintainability should be considered when optimising code.
This is not to say don't consider performance when first writing code. The selection of appropriate algorithms and data-structures covered in this course forms good practice; simply don't fret over a need to micro-optimise every small component of the code that you write.
## Ensuring Reproducible Results when Optimising Existing Code
<!-- This is also good practice when optimising your code, to ensure mistakes aren't made -->
When optimising existing code, you are making speculative changes. It's easy to make mistakes, many of which can be subtle. Therefore, it's important to have a strategy in place to check that the outputs remain correct.
Testing is hopefully already a seamless part of your research software development process. Tests can be used to clarify how your software should perform, ensuring that new features work as intended and protecting against unintended changes to old functionality.
## pytest Overview
There are a plethora of methods for testing code. Most Python developers use the testing package [pytest](https://docs.pytest.org/en/latest/); it's a great place to get started if you're new to testing code. Tests should be created within a project's testing directory, in files named with the form `test_*.py` or `*_test.py`; pytest looks for these files when running the test suite. Within a test file, any functions named in the form `test*` are considered tests and will be executed by pytest. The `assert` keyword is used to test whether a condition evaluates to `True`.
Here's a quick example of how a test can be used to check your function's output against an expected value.
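A minimal sketch of such a test file, assuming a toy function `add` as a hypothetical stand-in for your own code:

```python
# test_example.py -- pytest collects this file via its test_*.py name
def add(a, b):
    """Toy function under test (a hypothetical stand-in for your own code)."""
    return a + b

def test_add():
    # pytest executes any function named test*; assert checks the condition
    assert add(2, 3) == 5
```

Running `pytest` from the project root would then discover and execute `test_add`.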