
Commit 103afd4

Merge pull request #10 from djeada/djeada-patch-1
Create matrices.md
2 parents e890e99 + 25be011 commit 103afd4

File tree

10 files changed: +5616 -773 lines changed

notes/backtracking.md

Lines changed: 43 additions & 0 deletions
@@ -209,6 +209,49 @@ Main Idea:
6. If the partial solution is complete and valid, record or output it.
7. If all options are exhausted at a level, remove the last component and backtrack to the previous level.

+General Template (pseudocode)
+
+```
+function backtrack(partial):
+    if is_complete(partial):
+        handle_solution(partial)
+        return  // or continue if looking for all solutions
+
+    for candidate in generate_candidates(partial):
+        if is_valid(candidate, partial):
+            place(candidate, partial)    // extend partial with candidate
+            backtrack(partial)
+            unplace(candidate, partial)  // undo extension (backtrack)
+```
+
+Pieces you supply per problem:
+
+* `is_complete`: does `partial` represent a full solution?
+* `handle_solution`: record or output the solution.
+* `generate_candidates`: possible next choices given the current partial solution.
+* `is_valid`: pruning test to reject infeasible choices early.
+* `place` / `unplace`: apply and revert the choice.
+
+Python-ish Generic Framework
+
+```python
+def backtrack(partial, is_complete, generate_candidates, is_valid, handle_solution):
+    if is_complete(partial):
+        handle_solution(partial)
+        return
+
+    for candidate in generate_candidates(partial):
+        if not is_valid(candidate, partial):
+            continue
+        # make move
+        partial.append(candidate)
+        backtrack(partial, is_complete, generate_candidates, is_valid, handle_solution)
+        # undo move
+        partial.pop()
+```
+
+You can wrap those callbacks into a class or closures for stateful problems (a usage sketch follows below).
+
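
To make the added framework concrete, here is a small usage sketch (an editor's illustration, not part of the commit) that enumerates permutations through the callbacks. The names `permutations`, `items`, and `results` are invented for the example, and it assumes the `backtrack` function from the Python block above is in scope:

```python
def permutations(items):
    """Collect all orderings of `items` via the generic backtracking driver.

    Assumes the elements of `items` are distinct, since used elements are
    filtered by membership tests.
    """
    results = []

    def is_complete(partial):
        return len(partial) == len(items)

    def generate_candidates(partial):
        # Candidates are the items not yet placed in the partial solution.
        return [x for x in items if x not in partial]

    def is_valid(candidate, partial):
        return True  # generate_candidates already filters out used items

    def handle_solution(partial):
        results.append(partial[:])  # copy: `partial` is mutated on backtrack

    backtrack([], is_complete, generate_candidates, is_valid, handle_solution)
    return results

print(permutations([1, 2, 3]))  # 6 orderings, from [1, 2, 3] to [3, 2, 1]
```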
#### N-Queens Problem

The N-Queens problem is a classic puzzle in which the goal is to place $N$ queens on an $N \times N$ chessboard such that no two queens threaten each other. In chess, a queen can move any number of squares along a row, column, or diagonal. Therefore, no two queens can share the same row, column, or diagonal.

notes/basic_concepts.md

Lines changed: 126 additions & 26 deletions
Large diffs are not rendered by default.

notes/brain_teasers.md

Lines changed: 32 additions & 21 deletions
@@ -1,38 +1,49 @@
+- tree traversal in order, post order etc.
+
## Solving Programming Brain Teasers

-Programming puzzles and brain teasers are excellent tools for testing and enhancing your coding abilities and problem-solving skills. They are frequently used in technical interviews to evaluate a candidate's logical thinking, analytical prowess, and ability to devise efficient algorithms. To excel in these scenarios, it is recommended to master effective strategies for approaching and solving these problems.
+Programming puzzles and brain teasers are great ways to improve your coding and problem-solving skills. They're commonly used in technical interviews to assess a candidate's logical thinking, analytical ability, and skill in creating efficient solutions. To do well in these situations, it's important to learn and apply effective strategies for solving these problems.

### General Strategies

When tackling programming puzzles, consider the following strategies:

-- Starting with a **simple solution** can help you understand the problem better and identify challenges. This initial approach often highlights areas where optimization is needed later.
-- Writing **unit tests** ensures your solution works for a variety of input scenarios. These tests are invaluable for catching logical errors and handling edge cases, and they allow for safe updates through regression testing.
-- Analyzing the **time and space complexity** of your algorithm helps you measure its efficiency. Aim for the best possible complexity, such as $O(n)$, while avoiding unnecessary memory usage.
-- Choosing the **appropriate data structure** is important for achieving better performance. Knowing when to use structures like arrays, linked lists, stacks, or trees can greatly enhance your solution.
-- **Hash tables** are ideal for problems that require fast lookups, such as counting elements, detecting duplicates, or associating keys with values, as they offer average-case $O(1)$ complexity.
-- Implementing **memoization or dynamic programming** can optimize problems with overlapping subproblems by storing and reusing previously computed results to save time.
-- Breaking a problem into **smaller subproblems** often simplifies the process. Solving these subproblems individually makes it easier to manage and integrate the solutions.
-- Considering both **recursive and iterative approaches** allows flexibility. Recursion can simplify the logic for certain problems, while iteration may be more efficient and avoid stack overflow risks.
-- Paying attention to **edge cases and constraints** helps ensure robustness. Examples include handling empty inputs, very large or very small values, and duplicate data correctly.
-- While optimizing too early can complicate development, **targeted optimization** at the right time focuses on the most resource-intensive parts of the code, improving performance without reducing clarity or maintainability.
+* Starting with a *simple solution* can be helpful in understanding the problem and revealing areas that may need further optimization later on.
+* Writing *unit tests* is useful for ensuring that your solution works correctly across a range of input scenarios, including edge cases.
+* Analyzing the *time and space complexity* of your algorithm is important for assessing its efficiency and striving for optimal *performance*.
+* Choosing the *appropriate data structure*, such as an array or tree, is beneficial for improving the speed and clarity of your solution.
+* Breaking down the problem into *smaller parts* can make the overall task more manageable and easier to solve.
+* Considering both *recursive* and *iterative* approaches gives you flexibility in selecting the method that best suits the problem’s needs.
+* Paying attention to *edge cases* and *constraints* ensures your solution handles unusual or extreme inputs gracefully.
+* *Targeted optimization*, when applied at the right time, can improve performance in specific areas without sacrificing clarity.

### Data Structures

Understanding and effectively using data structures is fundamental in programming. Below are detailed strategies and tips for working with various data structures.

#### Working with Arrays

-Arrays are fundamental data structures that store elements in contiguous memory locations, allowing efficient random access. Here are strategies for working with arrays:
-
-- **Sorting** an array can simplify many problems. Algorithms like Quick Sort and Merge Sort are efficient with $O(n \log n)$ time complexity. For nearly sorted or small arrays, **Insertion Sort** may be a better option due to its simplicity and efficiency in those cases.
-- In **sorted arrays**, binary search provides a fast way to find elements or their positions, working in $O(\log n)$. Be cautious with **mid-point calculations** in languages prone to integer overflow due to fixed-size integer types.
-- The **two-pointer technique** uses two indices, often starting from opposite ends of the array, to solve problems involving pairs or triplets, like finding two numbers that add up to a target sum. It helps optimize time and space.
-- The **sliding window technique** is effective for subarray or substring problems, such as finding the longest substring without repeating characters. It keeps a dynamic subset of the array while iterating, improving efficiency.
-- **Prefix sums** enable quick range sum queries after preprocessing the array in $O(n)$. Similarly, **difference arrays** allow efficient range updates without modifying individual elements one by one.
-- **In-place operations** modify the array directly without using extra memory. This approach saves space but requires careful handling to avoid unintended side effects on other parts of the program.
-- When dealing with **duplicates**, it’s important to adjust the algorithm to handle them correctly. For example, in the two-pointer technique, duplicates may need to be skipped to prevent redundant results or errors.
-- **Memory usage** is an important consideration with large arrays, as they can consume significant space. Be mindful of space complexity in constrained environments to prevent excessive memory usage.
+Arrays are basic data structures that store elements in a continuous block of memory, making it easy to access any element quickly. Here are some tips for working with arrays:
+
+* Sorting an array can often simplify many problems, with algorithms like Quick Sort and Merge Sort offering efficient $O(n \log n)$ time complexity. For nearly sorted or small arrays, *Insertion Sort* might be a better option due to its simplicity and efficiency in such cases.
+* In sorted arrays, *binary search* provides a fast way to find elements or their positions, working in $O(\log n)$. Be cautious with mid-point calculations in languages that may experience integer overflow due to fixed-size integer types.
+* The *two-pointer* technique uses two indices, typically starting from opposite ends of the array, to solve problems involving pairs or triplets, like finding two numbers that sum to a target (a sketch follows this list). It helps optimize both time and space efficiency.
+* The *sliding window* technique is effective for solving subarray or substring problems, such as finding the longest substring without repeating characters (also sketched after this list). It maintains a dynamic subset of the array while iterating, improving overall efficiency.
+* *Prefix sums* enable fast range sum queries after preprocessing the array in $O(n)$ (see the sketch after this list). Likewise, difference arrays allow efficient range updates without the need to modify individual elements one by one.
+* In-place operations modify the array directly without using extra memory. This method saves space but requires careful handling to avoid unintended side effects on other parts of the program.
+* When dealing with duplicates, it’s important to adjust the algorithm to handle them appropriately. For example, the two-pointer technique may need to skip duplicates to prevent redundant results or errors.
+* When working with large arrays, it’s important to be mindful of memory usage, as they can consume a lot of space. To optimize, try to minimize space complexity by using more memory-efficient data structures or algorithms: instead of storing a full array of values, consider a *sliding window* or *in-place modifications* to avoid extra memory allocation. Also analyze the space complexity of your solution and watch for operations that create large intermediate data structures, which can lead to excessive memory consumption. In constrained environments, memory profiling or checking the space usage of your program (e.g., with Python’s `sys.getsizeof()`) can help you identify areas for improvement.
+* When using dynamic arrays, it’s helpful to allow automatic resizing, which lets the array expand or shrink based on the data size. This avoids manual memory management and improves flexibility.
+* Resizing arrays frequently can be costly in terms of time complexity. A more efficient approach is to grow the array exponentially, such as doubling its size, rather than resizing it by a fixed amount each time.
+* To avoid unnecessary memory usage, pass arrays by reference (or via pointers in some languages) when possible, instead of copying the entire array for each function call.
+* For arrays with many zero or null values, sparse arrays or hash maps can be useful. They let you store only the non-zero values, saving memory when dealing with large arrays that contain mostly empty data.
+* When dealing with multi-dimensional arrays, flattening them into a one-dimensional array can make it easier to perform operations, but be aware that this can temporarily increase memory usage.
+* To improve performance, access memory in contiguous blocks. Random access patterns may lead to cache misses, which can slow down operations, so try to access array elements sequentially when possible.
+* The `bisect` module helps maintain sorted order in a list by finding the appropriate index for inserting an element or by performing binary searches (see the sketch after this list).
+* Use `bisect.insort()` to insert elements into a sorted list while keeping it ordered.
+* Use `bisect.bisect_left()` or `bisect.bisect_right()` to find the index where an element should be inserted.
+* Don’t use `bisect` on unsorted lists or when frequent updates are needed, as maintaining order can be inefficient.
+* Binary search operations like `bisect_left()` are $O(\log n)$, but `insort()` can be $O(n)$ due to shifting elements.
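
As a concrete companion to the two-pointer bullet above, here is a minimal sketch (function and variable names are the editor's own, not from the commit) that finds a pair summing to a target in a sorted array:

```python
def pair_with_sum(sorted_nums, target):
    """Return a pair from a sorted list that sums to target, or None."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo < hi:
        s = sorted_nums[lo] + sorted_nums[hi]
        if s == target:
            return sorted_nums[lo], sorted_nums[hi]
        if s < target:
            lo += 1   # sum too small: move the left pointer right
        else:
            hi -= 1   # sum too large: move the right pointer left
    return None

print(pair_with_sum([1, 2, 4, 7, 11], 9))  # (2, 7)
```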
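For the sliding window bullet, a short illustrative sketch (again not from the commit) that computes the length of the longest substring without repeating characters in a single pass:

```python
def longest_unique_substring(s):
    """Length of the longest substring of s with no repeated characters."""
    last_seen = {}       # char -> index of its most recent occurrence
    start = best = 0     # start of the current window, best length so far
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1   # shrink the window past the repeat
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # 3 ("abc")
```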
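For prefix sums, a sketch of $O(1)$ range-sum queries after $O(n)$ preprocessing (helper names are invented for illustration):

```python
def build_prefix(nums):
    """prefix[i] holds the sum of nums[:i]; length is len(nums) + 1."""
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)
    return prefix

def range_sum(prefix, i, j):
    """Sum of nums[i:j], answered in O(1) from the precomputed prefix."""
    return prefix[j] - prefix[i]

nums = [3, 1, 4, 1, 5]
p = build_prefix(nums)
print(range_sum(p, 1, 4))  # 1 + 4 + 1 = 6
```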
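And for the `bisect` bullets, a quick demonstration using the standard-library calls named above:

```python
import bisect

scores = [10, 20, 20, 30]  # must already be sorted

# Index where 20 would go, to the left of existing equal values:
print(bisect.bisect_left(scores, 20))   # 1
# ...and to the right of existing equal values:
print(bisect.bisect_right(scores, 20))  # 3

# Insert while keeping the list sorted; O(n) due to element shifting:
bisect.insort(scores, 25)
print(scores)  # [10, 20, 20, 25, 30]
```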

#### Working with Strings

notes/dynamic_programming.md

Lines changed: 2 additions & 2 deletions
@@ -2,9 +2,9 @@

Dynamic Programming (DP) is a way to solve complex problems by breaking them into smaller, easier problems. Instead of solving the same small problems again and again, DP **stores their solutions** in a structure like an array, table, or map. This avoids wasting time on repeated calculations and makes the process much faster and more efficient.

-DP works best for problems that have two key features. The first is **optimal substructure**, which means you can build the solution to a big problem from the solutions to smaller problems. The second is **overlapping subproblems**, where the same smaller problems show up multiple times during the process. By focusing on these features, DP ensures that each part of the problem is solved only once.
+DP works best for problems that have two features. The first is **optimal substructure**, which means you can build the solution to a big problem from the solutions to smaller problems. The second is **overlapping subproblems**, where the same smaller problems show up multiple times during the process. By focusing on these features, DP ensures that each part of the problem is solved only once (see the sketch below).

-This method was introduced by Richard Bellman in the 1950s and has become a valuable tool in areas like computer science, economics, and operations research. It has been used to solve problems that would otherwise take too long by turning slow, exponential-time algorithms into much faster polynomial-time solutions. DP is practical and powerful for tackling real-world optimization challenges.
+This method was introduced by Richard Bellman in the 1950s and has become a valuable tool in areas like computer science, economics, and operations research. It has been used to solve problems that would otherwise take too long by turning slow, exponential-time algorithms into much faster polynomial-time solutions. DP is used in practice for tackling real-world optimization challenges.
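
Both properties are easiest to see in a tiny example. Here is a hedged sketch (an editor's illustration, not part of the commit) using memoized Fibonacci, where each overlapping subproblem is solved once and then reused:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion is exponential; caching results makes this O(n)."""
    if n < 2:
        return n
    # fib(n - 1) and fib(n - 2) overlap heavily; repeats hit the cache.
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed without recomputing subproblems
```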

### Principles
