- Data Types - Explanation of different types of data in programming.
- Reference Type - Understanding objects stored by reference.
- Value Type - How value types store data directly in memory.
- Storing Data in Computer Memory - Overview of how data is managed in memory.
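To make the value/reference distinction concrete, here is a minimal C# sketch (C# is assumed from the `SinglyLinkedListNode` naming later in this outline; the type names are illustrative only):

```csharp
using System;

// A struct is a value type: assignment copies the data itself.
struct PointValue { public int X; }

// A class is a reference type: assignment copies only the reference.
class PointReference { public int X; }

class TypeDemo
{
    static void Main()
    {
        var v1 = new PointValue { X = 1 };
        var v2 = v1;             // independent copy of the data
        v2.X = 99;
        Console.WriteLine(v1.X); // 1: the original value is untouched

        var r1 = new PointReference { X = 1 };
        var r2 = r1;             // both variables refer to the same object
        r2.X = 99;
        Console.WriteLine(r1.X); // 99: the change is visible through r1
    }
}
```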
- Introduction and Basic Concepts - Fundamentals of arrays and their importance.
- Static Array - Arrays with fixed size and their characteristics.
- Dynamic Array - Resizable arrays and how they manage memory.
- Array Operations - Common operations like insertion, deletion, searching, and sorting.
- Application Areas of Arrays - Practical applications of arrays in real-world scenarios.
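A short C# sketch contrasting a fixed-size (static) array with `List<T>`, the .NET dynamic array, and showing two of the common operations listed above; the variable names are illustrative:

```csharp
using System;
using System.Collections.Generic;

class ArrayDemo
{
    static void Main()
    {
        // Static array: size is fixed at creation time.
        int[] fixedArr = new int[4] { 3, 1, 4, 1 };
        fixedArr[2] = 5;                      // O(1) indexed access

        // Dynamic array: List<T> reallocates a larger internal
        // array when its capacity is exceeded.
        var dynamicArr = new List<int> { 3, 1, 4, 1 };
        dynamicArr.Add(5);                    // amortized O(1) append
        dynamicArr.RemoveAt(0);               // O(n): shifts elements left

        Array.Sort(fixedArr);                 // common operation: sorting
        int idx = Array.IndexOf(fixedArr, 5); // common operation: searching
        Console.WriteLine($"5 found at index {idx}, list count = {dynamicArr.Count}");
    }
}
```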
- Introduction and Basic Concepts - Understanding singly linked lists and their significance in data structures.
- Node Structure - Explanation of the `SinglyLinkedListNode` class, which stores data and a pointer to the next node (see the sketch at the end of this section).
- Singly Linked List Operations
- Insertion - Adding elements at the beginning, end, or a specific position.
- Deletion - Removing the first, last, or a specific node.
- Traversal - Iterating through the linked list using an enumerator.
- Searching - Finding a specific node in the list.
- Memory Management in Singly Linked Lists - How nodes are dynamically allocated and deallocated.
- Application Areas of Singly Linked Lists - Practical uses in software development, such as queue implementations, undo features, and memory-efficient data storage.
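A combined sketch of the ideas above. The `SinglyLinkedListNode` name comes from the outline; the generic parameter, member names, and the surrounding list class are assumptions for illustration:

```csharp
using System;

// Node shape as described above: data plus a pointer to the next node.
class SinglyLinkedListNode<T>
{
    public T Data;
    public SinglyLinkedListNode<T> Next;
    public SinglyLinkedListNode(T data) { Data = data; }
}

class SinglyLinkedList<T>
{
    private SinglyLinkedListNode<T> head;

    // Insertion at the beginning: O(1).
    public void AddFirst(T data)
    {
        var node = new SinglyLinkedListNode<T>(data) { Next = head };
        head = node;
    }

    // Deletion of the first node: O(1). The removed node becomes
    // unreachable and is reclaimed by the garbage collector.
    public void RemoveFirst()
    {
        if (head != null) head = head.Next;
    }

    // Traversal: O(n), following Next pointers from the head.
    public void Print()
    {
        for (var cur = head; cur != null; cur = cur.Next)
            Console.Write($"{cur.Data} -> ");
        Console.WriteLine("null");
    }
}
```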
- Introduction and Basic Concepts
- Understanding the LIFO (Last-In, First-Out) principle that defines a stack.
- Real-world analogies like a stack of plates or books.
- Importance in algorithm design and memory management.
- Stack Structure
- Explanation of the internal structure of a stack.
- Common implementations using arrays and linked lists.
- Core components: `push`, `pop`, `peek`, and `isEmpty` operations.
- Stack Operations (a minimal sketch follows this list)
- Push - Adding an element to the top of the stack.
- Pop - Removing the top element from the stack.
- Peek (or Top) - Viewing the top element without removing it.
- IsEmpty - Checking if the stack is empty.
- Size - Getting the number of elements in the stack.
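A minimal array-backed stack sketch implementing the five operations just listed; the class name and growth policy are illustrative, not a prescribed implementation:

```csharp
using System;

// Array-backed stack: the top of the stack is the end of the array.
class ArrayStack<T>
{
    private T[] items = new T[4];
    private int count;

    public void Push(T item)
    {
        if (count == items.Length)
            Array.Resize(ref items, items.Length * 2); // grow when full
        items[count++] = item;
    }

    public T Pop()
    {
        if (IsEmpty()) throw new InvalidOperationException("stack is empty");
        return items[--count];
    }

    public T Peek()
    {
        if (IsEmpty()) throw new InvalidOperationException("stack is empty");
        return items[count - 1];
    }

    public bool IsEmpty() => count == 0;
    public int Size => count;
}
```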
- Memory Management in Stack
- Stack memory allocation and deallocation.
- Call stack usage during function execution.
- Stack overflow issues and limitations in memory-constrained environments.
- Application Areas of Stacks
- Function call management in programming languages (call stack).
- Expression evaluation and syntax parsing.
- Backtracking algorithms (e.g., maze solving, undo operations).
- Depth-First Search (DFS) in graph traversal.
- Introduction and Basic Concepts
- Understanding the FIFO (First-In, First-Out) principle that defines a queue.
- Real-world analogies such as queues in supermarkets or print job scheduling.
- Importance in algorithm design, scheduling, and resource management.
- Queue Structure
- Explanation of the internal structure of a queue.
- Common implementations using arrays, linked lists, and circular buffers.
- Core components: `enqueue`, `dequeue`, `peek`, and `isEmpty` operations.
- Queue Operations
- Enqueue - Adding an element to the rear (end) of the queue.
- Dequeue - Removing an element from the front of the queue.
- Peek (or Front) - Viewing the front element without removing it.
- IsEmpty - Checking if the queue is empty.
- Size - Getting the number of elements currently stored in the queue.
- Memory Management in Queue (a circular-buffer sketch follows this list)
- Dynamic vs. static memory allocation for queues.
- Circular queue implementations to optimize space utilization.
- Buffer overflow/underflow issues and preventive mechanisms.
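A sketch of a circular (ring-buffer) queue in C#, showing how modular index arithmetic reuses freed slots and how overflow and underflow are guarded; all names are illustrative:

```csharp
using System;

// Circular-buffer queue: the head and rear wrap around with modular
// arithmetic, so dequeued slots are reused instead of wasted.
class CircularQueue<T>
{
    private readonly T[] items;
    private int head, count;

    public CircularQueue(int capacity) { items = new T[capacity]; }

    public void Enqueue(T item)
    {
        if (count == items.Length)
            throw new InvalidOperationException("queue is full"); // overflow guard
        items[(head + count) % items.Length] = item; // write at the rear
        count++;
    }

    public T Dequeue()
    {
        if (IsEmpty()) throw new InvalidOperationException("queue is empty"); // underflow guard
        T item = items[head];
        head = (head + 1) % items.Length; // advance the front with wrap-around
        count--;
        return item;
    }

    public T Peek()
    {
        if (IsEmpty()) throw new InvalidOperationException("queue is empty");
        return items[head];
    }

    public bool IsEmpty() => count == 0;
    public int Size => count;
}
```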
- Application Areas of Queues
- Scheduling processes in operating systems (CPU scheduling, IO queues).
- Managing print jobs in printers (print spooling).
- Breadth-First Search (BFS) in graph traversal.
- Simulation systems (e.g., customer service systems, traffic flow models).
- Message queues in distributed systems and asynchronous programming.
- Introduction and Basic Concepts
- Importance of sorting in computer science.
- Comparison of different sorting techniques based on time complexity, space complexity, and stability.
- Bubble Sort - A simple comparison-based algorithm that repeatedly swaps adjacent elements if they are in the wrong order (sketch below).
- Best case: O(n) (with an early-exit check when a pass makes no swaps), Worst case: O(n²), Space: O(1).
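A minimal bubble sort sketch in C# with the early-exit check mentioned above; the class and method names are illustrative:

```csharp
static class BubbleSortExample
{
    public static void Sort(int[] a)
    {
        for (int i = 0; i < a.Length - 1; i++)
        {
            bool swapped = false;
            for (int j = 0; j < a.Length - 1 - i; j++)
            {
                if (a[j] > a[j + 1])
                {
                    (a[j], a[j + 1]) = (a[j + 1], a[j]); // swap the adjacent pair
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps: already sorted, O(n) best case
        }
    }
}
```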
- Insertion Sort - Builds the final sorted array one element at a time (sketch below).
- Good for small or nearly sorted datasets.
- Best case: O(n), Worst case: O(n²), Space: O(1).
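An insertion sort sketch in C#; the names are illustrative:

```csharp
static class InsertionSortExample
{
    public static void Sort(int[] a)
    {
        for (int i = 1; i < a.Length; i++)
        {
            int key = a[i];
            int j = i - 1;
            // Shift larger elements of the sorted prefix one slot right.
            while (j >= 0 && a[j] > key)
            {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key; // place the key into its position
        }
    }
}
```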
- Selection Sort - Repeatedly finds the minimum element in the unsorted portion and moves it to the end of the sorted portion (sketch below).
- Always O(n²) time complexity.
- Not stable in its common form, but simple to implement.
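A selection sort sketch in C#; note that it always performs O(n²) comparisons but at most n − 1 swaps:

```csharp
static class SelectionSortExample
{
    public static void Sort(int[] a)
    {
        for (int i = 0; i < a.Length - 1; i++)
        {
            int min = i;
            for (int j = i + 1; j < a.Length; j++)
                if (a[j] < a[min]) min = j;  // minimum of the unsorted part
            (a[i], a[min]) = (a[min], a[i]); // move it to the sorted portion
        }
    }
}
```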
- Merge Sort - A divide-and-conquer algorithm that divides the list into halves, sorts each half, and merges the results (sketch below).
- Time complexity: O(n log n) in all cases.
- Space complexity: O(n).
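A merge sort sketch in C#; the temporary buffer in the merge step is the O(n) space cost. Call it as `MergeSortExample.Sort(arr, 0, arr.Length)`; the names are illustrative:

```csharp
static class MergeSortExample
{
    // Sorts the half-open range a[lo..hi) recursively.
    public static void Sort(int[] a, int lo, int hi)
    {
        if (hi - lo < 2) return;  // 0 or 1 element: already sorted
        int mid = (lo + hi) / 2;
        Sort(a, lo, mid);         // sort the left half
        Sort(a, mid, hi);         // sort the right half

        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)
            tmp[k++] = a[i] <= a[j] ? a[i++] : a[j++]; // stable merge
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi) tmp[k++] = a[j++];
        for (int t = 0; t < tmp.Length; t++) a[lo + t] = tmp[t];
    }
}
```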
- Quick Sort - A divide-and-conquer algorithm that picks a pivot and partitions the array around it (sketch below).
- Best/Average case: O(n log n), Worst case: O(n²) (can be improved with randomized pivot).
- Space: O(log n) on average due to the recursion stack (O(n) in the worst case).
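A quick sort sketch in C# using Lomuto partitioning with the last element as pivot (one common choice; the randomized variant mentioned above would first swap a random index into `a[hi]`). Call it as `QuickSortExample.Sort(arr, 0, arr.Length - 1)`:

```csharp
static class QuickSortExample
{
    // Sorts the inclusive range a[lo..hi].
    public static void Sort(int[] a, int lo, int hi)
    {
        if (lo >= hi) return;
        int p = Partition(a, lo, hi);
        Sort(a, lo, p - 1);  // elements smaller than the pivot
        Sort(a, p + 1, hi);  // elements larger than the pivot
    }

    private static int Partition(int[] a, int lo, int hi)
    {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++)
        {
            if (a[j] < pivot)
            {
                (a[i], a[j]) = (a[j], a[i]); // move smaller items left
                i++;
            }
        }
        (a[i], a[hi]) = (a[hi], a[i]); // put the pivot in its final slot
        return i;
    }
}
```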
- Time Complexity
- Time complexity describes how the runtime of an algorithm increases with the size of the input.
- Helps evaluate the efficiency of algorithms.
- Common complexities:
- O(1) – Constant time
- O(log n) – Logarithmic time
- O(n) – Linear time
- O(n log n) – Linearithmic time
- O(n²) – Quadratic time
- O(2ⁿ) – Exponential time; O(n!) – Factorial time
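Illustrative C# snippets (assumed examples, not part of the original outline) showing what code in each of the first few complexity classes typically looks like:

```csharp
static class ComplexityExamples
{
    // O(1): a single array access, regardless of input size.
    public static int First(int[] a) => a[0];

    // O(log n): binary search halves the remaining range each step.
    public static int BinarySearch(int[] sorted, int target)
    {
        int lo = 0, hi = sorted.Length - 1;
        while (lo <= hi)
        {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1; else hi = mid - 1;
        }
        return -1;
    }

    // O(n): one pass over the input.
    public static int Sum(int[] a)
    {
        int s = 0;
        foreach (int x in a) s += x;
        return s;
    }

    // O(n²): nested loops comparing every pair of elements.
    public static int CountEqualPairs(int[] a)
    {
        int c = 0;
        for (int i = 0; i < a.Length; i++)
            for (int j = i + 1; j < a.Length; j++)
                if (a[i] == a[j]) c++;
        return c;
    }
}
```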
- Asymptotic Notations
- Used to describe the upper, lower, or tight bound behavior of an algorithm:
- Big O (O) – Worst-case upper bound. Example: O(n²) means the runtime grows at most as n².
- Big Omega (Ω) – Best-case lower bound. Example: Ω(n) means the algorithm takes at least linear time.
- Big Theta (Θ) – Tight bound (both upper and lower). Example: Θ(n log n) means the runtime grows exactly at that rate.
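For reference, the standard formal definitions behind these notations (a supplement, not stated in the original outline):

```latex
\begin{aligned}
f(n) = O(g(n))      &\iff \exists\, c > 0,\ n_0 \ge 0:\ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 \ge 0:\ f(n) \ge c \cdot g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n))
\end{aligned}
```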
- Substitution Method
- A method to solve recurrence relations by guessing the form of the solution and proving it using mathematical induction.
- Example:
- T(n) = 2T(n/2) + n
- Guess: T(n) = O(n log n)
- Prove the guess using induction.
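A condensed version of the inductive step for this example (the standard textbook argument, with the constant bookkeeping simplified). Assume T(m) ≤ c·m·log m holds for all m < n; then:

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + n
      \;\le\; 2\,c\,\frac{n}{2}\log\frac{n}{2} + n \\
     &= c\,n\log n - c\,n\log 2 + n
      \;\le\; c\,n\log n \quad \text{for } c \ge 1/\log 2 ,
\end{aligned}
```

which confirms the guess T(n) = O(n log n).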
- Iteration Method
- Solves recurrence relations by expanding them step by step until a pattern is observed.
- Example:
- T(n) = T(n/2) + n
- T(n/2) = T(n/4) + n/2
- …
- Total = n + n/2 + n/4 + … = O(n)
- Master Theorem
- Provides a direct way to analyze the time complexity of divide-and-conquer algorithms.
- Applies to recurrences of the form: T(n) = aT(n/b) + f(n)
- Cases:
- If f(n) = O(n^(log_b(a) − ε)) for some ε > 0 → T(n) = Θ(n^log_b(a))
- If f(n) = Θ(n^log_b(a)) → T(n) = Θ(n^log_b(a) · log n)
- If f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0 and the regularity condition holds → T(n) = Θ(f(n))
- Example:
- Merge Sort: T(n) = 2T(n/2) + n → Case 2 → Θ(n log n)
- Introduction and Basic Concepts
- A tree is a hierarchical data structure consisting of nodes.
- The top node is called the root; nodes without children are called leaves.
- Each node (except the root) has exactly one parent.
- Trees are used in many applications like file systems, databases, and parsing expressions.
- Terminology:
- Parent, Child, Sibling
- Subtree: A tree formed by a node and its descendants.
- Depth: Number of edges from root to node.
- Height: Number of edges on the longest path from node to a leaf.
- Binary Tree
- A binary tree is a tree where each node has at most two children: left and right.
- Types of binary trees:
- Full Binary Tree: Every node has 0 or 2 children.
- Complete Binary Tree: All levels are filled except possibly the last, which is filled from left to right.
- Perfect Binary Tree: All internal nodes have two children and all leaves are at the same level.
- Common traversals:
- Inorder (Left, Root, Right)
- Preorder (Root, Left, Right)
- Postorder (Left, Right, Root)
- Level-order (Breadth-First Search)
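A C# sketch of a binary tree node and the three recursive traversals (level-order is omitted here; it uses a queue, as covered in the queue section). The names are illustrative:

```csharp
using System;

class BinaryTreeNode<T>
{
    public T Value;
    public BinaryTreeNode<T> Left, Right;
    public BinaryTreeNode(T value) { Value = value; }
}

static class Traversals
{
    // Inorder: Left, Root, Right. On a BST this visits values in sorted order.
    public static void Inorder<T>(BinaryTreeNode<T> node)
    {
        if (node == null) return;
        Inorder(node.Left);
        Console.Write($"{node.Value} ");
        Inorder(node.Right);
    }

    // Preorder: Root, Left, Right.
    public static void Preorder<T>(BinaryTreeNode<T> node)
    {
        if (node == null) return;
        Console.Write($"{node.Value} ");
        Preorder(node.Left);
        Preorder(node.Right);
    }

    // Postorder: Left, Right, Root.
    public static void Postorder<T>(BinaryTreeNode<T> node)
    {
        if (node == null) return;
        Postorder(node.Left);
        Postorder(node.Right);
        Console.Write($"{node.Value} ");
    }
}
```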
- Binary Search Tree (BST)
- A Binary Search Tree is a binary tree with the following property:
- For any node `n`: all values in the left subtree are less than `n.value`, and all values in the right subtree are greater than `n.value`.
- Operations:
- Search: O(h) time complexity (h = tree height)
- Insert: O(h)
- Delete: O(h)
- Best case: Balanced BST → O(log n)
- Worst case: Unbalanced BST (like a linked list) → O(n)
- BSTs are used in:
- Dynamic sets
- Searching and sorting
- Dictionary implementations
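A minimal BST sketch in C# covering insert and search; both run in O(h) as noted above. Delete is omitted for brevity, and all names are assumptions for illustration:

```csharp
class BstNode
{
    public int Value;
    public BstNode Left, Right;
    public BstNode(int value) { Value = value; }
}

static class BstExample
{
    // Insert: walk down, comparing against each node, O(h).
    public static BstNode Insert(BstNode root, int value)
    {
        if (root == null) return new BstNode(value);
        if (value < root.Value) root.Left = Insert(root.Left, value);
        else if (value > root.Value) root.Right = Insert(root.Right, value);
        return root; // duplicates are ignored
    }

    // Search: the BST property prunes one subtree per step, O(h).
    public static bool Contains(BstNode root, int value)
    {
        while (root != null)
        {
            if (value == root.Value) return true;
            root = value < root.Value ? root.Left : root.Right;
        }
        return false;
    }
}
```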