Commit 298531f

Merge pull request #534 from Jeet009/jeet/normalizedJSON
Normalized JSON
2 parents f0dac06 + 1b90e40 commit 298531f

File tree

286 files changed (+7230 additions, -555 deletions)

.DS_Store

2 KB
Binary file not shown.

build/1.json

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
{
  "id": "1",
  "title": "Matrix-Vector Dot Product",
  "difficulty": "easy",
  "category": "Linear Algebra",
  "video": "https://youtu.be/DNoLs5tTGAw?si=vpkPobZMA8YY10WY",
  "likes": "0",
  "dislikes": "0",
  "contributor": [
    {
      "profile_link": "https://github.com/moe18",
      "name": "Moe Chabot"
    }
  ],
  "tinygrad_difficulty": "easy",
  "pytorch_difficulty": "easy",
  "description": "Write a Python function that computes the dot product of a matrix and a vector. The function should return a list representing the resulting vector if the operation is valid, or -1 if the matrix and vector dimensions are incompatible. A matrix (a list of lists) can be dotted with a vector (a list) only if the number of columns in the matrix equals the length of the vector. For example, an n x m matrix requires a vector of length m.",
  "learn_section": "\n## Matrix-Vector Dot Product\n\nConsider a matrix $A$ and a vector $v$:\n\n**Matrix $A$ (n x m):**\n$$\nA = \\begin{pmatrix}\na_{11} & a_{12} & \\cdots & a_{1m} \\\\\na_{21} & a_{22} & \\cdots & a_{2m} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\na_{n1} & a_{n2} & \\cdots & a_{nm}\n\\end{pmatrix}\n$$\n\n**Vector $v$ (length m):**\n$$\nv = \\begin{pmatrix}\nv_1 \\\\\nv_2 \\\\\n\\vdots \\\\\nv_m\n\\end{pmatrix}\n$$\n\nThe dot product $A \\cdot v$ produces a new vector of length $n$:\n$$\nA \\cdot v = \\begin{pmatrix}\na_{11}v_1 + a_{12}v_2 + \\cdots + a_{1m}v_m \\\\\na_{21}v_1 + a_{22}v_2 + \\cdots + a_{2m}v_m \\\\\n\\vdots \\\\\na_{n1}v_1 + a_{n2}v_2 + \\cdots + a_{nm}v_m\n\\end{pmatrix}\n$$\n\n### Key Requirement:\nThe number of columns in the matrix must equal the length of the vector ($m$ in the layout above). If not, the operation is undefined, and the function should return -1.",
  "starter_code": "def matrix_dot_vector(a: list[list[int|float]], b: list[int|float]) -> list[int|float]:\n\t# Return a list where each element is the dot product of a row of 'a' with 'b'.\n\t# If the number of columns in 'a' does not match the length of 'b', return -1.\n\tpass",
  "solution": "def matrix_dot_vector(a: list[list[int|float]], b: list[int|float]) -> list[int|float]:\n    if len(a[0]) != len(b):\n        return -1\n    result = []\n    for row in a:\n        total = 0\n        for i in range(len(row)):\n            total += row[i] * b[i]\n        result.append(total)\n    return result",
  "example": {
    "input": "a = [[1, 2], [2, 4]], b = [1, 2]",
    "output": "[5, 10]",
    "reasoning": "Row 1: (1 * 1) + (2 * 2) = 1 + 4 = 5; Row 2: (2 * 1) + (4 * 2) = 2 + 8 = 10"
  },
  "test_cases": [
    {
      "test": "print(matrix_dot_vector([[1, 2, 3], [2, 4, 5], [6, 8, 9]], [1, 2, 3]))",
      "expected_output": "[14, 25, 49]"
    },
    {
      "test": "print(matrix_dot_vector([[1, 2], [2, 4], [6, 8], [12, 4]], [1, 2, 3]))",
      "expected_output": "-1"
    },
    {
      "test": "print(matrix_dot_vector([[1.5, 2.5], [3.0, 4.0]], [2, 1]))",
      "expected_output": "[5.5, 10.0]"
    }
  ],
  "tinygrad_starter_code": "from tinygrad.tensor import Tensor\n\ndef matrix_dot_vector_tg(a, b) -> Tensor:\n    \"\"\"\n    Compute the product of matrix `a` and vector `b` using tinygrad.\n    Inputs can be Python lists, NumPy arrays, or tinygrad Tensors.\n    Returns a 1-D Tensor of length n (one entry per row of `a`),\n    or Tensor(-1) if the dimensions are incompatible.\n    \"\"\"\n    # Dimension mismatch check\n    if len(a[0]) != len(b):\n        return Tensor(-1)\n    # Convert to Tensor\n    a_t = Tensor(a)\n    b_t = Tensor(b)\n    # Your implementation here\n    pass",
  "tinygrad_solution": "from tinygrad.tensor import Tensor\n\ndef matrix_dot_vector_tg(a, b) -> Tensor:\n    \"\"\"\n    Compute the product of matrix `a` and vector `b` using tinygrad.\n    Inputs can be Python lists, NumPy arrays, or tinygrad Tensors.\n    Returns a 1-D Tensor of length n (one entry per row of `a`),\n    or Tensor(-1) if the dimensions are incompatible.\n    \"\"\"\n    if len(a[0]) != len(b):\n        return Tensor(-1)\n    a_t = Tensor(a)\n    b_t = Tensor(b)\n    return a_t.matmul(b_t)",
  "tinygrad_test_cases": [
    {
      "test": "from tinygrad.tensor import Tensor\nres = matrix_dot_vector_tg(\n    [[1,2,3],[2,4,5],[6,8,9]],\n    [1,2,3]\n)\nprint(res.numpy().tolist())",
      "expected_output": "[14.0, 25.0, 49.0]"
    },
    {
      "test": "from tinygrad.tensor import Tensor\nres = matrix_dot_vector_tg(\n    [[1,2,3],[2,4,5]],\n    [1,2]\n)\nprint(res.numpy().tolist())",
      "expected_output": "-1"
    }
  ],
  "pytorch_starter_code": "import torch\n\ndef matrix_dot_vector(a, b) -> torch.Tensor:\n    \"\"\"\n    Compute the product of matrix `a` and vector `b` using PyTorch.\n    Inputs can be Python lists, NumPy arrays, or torch Tensors.\n    Returns a 1-D tensor of length n (one entry per row of `a`),\n    or tensor(-1) if the dimensions are incompatible.\n    \"\"\"\n    a_t = torch.as_tensor(a, dtype=torch.float)\n    b_t = torch.as_tensor(b, dtype=torch.float)\n    # Dimension mismatch check\n    if a_t.size(1) != b_t.size(0):\n        return torch.tensor(-1)\n    # Your implementation here\n    pass",
  "pytorch_solution": "import torch\n\ndef matrix_dot_vector(a, b) -> torch.Tensor:\n    \"\"\"\n    Compute the product of matrix `a` and vector `b` using PyTorch.\n    Inputs can be Python lists, NumPy arrays, or torch Tensors.\n    Returns a 1-D tensor of length n (one entry per row of `a`),\n    or tensor(-1) if the dimensions are incompatible.\n    \"\"\"\n    a_t = torch.as_tensor(a, dtype=torch.float)\n    b_t = torch.as_tensor(b, dtype=torch.float)\n    if a_t.size(1) != b_t.size(0):\n        return torch.tensor(-1)\n    return torch.matmul(a_t, b_t)",
  "pytorch_test_cases": [
    {
      "test": "import torch\nres = matrix_dot_vector(\n    torch.tensor([[1,2,3],[2,4,5],[6,8,9]], dtype=torch.float),\n    torch.tensor([1,2,3], dtype=torch.float)\n)\nprint(res.numpy().tolist())",
      "expected_output": "[14.0, 25.0, 49.0]"
    },
    {
      "test": "import torch\nres = matrix_dot_vector(\n    torch.tensor([[1,2,3],[2,4,5]], dtype=torch.float),\n    torch.tensor([1,2], dtype=torch.float)\n)\nprint(res.numpy().tolist())",
      "expected_output": "-1"
    }
  ]
}
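
A quick way to read the "solution" field above: each output entry is the dot product of one row of 'a' with 'b', and a single up-front length check handles the incompatible case. The following standalone sketch simply re-runs that reference solution against the example and test cases listed in this file (expected outputs shown as comments):

def matrix_dot_vector(a, b):
    # One output entry per row of `a`; -1 when the column count of `a`
    # does not match the length of `b`, as the description requires.
    if len(a[0]) != len(b):
        return -1
    result = []
    for row in a:
        total = 0
        for i in range(len(row)):
            total += row[i] * b[i]
        result.append(total)
    return result

print(matrix_dot_vector([[1, 2], [2, 4]], [1, 2]))                      # [5, 10]
print(matrix_dot_vector([[1, 2, 3], [2, 4, 5], [6, 8, 9]], [1, 2, 3]))  # [14, 25, 49]
print(matrix_dot_vector([[1, 2], [2, 4], [6, 8], [12, 4]], [1, 2, 3]))  # -1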

build/10.json

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
{
  "id": "10",
  "title": "Calculate Covariance Matrix",
  "difficulty": "easy",
  "category": "Statistics",
  "video": "https://youtu.be/Mmuz3a4idg4",
  "likes": "0",
  "dislikes": "0",
  "contributor": [
    {
      "profile_link": "https://github.com/moe18",
      "name": "Moe Chabot"
    },
    {
      "profile_link": "https://github.com/Selbl",
      "name": "Selbl"
    }
  ],
  "tinygrad_difficulty": "medium",
  "pytorch_difficulty": "easy",
  "description": "Write a Python function to calculate the covariance matrix for a given set of vectors. The function should take a list of lists, where each inner list represents a feature with its observations, and return a covariance matrix as a list of lists. Additionally, provide test cases to verify the correctness of your implementation.",
  "learn_section": "## Understanding Covariance Matrix\n\nThe covariance matrix is a fundamental concept in statistics and machine learning, used to understand the relationship between multiple variables (features) in a dataset. It quantifies the degree to which two variables change together.\n\n### Key Concepts\n\n- **Covariance**: Measures the directional relationship between two random variables. A positive covariance indicates that the variables increase together, while a negative covariance indicates that one variable increases as the other decreases.\n- **Covariance Matrix**: For a dataset with $n$ features, the covariance matrix is an $n \\times n$ matrix where each element $(i, j)$ represents the covariance between the $i^{th}$ and $j^{th}$ features.\n\n### Covariance Formula\n\nThe covariance between two variables $X$ and $Y$ is calculated as:\n\n$$\n\\text{cov}(X, Y) = \\frac{\\sum_{k=1}^{m} (X_k - \\bar{X})(Y_k - \\bar{Y})}{m - 1}\n$$\n\nWhere:\n\n- $X_k$ and $Y_k$ are the individual observations of variables $X$ and $Y$.\n- $\\bar{X}$ and $\\bar{Y}$ are the means of $X$ and $Y$.\n- $m$ is the number of observations.\n\n### Constructing the Covariance Matrix\n\nGiven a dataset with $n$ features, the covariance matrix is constructed as follows:\n\n1. **Calculate the Mean**: Compute the mean of each feature.\n2. **Compute Covariance**: For each pair of features, calculate the covariance using the formula above.\n3. **Populate the Matrix**: Place the computed covariance values in the corresponding positions in the matrix. The diagonal elements represent the variance of each feature.\n\n$$\n\\text{Covariance Matrix} =\n\\begin{bmatrix}\n\\text{cov}(X_1, X_1) & \\text{cov}(X_1, X_2) & \\cdots & \\text{cov}(X_1, X_n) \\\\\n\\text{cov}(X_2, X_1) & \\text{cov}(X_2, X_2) & \\cdots & \\text{cov}(X_2, X_n) \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\text{cov}(X_n, X_1) & \\text{cov}(X_n, X_2) & \\cdots & \\text{cov}(X_n, X_n) \\\\\n\\end{bmatrix}\n$$\n\n### Example Calculation\n\nConsider the following dataset with two features:\n\n$$\n\\begin{align*}\n\\text{Feature 1} &: [1, 2, 3] \\\\\n\\text{Feature 2} &: [4, 5, 6]\n\\end{align*}\n$$\n\n1. **Calculate Means**:\n   $$\n   \\bar{X}_1 = \\frac{1 + 2 + 3}{3} = 2.0 \\\\\n   \\bar{X}_2 = \\frac{4 + 5 + 6}{3} = 5.0\n   $$\n\n2. **Compute Covariances**:\n   $$\n   \\text{cov}(X_1, X_1) = \\frac{(1-2)^2 + (2-2)^2 + (3-2)^2}{3-1} = 1.0 \\\\\n   \\text{cov}(X_1, X_2) = \\frac{(1-2)(4-5) + (2-2)(5-5) + (3-2)(6-5)}{3-1} = 1.0 \\\\\n   \\text{cov}(X_2, X_2) = \\frac{(4-5)^2 + (5-5)^2 + (6-5)^2}{3-1} = 1.0\n   $$\n\n3. **Covariance Matrix**:\n   $$\n   \\begin{bmatrix}\n   1.0 & 1.0 \\\\\n   1.0 & 1.0 \n   \\end{bmatrix}\n   $$\n\n### Applications\n\nCovariance matrices are widely used in various fields, including:\n\n- **Principal Component Analysis (PCA)**: Reducing the dimensionality of datasets while preserving variance.\n- **Portfolio Optimization**: Understanding the variance and covariance between different financial assets.\n- **Multivariate Statistics**: Analyzing the relationships between multiple variables simultaneously.\n\nUnderstanding the covariance matrix is crucial for interpreting the relationships in multivariate data and for performing advanced statistical analyses.",
  "starter_code": "def calculate_covariance_matrix(vectors: list[list[float]]) -> list[list[float]]:\n\t# Your code here\n\treturn []",
  "solution": "import numpy as np\n\ndef calculate_covariance_matrix(vectors: list[list[float]]) -> list[list[float]]:\n    n_features = len(vectors)\n    n_observations = len(vectors[0])\n    covariance_matrix = [[0 for _ in range(n_features)] for _ in range(n_features)]\n\n    means = [sum(feature) / n_observations for feature in vectors]\n\n    for i in range(n_features):\n        for j in range(i, n_features):\n            covariance = sum((vectors[i][k] - means[i]) * (vectors[j][k] - means[j]) for k in range(n_observations)) / (n_observations - 1)\n            covariance_matrix[i][j] = covariance_matrix[j][i] = covariance\n\n    return covariance_matrix",
  "example": {
    "input": "[[1, 2, 3], [4, 5, 6]]",
    "output": "[[1.0, 1.0], [1.0, 1.0]]",
    "reasoning": "The covariance between the two features is calculated based on their deviations from the mean. For the given vectors, both covariances are 1.0, resulting in a symmetric covariance matrix."
  },
  "test_cases": [
    {
      "test": "print(calculate_covariance_matrix([[1, 2, 3], [4, 5, 6]]))",
      "expected_output": "[[1.0, 1.0], [1.0, 1.0]]"
    },
    {
      "test": "print(calculate_covariance_matrix([[1, 5, 6], [2, 3, 4], [7, 8, 9]]))",
      "expected_output": "[[7.0, 2.5, 2.5], [2.5, 1.0, 1.0], [2.5, 1.0, 1.0]]"
    }
  ],
  "tinygrad_starter_code": "from tinygrad.tensor import Tensor\n\ndef calculate_covariance_matrix_tg(vectors) -> Tensor:\n    \"\"\"\n    Calculate the covariance matrix for given feature vectors using tinygrad.\n    Input: 2D array-like of shape (n_features, n_observations).\n    Returns a Tensor of shape (n_features, n_features).\n    \"\"\"\n    v_t = Tensor(vectors).float()\n    # Your implementation here\n    pass",
  "tinygrad_solution": "from tinygrad.tensor import Tensor\n\ndef calculate_covariance_matrix_tg(vectors) -> Tensor:\n    \"\"\"\n    Calculate the covariance matrix for given feature vectors using tinygrad.\n    Input: 2D array-like of shape (n_features, n_observations).\n    Returns a Tensor of shape (n_features, n_features).\n    \"\"\"\n    v_t = Tensor(vectors).float()\n    n_features, n_obs = v_t.shape\n    # compute feature means\n    means = v_t.sum(axis=1).reshape(n_features, 1) / n_obs\n    centered = v_t - means\n    cov = centered.matmul(centered.transpose(0, 1)) / (n_obs - 1)\n    return cov",
  "tinygrad_test_cases": [
    {
      "test": "from tinygrad.tensor import Tensor\nres = calculate_covariance_matrix_tg([[1.0,2.0,3.0],[4.0,5.0,6.0]])\nprint(res.numpy().tolist())",
      "expected_output": "[[1.0, 1.0], [1.0, 1.0]]"
    },
    {
      "test": "from tinygrad.tensor import Tensor\nres = calculate_covariance_matrix_tg([[1.0,2.0,3.0],[3.0,3.0,3.0]])\nprint(res.numpy().tolist())",
      "expected_output": "[[1.0, 0.0], [0.0, 0.0]]"
    }
  ],
  "pytorch_starter_code": "import torch\n\ndef calculate_covariance_matrix(vectors) -> torch.Tensor:\n    \"\"\"\n    Calculate the covariance matrix for given feature vectors using PyTorch.\n    Input: 2D array-like of shape (n_features, n_observations).\n    Returns a tensor of shape (n_features, n_features).\n    \"\"\"\n    v_t = torch.as_tensor(vectors, dtype=torch.float)\n    # Your implementation here\n    pass",
  "pytorch_solution": "import torch\n\ndef calculate_covariance_matrix(vectors) -> torch.Tensor:\n    \"\"\"\n    Calculate the covariance matrix for given feature vectors using PyTorch.\n    Input: 2D array-like of shape (n_features, n_observations).\n    Returns a tensor of shape (n_features, n_features).\n    \"\"\"\n    v_t = torch.as_tensor(vectors, dtype=torch.float)\n    # use built-in covariance\n    return torch.cov(v_t)",
  "pytorch_test_cases": [
    {
      "test": "import torch\nv = [[1.0,2.0,3.0],[4.0,5.0,6.0]]\ncov = calculate_covariance_matrix(v)\nprint(cov.detach().numpy().tolist())",
      "expected_output": "[[1.0, 1.0], [1.0, 1.0]]"
    },
    {
      "test": "import torch\nv = [[1.0,2.0,3.0],[3.0,3.0,3.0]]\ncov = calculate_covariance_matrix(v)\nprint(cov.detach().numpy().tolist())",
      "expected_output": "[[1.0, 0.0], [0.0, 0.0]]"
    }
  ]
}
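
The learn section above defines each covariance entry with an (m - 1) denominator; the pure-Python solution fills the matrix pairwise, while the PyTorch variant relies on torch.cov. As an independent cross-check, NumPy's np.cov, which also treats each row as a feature and divides by m - 1 by default, reproduces the expected outputs of both listed test cases:

import numpy as np

# np.cov treats each row as one feature (rowvar=True by default) and uses
# the same (m - 1) denominator as the formula in the learn section.
print(np.cov([[1, 2, 3], [4, 5, 6]]).tolist())
# [[1.0, 1.0], [1.0, 1.0]]
print(np.cov([[1, 5, 6], [2, 3, 4], [7, 8, 9]]).tolist())
# [[7.0, 2.5, 2.5], [2.5, 1.0, 1.0], [2.5, 1.0, 1.0]]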

build/100.json

Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
{
  "id": "100",
  "title": "Implement the Softsign Activation Function",
  "difficulty": "easy",
  "category": "Deep Learning",
  "video": "",
  "likes": "0",
  "dislikes": "0",
  "contributor": [
    {
      "profile_link": "https://github.com/Haleshot",
      "name": "Haleshot"
    }
  ],
  "marimo_link": "https://open-deep-ml.github.io/DML-OpenProblem/problem-softsign",
  "description": "Implement the Softsign activation function, a smooth activation function used in neural networks. Your task is to compute the Softsign value for a given input, ensuring the output is bounded between -1 and 1.",
  "learn_section": "## Understanding the Softsign Activation Function\n\nThe Softsign activation function is a smooth, non-linear activation function used in neural networks. It’s similar to the hyperbolic tangent (tanh) function but with different properties, particularly in its tails which approach their limits more slowly.\n\n### Mathematical Definition\n\nThe Softsign function is mathematically defined as:\n\n$$\nSoftsign(x) = \\frac{x}{1 + |x|}\n$$\n\nWhere:\n- $x$ is the input to the function\n- $|x|$ represents the absolute value of $x$\n\n### Characteristics\n\n- **Output Range:** The output is bounded between -1 and 1, approaching these values asymptotically as $x$ approaches $\\pm \\infty$.\n- **Shape:** The function has an S-shaped curve, similar to tanh but with a smoother approach to its asymptotes.\n- **Gradient:** The gradient is smoother and more gradual compared to tanh, which can help prevent vanishing gradient problems in deep networks.\n- **Symmetry:** The function is symmetric around the origin $(0,0)$.\n\n### Key Properties\n\n- **Bounded Output:** Unlike ReLU, Softsign naturally bounds its output between -1 and 1.\n- **Smoothness:** The function is continuous and differentiable everywhere.\n- **No Saturation:** The gradients approach zero more slowly than in tanh or sigmoid functions.\n- **Zero-Centered:** The function crosses through the origin, making it naturally zero-centered.\n\nThis activation function can be particularly useful in scenarios where you need bounded outputs with more gradual saturation compared to tanh or sigmoid functions.",
  "starter_code": "def softsign(x: float) -> float:\n\t\"\"\"\n\tImplements the Softsign activation function.\n\n\tArgs:\n\t\tx (float): Input value\n\n\tReturns:\n\t\tfloat: The Softsign of the input, rounded to 4 decimal places\n\t\"\"\"\n\t# Your code here\n\tpass",
  "solution": "def softsign(x: float) -> float:\n    \"\"\"\n    Implements the Softsign activation function.\n\n    Args:\n        x (float): Input value\n\n    Returns:\n        float: The Softsign of the input, calculated as x/(1 + |x|)\n    \"\"\"\n    return round(x / (1 + abs(x)), 4)",
  "example": {
    "input": "softsign(1)",
    "output": "0.5",
    "reasoning": "For x = 1, the Softsign activation is $\\frac{x}{1 + |x|} = \\frac{1}{1 + 1} = 0.5$."
  },
  "test_cases": [
    {
      "test": "print(softsign(0))",
      "expected_output": "0.0"
    },
    {
      "test": "print(softsign(1))",
      "expected_output": "0.5"
    },
    {
      "test": "print(softsign(-1))",
      "expected_output": "-0.5"
    },
    {
      "test": "print(softsign(100))",
      "expected_output": "0.9901"
    },
    {
      "test": "print(softsign(-100))",
      "expected_output": "-0.9901"
    }
  ]
}
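
Since Softsign(x) = x / (1 + |x|), the reference solution is a one-liner plus the 4-decimal rounding the test cases expect. A minimal standalone check against the inputs listed above:

def softsign(x: float) -> float:
    # Softsign is bounded in (-1, 1) and symmetric about the origin;
    # results are rounded to 4 decimal places to match the expected outputs.
    return round(x / (1 + abs(x)), 4)

for x in (0, 1, -1, 100, -100):
    print(softsign(x))  # 0.0, 0.5, -0.5, 0.9901, -0.9901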
