diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index d5c6825..e6e0739 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -61,7 +61,7 @@ jobs: matrix: python: - "3.7" # oldest Python supported by PSF - - "3.11" # newest Python that is stable + - "3.13" # newest Python that is stable platform: - ubuntu-latest - macos-latest @@ -110,7 +110,7 @@ jobs: steps: - uses: actions/checkout@v3 - uses: actions/setup-python@v4 - with: {python-version: "3.11"} + with: {python-version: "3.12"} - name: Retrieve pre-built distribution files uses: actions/download-artifact@v3 with: {name: python-distribution-files, path: dist/} diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index f331c47..8a753d5 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -65,3 +65,9 @@ repos: # rev: v2.2.5 # hooks: # - id: codespell + +- repo: https://github.com/kynan/nbstripout + rev: 0.8.1 + hooks: + - id: nbstripout + args: [--extra-keys=metadata.kernelspec metadata.language_info] diff --git a/LICENSE.txt b/LICENSE.txt index a1a87e4..f31a526 100644 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -1,6 +1,6 @@ The MIT License (MIT) -Copyright (c) 2023 Dennis Reinsch +Copyright (c) 2023-2025 Thomas Hermann and Dennis Reinsch Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/notebooks/pyamapping-examples.ipynb b/notebooks/pyamapping-examples.ipynb new file mode 100644 index 0000000..c27d591 --- /dev/null +++ b/notebooks/pyamapping-examples.ipynb @@ -0,0 +1,2208 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "0", + "metadata": {}, + "source": [ + "# pyamapping - mapping functions for audio computing (and beyond)\n", + "\n", + "by Thomas Hermann and Dennis Reinsch, 2023++\n", + "\n", + "## Introduction \n", + "\n", + "### Background / History\n", + "\n", + "- pyamapping bundles frequently used mapping functions for audio computing\n", + "- the earliest functions were reimplementations of ampdb, dbamp, midicps, cpsmidi, linlin, coded in analogy to SuperCollider3 functions to be used within the sc3nb package. (coded by TH)\n", + "- later, when I started pya, the same functions were needed, yet importing sc3nb would have caused unwanted dependencies, so we created pyamapping as a very lean package that both sc3nb and pya depend on (created by DR)\n", + "- now in 2025 pyamapping grows strongly (additions by TH) \n", + " - firstly by adding many mapping functions available in sc3 which were beforehand not copied\n", + " - secondly, by introducing ChainableArray, a class that wraps numpy.ndarrays, allowing to daisy chain operations on numpy arrays, similar to how we offer it for pya.\n", + "- This notebook introduces the available mapping functions with examples.\n", + "\n", + "### Overview\n", + "\n", + "**pyamapping** offers a set of mapping functions often used \n", + "\n", + "- in the context of sound and computer music \n", + "- in the context of auditory display and sonification (e.g. parameter mapping sonifications)\n", + "- ...\n", + "\n", + "A source of inspiration is librosa and Supercollider3. This package reimplements them and adds mappings used in the interactive sonification stack (cf. 
), including the following packages that all make use of pyamapping:\n", + "\n", + "- **sc3nb** - sc3 interface for Python and Jupyter notebooks\n", + "- **pya** - the python Audio Coding Package\n", + "- **mesonic** - a middleware for sonification and auditory display \n", + "- **sonecules** - a high-level class library for sonification and auditory display\n", + "\n", + "**Chainable numpy arrays**\n", + "\n", + "Method chaining offers concise syntax and proved helpful in pya.\n", + "Numpy offers method chaining only for few functions.\n", + "This package extends method chaining \n", + "\n", + "- by inheriting the class `ChainableArray` from `numpy.ndarray`\n", + "- by adding wrappers to enable a method chain syntax for most `ufuncs`\n", + "- by providing a general `map()` method for direct vectorized mapping\n", + "- by providing helper functions to vectorize Python functions into methods\n", + "\n", + "Furthermore it offers some convenience functions, e.g.\n", + "\n", + "- to plot arrays (optionally as signal with given sample rate)\n", + "\n", + "We hope that pyamapping will help to write signal transformations and manipulations in a more concise, compact and readible manner." + ] + }, + { + "cell_type": "markdown", + "id": "1", + "metadata": {}, + "source": [ + "**Imports and Headers**\n", + "\n", + "- as pyamapping is a long name, importing as `pam` is a suggested abbreviation \n", + "- matplotlib and pprint imports are merely for showing example output" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "import matplotlib as mpl\n", + "import matplotlib.pyplot as plt\n", + "from pprint import pprint\n", + "import pyamapping as pam\n", + "from pyamapping import chain\n", + "\n", + "mpl.rcParams['figure.figsize'] = (9, 2.5)" + ] + }, + { + "cell_type": "markdown", + "id": "3", + "metadata": {}, + "source": [ + "## Available pyamapping functions - Overview\n", + "\n", + "- in import of pyamapping, wrappers are automatically created for numpy ufuncs.\n", + " - later versions may have them created verbatim to enable code completion\n", + "- the following code simply lists those numpy functions plus special dedicated/ new pyamapping functions that do not have their origin in numpy" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4", + "metadata": {}, + "outputs": [], + "source": [ + "from pyamapping import _list_numpy_ufuncs, pyamapping_functions\n", + "\n", + "# print (i) a compact list of all unary and binary numpy functions \n", + "# and (ii) all pyamapping functions\n", + "u_lists = [[], []]\n", + "\n", + "for ufunc in _list_numpy_ufuncs():\n", + " u_lists[ufunc.nin - 1].append(ufunc.__name__)\n", + "\n", + "# compact list numpy functions\n", + "for i, li in enumerate(u_lists):\n", + " print(f\"\\n=== numpy functions with {i+1} argument ===\")\n", + " pprint(li, compact=True, width=80)\n", + "\n", + "# compact list of pyamapping functions\n", + "li = [el.__name__ for el in pyamapping_functions]\n", + "print(f\"\\n=== pyamapping functions: ===\")\n", + "pprint(li, compact=True, width=80)" + ] + }, + { + "cell_type": "markdown", + "id": "5", + "metadata": {}, + "source": [ + "## pyamapping - Demonstration and Examples" + ] + }, + { + "cell_type": "markdown", + "id": "6", + "metadata": {}, + "source": [ + "### ChainableArray - Basics" + ] + }, + { + "cell_type": "markdown", + "id": "7", + "metadata": {}, + "source": [ + "Any numpy array can be turned into a 
chainable array by using the `ChainableArray` class defined in `pyamapping`.\n", + "- the chain() function provides a shortcut, making this construction shorter.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8", + "metadata": {}, + "outputs": [], + "source": [ + "from pyamapping import ChainableArray, chain\n", + "\n", + "# some data\n", + "data = np.random.random(100)\n", + "\n", + "# create ChainableArray\n", + "dch = ChainableArray(data)\n", + "\n", + "# the same can be obtained shorter by\n", + "dch = chain(data)" + ] + }, + { + "cell_type": "markdown", + "id": "9", + "metadata": {}, + "source": [ + "ChainableArray offer the following methods:\n", + "\n", + "- `to_array` - back to numpy ndarray\n", + "- `to_asig` - convert into pya.Asig\n", + "- `plot` - plot signal(s) as time series\n", + " - optional kwarg 'xs' allows to pass x values for data.\n", + "- `mapvec` - map function on self by using numpy.vectorize\n", + "- `map` - apply function directly to the array itself\n", + "\n", + "Here is a quick demonstration:\n", + "\n", + "- let us\n", + " - map $x \\to (5x)^2 + 0.1$, \n", + " - plot as signal assuming sampling rate 100 Hz, \n", + " - convert into decibel, \n", + " - turn that into an audio signal (i.e. pya.Asig) \n", + " - and plot it in the same figure created above.\n", + " - finally transform the ChainableArray back to a regular numpy.ndarray.\n", + "\n", + "Using pyamapping, the code is both shorter and more concise than the above description:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "10", + "metadata": {}, + "outputs": [], + "source": [ + "dch2 = dch.map(lambda x: (5*x)**2+0.1).plot(sr=100, color=\"r\").mapvec(pam.amp_to_db)\n", + "a1 = dch2.to_asig(sr=100).plot(color=\"c\", lw=0.8)\n", + "\n", + "# conversion back to numpy array is rarely needed but if...\n", + "dd = dch2.to_array()\n", + "type(dd)" + ] + }, + { + "cell_type": "markdown", + "id": "11", + "metadata": {}, + "source": [ + "ChainableArray is a recent addition to pyamapping, yet introduced here as it makes demonstrations of mapping functions extremely readable..." + ] + }, + { + "cell_type": "markdown", + "id": "12", + "metadata": {}, + "source": [ + "## Tour of Mapping Functions: Functions from SuperCollider3 and librosa\n", + "\n", + "- The following function have been implemented in analogy to their versions in Supercollider3 resp. librosa.\n", + "- Please note that some defaults may be different, e.g. functions extrapolate by default\n", + "- For the following demonstrations, we use `xs` to refer to the input array and `ys` for the outputs." + ] + }, + { + "cell_type": "markdown", + "id": "13", + "metadata": {}, + "source": [ + "### `linlin`\n", + "\n", + "- linlin is implemented in analogy to the SC3 linlin\n", + "- `linlin(v, x1, x2, y1, y2)` maps values v (scalar or arraylike) affine linearly so that [x1, x2] is mapped to [y1, y2]:\n", + " $$ z = y_1 + \\frac{v - x_1}{x_2 - x_1} \\cdot (y_2 - y_1) $$\n", + "- note that this linlin function extrapolates by default\n", + "- clipping can be controlled via the clip argument (values None (default), \"min\", \"max\", or anything else for \"minmax\")\n", + "- A frequently used invocation is with $x_1 < x_2$, i.e. 
thinking of them as a range $[x_1, x_2]$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "14", + "metadata": {}, + "outputs": [], + "source": [ + "pam.linlin(7, 0, 10, 100, 300)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "15", + "metadata": {}, + "outputs": [], + "source": [ + "pam.linlin(7, 0, 5, 100, 300, \"max\") # clip result to maximum input range" + ] + }, + { + "cell_type": "markdown", + "id": "16", + "metadata": {}, + "source": [ + "- ChainableArray.linlin uses self as input.\n", + "- Here are some mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "17", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 1, 100)) # turn data into ChainableArray\n", + "xs.plot(\"k-\", label=\"input data\") # plot input data\n", + "xs.linlin(0, 1, 1, 3).plot(\"r-\", label=\"linlin\")\n", + "xs.linlin(0.2, 0.7, -2, 2, \"minmax\").plot(\"g-\", label=\"linlin with clip\")\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "18", + "metadata": {}, + "outputs": [], + "source": [ + "# plot mapping function (output vs input)\n", + "plt.figure(figsize=(3, 1.5)); plt.grid();\n", + "xs.linlin(0.25, 0.75, -1, 1, \"minmax\").plot(xs=xs); " + ] + }, + { + "cell_type": "markdown", + "id": "19", + "metadata": {}, + "source": [ + "### `linexp`\n", + "\n", + "- linexp is implemented in analogy to the SC3 linexp\n", + "- `linexp(v, x1, x2, y1, y2)` maps values v (scalar or arraylike) exponentially so that [x1, x2] is mapped to [y1, y2]:\n", + " $$ z = y_1 \\text{exp}\\left(\\frac{v - x_1}{x_2 - x_1}\\cdot (\\log(y_2) - \\log(y_1))\\right)\n", + "- note that this linexp function extrapolates by default\n", + "- clipping can be controlled via the clip argument (values None (default), \"min\", \"max\", or anything else for \"minmax\")\n", + "- A frequently used invocation is with $x_1 < x_2$, i.e. 
thinking of them as a range $[x_1, x_2]$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "20", + "metadata": {}, + "outputs": [], + "source": [ + "pam.linexp(5, 1, 8, 2, 256)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "21", + "metadata": {}, + "outputs": [], + "source": [ + "pam.linexp(7, 0, 5, 100, 300, \"max\") # clip result to maximum input range" + ] + }, + { + "cell_type": "markdown", + "id": "22", + "metadata": {}, + "source": [ + "- ChainableArray.linexp uses self as input.\n", + "- Here are some mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "23", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 1, 100)) # turn data into ChainableArray\n", + "xs.plot(\"k-\", label=\"input data\") # plot input data\n", + "xs.linexp(0, 1, 0.01, 1).plot(\"r-\", label=\"linexp\")\n", + "xs.linexp(0.2, 0.7, 2, 0.2, \"minmix\").plot(\"g-\", label=\"linexp with clip\")\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "24", + "metadata": {}, + "outputs": [], + "source": [ + "# plot linexp mapping function (output vs input)\n", + "ys = xs.linexp(0.25, 0.85, 0.2, 2).plot(xs=xs)" + ] + }, + { + "cell_type": "markdown", + "id": "25", + "metadata": {}, + "source": [ + "### `explin`\n", + "\n", + "- explin is implemented in analogy to the SC3 function\n", + "- `explin(v, x1, x2, y1, y2)` maps values v (scalar or arraylike) logarithmically so that [x1, x2] is mapped to [y1, y2]:\n", + "\n", + " $$ y = y_1 + (y_2-y_1) \\frac{\\log(v / x_1)}{\\log(x_2 / x_1)} $$\n", + "\n", + "- note that this `explin` function extrapolates by default\n", + "- clipping can be controlled via the clip argument (values None (default), `\"min\"`, `\"max\"`, or anything else for `\"minmax\"`)\n", + "- A frequently used invocation is with $x_1 < x_2$, i.e. 
thinking of them as a range $[x_1, x_2]$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "26", + "metadata": {}, + "outputs": [], + "source": [ + "# example: unmap a frequency to MIDI note with explin\n", + "f = 220 * 2**(-5/12) # 5 semitones higher than 220 Hz\n", + "pam.explin(f, 220, 440, 0, 12)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "27", + "metadata": {}, + "outputs": [], + "source": [ + "# example: unmap amplitude to level in decibel\n", + "pam.explin(0.01, 0.001, 1.0, -30, 0, \"max\") # clip result to maximum input range" + ] + }, + { + "cell_type": "markdown", + "id": "28", + "metadata": {}, + "source": [ + "- ChainableArray.explin uses self as input.\n", + "- Here are some mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "29", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0.01, 1, 100)) # turn data into ChainableArray\n", + "xs.plot(\"k-\", label=\"input data\") # plot input data\n", + "xs.explin(0.1, 1, 0, 1).plot(\"r-\", label=\"explin\")\n", + "xs.explin(0.1, 0.5, 1, 0, \"minmix\").plot(\"g-\", label=\"explin with clip\")\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "30", + "metadata": {}, + "outputs": [], + "source": [ + "# plot linexp mapping function (output vs input)\n", + "ys = xs.explin(0.01, 1, 0, 20).plot(xs=xs)" + ] + }, + { + "cell_type": "markdown", + "id": "31", + "metadata": {}, + "source": [ + "### `lincurve`\n", + "\n", + "- lincurve is implemented in analogy to the SC3 function\n", + "- `lincurve(v, x1, x2, y1, y2, curve=-2)` maps v (scalar or arraylike) from [x1, x2] to [y1, y2] using the following function (with c=curve): \n", + "\n", + "$$ y_1 + \\frac{y_2 - y_1}{1.0 - e^c} \\left(1 - \\exp\\left(c \\frac{v - x_1}{x_2 - x_1}\\right) \\right) $$\n", + "- in contrast to `explin` (resp. `linexp`) this allows source (resp. target) range to include 0.\n", + "- note that this `lincurve` function extrapolates by default\n", + "- clipping can be controlled via the clip argument (values None (default), `\"min\"`, `\"max\"`, or anything else for `\"minmax\"`)\n", + "- A frequently used invocation is with $x_1 < x_2$, i.e. 
thinking of them as a range $[x_1, x_2]$" + ] + }, + { + "cell_type": "markdown", + "id": "32", + "metadata": {}, + "source": [ + "- ChainableArray.lincurve uses self as input.\n", + "- Here are some mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "33", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 1, 100)) # turn data into ChainableArray\n", + "xs.plot(\"k-\", label=\"input data\") # plot input data\n", + "xs.lincurve(0, 1, 0, 0.4, 5).plot(\"r-\", label=\"lincurve\")\n", + "xs.lincurve(0.2, 0.5, 1, 0, 2.5, \"minmax\").plot(\"g-\", label=\"lincurve with clip\")\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "34", + "metadata": {}, + "source": [ + "the following plot shows how the curve parameter influences the mapping function" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "35", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 100, 100))\n", + "for i, curve in enumerate(range(-9, 10, 3)):\n", + " xs.lincurve(0, 100, -10, 10, curve).plot('-', label=f\"curve={curve}\")\n", + "plt.legend(fontsize=6); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "36", + "metadata": {}, + "source": [ + "### `curvelin`\n", + "\n", + "- curvelin is implemented in analogy to the SC3 function\n", + "- `curvelin(v, x1, x2, y1, y2, curve=-2)` maps v (scalar or arraylike) from an assumed curve-exponential input range [x1, x2] to a linear output range [y1, y2] using the following function (with c=curve): \n", + "$$ f(x) = y_1 + \\frac{y_2 - y_1}{c}\\log\\left(\\frac{a + x_1 - x}{a}\\right) ~~~\\text{with}~~~ a = \\frac{x_2 - x_1}{1 - e^c}$$\n", + "\n", + "- This is the opposite transformation to `lincurve`.\n", + "- note that this `curvelin` function extrapolates by default.\n", + "- clipping can be controlled via the clip argument (values None (default), `\"min\"`, `\"max\"`, or anything else for minmax clipping.\n", + "- A frequently used invocation is with $x_1 < x_2$, i.e. thinking of them as a range $[x_1, x_2]$" + ] + }, + { + "cell_type": "markdown", + "id": "37", + "metadata": {}, + "source": [ + "- ChainableArray.curvelin uses self as input.\n", + "- Here are some mapping examples and plots\n", + "- The first shows how curvelin unmaps or reverts a lincurve warped interval when using the same curve argument." 
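+    ,
+    "\n",
+    "\n",
+    "As a quick scalar check of this inverse relation (a minimal sketch using the signatures described above, with `curve` passed as a keyword; the next cell shows the same round trip on arrays):\n",
+    "\n",
+    "```python\n",
+    "x = 0.3\n",
+    "y = pam.lincurve(x, 0, 1, 0, 0.6, curve=4)        # warp [0, 1] -> [0, 0.6]\n",
+    "x_back = pam.curvelin(y, 0, 0.6, 0, 1, curve=4)   # should recover x (up to float precision)\n",
+    "```"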
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "38", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 1, 100)) # turn data into ChainableArray\n", + "xs.plot(\"k-\", label=\"input data for lincurve\") # plot input data\n", + "curve = 10\n", + "ys = xs.lincurve(0, 1, 0, 0.6, curve).plot(\"r-\", label=\"lincurve output\")\n", + "xs = ys.curvelin(0, 0.6, 0, 1, curve).plot(\"b-.\", lw=3, alpha=0.5, label=\"curvelin to undo lincurve\")\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "39", + "metadata": {}, + "source": [ + "the following plot shows how the curve parameter influences the mapping function" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "40", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 100, 500))\n", + "for i, curve in enumerate(range(-9, 10, 3)):\n", + " xs.curvelin(0, 100, -10, 10, curve).plot('-', label=f\"curve={curve}\")\n", + "plt.legend(fontsize=6); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "41", + "metadata": {}, + "source": [ + "### `bilin`\n", + "\n", + "- bilin is implemented similar to the SC3 function, yet with an API change:\n", + "- `bilin(v, xcenter, xmin, xmax, ycenter, ymin, ymax)` maps v (scalar or arraylike) \n", + " according to two linear segments:\n", + " - [xmin, xcenter] to [ymin, ycenter] with default extrapolation beyond xmin\n", + " - [xcenter, xmax] to [ycenter, ymax] with default extrapolation beyond xmax\n", + "- this mapping is achieved using `pyamapping.interp_spline()`.\n", + "- kwargs are passed on to `interp_spline()` if needed.\n", + "- in case, no extrapolation is wanted, pyampapping.interp() offers an alternative\n", + " with a different interface." + ] + }, + { + "cell_type": "markdown", + "id": "42", + "metadata": {}, + "source": [ + "- ChainableArray.bilin uses self as input.\n", + "- Here are some mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "43", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(0, 100)) # turn data into ChainableArray\n", + "xs.bilin(60, 20, 80, 0, -20, 60).plot(\"b.\", ms=1, label='bilin');\n", + "for (x, y, t) in [[20, -20, \"(xmin, ymin)\"], [80, 60, \"(xmax, ymax)\"], [60, 0, \"(xcenter, ycenter)\"]]:\n", + " plt.plot([x], [y], \"ro\")\n", + " plt.text(x+1, y-5, t, fontsize=6)\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "44", + "metadata": {}, + "source": [ + "### `clip`\n", + "\n", + "- clip is implemented in analogy to the SC3 clip\n", + "- `clip(value, minimum, maximum)` clips value (scalar or arraylike) to a certain range [minimum, maximum]\n", + "- default values for minium and maximum are so that no clipping occurs, i.e. specifying only minimum or maximum allows one-sided clipping." 
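+    ,
+    "\n",
+    "\n",
+    "A minimal scalar sketch (assuming the keyword names `minimum`/`maximum` follow the signature shown above):\n",
+    "\n",
+    "```python\n",
+    "pam.clip(1.7, 0, 1)                             # two-sided clip, expect 1\n",
+    "pam.clip(np.array([-2, 0.3, 2.5]), minimum=0)   # one-sided: only the lower bound is enforced\n",
+    "```"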
+ ] + }, + { + "cell_type": "markdown", + "id": "45", + "metadata": {}, + "source": [ + "- ChainableArray.clip uses self as input.\n", + "- Here are some mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "46", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 1, 100)) # turn data into ChainableArray\n", + "xs.plot(\"k:\", label=\"input data\") # plot input data\n", + "xs.clip(0.2, 0.7).plot(\"r-\", label=\"linlin with clip\")\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "47", + "metadata": {}, + "outputs": [], + "source": [ + "# plot mapping function (output vs input)\n", + "ys = xs.clip(0.25, 0.75).plot(xs=xs, label=''); plt.grid(); " + ] + }, + { + "cell_type": "markdown", + "id": "48", + "metadata": {}, + "source": [ + "### `midi_to_cps`\n", + "\n", + "- midi_to_cps is implemented in analogy to the SC3 midicps function\n", + "- the shorter (less pythonic name) midicps can be used as well\n", + "- `midi_to_cps(midi_note)` converts MIDI note midi_note (value or arraylike) to cycles per second (aka Hz).\n", + "- The mapping function is\n", + " $$ f(x) = 440\\cdot 2^{\\frac{x-69}{12}} $$\n", + " which obviously maps MIDI 69 to 440 Hz, the reference for definition. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "49", + "metadata": {}, + "outputs": [], + "source": [ + "pam.midi_to_cps(69+12) # should be 880, i.e. one octave above MIDI 69" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "50", + "metadata": {}, + "outputs": [], + "source": [ + "chain(np.arange(60, 74, 2)).midicps().round(1) # rounded frequencies of whole-tone scale ('c-d-e-f#-g#-a#-c')" + ] + }, + { + "cell_type": "markdown", + "id": "51", + "metadata": {}, + "source": [ + "- ChainableArray.midi_to_cps uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "52", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(21, 109)) # the standard MIDI range\n", + "xs.plot(\"k-\", label=\"input data (MIDI notes)\") # plot input data\n", + "xs.midi_to_cps().plot(\"r-\", label=\"midi_to_cps result [in Hz]\")\n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "53", + "metadata": {}, + "source": [ + "### `cps_to_midi`\n", + "\n", + "- cps_to_midi is implemented in analogy to the SC3 cpsmidi function\n", + "- the shorter (less pythonic name) cpsmidi can be used as well\n", + "- `cps_to_midi(cps)` converts a frequency cps in Hz (value or arraylike) to a MIDI note (in float, resp. Arraylike of float).\n", + "- The mapping function is\n", + " $$ f(x) = 69 + 12 \\log_2\\left(\\frac{x}{440}\\right) $$\n", + " which obviously maps 440 Hz to MIDI 69, the reference by definition. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "54", + "metadata": {}, + "outputs": [], + "source": [ + "pam.cps_to_midi(440*2) # should be 81=69+12" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "55", + "metadata": {}, + "outputs": [], + "source": [ + "chain([110, 220, 330, 440, 550, 660, 770, 880]).cpsmidi().round(2) # rounded MIDI notes for the harmonics series over low A" + ] + }, + { + "cell_type": "markdown", + "id": "56", + "metadata": {}, + "source": [ + "- ChainableArray.cps_to_midi uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "57", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(100, 1000, 100)) # frequencies with 50 Hz spacing\n", + "ys = xs.cps_to_midi() \n", + "\n", + "plt.plot(xs, ys, \"o-\", label=\"MIDI note for harmonics series\"); \n", + "plt.xlabel(\"frequency [Hz]\"); plt.ylabel(\"MIDI note\"); \n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "58", + "metadata": {}, + "source": [ + "### `midi_to_ratio`\n", + "\n", + "- midi_to_ratio is implemented in analogy to the SC3 midiratio function\n", + "- the shorter (less pythonic name) midiratio can be used as well\n", + "- `midi_to_ratio(midi_note)` converts MIDI note difference midi_note (value or arraylike) to the ratio of their corresponding frequencies.\n", + "- The mapping function is\n", + " $$ f(x) = 2^{(x/\\tiny 12)} $$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "59", + "metadata": {}, + "outputs": [], + "source": [ + "pam.midi_to_ratio(7) # a fifth is ~3/2 (in equal tuning)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "60", + "metadata": {}, + "outputs": [], + "source": [ + "chain(np.arange(0, 12, 2)).midi_to_ratio().round(2) # ratio of tones in the whole-tone scale ('c-d-e-f#-g#-a#-c')" + ] + }, + { + "cell_type": "markdown", + "id": "61", + "metadata": {}, + "source": [ + "- ChainableArray.midi_to_cps uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "62", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-12, 13)) # one octave around any note\n", + "ys = xs.midi_to_ratio().plot(xs=xs, c=\"r\", ls=\"-\", label=\"midi_to_ratio result\")\n", + "plt.plot(0, 1, \"ro\", label=\"midi_ratio of 0\")\n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "63", + "metadata": {}, + "source": [ + "### `ratio_to_midi`\n", + "\n", + "- radio_to_midi is implemented in analogy to the SC3 ratiomidi function\n", + "- the shorter (less pythonic name) ratiomidi can be used as well\n", + "- `ratio_to_midi(ratio)` converts a frequency ratio (value or arraylike) to a MIDI note difference (in float, resp. Arraylike of float).\n", + "- The mapping function is\n", + " $$ f(x) = 12 \\log_2(x) $$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "64", + "metadata": {}, + "outputs": [], + "source": [ + "pam.ratio_to_midi(2) # should be 12 for ratio=2 (i.e. 
an octave)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "65", + "metadata": {}, + "outputs": [], + "source": [ + "chain(np.arange(1, 10)).ratio_to_midi().round(2) # rounded MIDI offsets for the harmonics series" + ] + }, + { + "cell_type": "markdown", + "id": "66", + "metadata": {}, + "source": [ + "- ChainableArray.ratio_to_midi uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "67", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(1, 10)) # integer frequency ratios / harmonics\n", + "ys = xs.ratio_to_midi() # likewise ratiomidi()\n", + "\n", + "plt.plot(xs, ys, \"o-\", label=\"MIDI offsets for harmonics series\"); \n", + "plt.xlabel(\"ratio to fundamental frequency\"); plt.ylabel(\"MIDI offset\"); \n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "68", + "metadata": {}, + "source": [ + "### `octave_to_cps`\n", + "\n", + "- octave_to_cps is implemented in analogy to the SC3 octcps function\n", + "- the shorter (less pythonic name) octcps can be used as well\n", + "- `octave_to_cps(octave)` converts octaves (value or arraylike) to cycles per second (aka Hz).\n", + "- The mapping function is\n", + " $$ f(x) = 440\\cdot 2^{(x - 4.75)} $$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "69", + "metadata": {}, + "outputs": [], + "source": [ + "pam.octave_to_cps(3.75) # one octave below 440" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "70", + "metadata": {}, + "outputs": [], + "source": [ + "chain(np.arange(2, 9)).octcps().round(1) # rounded frequencies of c-tones" + ] + }, + { + "cell_type": "markdown", + "id": "71", + "metadata": {}, + "source": [ + "- ChainableArray.octave_to_cps uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "72", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(0, 8, 1/4)) # 8 octaves in minor thirds (3 semitone steps)\n", + "ys = xs.octave_to_cps()\n", + "plt.plot(xs, ys, \"ro-\", ms=2,label=\"octave_to_cps result [in Hz]\")\n", + "plt.legend(); plt.grid(); plt.semilogy()" + ] + }, + { + "cell_type": "markdown", + "id": "73", + "metadata": {}, + "source": [ + "### `cps_to_octave`\n", + "\n", + "- cps_to_octave is implemented in analogy to the SC3 cpsoct function\n", + "- the shorter (less pythonic name) cpsoct can be used as well\n", + "- `cps_to_oct(cps)` converts a frequency cps in Hz (value or arraylike) to an octave value (in float, resp. 
Arraylike of float).\n", + "- The mapping function is\n", + " $$ f(x) = 4.75 + \\log_2\\left(\\frac{x}{440}\\right) $$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "74", + "metadata": {}, + "outputs": [], + "source": [ + "pam.cps_to_octave(440*2) # one octave above reference" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "75", + "metadata": {}, + "outputs": [], + "source": [ + "chain(pam.midicps(12) * np.arange(1, 10)).cpsoct().round(2) # harmonics series over low C in octaves" + ] + }, + { + "cell_type": "markdown", + "id": "76", + "metadata": {}, + "source": [ + "- ChainableArray.cps_to_octave uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "77", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(100, 1000, 100)) # frequencies with 50 Hz spacing\n", + "ys = xs.cps_to_octave() \n", + "\n", + "plt.plot(xs, ys, \"o-\", label=\"octaves for harmonics series\"); \n", + "plt.xlabel(\"frequency [Hz]\"); plt.ylabel(\"octave\"); \n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "78", + "metadata": {}, + "source": [ + "### `db_to_amp`\n", + "\n", + "- db_to_amp is implemented in analogy to the SC3 dbamp function\n", + "- the shorter (less pythonic name) dbamp can be used as well\n", + "- `db_to_amp(decibels)` converts decibels (value or arraylike) to amplitudes.\n", + "- The mapping function is\n", + " $$ f(x) = 10^{(x/20)}$$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "79", + "metadata": {}, + "outputs": [], + "source": [ + "pam.db_to_amp(np.array([-6, -12, -20]))" + ] + }, + { + "cell_type": "markdown", + "id": "80", + "metadata": {}, + "source": [ + "- ChainableArray.db_to_amp uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "81", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-60, 0, 6)) # some 6 dB steps\n", + "ys = xs.dbamp().plot(xs=xs, marker=\"o\", ms=2, label=\"db_to_amp result [arb. unit]\")\n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "82", + "metadata": {}, + "source": [ + "### `amp_to_db`\n", + "\n", + "- amp_to_db is implemented in analogy to the SC3 ampdb function\n", + "- the shorter (less pythonic name) ampdb can be used as well\n", + "- `amp_to_db(amp)` converts amplitude(s) amp (in arb. units) (value or arraylike) to decibel values (in float, resp. 
Arraylike of float).\n", + "- The mapping function is\n", + " $$ f(x) = 20 * \\log_{10}(x) $$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "83", + "metadata": {}, + "outputs": [], + "source": [ + "pam.amp_to_db(0.01) # 10^(-2) will be 10^(-4) for energy, aka -40 dB" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "84", + "metadata": {}, + "outputs": [], + "source": [ + "chain(np.arange(0.1, 1, 0.1)).ampdb().round(2) # harmonics series over low C in octaves" + ] + }, + { + "cell_type": "markdown", + "id": "85", + "metadata": {}, + "source": [ + "- ChainableArray.amp_to_db uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "86", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(0.1, 2.1, 0.1)) # ampltitudes from 0.1 to 2.0\n", + "ys = xs.ampdb() # likewise amp_to_db()\n", + "plt.plot(xs, ys, \"o-\", label=\"decibel\"); \n", + "plt.xlabel(\"amplitudes\"); plt.ylabel(\"dB\"); \n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "87", + "metadata": {}, + "source": [ + "### `mel_to_hz`\n", + "\n", + "- mel_to_hz is implemented in analogy to its librosa counterpart, i.e. using Slanay's formula with a linear and logarithmic part by default, and alternatively the formula from O'Shaughnessy (1987) via the argument `htk=True`.\n", + "- `mel_to_hz(mel)` converts mel (value or arraylike) to cycles per second (aka Hz). \n", + "- For the default (Slaney) the mapping function is\n", + " - linear part (for $mel<15$): \n", + " $$ f(\\text{mel}) = \\frac{200}{3} \\cdot\\text{mel}$$\n", + " - exponential part (for $\\text{mel}>15$):\n", + " $$ f(\\text{mel}) = 1000 \\cdot 6.4 ^{(\\frac{mel - 15}{27})}$$\n", + " - Note that this mel scale has another range: 15 mel = 1000 Hz, and ~10kHz is obtained for mel = 48.5.\n", + "- if `htk==True`, the mapping function is the O'Shaughnessy (1987) formula\n", + " $$ f(\\text{mel}) = 700 \\cdot (10^{\\frac{\\text{mel}}{2595}} - 1) $$\n", + "- Note that here 1000 mel roughly matches 1000 Hz.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "88", + "metadata": {}, + "outputs": [], + "source": [ + "pam.mel_to_hz(1000, htk=True) # using the O'Shaughnessy (1987) formula" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "89", + "metadata": {}, + "outputs": [], + "source": [ + "pam.mel_to_hz(15) # using the Slaney formula" + ] + }, + { + "cell_type": "markdown", + "id": "90", + "metadata": {}, + "source": [ + "- ChainableArray.mel_to_hz uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "91", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(0, 2500, 100)) \n", + "ys = xs.mel_to_hz(htk=True)\n", + "plt.plot(xs, ys, \"r-\", label=\"mel_to_hz (O'Shaughnessy)\")\n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "92", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(0, 40, 1)) \n", + "ys = xs.mel_to_hz() # i.e. 
Slaney\n", + "plt.plot(xs, ys, \"r-\", label=\"mel_to_hz (Slaney)\")\n", + "plt.legend(); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "93", + "metadata": {}, + "source": [ + "### `hz_to_mel`\n", + "\n", + "- hz_to_mel is implemented in analogy to its librosa counterpart, using Slaney method as default, allowing to set htk=True flag for the formula from O'Shaughnessy (1987).\n", + "- `hz_to_mel(hz)` converts a frequency in Hz (value or arraylike) to a mel scale value (in float, resp. Arraylike of float).\n", + "- The default (Slaney) mapping is:\n", + " $$ \\text{mel}(f) = \\left\\{ \\begin{align*} 3f/200 &~\\text{if}~& f < 1000\\\\\n", + " 15 + 27 \\log_{6.4}(f/1000) &~\\text{if}~& f \\ge 1000\\\\\n", + " \\end{align*}\\right.$$ \n", + "- The alternative (htk=True, O'Shaughnessy (1987) formula) mapping is:\n", + " $$ f(\\text{mel}) = 2595 \\cdot \\log_{10}\\left(1 + \\frac{\\text{mel}}{700}\\right) $$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "94", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(1, 10000))\n", + "xs.hz_to_mel().plot(xs=xs)\n", + "plt.xlabel('frequencies [Hz]'); plt.ylabel('mel scale (Slaney)'); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "95", + "metadata": {}, + "source": [ + "- ChainableArray.hz_to_mel uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "96", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(21, 120, 7)).midicps() # frequencies of fifth series\n", + "xs.hz_to_mel(htk=True).plot(xs=xs, marker=\"o\", label=\"mel values for frequencies\"); \n", + "plt.xlabel(\"frequency [Hz]\"); plt.ylabel(\"mel scale\"); \n", + "plt.legend(); plt.grid(); plt.loglog()\n", + "plt.plot([1000], [1000], \"ro-\", label=\"a reference point\");" + ] + }, + { + "cell_type": "markdown", + "id": "97", + "metadata": {}, + "source": [ + "### `distort`\n", + "\n", + "- `distort(x, threshold=1)` is implemented in analogy to its sc3 counterpart `.distort`\n", + "\n", + "- It applies a distortion to x (float, resp. 
Arraylike of float).\n", + "- the threshold parameter controls the non-linearity\n", + "- The mapping function is:\n", + " $$ f(x, \\theta) = \\frac{x}{\\theta + |x|}$$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "98", + "metadata": {}, + "outputs": [], + "source": [ + "pam.distort([0, 1, 2, 3], 1)" + ] + }, + { + "cell_type": "markdown", + "id": "99", + "metadata": {}, + "source": [ + "- ChainableArray.distort uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "100", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-3, 3, 0.01))\n", + "for theta in [0.1, 0.5, 1, 3]:\n", + " xs.distort(theta).plot(xs=xs, label=f\"threshold = {theta}\")\n", + "plt.xlabel('input'); plt.ylabel('output'); plt.grid(); \n", + "plt.legend(); plt.title(\"distort mapping function\");" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "101", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "np.sin(2*np.pi*50*ts).plot(label='sine').distort(0.3).plot(label='distorted');\n", + "plt.legend(loc='upper right');" + ] + }, + { + "cell_type": "markdown", + "id": "102", + "metadata": {}, + "source": [ + "### `softclip`\n", + "\n", + "- `softclip(x)` is implemented in analogy to its sc3 counterpart `.softclip`\n", + "\n", + "- It applies a softclip distortion to x (float, resp. Arraylike of float).\n", + "- The mapping function is:\n", + " $$ f(x) = \\left\\{\\begin{align*}\n", + " x & ~~~\\text{if}~~~ & |x| \\le 0.5 \\\\\n", + " \\frac{|x| - 0.25}{x} & ~~~\\text{else}~~~ & \\\\\n", + " \\end{align*}\\right.$$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "103", + "metadata": {}, + "outputs": [], + "source": [ + "pam.softclip(np.arange(1, 5))" + ] + }, + { + "cell_type": "markdown", + "id": "104", + "metadata": {}, + "source": [ + "- ChainableArray.softclip uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "105", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-3, 3, 0.01))\n", + "xs.softclip().plot(xs=xs)\n", + "plt.xlabel('input'); plt.ylabel('output'); plt.grid(); \n", + "plt.title(\"softclip mapping function\");" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "106", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*50*ts)\n", + "xs.plot(label='sine').softclip().plot(label='softclip distorted');\n", + "plt.legend(loc='upper right');" + ] + }, + { + "cell_type": "markdown", + "id": "107", + "metadata": {}, + "source": [ + "### `scurve`\n", + "\n", + "- `scurve(x)` is implemented in analogy to its sc3 counterpart `.scurve`\n", + "\n", + "- It applies an scurve distortion to x (float, resp. Arraylike of float).\n", + "- The mapping function is:\n", + " $$ f(x) = v^2 (3-2v) ~~\\text{with}~~ v = \\min(\\max(x, 0), 1)~~~\\text{i.e. 
v=clip(x, 0, 1)}$$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "108", + "metadata": {}, + "outputs": [], + "source": [ + "pam.scurve(np.arange(0, 1, 0.25))" + ] + }, + { + "cell_type": "markdown", + "id": "109", + "metadata": {}, + "source": [ + "- ChainableArray.scurve uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "110", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(0, 1, 0.01))\n", + "xs.scurve().plot(xs=xs)\n", + "plt.xlabel('input'); plt.ylabel('output'); plt.grid(); \n", + "plt.title(\"scurve mapping function\");" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "111", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = (2*np.pi*50*ts).sin().linlin(-1, 1, 0, 1)\n", + "xs.plot(label='sine with offset').scurve().plot(label='scurve distorted');\n", + "plt.legend(loc='upper right');" + ] + }, + { + "cell_type": "markdown", + "id": "112", + "metadata": {}, + "source": [ + "### `lcurve`\n", + "\n", + "- `lcurve(x, m=0.0, n=1.0, tau=1.0)` is implemented in analogy to its sc3 counterpart `.lcurve`\n", + "\n", + "- It applies an l-curve distortion to x (float, resp. Arraylike of float).\n", + "- The mapping function is:\n", + " $$ f(x) = \n", + " \\frac{1 + m e^{-x/\\tau}}{1 + n e^{-x/\\tau}} $$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "113", + "metadata": {}, + "outputs": [], + "source": [ + "pam.lcurve(np.array([-1, -0.5, 0, 0.5, 1]))" + ] + }, + { + "cell_type": "markdown", + "id": "114", + "metadata": {}, + "source": [ + "- ChainableArray.lcurve uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "115", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-10, 10, 0.005))\n", + "for tau in [0.2, 0.5, 1, 2]:\n", + " xs.lcurve(tau=tau).plot(xs=xs, label=f'lcurve for tau={tau}')\n", + "plt.xlabel('input'); plt.ylabel('output'); plt.grid(); \n", + "plt.legend(); plt.title(\"lcurve mapping function\");" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "116", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*50*ts)\n", + "xs.plot(label='sine').lcurve(tau=0.25).plot(label='lcurve distorted');\n", + "plt.legend(loc='upper right');" + ] + }, + { + "cell_type": "markdown", + "id": "117", + "metadata": {}, + "source": [ + "### `wrap`\n", + "\n", + "- `wrap(x, y1=-1.0, y2=1.0)` is implemented in analogy to its sc3 counterpart `.wrap`\n", + "\n", + "- It wraps x (float, resp. 
Arraylike of float) around target range [y1, y2].\n", + "- The mapping function is:\n", + " $$ f(x) = y1 + ((x - y_1) \\mod (y_2 - y_1)) $$\n", + "- the order of y1 and y2 is irrelevant\n", + "- wrap delivers the quantization error when quantizing a signal in units of the interval $|y_2-y_1|$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "118", + "metadata": {}, + "outputs": [], + "source": [ + "pam.wrap(np.arange(0, 13), 0, 3)" + ] + }, + { + "cell_type": "markdown", + "id": "119", + "metadata": {}, + "source": [ + "- ChainableArray.wrap uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "120", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-5, 5, 0.01))\n", + "xs.wrap(y1=-1, y2=1).plot(xs=xs)\n", + "plt.xlabel('input'); plt.ylabel('output'); plt.grid(); \n", + "plt.title(\"wrap mapping function\");" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "121", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*30*ts)\n", + "xs.plot(label='sine').wrap(-0.8, 0.5).plot(label='wrapped sine');\n", + "plt.legend(loc='upper right');" + ] + }, + { + "cell_type": "markdown", + "id": "122", + "metadata": {}, + "source": [ + "### `fold`\n", + "\n", + "- `fold(x, y1=-1.0, y2=1.0)` is implemented in analogy to its sc3 counterpart `.fold`\n", + "\n", + "- It folds x (float, resp. Arraylike of float) beyond limits [y1, y2] back by mirroring the signal.\n", + "- The mapping function is:\n", + " $$ f(x) = y_1 + |(x - y_2) \\mod (2L) - L| \\text{~~with~~} L = y_2 -y_1 $$\n", + "- the order of y1 and y2 is irrelevant: a swap is done internally " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "123", + "metadata": {}, + "outputs": [], + "source": [ + "pam.fold(np.arange(0, 13), 0, 4)" + ] + }, + { + "cell_type": "markdown", + "id": "124", + "metadata": {}, + "source": [ + "- ChainableArray.wrap uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "125", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-5, 5, 0.01))\n", + "xs.fold(y1=-1, y2=1).plot(xs=xs)\n", + "plt.xlabel('input'); plt.ylabel('output'); plt.grid(); " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "126", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*30*ts)\n", + "xs.plot(label='sine').fold(-0.75, 0.5).plot(label='folded sine');\n", + "plt.legend(loc='upper right'); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "127", + "metadata": {}, + "source": [ + "## Tour of Mapping Functions: additional/novel mapping functions\n", + "\n", + "- The following function are new additions (i.e. there is no counterpart in Supercollider3 resp. librosa).\n", + "- For the following demonstrations, we use `xs` to refer to the input array and `ys` for the outputs." 
+ ] + }, + { + "cell_type": "markdown", + "id": "128", + "metadata": {}, + "source": [ + "### `linpoly`\n", + "\n", + "- linpoly has no corresponding SC3 function, it provides a polynomial mapping\n", + "- `linpoly(v, xmax, y1, y2, curve=2, clip)` maps v (scalar or arraylike) from an assumed linear input range [-xmax, xmax] to an output range [y1, y2] using the following polynomial mapping function, using a polynomial order m\n", + "\n", + "$$ m = \\left\\{\\begin{align*} 1 + \\text{curve} & ~~~\\text{if} & \\text{curve} \\ge 0\\\\\n", + " \\frac{1}{1-\\text{curve}} & ~~~\\text{else} & \n", + " \\end{align*} \\right.\n", + "$$\n", + "using the mapping function\n", + "$$ f(x) = y_1 + \\frac{y_2 - y_1}{2} \\cdot \\left(1 + \\text{sign}(x) \\left|\\frac{x}{x_{\\max}}\\right|^m\\right)$$\n", + "\n", + "- note that np.sign is used\n", + "- note that this `linpoly` function extrapolates by default.\n", + "- clipping can be controlled via the clip argument (values: None as default, `\"min\"`, `\"max\"`, or anything else for minmax clipping.)\n", + "- It can be used to provide a sensitivity magnification (or reduction) around 0, the center of the input interval." + ] + }, + { + "cell_type": "markdown", + "id": "129", + "metadata": {}, + "source": [ + "- ChainableArray.linpoly uses self as input.\n", + "- Here are some mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "130", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(-3, 3, 200)) # turn data into ChainableArray\n", + "xs.plot(\"k-\", label=\"input data for linpoly\") # plot input data\n", + "xs.linpoly(3, 0, 20, curve=1).plot(\"r-\", label=\"linpoly output\")\n", + "xs.linpoly(3, 0, 20, curve=-1).plot(\"b-\", label=\"linpoly output\")\n", + "plt.legend(); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "131", + "metadata": {}, + "source": [ + "the following plot shows how the curve parameter influences the mapping function" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "132", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(-1, 1, 200))\n", + "for i, curve in enumerate(range(-3, 3, 1)):\n", + " xs.linpoly(1, -10, 10, curve).plot('-', label=f\"curve={curve}\")\n", + "plt.legend(fontsize=6); plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "133", + "metadata": {}, + "source": [ + "### `interp_spline`\n", + "\n", + "- interp_spline provides an interface to `scipy.interpolate.make_interp_spline` for line segment interpolation.\n", + "- `interp_spline(v, xc, yc, k)` maps v (scalar or arraylike) along the spline \n", + " - defined by the input coordinates in array xc \n", + " - and corresponding output coordinates in yc\n", + " - using interpolation order k (default 1=linear)\n", + "- note that interp_spline extrapolation beyond segments.\n", + "- interp_spline is called from `bilin()`." 
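+    ,
+    "\n",
+    "\n",
+    "A minimal scalar sketch (argument names as described above; the exact return type for scalar input may differ, and the next cell plots splines of several orders on arrays):\n",
+    "\n",
+    "```python\n",
+    "xc, yc = [1, 2, 5], [0, 1, 4]\n",
+    "pam.interp_spline(3.0, xc, yc, k=1)   # linear segments through (1,0), (2,1), (5,4): expect 2.0\n",
+    "```"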
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "134", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 8.2, 1000)) # turn data into ChainableArray\n", + "xc = [1, 2, 5, 7, 8]\n", + "yc = [3, 1, 2, 9, 1]\n", + "plt.plot(xc, yc, \"ro\", label=\"given points\")\n", + "for k in [0, 1, 2]:\n", + " ys = xs.interp_spline(xc, yc, k=k)\n", + " plt.plot(xs,ys, label=f\"spline with k={k}\")\n", + "plt.legend(fontsize=7); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "135", + "metadata": {}, + "source": [ + "### `interp`\n", + "\n", + "- interp provides an interface to `np.interp` for line segment interpolation.\n", + "- `interp(v, xc, yc)` maps v (scalar or arraylike) along the sample points \n", + " - defined by the input coordinates in array xc (monotonically increasing)\n", + " - and corresponding output coordinates in yc\n", + "- note that interp clips beyond xc limits.\n", + "- interp can be used as alternative to bilin in case clipped output is needed.\n", + " - some may prefer the more tidy API with x and y values in their arrays." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "136", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.linspace(0, 8.2, 1000)) # turn data into ChainableArray\n", + "xc = [1, 2, 5, 7, 8]\n", + "yc = [7, 1, 2, 9, 1]\n", + "plt.plot(xc, yc, \"ro\", ms=3, label=\"given points\")\n", + "ys = xs.interp(xc, yc)\n", + "plt.plot(xs, ys, \"b,\", label=f\"interp\")\n", + "plt.legend(fontsize=7); plt.grid()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "137", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-100, 150)) # turn data into ChainableArray\n", + "# use interp if clipping is wanted (faster for small datasets)\n", + "xs.interp([-50, 50, 100],[0, 0.4, 1]).plot(\"r-.\", label=\"interp with clipping\"); \n", + "xs.bilin(50, -50, 100, 0.4, 0, 1).plot(\"b:\", label='bilin with extrapolation');\n", + "plt.legend();plt.grid();" + ] + }, + { + "cell_type": "markdown", + "id": "138", + "metadata": {}, + "source": [ + "### `fermi`\n", + "\n", + "- `fermi(x, tau=1.0, mu=0.0)` implements a (shiftable) fermi function.\n", + "\n", + "- It applies a Fermi function to x (float, resp. 
Arraylike of float).\n", + "- The mapping function is:\n", + " $$ f(x) = \n", + " \\frac{1}{1 + e^{-(x-\\mu)/\\tau}} $$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "139", + "metadata": {}, + "outputs": [], + "source": [ + "pam.fermi(np.array([-1, -0.5, 0, 0.5, 1]))" + ] + }, + { + "cell_type": "markdown", + "id": "140", + "metadata": {}, + "source": [ + "- ChainableArray.fermi uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "141", + "metadata": {}, + "outputs": [], + "source": [ + "xs = chain(np.arange(-10, 10, 0.005))\n", + "for i, mu in enumerate([-2, 0, 2]):\n", + " for j, tau in enumerate([0.2, 0.5, 1]):\n", + " xs.fermi(tau, mu).plot(xs=xs, color=['r','g','b'][i], \n", + " lw=j+1, label=f'lcurve for tau={tau}')\n", + "plt.xlabel('input'); plt.ylabel('output'); plt.grid(); \n", + "plt.legend(fontsize=8); plt.title(\"fermi curve mapping functions\");" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "142", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*50*ts)\n", + "xs.plot(label='sine').fermi(tau=0.25, mu=1).plot(label='fermi distorted');\n", + "plt.legend(loc='upper right'); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "143", + "metadata": {}, + "source": [ + "### `normalize`\n", + "\n", + "- `normalize(x, y1=-1.0, y2=1.0)` implements a signal normalization to [y1,y2].\n", + "- A linear mapping from input range [min(x) to max(x)] to output range [y1, y2]\n", + " is applied to argument x (Arraylike of float).\n", + "- Note that this won't work for min(x) = max(x)\n", + "- Note that an implicit polarity change can be achieved by choosing y1>y2.\n", + "- Note that normalize is different from sc3 normalize (see `pyamapping.norm_peak()`). \n", + "- The mapping function is:\n", + " $$ f(x) = y_1 + \\frac{x - x_1}{x_2 - x_1} (y_2 - y_1) $$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "144", + "metadata": {}, + "outputs": [], + "source": [ + "pam.normalize(np.random.rand(10)) # you'll find a 1 and a -1" + ] + }, + { + "cell_type": "markdown", + "id": "145", + "metadata": {}, + "source": [ + "- ChainableArray.normalize uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "146", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = (2*np.pi*50*ts).sin()\n", + "xs.plot(label='sine').normalize(0.5,1).plot(label='normalized to [0.5, 1]');\n", + "plt.legend(loc='upper right'); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "147", + "metadata": {}, + "source": [ + "### `norm_peak`\n", + "\n", + "- `norm_peak(x, peak=1.0)` implements a signal normalization by scaling to new peak.\n", + "- The signal is scaled by peak/max(abs(x)).\n", + "- note that a polarity change results in negative values of `peak`.\n", + "- Note that `norm_peak` is resembles .normalize from SuperCollider. \n", + "- The mapping function is:\n", + " $$ f(x, \\text{peak}) = \\text{peak}\\cdot\\frac{x}{\\max|x|} $$\n", + "- i.e. if the signal is DC-free, it remains so as it is merely scaled. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "148", + "metadata": {}, + "outputs": [], + "source": [ + "pam.norm_peak(np.random.rand(10), 5) # you'll find a 5 (not not a -5)" + ] + }, + { + "cell_type": "markdown", + "id": "149", + "metadata": {}, + "source": [ + "- ChainableArray.norm_peak uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "150", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*50*ts)\n", + "xs.plot(label='sine').norm_peak(0.5).plot(label='norm_peak to 0.5');\n", + "plt.legend(loc='upper right'); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "151", + "metadata": {}, + "source": [ + "### `norm_rms`\n", + "\n", + "- `norm_rms(x, rms=1.0)` implements a signal normalization by scaling to target RMS.\n", + "- The signal is scaled, not shifted.\n", + "- Note that negative `rms` result in a polarity change.\n", + "- The mapping function is:\n", + " $$ f(x, \\text{rms}) = \\text{rms}\\cdot\\frac{x}{\\sqrt{\\langle x^2 \\rangle}} \n", + " = \\text{rms}\\cdot\\frac{x}{\\sqrt{\\frac{1}{n}\\sum\\limits_{i=1}^n x_i^2}} $$\n", + "- i.e. if the signal is DC-free, it remains so as it is merely scaled. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "152", + "metadata": {}, + "outputs": [], + "source": [ + "pam.norm_rms(np.array([1,0,0,-1]), 1) # scale by sqrt(2) to magnify RMS" + ] + }, + { + "cell_type": "markdown", + "id": "153", + "metadata": {}, + "source": [ + "- ChainableArray.norm_rms uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "154", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*50*ts)\n", + "xs.plot(label='sine').norm_rms(0.5).plot(label='norm_rms to 0.5');\n", + "plt.legend(loc='upper right'); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "155", + "metadata": {}, + "source": [ + "### `remove_dc`\n", + "\n", + "- `remove_dc(x)` removes the signal's mean.\n", + "- The mapping function is:\n", + " $$ f(x) = x - \\left< x \\right> $$\n", + "- i.e. 
the signals mean is shifted to zero.\n", + "- Note that this could cause a signal to clip [-1,1]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "156", + "metadata": {}, + "outputs": [], + "source": [ + "pam.remove_dc(np.array([1,2,3,4]))" + ] + }, + { + "cell_type": "markdown", + "id": "157", + "metadata": {}, + "source": [ + "- ChainableArray.remove_dc uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "158", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*25*ts)**2\n", + "xs.plot(label='sine').remove_dc().plot(label='remove_dc output');\n", + "plt.legend(loc='upper right'); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "159", + "metadata": {}, + "source": [ + "### `gain`\n", + "\n", + "- `gain(x, db=None, amp=None` applies gain in either dB or amp.\n", + "- The mapping function is:\n", + " $$ f(x) = \\left\\{\\begin{align*}\n", + " x\\cdot \\text{db\\_to\\_amp}(\\text{db}) & ~~\\text{if~ } & \\text{db} \\neq \\text{None}\\\\\n", + " x\\cdot \\text{amp} & ~~\\text{elif~} & \\text{amp} \\neq \\text{None}\\\\\n", + " x & ~~\\text{else~} & ~ \\\\\n", + " \\end{align*}\\right. $$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "160", + "metadata": {}, + "outputs": [], + "source": [ + "pam.gain(np.array([1,2,3,4]), amp=2)" + ] + }, + { + "cell_type": "markdown", + "id": "161", + "metadata": {}, + "source": [ + "- ChainableArray.gain uses self as input.\n", + "- Here a mapping examples and plots" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "162", + "metadata": {}, + "outputs": [], + "source": [ + "ts = chain(np.linspace(0, 0.1, 500))\n", + "xs = np.sin(2*np.pi*25*ts)\n", + "xs.plot(label='sine').gain(db=-6).plot(label='-6 dB gain');\n", + "plt.legend(loc='upper right'); plt.grid()" + ] + }, + { + "cell_type": "markdown", + "id": "163", + "metadata": {}, + "source": [ + "### `ecdf`\n", + "\n", + "- `ecdf(x, selection=slice())` computes the empirical cumulative distribution function for x.\n", + "- it basically:\n", + " - sorts x to obtain locations for the steps (step_x)\n", + " - creates a step function with len(x)+1 steps (these values become step_y)\n", + " - returns the step_x and step_y coordinates for the given selection\n", + "- Applications:\n", + " - This enables handcrafted mapping functions such as for using `ChainableArray.interp()`.\n", + " - it is used in `lin_to_ecdf()` and `ecdf_to_lin()`\n", + "- Remarks:\n", + " - cdf steps by 1/n occur at points in the sorted data.\n", + " - the values are the cdf at (i.e. including) the data point\n", + " - in consequence the correct cdf for any point left of min(x) is 0\n", + " - however, as there are no data points left of min(x), `interp()` would rather hold, i.e. 
stay on value 1/n\n", + " - use left=0 as remedy to get cdf=0 for values < min(x)\n", + " - there is no extrapolation problem on the right side: hold on 1 is correct for any v > max(x)\n", + " - note that interp would interpolate between these points, so not generate a step function" + ] + }, + { + "cell_type": "markdown", + "id": "164", + "metadata": {}, + "source": [ + "**Example 1**: (compute once - use many)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "165", + "metadata": {}, + "outputs": [], + "source": [ + "from pyamapping import ecdf\n", + "\n", + "data = np.array([1, 3, 1.5, 3.5, 2, 5, 8, 9, 16]) # your data, unsorted\n", + "myecdf = ecdf(data) # compute the (xc, yc) for interp()\n", + "\n", + "# now myecdf may be used many times\n", + "xn = chain(np.linspace(0, 20, 50)) # your custom x (at which you need ecdf)\n", + "yn = xn.interp(*myecdf, left=0) # extra argument to specify left extrapolation\n", + "\n", + "# plot data, ecdf and results of interp\n", + "plt.plot(data, np.zeros_like(data), \"bx\", label=\"data\")\n", + "plt.plot(*myecdf, \"ro\", ms=5, label=\"ecdf for data\")\n", + "plt.plot(xn, yn, \"ko-\", lw=0.5, ms=2, label=\"your applied ecdf to custom data\")\n", + "plt.grid()\n" + ] + }, + { + "cell_type": "markdown", + "id": "166", + "metadata": {}, + "source": [ + "**Example 2:** (compute and map in one go)\n", + "\n", + "```chain(otherdata).interp(*ecdf(data))```\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "167", + "metadata": {}, + "outputs": [], + "source": [ + "newdata = chain(np.array([0.5, 3.25, 20]))\n", + "newdata.interp(*ecdf(data), left=0) # turn newdata into ecdfs of data " + ] + }, + { + "cell_type": "markdown", + "id": "168", + "metadata": {}, + "source": [ + "### `lin_to_ecdf`\n", + "\n", + "- `lin_to_ecdf(x, ref_data, sorted=False)` maps data using the empiric cumulative \n", + " distribution function as mapping.\n", + " - This means feature values are mapped to quantiles.\n", + " - if `sorted==True`, `ref_data` is regarded as sorted, speeding repeated invocations.\n", + "- Note that left=0 argument to interp() is used to make sure cdf=0 for values < min(ref_data)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "169", + "metadata": {}, + "outputs": [], + "source": [ + "data_feat = np.random.randn(200) # data used for crafting the mapping\n", + "test_data = np.linspace(-3, 3, 200) # data to apply mapping to\n", + "chain(test_data).lin_to_ecdf(data_feat, sorted=False).plot(xs=test_data);" + ] + }, + { + "cell_type": "markdown", + "id": "170", + "metadata": {}, + "source": [ + "### `ecdf_to_lin`\n", + "\n", + "- `ecdf_to_lin(x, ref_data, sorted=False)` maps data using the inverse empiric cumulative \n", + " distribution function as mapping.\n", + " - if ref_data is omitted, x is used instead.\n", + " - if `sorted==True`, `ref_data` is regarded as sorted, speeding repeated invocations." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "171", + "metadata": {}, + "outputs": [], + "source": [ + "# ecdf_to_lin\n", + "data = chain(np.random.randn(500)).normalize(0,10) # some quantiles, i.e. in [0,1]\n", + "test_data = np.linspace(0, 1, 100) # data to apply mapping to\n", + "chain(test_data).ecdf_to_lin(data).plot(xs=test_data)\n", + "\n", + "plt.axvline(0.5, ls=\":\", color='k'); \n", + "plt.xlabel('test data, resp. 
quantile'); plt.ylabel('feature values')\n", + "plt.axhline(np.median(data), ls=\":\", color='k');" + ] + }, + { + "cell_type": "markdown", + "id": "172", + "metadata": {}, + "source": [ + "- quantiles mapping should pass a cdf array \n", + " - so that this does not need to be computed each invocation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "173", + "metadata": {}, + "outputs": [], + "source": [ + "data = chain(np.random.randn(200)) # data used for crafting the mapping\n", + "test_data = np.linspace(-3, 3, 200) # data to apply mapping to\n", + "\n", + "chain(test_data).interp(*ecdf(data)).plot(xs=test_data, label='full data ecdf');\n", + "chain(test_data).interp(*ecdf(data, np.s_[20:-15:10])).plot(\"r-\", xs=test_data, ms=1, label='ecdf sliced');" + ] + } + ], + "metadata": {}, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/pyproject.toml b/pyproject.toml index 89a5bed..19cf8da 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -7,3 +7,10 @@ build-backend = "setuptools.build_meta" # For smarter version schemes and other configuration options, # check out https://github.com/pypa/setuptools_scm version_scheme = "no-guess-dev" + +[tool.ruff] +line-length = 88 +lint.select = ["D"] + +[tool.ruff.lint.pydocstyle] +convention = "numpy" diff --git a/setup.cfg b/setup.cfg index 333c4f6..cf14e4b 100644 --- a/setup.cfg +++ b/setup.cfg @@ -49,6 +49,7 @@ package_dir = # For more information, check out https://semver.org/. install_requires = numpy + scipy importlib-metadata; python_version<"3.8" diff --git a/src/pyamapping/__init__.py b/src/pyamapping/__init__.py index 0ae280d..b750b6c 100644 --- a/src/pyamapping/__init__.py +++ b/src/pyamapping/__init__.py @@ -15,33 +15,225 @@ finally: del version, PackageNotFoundError +from typing import Callable, Union -from pyamapping.mappings import ( +import numpy as np + +from pyamapping.chainable_array import ChainableArray, chain +from pyamapping.mappings import ( # some synonyms; the class and helper functions amp_to_db, ampdb, + bilin, clip, cps_to_midi, + cps_to_octave, cpsmidi, + cpsoct, + curvelin, db_to_amp, dbamp, + distort, + ecdf, + ecdf_to_lin, + explin, + fermi, + fold, + gain, hz_to_mel, + interp, + interp_spline, + lcurve, + lin_to_ecdf, + lincurve, + linexp, linlin, + linpoly, mel_to_hz, midi_to_cps, + midi_to_ratio, midicps, + midiratio, + norm_peak, + norm_rms, + normalize, + octave_to_cps, + octcps, + ratio_to_midi, + ratiomidi, + remove_dc, + scurve, + softclip, + wrap, ) +# defined here: register_chain_fn, + + __all__ = [ "amp_to_db", - "ampdb", + "bilin", "clip", "cps_to_midi", - "cpsmidi", + "cps_to_octave", + "curvelin", "db_to_amp", - "dbamp", - "hz_to_mel", + "distort", + "ecdf_to_lin", + "ecdf", + "explin", + "fermi", + "fold", + "gain", + "interp_spline", + "interp", + "lcurve", + "lin_to_ecdf", + "lincurve", + "linexp", "linlin", - "mel_to_hz", + "linpoly", "midi_to_cps", + "midi_to_ratio", + "norm_peak", + "norm_rms", + "normalize", + "octave_to_cps", + "ratio_to_midi", + "remove_dc", + "scurve", + "softclip", + "wrap", + "mel_to_hz", + "hz_to_mel", + # some synonyms + "ampdb", + "cpsmidi", + "dbamp", "midicps", + # class and helper functions + "ChainableArray", + "chain", + "register_chain_fn", +] # type: ignore + + +def register_numpy_ufunc(fn: np.ufunc, name: Union[None, str] = None) -> None: + """Register numpy ufunc with one or two ndarray arguments.""" + nin = fn.nin + if nin == 1: + + def method1(self, *args, **kwargs): + return ChainableArray(fn(self, *args, **kwargs)) + + method = 
method1 + + elif nin == 2: + + def method2(self, other, *args, **kwargs): + return ChainableArray(fn(self, other, *args, **kwargs)) + + method = method2 + + else: + print("warning: np.ufunc fn has nin not in [1,2]") + + def default_method(x): + return None + + method = default_method + + method.__name__ = fn.__name__ if not name else name + method.__doc__ = ( + f"{method.__name__} implements numpy.{method.__name__}" + + f"function for ChainableArray. See help(np.{fn.__name__})" + ) + + setattr(ChainableArray, method.__name__, method) + + +def register_chain_fn(fn: Callable, name: Union[None, str] = None) -> None: + """Register function fn for chaining, optionally under given name.""" + + def method(self, *args, **kwargs): + return ChainableArray(fn(self, *args, **kwargs)) + + method.__name__ = fn.__name__ if not name else name + method.__doc__ = ( + f"{method.__name__} implements the {method.__name__}" + + "operation for ChainableArray. Argument: np.ndarray" + ) + + setattr(ChainableArray, method.__name__, method) + + +def _list_numpy_ufuncs(): + """Return all numpy ufuncs with 1 or 2 ndarray arguments.""" + ufunc_list = [] + for attr_name in dir(np): # all attributes in numpy + attr = getattr(np, attr_name) + if isinstance(attr, np.ufunc): + if attr.nin <= 2: + ufunc_list.append(attr) + else: + print(attr, attr.nin) + return ufunc_list + + +# create class methods for numpy functions and pyamapping functions +for fn in _list_numpy_ufuncs(): # numpy_mapping_functions: + name = "abs" if fn.__name__ == "absolute" else None + register_numpy_ufunc(fn, name) + +# register some non-ufunc which nontheless should workd +register_chain_fn(np.angle, "angle") +ChainableArray.magnitude = ChainableArray.abs + +pyamapping_functions = [ + amp_to_db, + bilin, + clip, + cps_to_midi, + cps_to_octave, + curvelin, + db_to_amp, + distort, + ecdf_to_lin, + ecdf, + explin, + fermi, + fold, + gain, + hz_to_mel, + interp_spline, + interp, + lcurve, + lin_to_ecdf, + lincurve, + linexp, + linlin, + linpoly, + mel_to_hz, + midi_to_cps, + midi_to_ratio, + norm_peak, + norm_rms, + normalize, + octave_to_cps, + ratio_to_midi, + remove_dc, + scurve, + softclip, + wrap, ] + +for fn in pyamapping_functions: + register_chain_fn(fn, None) + +register_chain_fn(cpsmidi, "cpsmidi") +register_chain_fn(midicps, "midicps") +register_chain_fn(ratiomidi, "ratiomidi") +register_chain_fn(midiratio, "midiratio") +register_chain_fn(cpsoct, "cpsoct") +register_chain_fn(octcps, "octcps") +register_chain_fn(ampdb, "ampdb") +register_chain_fn(dbamp, "dbamp") diff --git a/src/pyamapping/chainable_array.py b/src/pyamapping/chainable_array.py new file mode 100644 index 0000000..599d7c6 --- /dev/null +++ b/src/pyamapping/chainable_array.py @@ -0,0 +1,108 @@ +"""ChainableArray - a subclass of numpy.ndarray.""" + +from typing import Any, Callable, TypeVar + +import numpy as np +from numpy.typing import ArrayLike + +NDArrayType = TypeVar("NDArrayType", bound=np.ndarray) + + +class ChainableArray(np.ndarray): + """subclass for simpler numpy mapping by chaining syntax.""" + + def __new__(cls, input_array, *args, **kwargs): + """Create new instance.""" + obj = np.asarray(input_array).view(cls) + return obj + + def __array_finalize__(self, obj): + """Finalize array.""" + if obj is None: + return + + def to_array(self): + """Convert self to np.ndarray.""" + return np.array(self) + + def to_asig(self, sr=44100): + """Convert self to pya.Asig.""" + from pya import Asig + + return Asig(self, sr=sr) + + def plot(self, *args, **kwargs): + """Plot self via 
matplotlib.""" + import matplotlib.pyplot as plt + + sr = kwargs.pop("sr", None) + if sr: + xs = np.arange(0, self.shape[0]) / sr + plt.plot(xs, self, *args, **kwargs) + plt.xlabel("time [s]") + else: + xs = kwargs.pop("xs", None) + if xs is not None: + plt.plot(xs, self, *args, **kwargs) + else: + plt.plot(self, *args, **kwargs) + + return self + + def mapvec( + self: NDArrayType, fn: Callable[..., Any], *args: Any, **kwargs: Any + ) -> NDArrayType: + """Map fn on self by using np.vectorize(). + + Parameters + ---------- + self (NDArrayType): array to map + fn (Callable[..., Any]): function to call on each element + + Returns + ------- + NDArrayType: mapping result as ChainableArray + """ + return np.vectorize(fn)(self, *args, **kwargs) + + def map( + self: NDArrayType, fn: Callable[..., Any], *args: Any, **kwargs: Any + ) -> NDArrayType: + """Apply function fn directly to self; on fail suggest to use mapvec(). + + Parameters + ---------- + self (np.ndarray): array used as input of fn + fn (Callable[..., Any]): mapping function + args and kwargs are passed on to fn + + Raises + ------ + TypeError: if fn fails to operate on np.ndarray as first argument. + mapvec() is then proposed as alternative. + + Returns + ------- + ChainableArray: the mapping result as ChainableArray + """ + try: + return chain(fn(self, *args, **kwargs)) + except (TypeError, ValueError, AttributeError) as e: + raise TypeError( + f"Function {fn.__name__} does not support NumPy arrays directly. " + "Use .mapvec() instead for np.vectorize elementwise mapping." + ) from e + + def __getattr__(self, name: str) -> Callable: + """Dynamically handle method calls.""" + if name.startswith("dynamic_"): + return lambda *args: f"Called {name} with {args}" + raise AttributeError( + f"'{self.__class__.__name__}' object has no attribute '{name}'" + ) + + +def chain(input_array: ArrayLike) -> ChainableArray: + """Turn np.ndarray into ChainableArray.""" + # ToDo: check difference to input_array.view(ChainableArray) + return ChainableArray(input_array) diff --git a/src/pyamapping/mappings.py b/src/pyamapping/mappings.py index 4b665f1..0286d0f 100644 --- a/src/pyamapping/mappings.py +++ b/src/pyamapping/mappings.py @@ -1,18 +1,22 @@ -"""Collection of audio related mapping functions""" -from typing import Optional, Union +"""Collection of audio related mapping functions.""" + +from typing import List, Optional, Union import numpy as np +from numpy.typing import ArrayLike + +from .chainable_array import ChainableArray, chain def linlin( - value: Union[float, np.ndarray], + value: Union[float, ArrayLike], x1: float, x2: float, y1: float, y2: float, clip: Optional[str] = None, ) -> Union[float, np.ndarray]: - """Map value linearly so that [x1, x2] is mapped to [y1, y2] + """Map value linearly so that [x1, x2] is mapped to [y1, y2]. linlin is implemented in analogy to the SC3 linlin, yet this function extrapolates by default. @@ -21,7 +25,7 @@ def linlin( Parameters ---------- - value : float or np.ndarray + value : float or np.ndarray (ArrayLike) value(s) to be mapped x1 : float source value 1 @@ -54,16 +58,381 @@ def linlin( return np.minimum(np.maximum(z, y1), y2) +def linexp( + value: Union[float, ArrayLike], + x1: float, + x2: float, + y1: float, + y2: float, + clip: Optional[str] = None, +) -> Union[float, np.ndarray]: + """Map value exponentially so that [x1, x2] is mapped to [y1, y2]. + + linexp is implemented in analogy to the SC3 linexp, yet this + function extrapolates by default. + A frequently used invocation is with x1 < x2, i.e. 
thinking + of them as a range [x1,x2] + + Parameters + ---------- + value : float or np.ndarray (ArrayLike) + value(s) to be mapped + x1 : float + source value 1 + x2 : float + source value 2 + y1 : float + destination value to be reached for value == x1 + y2 : float + destination value to be reached for value == x2 + clip: None or string + None extrapolates, "min" or "max" clip at floor resp. ceiling + of the destination range, any other value defaults to "minmax", + i.e. it clips on both sides. + + Returns + ------- + float or np.ndarray + the mapping result + """ + z = np.exp((value - x1) / (x2 - x1) * (np.log(y2) - np.log(y1)) + np.log(y1)) + if clip is None: + return z + if y1 > y2: + x1, x2, y1, y2 = x2, x1, y2, y1 + if clip == "max": + return np.minimum(z, y2) + elif clip == "min": + return np.maximum(z, y1) + else: # imply clip to be "minmax" + return np.minimum(np.maximum(z, y1), y2) + + +def explin( + value: Union[float, ArrayLike], + x1: float, + x2: float, + y1: float, + y2: float, + clip: Optional[str] = None, +) -> Union[float, np.ndarray]: + """Map value logarithmically so that [x1, x2] is mapped to [y1, y2]. + + explin is implemented in analogy to the SC3 explin, yet this + function extrapolates by default. + A frequently used invocation is with x1 < x2, i.e. thinking + of them as a range [x1,x2] + + Parameters + ---------- + value : float or np.ndarray (ArrayLike) + value(s) to be mapped + x1 : float + source value 1 + x2 : float + source value 2 + y1 : float + destination value to be reached for value == x1 + y2 : float + destination value to be reached for value == x2 + clip: None or string + None extrapolates, "min" or "max" clip at floor resp. ceiling + of the destination range, any other value defaults to "minmax", + i.e. it clips on both sides. + + Returns + ------- + float or np.ndarray + the mapping result + """ + z = np.log(value / x1) / np.log(x2 / x1) * (y2 - y1) + y1 + + if clip is None: + return z + if y1 > y2: + x1, x2, y1, y2 = x2, x1, y2, y1 + if clip == "max": + return np.minimum(z, y2) + elif clip == "min": + return np.maximum(z, y1) + else: # imply clip to be "minmax" + return np.minimum(np.maximum(z, y1), y2) + + +def lincurve( + x: Union[float, ArrayLike], + x1: float, + x2: float, + y1: float = -1.0, + y2: float = 1.0, + curve: float = -2.0, + clip: Optional[str] = None, +) -> Union[float, np.ndarray]: + """Map value exponentially so that [x1, x2] is mapped to [y1, y2]. + + lincurve is implemented in analogy to the SC3 lincurve, yet this + function extrapolates by default. + A frequently used invocation is with x1 < x2, + i.e. thinking of them as a range [x1, x2] + x1 is mapped to y1. Use y2 < y1 for polarity inversion, i.e. curve = -curve + returns y1 + (y2 - y1) / + (1.0 - exp(curve)) * (1 - exp(curve) ** ((x - x1) / (x2 - x1))) + yoffset + yrange * (this goes from 0 =(1-grow**0) to (1-grow**1) + + Parameters + ---------- + value : float or np.ndarray (ArrayLike) + value(s) to be mapped + x1 : float + source value 1 + x2 : float + source value 2 + y1 : float + destination value to be reached for value == x1 + y2 : float + destination value to be reached for value == x2 + curve : float + specification of the curvature. TBA + clip: None or string + None extrapolates, "min" or "max" clip at floor resp. ceiling + of the destination range, any other value defaults to "minmax", + i.e. it clips on both sides. 
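As an aside, the curvature formula quoted above can be restated as a small standalone sketch (illustrative only, with a hypothetical helper name; the near-linear fallback for |curve| < 0.001 mirrors the implementation that follows):

```python
import numpy as np

def lincurve_sketch(x, x1, x2, y1=-1.0, y2=1.0, curve=-2.0):
    """Illustrative restatement of the lincurve formula, not the library code."""
    if abs(curve) < 0.001:
        # near-zero curvature degenerates to a plain linear map
        return (x - x1) / (x2 - x1) * (y2 - y1) + y1
    grow = np.exp(curve)
    # normalized input t in [0, 1] is warped by (1 - grow**t) / (1 - grow)
    t = (x - x1) / (x2 - x1)
    return y1 + (y2 - y1) / (1.0 - grow) * (1 - grow ** t)

print(lincurve_sketch(np.array([0.0, 0.5, 1.0]), 0, 1, 0, 1, curve=-2.0))
# -> [0.0, ~0.731, 1.0]: endpoints map exactly, the midpoint is bent upwards
```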
+ + Returns + ------- + float or np.ndarray + the mapping result + """ + if abs(curve) < 0.001: + z = (x - x1) / (x2 - x1) * (y2 - y1) + y1 + else: + z = y1 + (y2 - y1) / (1.0 - np.exp(curve)) * ( + 1 - np.exp((curve * (x - x1) / (x2 - x1))) + ) + + if y1 > y2: + y1, y2 = y2, y1 + if clip: + if clip == "max": + z = np.minimum(z, y2) + elif clip == "min": + z = np.maximum(z, y1) + else: # imply clip to be "minmax" + z = np.minimum(np.maximum(z, y1), y2) + return z + + +def curvelin( + x: Union[float, ArrayLike], + x1: float, + x2: float, + y1: float = -1.0, + y2: float = 1.0, + curve: float = -2.0, + clip: Optional[str] = None, +) -> Union[float, np.ndarray]: + """Map (assumedly exponentially curved) x from [x1, x2] linearly to [y1, y2]. + + This is done by applying a curve parameter as in sc3. the input range can include 0, + different from explin a clipping is performed according to the clip argument. + + curvelin is implemented in analogy to the SC3 curvelin, yet extrapolates by default. + A frequently used invocation is with x1 y2: + y1, y2 = y2, y1 + if clip: + if clip == "max": + z = np.minimum(z, y2) + elif clip == "min": + z = np.maximum(z, y1) + else: # imply clip to be "minmax" + z = np.minimum(np.maximum(z, y1), y2) + return z + + +def linpoly( + x: Union[float, ArrayLike], + xmax: float = 1.0, + y1: float = -1.0, + y2: float = 1.0, + curve: float = 2.0, + clip: Optional[str] = None, +) -> Union[float, np.ndarray]: + """Map x between [-xmax, xmax] to [y1, y2] using a polynomial mapping. + + The mapping is y1 + (y2 - y1) * (1 + (x/xmax)**order) / 2 + where order = 1 + curve if curve>0 else (1 - 1 / (1 + curve)) + + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): values to be mapped + xmax (float): source scale, defaults to 1.0 + y1 (float, optional): target range low value, Defaults to -1.0. + y2 (float, optional): target ragne 2nd value. Defaults to 1.0. + curve (float, optional): defaults to 2 + if curve > 0: polynomial order is curve + 1 + if curve < 0: polynomial order is 1 - 1/curve + clip (Optional[str], optional): clip flags (min / max / minmax). + Defaults to None. + + Returns + ------- + Union[float, np.ndarray]: mapping result for x + """ + order = 1 + curve if curve >= 0 else (1 / (1 - curve)) + z = y1 + (y2 - y1) * (1 + np.sign(x) * (np.abs(x) / xmax) ** order) / 2 + if clip is None: + return z + if y1 > y2: + y1, y2 = y2, y1 + if clip == "max": + return np.minimum(z, y2) + elif clip == "min": + return np.maximum(z, y1) + else: # imply clip to be "minmax" + return np.minimum(np.maximum(z, y1), y2) + + +def interp_spline( + x: Union[float, ArrayLike], + xc: Union[List[float], np.ndarray], + yc: Union[List[float], np.ndarray] = [-1, 0, 1], + k=1, + **kwarg, +) -> Union[float, np.ndarray]: + """Apply scipy.interpolate.interp_spline interpolation. + + applicable for piecewise linear mappings, with extrapolation. + interp_spline is slower than numpy.interp() for smaller data sets (e.g. <5000) + however, it allows extrapolation. 
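The remark about extrapolation is easy to verify with a small sketch of what `interp_spline` wraps, a degree-1 `scipy.interpolate.make_interp_spline`; the breakpoints below are made up for illustration:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

xc = [0.0, 0.5, 1.0]                 # breakpoints of the piecewise linear mapping
yc = [-1.0, 0.0, 1.0]
spl = make_interp_spline(xc, yc, k=1)  # k=1 -> linear segments

print(spl([0.25, 0.75]))             # [-0.5, 0.5]: interpolation inside the breakpoints
print(spl([-0.5, 1.5]))              # [-2.0, 2.0]: linear extrapolation beyond them
# np.interp would instead clip to yc[0] and yc[-1] outside [xc[0], xc[-1]]
```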
+ + Parameters + ---------- + x : float or np.ndarray (ArrayLike) + value(s) to be mapped + xc : Union[List[float], np.ndarray] + x coordinates of line segment function, must be sorted + yc : Union[List[float], np.ndarray] + y coordinates of line segment function, requires len(yc)==len(xc) + k : interpolation order (see interp_spline, defaults to 1 = linear) + + Returns + ------- + float or np.ndarray + the mapping result, extrapolating beyond bounds + + """ + from scipy.interpolate import make_interp_spline + + spl = make_interp_spline(xc, yc, k=k) # k=1: linear + return spl(x) if isinstance(x, ChainableArray) else chain(spl(x)) + + +def interp( + x: Union[float, ArrayLike], + xc: Union[List[float], np.ndarray], + yc: Union[List[float], np.ndarray] = [-1, 0, 1], + **kwarg, +) -> Union[float, np.ndarray]: + """Apply numpy.interp interpolation. + + applicable for piecewise linear mappings, with extrapolation + interp is faster than interp_spline() for small x (e.g. <5000), + but it clips by default. + + Parameters + ---------- + x : float or np.ndarray (ArrayLike) + value(s) to be mapped + xc : Union[List[float], np.ndarray] + x coordinates of line segment function, must be sorted + yc : Union[List[float], np.ndarray] + y coordinates of line segment function, requires len(yc)==len(xc) + + Returns + ------- + float or np.ndarray + the mapping result, clipping beyond bounds + + """ + return np.interp(x, xc, yc, **kwarg) + + +def bilin( + x: Union[float, ArrayLike], + xcenter: float, + xmin: float, + xmax: float, + ycenter: float = 0, + ymin: float = -1, + ymax: float = 1, + **kwargs, +) -> Union[float, np.ndarray]: + """Bilin compatibility function. implements sc3 bilin function. + + This maps x in 2 linear segments as given by coordinates. + + Parameter: + --------- + x (Union[float, np.typing.ArrayLike]): _description_ + xcenter (float): _description_ + xmin (float): _description_ + xmax (float): _description_ + ycenter (float): _description_ + ymin (float): _description_ + ymax (float): _description_ + + Returns + ------- + Union[float, np.ndarray], + the mapping result + """ + return interp_spline(x, [xmin, xcenter, xmax], [ymin, ycenter, ymax], **kwargs) + + def clip( - value: Union[float, np.ndarray], + value: Union[float, ArrayLike], minimum: float = -float("inf"), maximum: float = float("inf"), ) -> Union[float, np.ndarray]: - """Clips a value to a certain range + """Clips a value to a certain range. Parameters ---------- - value : float or np.ndarray + value : float or np.ndarray (ArrayLike) Value(s) to clip minimum : float, optional Minimum output value, by default -float("inf") @@ -75,7 +444,7 @@ def clip( float clipped value """ - if type(value) == np.ndarray: + if isinstance(value, np.ndarray): return np.maximum(np.minimum(value, maximum), minimum) else: # ToDo: check if better performance than above numpy code - if not: delete if value < minimum: @@ -86,7 +455,7 @@ def clip( def midi_to_cps(midi_note: float) -> float: - """Convert MIDI note to cycles per second + """Convert MIDI note to cycles per second. Parameters ---------- @@ -105,7 +474,7 @@ def midi_to_cps(midi_note: float) -> float: def cps_to_midi(cps: float) -> float: - """Convert cycles per second to MIDI note + """Convert cycles per second to MIDI note. Parameters ---------- @@ -123,36 +492,141 @@ def cps_to_midi(cps: float) -> float: cpsmidi = cps_to_midi -def hz_to_mel(hz): - """Convert a value in Hertz to Mels +def midi_to_ratio(midi_note: float) -> float: + """Convert MIDI difference to ratio. 
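The ratio conversions that follow are one-liners in equal temperament; a minimal sketch of the round trip (restating the formulas, with hypothetical helper names):

```python
import numpy as np

def midi_to_ratio_sketch(semitones):
    # equal temperament: each semitone multiplies frequency by 2**(1/12)
    return 2 ** (np.asarray(semitones) / 12.0)

def ratio_to_midi_sketch(ratio):
    return 12 * np.log2(ratio)

print(midi_to_ratio_sketch(12))                        # 2.0 -> one octave
print(midi_to_ratio_sketch(7))                         # ~1.4983 -> equal-tempered fifth, close to 3/2
print(ratio_to_midi_sketch(midi_to_ratio_sketch(7)))   # 7.0 -> round trip recovers the interval
```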
+ + Parameters + ---------- + m : float + MIDI note + + Returns + ------- + float + corresponding ratio + """ + return 2 ** (midi_note / 12.0) + + +midiratio = midi_to_ratio + + +def ratio_to_midi(ratio: float) -> float: + """Convert ratio to MIDI difference. + + Parameters + ---------- + ratio : float + ratio (e.g. of frequencies) + + Returns + ------- + float + corresponding MIDI difference + """ + return 12 * np.log2(ratio) + + +ratiomidi = ratio_to_midi + + +def cps_to_octave(cps: float) -> float: + """Convert cycles per second into decimal octaves. + + reference Middle C (i.e. MIDI 60, C_4, 261.626 Hz) yields 4 (octaves). + + Parameters + ---------- + cps : float + cycles per second + + Returns + ------- + float + octaves relative to Middle C (MIDI 60 C_4, 261.626 HZ) + """ + return np.log2(cps / 440) + 4.75 + + +cpsoct = cps_to_octave + + +def octave_to_cps(octave: float) -> float: + """Convert octaves to cps. + + reference 4.75 yields 440 Hz, i.e. 4 -> freq of Middle C (C4) + + Parameters + ---------- + octave : float + octave + + Returns + ------- + float + cycles per second + """ + return 440 * 2 ** (octave - 4.75) + + +octcps = octave_to_cps + + +def hz_to_mel(hz, htk=False): + """Convert frequencies [Hz] to mel scale. Parameters ---------- hz : number of array - value in Hz, can be an array + frequencies in Hz, can be an array - Returns: - -------- + htk: bool + flag: if True use O'Shaughnessy (1987) formula + if False use Slaney's matlab formula + + Returns + ------- _ : number of array - value in Mels, same type as the input. + mel scale value, same type as the input. """ - return 2595 * np.log10(1 + hz / 700.0) + if htk: + return 2595 * np.log10(1 + hz / 700.0) + else: + hz = np.asanyarray(hz) # supports both scalars and arrays + mel = np.where( + hz < 1000, # point between linear and log scale + 3.0 * hz / 200, # linear law + 15 + 27 * np.log(hz / 1000) / np.log(6.4), # log law + ) + return mel if mel.ndim > 0 else float(mel) -def mel_to_hz(mel): - """Convert a value in Hertz to Mels +def mel_to_hz(mel, htk=False): + """Convert mel from mel scale to frequency [Hz]. Parameters ---------- - hz : number of array - value in Hz, can be an array + mel : number of array + melody value + htk: bool + flag: if True use O'Shaughnessy (1987) formula + if False use Slaney's matlab formula - Returns: - -------- + Returns + ------- _ : number of array - value in Mels, same type as the input. + cps in Hz, same type as the input. """ - return 700 * (10 ** (mel / 2595.0) - 1) + if htk: + return 700 * (10 ** (mel / 2595.0) - 1) + else: + mel = np.asanyarray(mel) + hz = np.where( + mel < 15, # border between lin/exp regime + (200.0 / 3) * mel, # linear regime + 1000 * (6.4 ** ((mel - 15) / 27)), + ) # exp. regime + return hz if hz.ndim > 0 else float(hz) def db_to_amp(decibels: float) -> float: @@ -191,3 +665,323 @@ def amp_to_db(amp: float) -> float: ampdb = amp_to_db + + +def distort( + x: Union[float, ArrayLike], threshold: float = 1.0 +) -> Union[float, np.ndarray]: + """Apply value distortion x/(threshold + |x|). + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + threshold (float, optional): defaults to 1.0. + + Returns + ------- + Union[float, np.ndarray]: distorted value / array + """ + return x / (threshold + np.abs(x)) + + +def softclip(x: Union[float, ArrayLike]) -> Union[float, np.ndarray]: + """Apply softclip distortion to x. 
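To make the softclip shape concrete, here is a throwaway restatement of its two regimes (identity inside [-0.5, 0.5], `(|x| - 0.25) / x` outside); this is only a sketch, not the library implementation, which fills a preallocated array via a boolean mask:

```python
import numpy as np

def softclip_sketch(x):
    x = np.asarray(x, dtype=float)
    # inside [-0.5, 0.5] the signal passes unchanged, outside it is squashed towards +/-1
    return np.where(np.abs(x) <= 0.5, x, (np.abs(x) - 0.25) / x)

print(softclip_sketch(np.array([0.25, 0.5, 1.0, 4.0, -4.0])))
# -> [0.25, 0.5, 0.75, 0.9375, -0.9375]; the curve is continuous at |x| = 0.5
# and approaches +/-1 for large |x|
```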
+ + This yields a perfectly linear region within [-0.5, 0.5], + outside values computed by (|x| - 0.25) / x + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + + Returns + ------- + Union[float, np.ndarray]: softclip distorted value / array + """ + x = np.asarray(x, dtype=float) # ensure numpy array for elementwise operations + y = np.empty_like(x) + mask = np.abs(x) <= 0.5 + y[mask] = x[mask] + y[~mask] = (np.abs(x[~mask]) - 0.25) / x[~mask] + return y if y.ndim > 0 else float(y) + + +def scurve(x: Union[float, ArrayLike]) -> Union[float, np.ndarray]: + """Map value onto an S-curve bound to [0,1]. + + Implements v * v * (3-(2*v)) mit v = x.clip(0, 1) + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + + Returns + ------- + Union[float, np.ndarray]: scurve distorted value / array + """ + v = clip(x, 0, 1) + return v**2 * (3 - 2 * v) + + +def lcurve( + x: Union[float, ArrayLike], m: float = 0.0, n: float = 1.0, tau: float = 1.0 +) -> Union[float, np.ndarray]: + """Map value or array onto an L-curve. + + Implements (1 + m * exp(-x/tau)) / (1 + n * exp(-x/tau)) + - equal to fermi function with default parameters + - note that different to the sc3 implementation, tau is inside + the exp function (...unclear tau placement in sc3...) + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + m (float, optional): numerator factor defaults to 0.0. + n (float, optional): denumerator factor defaults to 1.0. + tau (float, optional): scale constant, defaults to 1.0. + + Returns + ------- + Union[float, np.ndarray]: lcurve distorted value / array + """ + return (1 + m * np.exp(-x / tau)) / (1 + n * np.exp(-x / tau)) + + +def fermi( + x: Union[float, ArrayLike], tau: float = 1.0, mu: float = 0.0 +) -> Union[float, np.ndarray]: + """Apply fermi function to value or array. + + Implements 1 / (1 + exp(-(x-mu)/tau)) + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + tau (float, optional): scale constant, defaults to 1.0. + mu (float, optional): shift, defaults to 0.0 + + Returns + ------- + Union[float, np.ndarray]: fermi distorted value / array + """ + return 1.0 / (1 + np.exp(-(x - mu) / tau)) + + +def normalize(x: np.ndarray, y1: float = -1.0, y2: float = 1.0) -> np.ndarray: + """Normalize array to target range [y1, y2]. + + Linear mapping [min(x), max(x)] to [y1, y2]. Use y1 > y2 to change polarity. + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + y1 (float, optional): mapping target for min(x). Defaults to -1.0. + y2 (float, optional): mapping target for max(x). Defaults to 1.0. + + Returns + ------- + Union[float, np.ndarray]: normalized / scaled array + """ + x1, x2 = np.amin(x), np.amax(x) + return (x - x1) / (x2 - x1) * (y2 - y1) + y1 + + +def wrap( + x: Union[float, ArrayLike], y1: float = -1.0, y2: float = 1.0 +) -> Union[float, np.ndarray]: + """Wrap array around target range [y1, y2]. + + This implements the mapping y1 + np.mod(x - y1, y2 - y1). + The order of y1, y2 is irrelevant. + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + y1 (float, optional): 1st wrap bound. Defaults to -1.0. + y2 (float, optional): 2nd wrap bound. Defaults to 1.0. 
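The wrap formula `y1 + mod(x - y1, y2 - y1)` is compact but easy to misread; a short sketch with made-up values shows how it folds any input into the half-open range [y1, y2):

```python
import numpy as np

def wrap_sketch(x, y1=-1.0, y2=1.0):
    # shift so y1 sits at 0, take the remainder modulo the range width, shift back
    return y1 + np.mod(x - y1, y2 - y1)

phases = np.array([-4.0, 0.0, 3.5, 7.0])
print(wrap_sketch(phases, -np.pi, np.pi))   # wraps arbitrary phases into [-pi, pi)
print(wrap_sketch(1.25))                    # 1.25 wraps to -0.75 in the default [-1, 1)
```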
+ + Returns + ------- + Union[float, np.ndarray]: wraped array + """ + return y1 + np.mod(x - y1, y2 - y1) + + +def fold( + x: Union[float, ArrayLike], y1: float = -1.0, y2: float = 1.0 +) -> Union[float, np.ndarray]: + """Fold array around target range [y1, y2]. + + This implements (np.abs((x - y2) % (2 * L) - L) + y1), + ordering bounds so that y1 < y2. + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): input value or array + y1 (float, optional): 1st fold bound. Defaults to -1.0. + y2 (float, optional): 2nd fold bound. Defaults to 1.0. + + Returns + ------- + Union[float, np.ndarray]: folded array + """ + if y2 < y1: + y1, y2 = y2, y1 + L = y2 - y1 + return np.abs((x - y2) % (2 * L) - L) + y1 + + +def remove_dc(x: np.ndarray) -> np.ndarray: + """Remove DC bias. + + Parameters + ---------- + x (np.ndarray): input array + + Returns + ------- + np.ndarray: mean-free array + """ + return x - np.mean(x) + + +def norm_peak(x: np.ndarray, peak=1.0): + """Normalize by scaling array so that max(abs(x)) = peak. + + Parameters + ---------- + x (np.ndarray]): input array + peak (float): target peak + + Returns + ------- + np.ndarray: normalized (scaled) array + """ + peak_of_x = np.max(np.abs(x)) + return (x / peak_of_x) * peak if peak_of_x != 0 else x + + +def norm_rms(x: np.ndarray, rms=1.0): + """Normalize array so that its RMS value equals `rms`. + + Parameters + ---------- + x (np.ndarray): input array + rms (float): target rms of array + + Returns + ------- + np.ndarray: rms normalized (scaled) array + """ + rms_of_x = np.sqrt(np.mean(x**2)) + return (x / rms_of_x) * rms if rms_of_x != 0 else x + + +def gain(x: np.ndarray, db: Optional[float] = None, amp: Optional[float] = None): + """Apply gain, either as dB (SPL) or scalar factor amp. + + No operation done if neither argument is given, it applies both if both are given. + + Parameters + ---------- + x (np.ndarray): input array + db (None or float): dB SPL = gain 10**(db/20), e.g. -6 dB ~ factor 0.5 + amp (None or float): gain factor + + Returns + ------- + np.ndarray: scaled (amplified / attenuated) array + """ + if db: + sig = x * dbamp(db) + else: + sig = x.copy() + if amp: + sig *= amp + return sig + + +def lin_to_ecdf( + x: Union[float, ArrayLike], ref_data: np.ndarray, sorted: bool = False +) -> Union[float, np.ndarray]: + """Map data using empiric cumulative distribution function as mapping. + + This means feature values are mapped to quantiles. + if sorted==True: ref_data is regarded as sorted, speeding repeated invocations. + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): value or array to map + ref_data (np.ndarray): reference data used to create ecdf. + sorted (bool): whether ref_data is sorted. + Defaults to False, i.e. by default data will be sorted. + + Returns + ------- + np.ndarray: resulting mapped data + """ + if sorted: + return interp( + x, ref_data, np.arange(1, len(ref_data) + 1) / float(len(ref_data), left=0) + ) + else: + return interp(x, *ecdf(ref_data)) + + +def ecdf_to_lin( + x: Union[float, ArrayLike], ref_data: np.ndarray, sorted: bool = False +) -> Union[float, np.ndarray]: + """Map data using inverse empiric cumulative distribution function. + + This means that quantiles are mapped back to estimated feature values. 
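Since the inverse ecdf mapping below simply calls `interp` with the ecdf coordinates swapped, a small NumPy-only sketch (made-up data) shows both directions side by side:

```python
import numpy as np

ref = np.array([2.0, 5.0, 1.0, 9.0, 7.0])      # reference feature values
xs = np.sort(ref)                              # ecdf step locations
ys = np.arange(1, len(xs) + 1) / len(xs)       # ecdf values 0.2, 0.4, ..., 1.0

# forward: feature value -> quantile (lin_to_ecdf direction)
print(np.interp(6.0, xs, ys))                  # 0.7
# inverse: quantile -> estimated feature value (ecdf_to_lin direction, coordinates swapped)
print(np.interp(0.5, ys, xs))                  # 3.5
```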
+ - if ref_data is omitted, x is used instead + - if sorted==True: ref_data is regarded as sorted, speeding repeated invocations + + Parameters + ---------- + x (Union[float, np.typing.ArrayLike]): value or array to map + ref_data (np.ndarray): reference data used to create ecdf. + sorted (bool): whether ref_data is sorted. + Defaults to False, i.e. data will be sorted. + + Returns + ------- + np.ndarray: resulting mapped data + """ + if sorted: + return interp( + x, np.arange(1, len(ref_data) + 1) / float(len(ref_data)), ref_data + ) + else: + xc, yc = ecdf(ref_data) + return interp(x, yc, xc) + + +def ecdf( + x: np.ndarray, selection: slice = slice(None, None, None) +) -> tuple[np.ndarray, np.ndarray]: + """Empirical cumulative distribution function. + + Usable for handcrafted mapping functions such as with using ChainableArray.interp() + + Example 1: (compute once - use many) + >>> myecdf = ecdf(data); chain(otherdata).interp(*ecdf) + + Example 2: (compute and map in one go) + >>> chain(otherdata).interp(*ecdf(data)) + + Example 3: (use a sparser (more smooth) ecdf mapping) + >>> chain(otherdata).interp(*ecdf(data, np.s_[::5])) + + Parameters + ---------- + x (np.ndarray): array + selection (slice): slice applied to x and y coordinates of the resulting tuple + + Returns + ------- + tuple[np.ndarray, np.ndarray]: sorted array of x and y coordinates of the ecdf + """ + xs = np.sort(x) + ys = np.arange(1, len(xs) + 1) / float(len(xs)) + return xs[selection], ys[selection] diff --git a/tests/test_mappings.py b/tests/test_mappings.py index 21182be..a421495 100644 --- a/tests/test_mappings.py +++ b/tests/test_mappings.py @@ -11,6 +11,30 @@ mel_to_hz, midi_to_cps, ) +from pyamapping.mappings import ( + bilin, + cps_to_octave, + curvelin, + distort, + explin, + fermi, + fold, + gain, + lcurve, + lincurve, + linexp, + linpoly, + midi_to_ratio, + norm_peak, + norm_rms, + normalize, + octave_to_cps, + ratio_to_midi, + remove_dc, + scurve, + softclip, + wrap, +) def test_linlin(): @@ -43,20 +67,153 @@ def test_clip(): assert np.array_equal(clip(a1, 2, 4), a2) -def test_midi_cps(): +def test_midi_to_cps_to_midi(): assert midi_to_cps(69) == 440 assert cps_to_midi(440) == 69 for x in range(128): assert x == cps_to_midi(midi_to_cps(x)) -def test_db_amp(): +def test_midi_ratio_midi(): + pytest.approx(midi_to_ratio(7), 1.4983070768766815) + pytest.approx(ratio_to_midi(2), 12.0) + + +def test_cps_octave_cps(): + pytest.approx(octave_to_cps(5.75), 880) + pytest.approx(cps_to_octave(220), 3.75) + + +def test_db_amp_db(): for x in range(128): assert x == pytest.approx(amp_to_db(db_to_amp(x))) -def test_hz_mel(): +def test_hz_mel_hz(): pytest.approx(hz_to_mel(440), 549.64) pytest.approx(mel_to_hz(549.64), 440) - for x in range(128): + for x in np.arange(1, 128): assert x == pytest.approx(hz_to_mel(mel_to_hz(x))) + + +def test_linexp(): + pytest.approx(linexp(5, 1, 8, 2, 256), 32.0) + pytest.approx(linexp(7, 0, 5, 100, 300, "max"), 300) + + +def test_explin(): + f = 220 * 2 ** (-5 / 12) + pytest.approx(explin(f, 220, 440, 0, 12), -5.0) + pytest.approx(explin(0.01, 0.001, 1.0, -30, 0, "max"), -20.0) + + +def test_lincurve(): + pytest.approx( + lincurve(np.array([0.0, 0.1, 0.4, 0.7, 1.0]), 0, 1, 0, 0.4), + np.array([0.0, 0.08385643, 0.25474431, 0.34852956, 0.4]), + ) + + +def test_curvelin(): + pytest.approx( + curvelin(np.array([0, 0.1, 0.3, 0.5]), 0, 0.5, 0, 10), + np.array([0.0, 0.94934752, 3.65734932, 10.0]), + ) + + +def test_bilin(): + pytest.approx( + bilin(np.array([0, 20, 40, 60, 80, 100]), 60, 20, 80, 
0, -20, 60), + np.array([-30.0, -20.0, -10.0, 0.0, 60.0, 120.0]), + ) + + +def test_distort(): + pytest.approx( + distort([0, 1, 2, 3], 1), + np.array([0.0, 0.5, 0.66666667, 0.75]), + ) + + +def test_softclip(): + pytest.approx( + softclip(np.arange(1, 5)), + np.array([0.75, 0.875, 0.91666667, 0.9375]), + ) + + +def test_scurve(): + pytest.approx( + scurve(np.arange(0, 1, 0.25)), + np.array([0.0, 0.15625, 0.5, 0.84375]), + ) + + +def test_fermi(): + pytest.approx( + fermi(np.array([-1, -0.5, 0, 0.5, 1])), + np.array([0.26894142, 0.37754067, 0.5, 0.62245933, 0.73105858]), + ) + + +def test_lcurve(): + pytest.approx( + lcurve(np.array([-1, -0.5, 0, 0.5, 1])), + np.array([0.26894142, 0.37754067, 0.5, 0.62245933, 0.73105858]), + ) + + +def test_wrap(): + pytest.approx( + wrap(np.arange(-3, 5), 0, 3), + np.array([0, 1, 2, 0, 1, 2, 0, 1]), + ) + + +def test_fold(): + pytest.approx( + fold(np.arange(0, 13), 0, 4), + np.array([0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4]), + ) + + +def test_linpoly(): + pytest.approx( + linpoly(np.arange(-2, 3), 2.5, 100, 500, curve=1), + np.array([172.0, 268.0, 300.0, 332.0, 428.0]), + ) + + +def test_normalize(): + pytest.approx( + np.sort(normalize(np.random.rand(10)))[[0, -1]], + np.array([-1, 1]), + ) + + +def test_norm_peak(): + pytest.approx( + np.max(norm_peak(np.random.rand(10), 5)), + 5, + ) + + +def test_norm_rms(): + pytest.approx( + norm_rms(np.array([1, 0, 0, -1]), 1), + np.array([1.41421356, 0.0, 0.0, -1.41421356]), + ) + + +def test_remove_dc(): + pytest.approx( + remove_dc(np.array([1, 2, 3, 4])), + np.array([-1.5, -0.5, 0.5, 1.5]), + ) + + +def test_gain(): + pytest.approx( + gain(np.array([1, 2, 3, 4]), amp=2), + np.array([2, 4, 6, 8]), + )
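One note on the comparisons above: a bare `pytest.approx(a, b)` builds a comparison object but does not assert anything by itself. A hedged sketch of an asserting variant, shown for `fermi` with the expected values copied from the test above (one possible tightening, not a required change):

```python
import numpy as np
import pytest

from pyamapping import fermi


def test_fermi_asserts():
    expected = np.array([0.26894142, 0.37754067, 0.5, 0.62245933, 0.73105858])
    result = fermi(np.array([-1, -0.5, 0, 0.5, 1]))
    # element-wise closeness that actually fails on regressions
    np.testing.assert_allclose(result, expected, rtol=1e-6)
    # equivalently, pytest.approx used on the right-hand side of an assert
    assert result == pytest.approx(expected)
```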