
Use rst code-block over code:: #7882
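
This PR replaces the legacy reST ``code`` directive with the ``code-block`` directive throughout the docstrings and docs; the Python bodies of the blocks are left intact apart from minor formatting touch-ups. A minimal sketch of the substitution, using a hypothetical one-line snippet:

    .. code:: python

       x = 1

becomes

    .. code-block:: python

       x = 1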

Open · wants to merge 3 commits into base: main

12 changes: 6 additions & 6 deletions docs/source/contributing/developer_guide.md
@@ -245,7 +245,7 @@ usually created in order to optimise performance. But getting a
possible (see also in
{ref}`doc <pymc_overview##Transformed-distributions-and-changes-of-variables>`):

.. code:: python
.. code-block:: python


lognorm = Exp().apply(pm.Normal.dist(0., 1.))
@@ -262,15 +262,15 @@ Now, back to ``model.RV(...)`` - things returned from ``model.RV(...)``
are PyTensor tensor variables, and it is clear from looking at
``TransformedRV``:

.. code:: python
.. code-block:: python

class TransformedRV(TensorVariable):
...

as for ``FreeRV`` and ``ObservedRV``, they are ``TensorVariable``\s with
``Factor`` as mixin:

.. code:: python
.. code-block:: python

class FreeRV(Factor, TensorVariable):
...
@@ -283,7 +283,7 @@ distribution into a ``TransformedDistribution``, and then ``model.Var`` is
called again to added the RV associated with the
``TransformedDistribution`` as a ``FreeRV``:

.. code:: python
.. code-block:: python

...
self.transformed = model.Var(
@@ -295,13 +295,13 @@ only add one ``FreeRV``. In another word, you *cannot* do chain
transformation by nested applying multiple transforms to a Distribution
(however, you can use ``Chain`` transformation.

.. code:: python
.. code-block:: python

z = pm.LogNormal.dist(mu=0., sigma=1., transform=tr.Log)
z.transform # ==> pymc.distributions.transforms.Log


.. code:: python
.. code-block:: python

z2 = Exp().apply(z)
z2.transform is None # ==> True

2 changes: 1 addition & 1 deletion pymc/distributions/discrete.py
@@ -1357,7 +1357,7 @@ class OrderedProbit:

Examples
--------
.. code:: python
.. code-block:: python

# Generate data for a simple 1 dimensional example problem
n1_c = 300; n2_c = 300; n3_c = 300

66 changes: 33 additions & 33 deletions pymc/distributions/multivariate.py
@@ -1390,7 +1390,7 @@ class LKJCholeskyCov:

Examples
--------
.. code:: python
.. code-block:: python

with pm.Model() as model:
# Note that we access the distribution for the standard
@@ -1682,28 +1682,25 @@ class LKJCorr:

Examples
--------
.. code:: python
.. code-block:: python

with pm.Model() as model:

# Define the vector of fixed standard deviations
sds = 3*np.ones(10)
sds = 3 * np.ones(10)

corr = pm.LKJCorr(
'corr', eta=4, n=10, return_matrix=True
)
corr = pm.LKJCorr("corr", eta=4, n=10, return_matrix=True)

# Define a new MvNormal with the given correlation matrix
vals = sds*pm.MvNormal('vals', mu=np.zeros(10), cov=corr, shape=10)
vals = sds * pm.MvNormal("vals", mu=np.zeros(10), cov=corr, shape=10)

# Or transform an uncorrelated normal distribution:
vals_raw = pm.Normal('vals_raw', shape=10)
vals_raw = pm.Normal("vals_raw", shape=10)
chol = pt.linalg.cholesky(corr)
vals = sds*pt.dot(chol,vals_raw)
vals = sds * pt.dot(chol, vals_raw)

# The matrix is internally still sampled as a upper triangular vector
# If you want access to it in matrix form in the trace, add
pm.Deterministic('corr_mat', corr)
pm.Deterministic("corr_mat", corr)


References
@@ -1797,7 +1794,7 @@ class MatrixNormal(Continuous):
Define a matrixvariate normal variable for given row and column covariance
matrices.

.. code:: python
.. code-block:: python

import pymc as pm
import numpy as np
@@ -1820,16 +1817,20 @@ class MatrixNormal(Continuous):
constant, both the covariance and scaling could be learned as follows
(see the docstring of `LKJCholeskyCov` for more information about this)

.. code:: python
.. code-block:: python

# Setup data
true_colcov = np.array([[1.0, 0.5, 0.1],
[0.5, 1.0, 0.2],
[0.1, 0.2, 1.0]])
true_colcov = np.array(
[
[1.0, 0.5, 0.1],
[0.5, 1.0, 0.2],
[0.1, 0.2, 1.0],
]
)
m = 3
n = true_colcov.shape[0]
true_scale = 3
true_rowcov = np.diag([true_scale**(2*i) for i in range(m)])
true_rowcov = np.diag([true_scale ** (2 * i) for i in range(m)])
mu = np.zeros((m, n))
true_kron = np.kron(true_rowcov, true_colcov)
data = np.random.multivariate_normal(mu.flatten(), true_kron)
@@ -1838,13 +1839,12 @@ class MatrixNormal(Continuous):
with pm.Model() as model:
# Setup right cholesky matrix
sd_dist = pm.HalfCauchy.dist(beta=2.5, shape=3)
colchol,_,_ = pm.LKJCholeskyCov('colchol', n=3, eta=2,sd_dist=sd_dist)
colchol, _, _ = pm.LKJCholeskyCov("colchol", n=3, eta=2, sd_dist=sd_dist)
# Setup left covariance matrix
scale = pm.LogNormal('scale', mu=np.log(true_scale), sigma=0.5)
rowcov = pt.diag([scale**(2*i) for i in range(m)])
scale = pm.LogNormal("scale", mu=np.log(true_scale), sigma=0.5)
rowcov = pt.diag([scale ** (2 * i) for i in range(m)])

vals = pm.MatrixNormal('vals', mu=mu, colchol=colchol, rowcov=rowcov,
observed=data)
vals = pm.MatrixNormal("vals", mu=mu, colchol=colchol, rowcov=rowcov, observed=data)
"""

rv_op = matrixnormal
@@ -2010,30 +2010,30 @@ class KroneckerNormal(Continuous):
Define a multivariate normal variable with a covariance
:math:`K = K_1 \otimes K_2`

.. code:: python
.. code-block:: python

K1 = np.array([[1., 0.5], [0.5, 2]])
K2 = np.array([[1., 0.4, 0.2], [0.4, 2, 0.3], [0.2, 0.3, 1]])
K1 = np.array([[1.0, 0.5], [0.5, 2]])
K2 = np.array([[1.0, 0.4, 0.2], [0.4, 2, 0.3], [0.2, 0.3, 1]])
covs = [K1, K2]
N = 6
mu = np.zeros(N)
with pm.Model() as model:
vals = pm.KroneckerNormal('vals', mu=mu, covs=covs, shape=N)
vals = pm.KroneckerNormal("vals", mu=mu, covs=covs, shape=N)

Efficiency gains are made by cholesky decomposing :math:`K_1` and
:math:`K_2` individually rather than the larger :math:`K` matrix. Although
only two matrices :math:`K_1` and :math:`K_2` are shown here, an arbitrary
number of submatrices can be combined in this way. Choleskys and
eigendecompositions can be provided instead

.. code:: python
.. code-block:: python

chols = [np.linalg.cholesky(Ki) for Ki in covs]
evds = [np.linalg.eigh(Ki) for Ki in covs]
with pm.Model() as model:
vals2 = pm.KroneckerNormal('vals2', mu=mu, chols=chols, shape=N)
vals2 = pm.KroneckerNormal("vals2", mu=mu, chols=chols, shape=N)
# or
vals3 = pm.KroneckerNormal('vals3', mu=mu, evds=evds, shape=N)
vals3 = pm.KroneckerNormal("vals3", mu=mu, evds=evds, shape=N)

neither of which will be converted. Diagonal noise can also be added to
the covariance matrix, :math:`K = K_1 \otimes K_2 + \sigma^2 I_N`.
utilizing eigendecompositons of the submatrices behind the scenes [1].
Thus,

.. code:: python
.. code-block:: python

sigma = 0.1
with pm.Model() as noise_model:
vals = pm.KroneckerNormal('vals', mu=mu, covs=covs, sigma=sigma, shape=N)
vals2 = pm.KroneckerNormal('vals2', mu=mu, chols=chols, sigma=sigma, shape=N)
vals3 = pm.KroneckerNormal('vals3', mu=mu, evds=evds, sigma=sigma, shape=N)
vals = pm.KroneckerNormal("vals", mu=mu, covs=covs, sigma=sigma, shape=N)
vals2 = pm.KroneckerNormal("vals2", mu=mu, chols=chols, sigma=sigma, shape=N)
vals3 = pm.KroneckerNormal("vals3", mu=mu, evds=evds, sigma=sigma, shape=N)

are identical, with `covs` and `chols` each converted to
eigendecompositions.

2 changes: 1 addition & 1 deletion pymc/gp/cov.py
@@ -956,7 +956,7 @@ class WrappedPeriodic(Covariance):
In order to construct a kernel equivalent to the `Periodic` kernel you
can do the following (though using `Periodic` will likely be a bit faster):

.. code:: python
.. code-block:: python

exp_quad = pm.gp.cov.ExpQuad(1, ls=0.5)
cov = pm.gp.cov.WrappedPeriodic(exp_quad, period=5)

16 changes: 8 additions & 8 deletions pymc/gp/gp.py
@@ -117,7 +117,7 @@ class Latent(Base):

Examples
--------
.. code:: python
.. code-block:: python

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]
@@ -439,7 +439,7 @@ class Marginal(Base):

Examples
--------
.. code:: python
.. code-block:: python

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]
@@ -719,7 +719,7 @@ class MarginalApprox(Marginal):

Examples
--------
.. code:: python
.. code-block:: python

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]
@@ -963,7 +963,7 @@ class LatentKron(Base):

Examples
--------
.. code:: python
.. code-block:: python

# One dimensional column vectors of inputs
X1 = np.linspace(0, 1, 10)[:, None]
@@ -1066,7 +1066,7 @@ def conditional(self, name, Xnew, jitter=JITTER_DEFAULT, **kwargs):
1, and 4, respectively, then `Xnew` must have 7 columns and a
covariance between the prediction points

.. code:: python
.. code-block:: python

cov_func(Xnew) = cov_func1(Xnew[:, :2]) * cov_func1(Xnew[:, 2:3]) * cov_func1(Xnew[:, 3:])

@@ -1118,13 +1118,13 @@ class MarginalKron(Base):

Examples
--------
.. code:: python
.. code-block:: python

# One dimensional column vectors of inputs
X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 2, 5)[:, None]
Xs = [X1, X2]
y = np.random.randn(len(X1)*len(X2)) # toy data
y = np.random.randn(len(X1) * len(X2)) # toy data
with pm.Model() as model:
# Specify the covariance functions for each Xi
cov_func1 = pm.gp.cov.ExpQuad(1, ls=0.1) # Must accept X1 without error
@@ -1266,7 +1266,7 @@ def conditional(self, name, Xnew, pred_noise=False, diag=False, **kwargs):
1, and 4, respectively, then `Xnew` must have 7 columns and a
covariance between the prediction points

.. code:: python
.. code-block:: python

cov_func(Xnew) = cov_func1(Xnew[:, :2]) * cov_func1(Xnew[:, 2:3]) * cov_func1(Xnew[:, 3:])

11 changes: 5 additions & 6 deletions pymc/gp/hsgp_approx.py
@@ -215,7 +215,7 @@ class HSGP(Base):

Examples
--------
.. code:: python
.. code-block:: python

# A three dimensional column vector of inputs.
X = np.random.rand(100, 3)
@@ -357,7 +357,7 @@ def prior_linearized(self, X: TensorLike):

Examples
--------
.. code:: python
.. code-block:: python

# A one dimensional column vector of inputs.
X = np.linspace(0, 10, 100)[:, None]
@@ -544,7 +544,7 @@ class HSGPPeriodic(Base):

Examples
--------
.. code:: python
.. code-block:: python

# A three dimensional column vector of inputs.
X = np.random.rand(100, 3)
@@ -640,7 +640,7 @@ def prior_linearized(self, X: TensorLike):

Examples
--------
.. code:: python
.. code-block:: python

# A one dimensional column vector of inputs.
X = np.linspace(0, 10, 100)[:, None]
@@ -665,8 +665,7 @@ def prior_linearized(self, X: TensorLike):

# The (non-centered) GP approximation is given by
f = pm.Deterministic(
"f",
phi_cos @ (psd * beta[:m]) + phi_sin[..., 1:] @ (psd[1:] * beta[m:])
"f", phi_cos @ (psd * beta[:m]) + phi_sin[..., 1:] @ (psd[1:] * beta[m:])
)
...
