Notice

Note that the documentation below is from the original release, and is out-of-date. See the repository for up-to-date docs.

As I mentioned a few times earlier, I'm trying to move from PDL to numpy as my
computational tool of choice. I like PDL, but it is full of warts in several
areas, and much work would be needed to clean it up. Oddly, numpy is very warty
as well, but in completely different ways: their warts are complementary. The
major issues with numpy are strange and inconsistent core functions, and weak
and inconsistent broadcasting support. This is an area where PDL is quite good,
so to make this transition more palatable to me, I wrote `numpysane`, a python
module providing more reasonable core functionality. There will be many changes
down the line, but I just made the initial 0.1 release. The code repository
lives here, and the python module is available on pypi as well. The full README
from the initial release appears verbatim below.

## NAME

numpysane: more-reasonable core functionality for numpy

## SYNOPSIS

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100
>>> row = a[0,:] + 1000

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> row
array([1000, 1001, 1002])

>>> nps.glue(a,b, axis=-1)
array([[  0,   1,   2, 100, 101, 102],
       [  3,   4,   5, 103, 104, 105]])

>>> nps.glue(a,b,row, axis=-2)
array([[   0,    1,    2],
       [   3,    4,    5],
       [ 100,  101,  102],
       [ 103,  104,  105],
       [1000, 1001, 1002]])

>>> nps.cat(a,b)
array([[[  0,   1,   2],
        [  3,   4,   5]],

       [[100, 101, 102],
        [103, 104, 105]]])

>>> @nps.broadcast_define( ('n',), ('n',) )
... def inner_product(a, b):
...     return a.dot(b)

>>> inner_product(a,b)
array([ 305, 1250])
```

## DESCRIPTION

Numpy is widely used, relatively polished, and has a wide range of libraries available. At the same time, some of its very core functionality is strange, confusing and just plain wrong. This is in contrast with PDL (http://pdl.perl.org), which has a much more reasonable core, but a number of higher-level warts, and a relative dearth of library support. This module intends to improve the developer experience by providing alternate APIs to some core numpy functionality; APIs that are much more reasonable, especially for those who have used PDL in the past.

Instead of writing a new module, it would be really nice to simply patch numpy to give everybody the more reasonable behavior. I'd be very happy to do that, but the issues lie in some very core functionality, and any changes in behavior would likely break existing code. Any comments on how to achieve better behavior in a less forky manner are welcome.

Finally, if the system DOES make sense in some way that I'm simply not understanding, I'm happy to listen. I have no intention to disparage anyone or anything; I just want a more usable system for numerical computations.

The issues addressed by this module fall into two broad categories:

- Incomplete broadcasting support
- Strange, special-case-ridden rules for basic array manipulation, especially dealing with dimensionality

### Broadcasting

#### Problem

Numpy has limited support for broadcasting (http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html), a generic way to vectorize functions. When making a broadcasted call to a function, you pass in arguments with the inputs to vectorize available in new dimensions, and the broadcasting mechanism automatically calls the function multiple times as needed, and reports the output as an array collecting all the results.

A basic example is an inner product: a function that takes in two identically-sized vectors (1-dimensional arrays) and returns a scalar (0-dimensional array). A broadcasted inner product function could take in two arrays of shape (2,3,4), compute the 6 inner products of length-4 each, and report the output in an array of shape (2,3). Numpy puts the most-significant dimension at the end, which is why this isn't 12 inner products of length-2 each. This is a semi-arbitrary design choice, which could have been made differently: PDL puts the most-significant dimension at the front, for instance.
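To make this concrete, here is a quick sketch of the shapes involved, using the broadcast_define() mechanism this module provides (described below):

```
import numpy as np
import numpysane as nps

@nps.broadcast_define( ('n',), ('n',) )
def inner_product(a, b):
    return a.dot(b)

a = np.arange(24).reshape(2,3,4)
b = a + 1

# Each of the 2*3 = 6 slices is a length-4 vector. One inner product is
# computed per slice, and the results are collected into a (2,3) array
print( inner_product(a, b).shape )      # (2, 3)
```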

The user doesn't choose whether to use broadcasting or not: some functions support it, and some do not. In PDL, broadcasting (called "threading" in that system) is a pervasive concept throughout. A PDL user has an expectation that every function can broadcast, and the documentation for every function is very explicit about the dimensionality of the inputs and outputs. Any data above the expected input dimensions is broadcast.

By contrast, in numpy very few functions know how to broadcast. On top of that, the documentation is usually silent about the broadcasting status of a function in question. And on top of THAT, broadcasting rules state that an array of dimensions (n,m) is functionally identical to one of dimensions (1,1,1,...,1,n,m). However, many numpy functions have special-case rules to create different behaviors for inputs with different numbers of dimensions, and this creates unexpected results. The effect of all this is a messy situation where the user is often not sure of the exact behavior of the functions they're calling, and trial and error is required to make the system do what one wants.
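A small demonstration of the (n,m)-vs-(1,1,...,1,n,m) point (the shapes here are my own, chosen for illustration): broadcasting-aware arithmetic treats the two arrays as interchangeable, while a shape-special-cased function such as hstack() does not:

```
import numpy as np

a = np.arange(6).reshape(2,3)
b = a.reshape(1,1,2,3)              # same data, with extra length-1 dimensions

# broadcasting arithmetic sees the two arrays as equivalent...
print( (a + b).shape )              # (1, 1, 2, 3)

# ...but hstack() treats them completely differently:
print( np.hstack((a,a)).shape )     # (2, 6)
print( np.hstack((b,b)).shape )     # (1, 2, 2, 3)
```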

#### Solution

This module contains functionality to make any arbitrary function broadcastable. This is invoked as a decorator, applied to the arbitrary user function. An example:

```
>>> import numpysane as nps

>>> @nps.broadcast_define( ('n',), ('n',) )
... def inner_product(a, b):
...     return a.dot(b)
```

Here we have a simple inner product function to compute ONE inner product. We call 'broadcast_define' to add a broadcasting-aware wrapper that takes two 1D vectors of length 'n' each (same 'n' for the two inputs). This new 'inner_product' function applies broadcasting, as needed:

```
>>> import numpy as np

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> inner_product(a,b)
array([ 305, 1250])
```

A detailed description of broadcasting rules is available in the numpy documentation: http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html

In short:

- The most significant dimension in a numpy array is the LAST one, so the prototype of an input argument must exactly match a given input's trailing shape. So a prototype shape of (a,b,c) accepts an argument shape of (..., a,b,c), with as many or as few leading dimensions as desired.
- The extra leading dimensions must be compatible across all the inputs. This means that each leading dimension must either
  - be equal to 1
  - be missing (thus assumed to equal 1)
  - be equal to some positive integer >1, consistent across all arguments

- The output is collected into an array that's sized as a superset of the above-prototype shape of each argument

More involved example: A function with input prototype ( (3,), ('n',3), ('n',), ('m',) ) given inputs of shape

```
(1,5,    3)
(2,1,  8,3)
(        8)
(    5,  9)
```

will return an output array of shape (2,5, ...), where ... is the shape of each output slice. Note again that the prototype dictates the TRAILING shape of the inputs.
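As a minimal sketch of this prototype in action (the function body is an arbitrary placeholder that returns a scalar per slice):

```
import numpy as np
import numpysane as nps

@nps.broadcast_define( (3,), ('n',3), ('n',), ('m',) )
def f(a, b, c, d):
    # placeholder: any function of one slice of each input
    return a.sum() + b.sum() + c.sum() + d.sum()

out = f( np.zeros((1,5,3)),
         np.zeros((2,1,8,3)),
         np.zeros((8,)),
         np.zeros((5,9)) )

# each output slice is a scalar, so the output shape is just the
# broadcasted leading dimensions
print( out.shape )      # (2, 5)
```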

Stock numpy has some rudimentary support for this with its vectorize() function, but it assumes only scalar inputs and outputs, which severely limits its usefulness.
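For example, the stock np.vectorize() hands the wrapped function one scalar element at a time, so a function whose atomic operation consumes a whole vector, like the inner product above, cannot be expressed with it:

```
import numpy as np

# np.vectorize() broadcasts, but only over scalar elements
add = np.vectorize(lambda x, y: x + y)
print( add(np.arange(3), 5) )       # [5 6 7]
```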

#### New planned functionality

In addition to this basic broadcasting support, I'm planning the following:

- Output memory should be used more efficiently. This means that the output array should be allocated once, and each slice output should be written directly into the correct place in the array. To make this possible, the output dimensions need to be a part of the prototype, and the output array should be passable to the function being wrapped.
- A C-level broadcast_define(). This would be the analogue of PDL::PP (http://pdl.perl.org/PDLdocs/PP.html). This flavor of broadcast_define() would be invoked by the build system to wrap C functions. It would implement broadcasting awareness in C code it generates, which should work more effectively for performance-sensitive inner loops.
- Automatic parallelization for broadcasted slices. Since each broadcasting loop is independent, this is a very natural place to add parallelism.
- Dimensions should support a symbolic declaration. For instance, one could want a function to accept an input of shape (n) and another of shape (n*n). There's no way to declare this currently, but there should be.

### Strangeness in core routines

#### Problem

There are some core numpy functions whose behavior is strange, full of special cases and very confusing, at least to me. That makes it difficult to achieve some very basic things. In the following examples, I use a function "arr" that returns a numpy array with given dimensions:

```
>>> from functools import reduce    # needed in python3; a builtin in python2

>>> def arr(*shape):
...     product = reduce( lambda x,y: x*y, shape)
...     return np.arange(product).reshape(*shape)

>>> arr(1,2,3)
array([[[0, 1, 2],
        [3, 4, 5]]])

>>> arr(1,2,3).shape
(1, 2, 3)
```

The following sections are an incomplete list of the strange functionality I've encountered.

##### Concatenation

A prime example of confusing functionality is the array concatenation routines. Numpy has a number of functions to do this, each being strange.

###### hstack()

hstack() performs a "horizontal" concatenation. When numpy prints an array, this is the last dimension (remember, the most significant dimensions in numpy are at the end). So one would expect that this function concatenates arrays along this last dimension. In the special case of 1D and 2D arrays, one would be right:

```
>>> np.hstack( (arr(3), arr(3))).shape
(6,)

>>> np.hstack( (arr(2,3), arr(2,3))).shape
(2, 6)
```

but in any other case, one would be wrong:

```
>>> np.hstack( (arr(1,2,3), arr(1,2,3))).shape
(1, 4, 3)     <------ I expect (1, 2, 6)

>>> np.hstack( (arr(1,2,3), arr(1,2,4))).shape
[exception]   <------ I expect (1, 2, 7)

>>> np.hstack( (arr(3), arr(1,3))).shape
[exception]   <------ I expect (1, 6)

>>> np.hstack( (arr(1,3), arr(3))).shape
[exception]   <------ I expect (1, 6)
```

I think the above should all succeed, and should produce the shapes as indicated. Cases such as "np.hstack( (arr(3), arr(1,3)))" are maybe up for debate, but broadcasting rules allow adding as many extra length-1 dimensions as we want without changing the meaning of the object, so I claim this should work. Either way, if you print out the operands for any of the above, you too would expect a "horizontal" stack() to work as stated above.

It turns out that hstack() normally concatenates along axis=1, unless the first argument has only one dimension, in which case axis=0 is used. This is 100% wrong in a system where the most significant dimension is the last one, unless you assume that everyone has only 2D arrays, where axis=1 and the last axis happen to be one and the same.

The correct way to do this is to concatenate along axis=-1. It works for n-dimensional objects, and doesn't require the special-case logic for 1-dimensional objects that hstack() has.
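For instance, reusing the arr() helper from above:

```
import numpy as np
from functools import reduce

def arr(*shape):
    return np.arange(reduce(lambda x,y: x*y, shape)).reshape(*shape)

# axis=-1 concatenates "horizontally" regardless of dimensionality, with no
# special-casing needed for 1D inputs
print( np.concatenate( (arr(3),     arr(3)),     axis=-1).shape )   # (6,)
print( np.concatenate( (arr(2,3),   arr(2,3)),   axis=-1).shape )   # (2, 6)
print( np.concatenate( (arr(1,2,3), arr(1,2,3)), axis=-1).shape )   # (1, 2, 6)
```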

###### vstack()

Similarly, vstack() performs a "vertical" concatenation. When numpy prints an array, this is the second-to-last dimension (remember, the most significant dimensions in numpy are at the end). So one would expect that this function concatenates arrays along this second-to-last dimension. In the special case of 1D and 2D arrays, one would be right:

```
>>> np.vstack( (arr(2,3), arr(2,3))).shape
(4, 3)

>>> np.vstack( (arr(3), arr(3))).shape
(2, 3)

>>> np.vstack( (arr(1,3), arr(3))).shape
(2, 3)

>>> np.vstack( (arr(3), arr(1,3))).shape
(2, 3)

>>> np.vstack( (arr(2,3), arr(3))).shape
(3, 3)
```

Note that this function appears to tolerate some amount of shape mismatch. It does so in a form one would expect, but given the state of the rest of this system, I found it surprising. For instance "np.hstack( (arr(1,3), arr(3)))" fails, so one would think that "np.vstack( (arr(1,3), arr(3)))" would fail too.

And once again, adding more dimensions makes it confused, for the same reason:

```
>>> np.vstack( (arr(1,2,3), arr(2,3))).shape
[exception]   <------ I expect (1, 4, 3)

>>> np.vstack( (arr(1,2,3), arr(1,2,3))).shape
(2, 2, 3)     <------ I expect (1, 4, 3)
```

Similarly to hstack(), vstack() concatenates along axis=0, which is "vertical" only for 2D arrays, but not for any others. And similarly to hstack(), the 1D case has special-cased logic to work properly.

The correct way to do this is to concatenate along axis=-2. It works for n-dimensional objects, and doesn't require the special case for 1-dimensional objects that vstack() has.
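Again, a quick sketch:

```
import numpy as np

# axis=-2 concatenates "vertically" for arrays of any dimensionality
a = np.arange(6).reshape(1,2,3)
print( np.concatenate( (a, a), axis=-2).shape )     # (1, 4, 3)

b = np.arange(6).reshape(2,3)
print( np.concatenate( (b, b), axis=-2).shape )     # (4, 3)
```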

###### dstack()

I'll skip the detailed description, since this is similar to hstack() and vstack(). The intent was to concatenate across axis=-3, but the implementation takes axis=2 instead. This is wrong, as before. And I find it strange that these 3 functions even exist, since they are all special-cases: the concatenation axis should be an argument, and at most, the edge special case (hstack()) should exist. This brings us to the next function:

###### concatenate()

This is a more general function, and unlike hstack(), vstack() and dstack(), it takes as input a list of arrays AND the concatenation dimension. It accepts negative concatenation dimensions to allow us to count from the end, so things should work better. And many of the cases that failed previously now work:

```
>>> np.concatenate( (arr(1,2,3), arr(1,2,3)), axis=-1).shape
(1, 2, 6)

>>> np.concatenate( (arr(1,2,3), arr(1,2,4)), axis=-1).shape
(1, 2, 7)

>>> np.concatenate( (arr(1,2,3), arr(1,2,3)), axis=-2).shape
(1, 4, 3)
```

But many things still don't work as I would expect:

```
>>> np.concatenate( (arr(1,3), arr(3)), axis=-1).shape
[exception]   <------ I expect (1, 6)

>>> np.concatenate( (arr(3), arr(1,3)), axis=-1).shape
[exception]   <------ I expect (1, 6)

>>> np.concatenate( (arr(1,3), arr(3)), axis=-2).shape
[exception]   <------ I expect (3, 3)

>>> np.concatenate( (arr(3), arr(1,3)), axis=-2).shape
[exception]   <------ I expect (2, 3)

>>> np.concatenate( (arr(2,3), arr(2,3)), axis=-3).shape
[exception]   <------ I expect (2, 2, 3)
```

This function works as expected only if

- All inputs have the same number of dimensions
- All inputs have a matching shape, except for the dimension along which we're concatenating
- All inputs HAVE the dimension along which we're concatenating

A legitimate use case that violates these conditions: I have an object that contains N 3D vectors, and I want to add another 3D vector to it. This is essentially the first failing example above.
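This module's glue() function (documented below) handles this case, since it implicitly adds leading length-1 dimensions as needed; a sketch:

```
import numpy as np
import numpysane as nps

vectors = np.arange(15).reshape(5,3)    # 5 vectors in R^3
v       = np.array((100, 101, 102))     # one more vector; shape (3,)

# np.concatenate( (vectors, v), axis=-2 ) throws an exception, but
# nps.glue() implicitly promotes v to shape (1,3), and succeeds
print( nps.glue( vectors, v, axis=-2 ).shape )      # (6, 3)
```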

###### stack()

The name makes it sound exactly like concatenate(), and it takes the same arguments, but it is very different. stack() requires that all inputs have EXACTLY the same shape. It then concatenates all the inputs along a new dimension, and places that dimension in the location given by the 'axis' input. If this is the exact type of concatenation you want, this function works fine. But it's one of many things a user may want to do.
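For identically-shaped inputs and a new leading axis, stack() does the same thing as this module's cat(); a small sketch:

```
import numpy as np
import numpysane as nps

a = np.arange(6).reshape(2,3)
b = a + 100

# both concatenate along a new leading dimension
print( np.stack( (a,b), axis=0 ).shape )    # (2, 2, 3)
print( nps.cat( a, b ).shape )              # (2, 2, 3)
```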

##### inner() and dot()

Another arbitrary example of a strange API is np.dot() and np.inner(). In a real-valued n-dimensional Euclidean space, a "dot product" is just another name for an "inner product". Numpy disagrees.

It looks like np.dot() is matrix multiplication, with some wonky behaviors when given higher-dimension objects, and with some special-case behaviors for 1-dimensional and 0-dimensional objects:

```
>>> np.dot( arr(4,5,2,3), arr(3,5)).shape
(4, 5, 2, 5)    <--- expected result for a broadcasted matrix multiplication

>>> np.dot( arr(3,5), arr(4,5,2,3)).shape
[exception]     <--- np.dot() is not commutative. Expected for matrix
                     multiplication, but not for a dot product

>>> np.dot( arr(4,5,2,3), arr(1,3,5)).shape
(4, 5, 2, 1, 5) <--- don't know where this came from at all

>>> np.dot( arr(4,5,2,3), arr(3)).shape
(4, 5, 2)       <--- 1D special case. This is a dot product.

>>> np.dot( arr(4,5,2,3), 3).shape
(4, 5, 2, 3)    <--- 0D special case. This is a scaling.
```

It looks like np.inner() is some sort of quasi-broadcastable inner product, also with some funny dimensioning rules. In many cases it looks like np.dot(a,b) is the same as np.inner(a, transpose(b)) where transpose() swaps the last two dimensions:

```
>>> np.inner( arr(4,5,2,3), arr(5,3)).shape
(4, 5, 2, 5)    <--- All the length-3 inner products collected into a shape
                     with not-quite-broadcasting rules

>>> np.inner( arr(5,3), arr(4,5,2,3)).shape
(5, 4, 5, 2)    <--- np.inner() is not commutative. Unexpected for an
                     inner product

>>> np.inner( arr(4,5,2,3), arr(1,5,3)).shape
(4, 5, 2, 1, 5) <--- No idea

>>> np.inner( arr(4,5,2,3), arr(3)).shape
(4, 5, 2)       <--- 1D special case. This is a dot product.

>>> np.inner( arr(4,5,2,3), 3).shape
(4, 5, 2, 3)    <--- 0D special case. This is a scaling.
```

##### atleast_xd()

Numpy has 3 special-case functions atleast_1d(), atleast_2d() and atleast_3d(). For 4d and higher, you need to do something else. As expected by now, these do surprising things:

```
>>> np.atleast_3d( arr(3)).shape
(1, 3, 1)
```

I don't know when this is what I would want, so we move on.

#### Solution

This module introduces new functions that can be used for this core functionality instead of the builtin numpy functions. These new functions work in ways that (I think) are more intuitive and more reasonable. They do not refer to anything being "horizontal" or "vertical", nor do they talk about "rows" or "columns"; these concepts simply don't apply in a generic N-dimensional system. These functions are very explicit about the dimensionality of the inputs/outputs, and fit well into a broadcasting-aware system. Furthermore, the names and semantics of these new functions come directly from PDL, which is more consistent in this area.

Since these functions assume that broadcasting is an important concept in the system, the given axis indices should be counted from the most significant dimension: the last dimension in numpy. This means that where an axis index is specified, negative indices are encouraged. glue() forbids axis>=0 outright.

Example for further justification:

An array containing N 3D vectors would have shape (N,3). Another array containing a single 3D vector would have shape (3). Counting the dimensions from the end, each vector is indexed in dimension -1. However, counting from the front, the vector is indexed in dimension 0 or 1, depending on which of the two arrays we're looking at. If we want to add the single vector to the array containing the N vectors, and we mistakenly try to concatenate along the first dimension, it would fail if N != 3. But if we're unlucky, and N=3, then we'd get a nonsensical output array of shape (3,4). Why would an array of N 3D vectors have shape (N,3) and not (3,N)? Because if we apply python iteration to it, we'd expect to get N iterates of arrays with shape (3,) each, and numpy iterates from the first dimension:

```
>>> a = np.arange(2*3).reshape(2,3)

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> [x for x in a]
[array([0, 1, 2]), array([3, 4, 5])]
```

New functions this module provides (documented fully in the next section):

##### glue

Concatenates arrays along a given axis. Implicit length-1 dimensions are added at the start as needed. Dimensions other than the gluing axis must match exactly.

##### cat

Concatenates a given list of arrays along a new least-significant (leading) axis. Again, implicit length-1 dimensions are added as needed, the resulting shapes must match, and no data duplication occurs.

##### clump

Reshapes the array by grouping together the 'n' most significant dimensions, where 'n' is given. So for instance, if x.shape is (2,3,4), then nps.clump(x,2).shape is (2,12).

##### atleast_dims

Adds length-1 dimensions at the front of an array so that all the given dimensions are in-bounds. Given axis<0 can expand the shape; given axis>=0 MUST already be in-bounds. This preserves the alignment of the most-significant axis index.

##### mv

Moves a dimension from one position to another

##### xchg

Exchanges the positions of two dimensions

##### transpose

Reverses the order of the two most significant dimensions in an array. The whole array is seen as being an array of 2D matrices, each matrix living in the 2 most significant dimensions, which implies this definition.

##### dummy

Adds a single length-1 dimension at the given position

##### reorder

Completely reorders the dimensions in an array

##### dot

Broadcast-aware non-conjugating dot product. Identical to inner

##### vdot

Broadcast-aware conjugating dot product

##### inner

Broadcast-aware inner product. Identical to dot

##### outer

Broadcast-aware outer product.

##### matmult

Broadcast-aware matrix multiplication

#### New planned functionality

The functions listed above are a start, but more will be added with time.

## INTERFACE

### broadcast_define()

Vectorizes an arbitrary function, expecting input as in the given prototype.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> @nps.broadcast_define( ('n',), ('n',) )
... def inner_product(a, b):
...     return a.dot(b)

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> inner_product(a,b)
array([ 305, 1250])
```

The prototype defines the dimensionality of the inputs. In the inner product example above, the input is two 1D vectors of length 'n' each. In particular, the 'n' is the same for the two inputs. This function is intended to be used as a decorator, applied to a function defining the operation to be vectorized. Each element in the prototype list refers to each input, in order. In turn, each such element is a list that describes the shape of that input. Each of these shape descriptors can be any of

- a positive integer, indicating an input dimension of exactly that length
- a string, indicating an arbitrary, but internally consistent dimension

The normal numpy broadcasting rules (as described elsewhere) apply. In summary:

- Dimensions are aligned at the end of the shape list, and must match the prototype
- Extra dimensions left over at the front must be consistent for all the input arguments, meaning:
  - All dimensions !=1 must be identical
  - Missing dimensions are implicitly set to 1
  - Dimensions that are =1 are set to the lengths implied by other arguments
- The output has a shape where
  - The trailing dimensions are whatever the function being broadcasted outputs
  - The leading dimensions come from the extra dimensions in the inputs

Let's look at a more involved example. Let's say we have a function that takes a set of points in R^2 and a single center point in R^2, and finds a best-fit least-squares line that passes through the given center point. Let it return a 3D vector containing the slope, y-intercept and the RMS residual of the fit. This broadcasting-enabled function can be defined like this:

```
import numpy as np
import numpysane as nps

@nps.broadcast_define( ('n',2), (2,) )
def fit(xy, c):
    # line-through-origin-model: y = m*x
    # E = sum( (m*x - y)**2 )
    # dE/dm = 2*sum( (m*x-y)*x ) = 0
    # ----> m = sum(x*y)/sum(x*x)
    x,y = (xy - c).transpose()
    m = np.sum(x*y) / np.sum(x*x)

    err = m*x - y
    err **= 2
    rms = np.sqrt(err.mean())

    # I return m,b because I need to translate the line back
    b = c[1] - m*c[0]

    return np.array((m,b,rms))
```

And I can use broadcasting to compute a number of these fits at once. Let's say I want to compute 4 different fits of 5 points each. I can do this:

```
n = 5
m = 4

c = np.array((20,300))

xy = np.arange(m*n*2, dtype=np.float64).reshape(m,n,2) + c
xy += np.random.rand(*xy.shape)*5

res = fit( xy, c )
mb  = res[..., 0:2]
rms = res[..., 2]
print("RMS residuals: {}".format(rms))
```

Here I had 4 different sets of points, but a single center point c. If I wanted 4 different center points, I could pass c as an array of shape (4,2). I can use broadcasting to plot all the results (the points and the fitted lines):

```
import gnuplotlib as gp
gp.plot( *nps.mv(xy,-1,0),
         _with    = 'linespoints',
         equation = ['{}*x + {}'.format(mb_single[0],
                                        mb_single[1]) for mb_single in mb],
         unset    = 'grid',
         square   = 1)
```

This function is analogous to thread_define() in PDL.

### glue()

Concatenates a given list of arrays along the given 'axis' keyword argument.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100
>>> row = a[0,:] + 1000

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> row
array([1000, 1001, 1002])

>>> nps.glue(a,b, axis=-1)
array([[  0,   1,   2, 100, 101, 102],
       [  3,   4,   5, 103, 104, 105]])

>>> nps.glue(a,b,row, axis=-2)
array([[   0,    1,    2],
       [   3,    4,    5],
       [ 100,  101,  102],
       [ 103,  104,  105],
       [1000, 1001, 1002]])

>>> nps.glue(a,b, axis=-3)
array([[[  0,   1,   2],
        [  3,   4,   5]],

       [[100, 101, 102],
        [103, 104, 105]]])
```

If no 'axis' keyword argument is given, a new dimension is added at the front, and we concatenate along that new dimension. This case is equivalent to numpysane.cat()
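Continuing the synopsis above, a quick check of this equivalence:

```
>>> np.array_equal( nps.glue(a,b), nps.cat(a,b) )
True
```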

In order to count dimensions from the inner-most outwards, this function accepts only negative axis arguments. This is because numpy broadcasts from the last dimension, and the last dimension is the inner-most in the (usual) internal storage scheme. Allowing glue() to look at dimensions at the start would allow it to unalign the broadcasting dimensions, which is never what you want.

To glue along the last dimension, pass axis=-1; to glue along the second-to-last dimension, pass axis=-2, and so on.

Unlike in PDL, this function refuses to create duplicated data to make the shapes fit. In my experience, this isn't what you want, and can create bugs. For instance, PDL does this:

```
pdl> p sequence(3,2)
[
 [0 1 2]
 [3 4 5]
]
pdl> p sequence(3)
[0 1 2]
pdl> p PDL::glue( 0, sequence(3,2), sequence(3) )
[
 [0 1 2 0 1 2]   <--- Note the duplicated "0,1,2"
 [3 4 5 0 1 2]
]
```

while numpysane.glue() does this:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a[0:1,:]

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[0, 1, 2]])

>>> nps.glue(a,b,axis=-1)
[exception]
```

Finally, this function adds as many length-1 dimensions at the front as required. Note that this does not create new data, just new degenerate dimensions. Example:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> res = nps.glue(a,b, axis=-5)
>>> res
array([[[[[  0,   1,   2],
          [  3,   4,   5]]]],

       [[[[100, 101, 102],
          [103, 104, 105]]]]])

>>> res.shape
(2, 1, 1, 2, 3)
```

### cat()

Concatenates a given list of arrays along a new first (outer) dimension.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = a + 100
>>> c = a - 100

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[100, 101, 102],
       [103, 104, 105]])

>>> c
array([[-100,  -99,  -98],
       [ -97,  -96,  -95]])

>>> res = nps.cat(a,b,c)
>>> res
array([[[   0,    1,    2],
        [   3,    4,    5]],

       [[ 100,  101,  102],
        [ 103,  104,  105]],

       [[-100,  -99,  -98],
        [ -97,  -96,  -95]]])

>>> res.shape
(3, 2, 3)

>>> [x for x in res]
[array([[0, 1, 2],
        [3, 4, 5]]),
 array([[100, 101, 102],
        [103, 104, 105]]),
 array([[-100,  -99,  -98],
        [ -97,  -96,  -95]])]
```

This function concatenates the input arrays into an array shaped like the highest-dimensioned input, but with a new outer (at the start) dimension. The concatenation axis is this new dimension.

As usual, the dimensions are aligned along the last one, so broadcasting will continue to work as expected. Note that this is the opposite operation from iterating a numpy array; see the example above.

### clump()

Groups the given n most significant dimensions together.

Synopsis:

```
>>> import numpysane as nps

>>> nps.clump( arr(2,3,4), n=2).shape
(2, 12)
```

### atleast_dims()

Returns an array with extra length-1 dimensions to contain all given axes.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> nps.atleast_dims(a, -1).shape
(2, 3)

>>> nps.atleast_dims(a, -2).shape
(2, 3)

>>> nps.atleast_dims(a, -3).shape
(1, 2, 3)

>>> nps.atleast_dims(a, 0).shape
(2, 3)

>>> nps.atleast_dims(a, 1).shape
(2, 3)

>>> nps.atleast_dims(a, 2).shape
[exception]

>>> l = [-3,-2,-1,0,1]
>>> nps.atleast_dims(a, l).shape
(1, 2, 3)

>>> l
[-3, -2, -1, 1, 2]
```

If the given axes already exist in the given array, the given array itself is returned. Otherwise length-1 dimensions are added to the front until all the requested dimensions exist. The given axis>=0 dimensions MUST all be in-bounds from the start, otherwise the most-significant axis becomes unaligned; an exception is thrown if this is violated. The given axis<0 dimensions that are out-of-bounds result in new dimensions added at the front.

If new dimensions need to be added at the front, then any axis>=0 indices become offset. For instance:

```
>>> x.shape
(2, 3, 4)

>>> [x.shape[i] for i in (0,-1)]
[2, 4]

>>> x = nps.atleast_dims(x, 0, -1, -5)

>>> x.shape
(1, 1, 2, 3, 4)

>>> [x.shape[i] for i in (0,-1)]
[1, 4]
```

Before the call, axis=0 refers to the length-2 dimension and axis=-1 refers to the length-4 dimension. After the call, axis=-1 refers to the same dimension as before, but axis=0 now refers to a new length-1 dimension. If it is desired to compensate for this offset, then instead of passing the axes as separate arguments, pass in a single list of the axis indices. This list will be modified to offset the axis>=0 values appropriately. Ideally, you only pass in axes<0, and this does not apply. Doing this in the above example:

```
>>> l
[0, -1, -5]

>>> x.shape
(2, 3, 4)

>>> [x.shape[i] for i in (l[0],l[1])]
[2, 4]

>>> x = nps.atleast_dims(x, l)

>>> x.shape
(1, 1, 2, 3, 4)

>>> l
[2, -1, -5]

>>> [x.shape[i] for i in (l[0],l[1])]
[2, 4]
```

We passed the axis indices in a list, and this list was modified to reflect the new indices: The original axis=0 becomes known as axis=2. Again, if you pass in only axis<0, then you don't need to care about this.

### mv()

Moves a given axis to a new position. Similar to numpy.moveaxis().

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.mv( a, -1, 0).shape
(4, 2, 3)

>>> nps.mv( a, -1, -5).shape
(4, 1, 1, 2, 3)

>>> nps.mv( a, 0, -5).shape
(2, 1, 1, 3, 4)
```

New length-1 dimensions are added at the front, as required, and any axis>=0 that are passed in refer to the array BEFORE these new dimensions are added.

### xchg()

Exchanges the positions of the two given axes. Similar to numpy.swapaxes()

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.xchg( a, -1, 0).shape
(4, 3, 2)

>>> nps.xchg( a, -1, -5).shape
(4, 1, 2, 3, 1)

>>> nps.xchg( a, 0, -5).shape
(2, 1, 1, 3, 4)
```

New length-1 dimensions are added at the front, as required, and any axis>=0 that are passed in refer to the array BEFORE these new dimensions are added.

### transpose()

Reverses the order of the last two dimensions.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.transpose(a).shape
(2, 4, 3)

>>> nps.transpose( np.arange(3) ).shape
(3, 1)
```

A "matrix" is generally seen as a 2D array that we can transpose by looking at the 2 dimensions in the opposite order. Here we treat an n-dimensional array as an n-2 dimensional object containing 2D matrices. As usual, the last two dimensions contain the matrix.

New length-1 dimensions are added at the front, as required, meaning that a 1D input of shape (n,) is interpreted as a 2D input of shape (1,n), and the transpose is an array of shape (n,1).

### dummy()

Adds a single length-1 dimension at the given position.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.dummy(a, 0).shape
(1, 2, 3, 4)

>>> nps.dummy(a, 1).shape
(2, 1, 3, 4)

>>> nps.dummy(a, -1).shape
(2, 3, 4, 1)

>>> nps.dummy(a, -2).shape
(2, 3, 1, 4)

>>> nps.dummy(a, -5).shape
(1, 1, 2, 3, 4)
```

This is similar to numpy.expand_dims(), but handles out-of-bounds dimensions better. New length-1 dimensions are added at the front, as required, and any axis>=0 that are passed in refer to the array BEFORE these new dimensions are added.
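A sketch of the difference (note that recent numpys refuse out-of-bounds axes in np.expand_dims()):

```
import numpy as np
import numpysane as nps

a = np.arange(24).reshape(2,3,4)

print( np.expand_dims(a, -1).shape )    # (2, 3, 4, 1): in-bounds axes work
print( nps.dummy(a, -5).shape )         # (1, 1, 2, 3, 4): out-of-bounds axes
                                        # handled by adding leading dimensions
# np.expand_dims(a, -5) throws: the requested axis must be in-bounds
```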

### reorder()

Reorders the dimensions of an array.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(24).reshape(2,3,4)
>>> a.shape
(2, 3, 4)

>>> nps.reorder( a, 0, -1, 1 ).shape
(2, 4, 3)

>>> nps.reorder( a, -2, -1, 0 ).shape
(3, 4, 2)

>>> nps.reorder( a, -4, -2, -5, -1, 0 ).shape
(1, 3, 1, 4, 2)
```

This is very similar to numpy.transpose(), but handles out-of-bounds dimensions much better.

New length-1 dimensions are added at the front, as required, and any axis>=0 that are passed in refer to the array BEFORE these new dimensions are added.

### dot()

Non-conjugating dot product of two 1-dimensional n-long vectors.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(3)
>>> b = a+5

>>> a
array([0, 1, 2])

>>> b
array([5, 6, 7])

>>> nps.dot(a,b)
array(20)
```

This is identical to numpysane.inner(). For a conjugating version of this function, use nps.vdot().

### vdot()

Conjugating dot product of two 1-dimensional n-long vectors.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.array(( 1 + 2j, 3 + 4j, 5 + 6j))
>>> b = a+5

>>> a
array([ 1.+2.j,  3.+4.j,  5.+6.j])

>>> b
array([  6.+2.j,   8.+4.j,  10.+6.j])

>>> nps.vdot(a,b)
array((136-60j))

>>> nps.dot(a,b)
array((24+148j))
```

Unlike numpysane.dot(), this function conjugates the first argument before computing the dot product; this matches the behavior of numpy.vdot(). For a non-conjugating version of this function, use nps.dot().
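Continuing the synopsis above, the conjugation can be checked by hand: nps.vdot(a,b) matches the non-conjugating dot product of a.conj() and b:

```
>>> nps.dot( a.conj(), b )
array((136-60j))
```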

### outer()

Outer product of two 1-dimensional n-long vectors.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(3)
>>> b = a+5

>>> a
array([0, 1, 2])

>>> b
array([5, 6, 7])

>>> nps.outer(a,b)
array([[ 0,  0,  0],
       [ 5,  6,  7],
       [10, 12, 14]])
```

### matmult()

Multiplication of two matrices.

Synopsis:

```
>>> import numpy as np
>>> import numpysane as nps

>>> a = np.arange(6).reshape(2,3)
>>> b = np.arange(12).reshape(3,4)

>>> a
array([[0, 1, 2],
       [3, 4, 5]])

>>> b
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

>>> nps.matmult(a,b)
array([[20, 23, 26, 29],
       [56, 68, 80, 92]])
```

## COMPATIBILITY

Python2 and python3 are both supported. Please report a bug if either one doesn't work.

## REPOSITORY

## AUTHOR

Dima Kogan <dima@secretsauce.net>

## LICENSE AND COPYRIGHT

Copyright 2016 Dima Kogan.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License (version 3 or higher) as published by the Free Software Foundation.