I have sample data for which I would like to compute a confidence interval, assuming a normal distribution.
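A minimal sketch of one common approach using scipy.stats, assuming a 95% interval is wanted; the values in `data` are made-up placeholders:

    import numpy as np
    from scipy import stats

    data = np.array([2.1, 2.5, 1.9, 2.3, 2.8, 2.2])  # placeholder sample

    mean = np.mean(data)
    sem = stats.sem(data)  # standard error of the mean
    # t-based interval; for large samples this approaches the normal interval
    low, high = stats.t.interval(0.95, len(data) - 1, loc=mean, scale=sem)
    print(low, high)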
INTRODUCTION: I have a list of more than 30,000 integer values ranging from 0 to 47, inclusive, e.g. [0, 0, 0, 0, ...], sampled from some continuous distribution.
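A rough sketch of fitting a candidate distribution to such data and checking the fit, assuming that is the intended task; the normal distribution and the simulated data below are stand-ins, not prescribed choices:

    import numpy as np
    from scipy import stats

    data = np.random.randint(0, 48, size=30000)  # simulated stand-in

    # Fit a candidate continuous distribution, then check it with a K-S test
    loc, scale = stats.norm.fit(data)
    d_stat, p_value = stats.kstest(data, "norm", args=(loc, scale))
    print(d_stat, p_value)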
After doing some processing on an audio or image array, it needs to be normalized within a range before it can be written back to a file.
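A minimal sketch of linear rescaling; `rescale` is a hypothetical helper, and it assumes the array is not constant (otherwise the division is zero):

    import numpy as np

    def rescale(arr, lo=0.0, hi=1.0):
        """Linearly map arr's min/max onto the range [lo, hi]."""
        a_min, a_max = float(arr.min()), float(arr.max())
        return (arr - a_min) / (a_max - a_min) * (hi - lo) + lo

    audio = np.array([-1.5, 0.0, 2.5, 0.5])  # placeholder signal
    print(rescale(audio, -1.0, 1.0))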
Using standard Python lists, I can do the following:

    arr = []
    arr.append([1, 2, 3])
    arr.append([4, 5, 6])
    # arr is now [[1, 2, 3], [4, 5, 6]]

How can I do the same thing with a NumPy array?
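One sketch of the usual answer, assuming rows arrive one at a time: collect them in a Python list and convert once at the end, or use np.vstack to grow an existing 2-D array:

    import numpy as np

    rows = []
    rows.append([1, 2, 3])
    rows.append([4, 5, 6])
    arr = np.array(rows)               # array([[1, 2, 3], [4, 5, 6]])

    arr = np.vstack([arr, [7, 8, 9]])  # appending one further row

Converting once at the end is generally cheaper, since np.vstack copies the whole array on every call.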
I have a set of data and I want to compare which line describes it best (polynomials of different orders, exponential, and so on).
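A rough sketch of comparing candidate polynomial fits by residual sum of squares, with made-up data. Note that a higher-order polynomial always reduces the raw residual, so in practice a penalized criterion (e.g. AIC) or held-out data is often used to compare models of different complexity:

    import numpy as np

    x = np.linspace(0, 10, 50)                              # placeholder data
    y = 3 * x**2 + np.random.normal(scale=5.0, size=x.size)

    for order in (1, 2, 3):
        coeffs = np.polyfit(x, y, order)
        rss = np.sum((y - np.polyval(coeffs, x)) ** 2)      # residual sum of squares
        print(order, rss)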
I am looking for a function that takes as input two lists, and returns the Pearson correlation, and the significance of the correlation.
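scipy.stats.pearsonr does exactly this: it returns the correlation coefficient and the two-tailed p-value. A minimal example with placeholder lists:

    from scipy import stats

    x = [1, 2, 3, 4, 5]
    y = [2, 1, 4, 3, 7]
    r, p = stats.pearsonr(x, y)  # coefficient and two-tailed p-value
    print(r, p)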
Is there a SciPy function or NumPy function or module for Python that calculates the running mean of a 1D array given a specific window?
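One commonly suggested approach uses np.convolve with a uniform kernel; `running_mean` below is a hypothetical helper, and it assumes an unweighted window:

    import numpy as np

    def running_mean(x, window):
        """Mean over a sliding window; output is shorter by window - 1."""
        return np.convolve(x, np.ones(window) / window, mode="valid")

    print(running_mean(np.arange(10), 3))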
Let's assume we have a dataset which might be given approximately by

    import numpy as np
    x = np.linspace(0, 2*np.pi, 100)
    y = np.sin(x) + np.random.random(100) * 0.2

so we have a variation of 20% of the dataset. How can I smooth this curve?
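One possible smoother here is a Savitzky-Golay filter from scipy.signal; the window length and polynomial order below are arbitrary choices for illustration, not prescribed values:

    import numpy as np
    from scipy.signal import savgol_filter

    x = np.linspace(0, 2 * np.pi, 100)
    y = np.sin(x) + np.random.random(100) * 0.2

    # window_length must be odd; larger windows smooth more aggressively
    y_smooth = savgol_filter(y, window_length=11, polyorder=3)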
I have a very similar question to this question, but still one step behind. I have only one version of Python installed.
SciPy appears to provide most (but not all [1]) of NumPy's functions in its own namespace. In other words, if there is a function named numpy.foo, there is almost certainly a scipy.foo.
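A quick way to inspect the overlap on a given installation; note that recent SciPy releases have deprecated and removed many of these top-level NumPy aliases, so the result depends heavily on the installed version:

    import numpy as np
    import scipy

    # Names exposed by both top-level namespaces on this installation
    shared = sorted(set(dir(np)) & set(dir(scipy)))
    print(len(shared), shared[:10])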
I just discovered a logical bug in my code which was causing all sorts of problems. I was inadvertently doing ...
I need to create a NumPy array of length n, each element of which is v. Is there anything better than:

    a = empty(n)
    for i in range(n):
        a[i] = v
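The usual modern answer is np.full; allocating with np.empty and filling in place is a close second. A minimal sketch with placeholder n and v:

    import numpy as np

    n, v = 10, 7       # placeholder size and fill value

    a = np.full(n, v)  # one call, no Python-level loop

    b = np.empty(n)    # alternative: allocate, then fill in place
    b.fill(v)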
I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices.
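One standard approach: draw a single permutation of the leading dimension and index both arrays with it. A sketch using NumPy's Generator API, with placeholder arrays:

    import numpy as np

    a = np.arange(10).reshape(5, 2)  # placeholder arrays sharing
    b = np.arange(5)                 # the same leading dimension (5)

    rng = np.random.default_rng()
    perm = rng.permutation(len(a))   # one shared permutation
    a_shuffled, b_shuffled = a[perm], b[perm]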
In numpy / scipy, is there an efficient way to get frequency counts for unique values in an array? Something that, given x = array([1, 1, 1, 2, 2, 2, 5, 25, 1, 1]), returns each unique value paired with its count.
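np.unique with return_counts=True does this directly (available since NumPy 1.9); np.bincount is an alternative for small non-negative integers:

    import numpy as np

    x = np.array([1, 1, 1, 2, 2, 2, 5, 25, 1, 1])

    values, counts = np.unique(x, return_counts=True)
    print(np.column_stack((values, counts)))  # [[1 5] [2 3] [5 1] [25 1]]

    print(np.bincount(x))  # index = value, entry = count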
I have created an array thusly:

    import numpy as np
    data = np.zeros((512, 512, 3), dtype=np.uint8)
    data[256, 256] = [255, 0, 0]

What I want this to do is display a single red dot in the center of a 512x512 image.
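One way to turn such an array into a viewable image, sketched with Pillow (matplotlib's plt.imshow(data) would also display it):

    import numpy as np
    from PIL import Image

    data = np.zeros((512, 512, 3), dtype=np.uint8)
    data[256, 256] = [255, 0, 0]  # one red pixel at the center

    img = Image.fromarray(data)   # interpret the array as an RGB image
    img.save("dot.png")
    img.show()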
If I have a NumPy dtype, how do I automatically convert it to its closest Python data type? For example, numpy.float32 -> python float, numpy.float64 -> python float, numpy.uint32 -> python int.
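NumPy scalars carry an item() method that returns the value as the closest built-in Python type, which covers the cases above:

    import numpy as np

    print(type(np.float32(3.5).item()))  # <class 'float'>
    print(type(np.float64(3.5).item()))  # <class 'float'>
    print(type(np.uint32(7).item()))     # <class 'int'>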