Commit 823cf7a
committed v0.7
1 parent 540f686 commit 823cf7a

12 files changed: +781 −161 lines changed

padasip/__init__.py

Lines changed: 14 additions & 8 deletions
@@ -18,20 +18,20 @@
 index

 License
-************
+===============

 This project is under `MIT License <https://en.wikipedia.org/wiki/MIT_License>`_.

 Installation
-************
+====================

 With `pip <https://pypi.python.org/pypi/pip>`_ from terminal: ``$ pip install padasip``

 Or you can download the source code from Github
 (`link <https://github.com/matousc89/padasip>`_)


 Tutorials
-***********
+===============

 All tutorials are created as Jupyter notebooks.
 You can open a tutorial as html, or you can download it as a notebook.
@@ -50,29 +50,35 @@


 The User Guide
-***************
+=====================

 If you need to know something that is not covered by the tutorials,
 check the complete documentation here


-
 .. toctree::
     :maxdepth: 2
+    :titlesonly:

     sources/preprocess
     sources/filters_mod
     sources/ann
+    sources/misc


 Contact
-**********
+=====================

 By email: matousc@gmail.com


+Changelog
+======================
+
+For information about versions and updates see :ref:`changelog`.
+
 Indices and tables
-*******************
+===========================

 * :ref:`genindex`
 * :ref:`modindex`
 * :ref:`search`
@@ -82,8 +88,8 @@
 from padasip.filters.shortcuts import *
 import padasip.ann
 import padasip.filters
-
 import padasip.preprocess
+import padasip.misc

 # back compatibility with v0.5
 from padasip.preprocess.standardize import standardize

padasip/filters/__init__.py

Lines changed: 244 additions & 11 deletions
@@ -1,25 +1,168 @@
 """
-This sub-module stores adaptive filters and all related stuff.
+.. versionadded:: 0.1
+.. versionchanged:: 0.7
+

 An adaptive filter is a system that changes its adaptive parameters
 - adaptive weights :math:`\\textbf{w}(k)` - according to an optimization algorithm.

-The parameters of all implemented adaptive filters can be:
+An adaptive filter can be described as
+
+:math:`y(k) = w_1 \\cdot x_{1}(k) + ... + w_n \\cdot x_{n}(k)`,
+
+or in a vector form
+
+:math:`y(k) = \\textbf{x}^T(k) \\textbf{w}(k)`.
+
+The adaptation of the adaptive parameters (weights) can be done with
+various algorithms.
+
+Content of this page:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Usage instructions
+================================================
+
+.. rubric:: Adaptive weights initial selection
+
+The parameters of all implemented adaptive filters can be initially set:
+
+* manually and passed to a filter as an array
+
+* :code:`w="random"` - set to random - this will produce a vector of
+  random values (zero mean, 0.5 standard deviation)
+
+* :code:`w="zeros"` - set to zeros
+
+.. rubric:: Input data
+
+The adaptive filters need two inputs

-* selected manually and passed to a filter as an array
+* input matrix :code:`x` where rows represent the samples. Every row (sample)
+  should contain multiple values (features).

-* set to random - this will produce a vector of random values (zero mean,
-  0.5 standard deviation)
+* desired value (target) :code:`d`
+
+If you have only one signal and the historical values of this signal should
+be the input of the filter (data reconstruction/prediction task), you can use
+the helper function :ref:`preprocess-input_from_history` to build the input
+matrix from the historical values.
+
+.. rubric:: Creation of an adaptive filter
+
+If you want to create an adaptive filter (for example NLMS) with size
+:code:`n=4`, learning rate :code:`mu=0.1` and random initial parameters
+(weights), then use the following code
+
+.. code-block:: python
+
+    f = pa.filters.AdaptiveFilter(model="NLMS", n=4, mu=0.1, w="random")
+
+where the returned :code:`f` is an instance of class :code:`FilterNLMS`
+with the given parameters.
+
+.. rubric:: Data filtering
+
+If you have already created an instance of an adaptive filter (:code:`f` in
+the previous example), you can use it to filter
+data :code:`x` with desired value :code:`d` as simply as follows
+
+.. code-block:: python

-* set to zeros
+    y, e, w = f.run(d, x)
+
+where :code:`y` is the output, :code:`e` is the error and :code:`w` is the
+set of parameters at the end of the simulation.

-All filters can be called directly from padasip without further imports
-as follows (LMS filter example):
+In case you just want to filter the data without creating and
+storing a filter instance manually, use the following function
+
+.. code-block:: python
+
+    y, e, w = pa.filters.filter_data(d, x, model="NLMS", mu=0.9, w="random")

->>> import padasip as pa
->>> pa.filters.FilterLMS(3, mu=1.)
-<padasip.filters.lms.FilterLMS instance at 0xb726edec>

+.. rubric:: Search for optimal learning rate
+
+The search for the optimal filter setup (especially the learning rate) is a
+task of critical importance. Therefore a helper function for this task is
+implemented in Padasip. To use this function you need to specify
+
+* number of epochs (for training)
+
+* part of data used in training epochs - `ntrain` (0.5 stands for 50% of
+  given data)
+
+* start and end of the learning rate range you want to test (and number of
+  steps in this range) - `mu_start`, `mu_end`, `steps`
+
+* testing criteria (MSE, RMSE, MAE)
+
+An example for `mu` in a range of 100 values from `[0.01, ..., 1]` follows.
+The example uses 50% of the data for training and the leftover data for
+testing with the MSE criterion. The returned arrays are a list of errors and
+a list of corresponding learning rates, so it is easy to plot and analyze
+the error as a function of the learning rate.
+
+.. code-block:: python
+
+    errors_e, mu_range = f.explore_learning(d, x,
+                    mu_start=0.01,
+                    mu_end=1.,
+                    steps=100, ntrain=0.5, epochs=1,
+                    criteria="MSE")
+
+Note: the optimal learning rate depends on the purpose and usage of the
+filter (amount of training, data characteristics, etc.).
+
+
+Full Working Example
+===================================================
+
+Below is a full working example with visualisation of the results - the NLMS
+adaptive filter used for channel identification.
+
+.. code-block:: python
+
+    import numpy as np
+    import matplotlib.pylab as plt
+    import padasip as pa
+
+    # creation of data
+    N = 500
+    x = np.random.normal(0, 1, (N, 4)) # input matrix
+    v = np.random.normal(0, 0.1, N) # noise
+    d = 2*x[:,0] + 0.1*x[:,1] - 4*x[:,2] + 0.5*x[:,3] + v # target
+
+    # identification
+    f = pa.filters.AdaptiveFilter(model="NLMS", n=4, mu=0.1, w="random")
+    y, e, w = f.run(d, x)
+
+    # show results
+    plt.figure(figsize=(15,9))
+    plt.subplot(211);plt.title("Adaptation");plt.xlabel("samples - k")
+    plt.plot(d,"b", label="d - target")
+    plt.plot(y,"g", label="y - output");plt.legend()
+    plt.subplot(212);plt.title("Filter error");plt.xlabel("samples - k")
+    plt.plot(10*np.log10(e**2),"r", label="e - error [dB]");plt.legend()
+    plt.tight_layout()
+    plt.show()
+
+
+Implemented filters
+========================
+
+.. toctree::
+    :glob:
+    :maxdepth: 1
+
+    filters/*
+
+Code explanation
+==================
 """
 from padasip.filters.lms import FilterLMS
 from padasip.filters.nlms import FilterNLMS
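The filtering equation :math:`y(k) = \textbf{x}^T(k) \textbf{w}(k)` and the run loop described in the docstring above can be illustrated in plain NumPy. This is a minimal sketch of a normalized-LMS update, not the padasip implementation; `nlms_run` and its `eps` regularization term are illustrative choices here.

```python
import numpy as np

def nlms_run(d, x, mu=0.1, eps=1e-3):
    """Minimal NLMS sketch: y(k) = x(k)^T w(k), then a normalized update."""
    n_samples, n = x.shape
    w = np.zeros(n)                # corresponds to w="zeros" initialization
    y = np.zeros(n_samples)
    e = np.zeros(n_samples)
    for k in range(n_samples):
        y[k] = np.dot(x[k], w)     # filter output for sample k
        e[k] = d[k] - y[k]         # error against the target
        # normalized step: scale by input energy, eps avoids division by zero
        w += mu / (eps + np.dot(x[k], x[k])) * e[k] * x[k]
    return y, e, w

# identification task mirroring the docstring example (noiseless for clarity)
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 4))
d = 2*X[:, 0] + 0.1*X[:, 1] - 4*X[:, 2] + 0.5*X[:, 3]
y, e, w = nlms_run(d, X)
print(np.round(w, 2))  # should approach [2, 0.1, -4, 0.5]
```

The normalization by the input energy is what distinguishes NLMS from plain LMS: the effective step size adapts to the magnitude of each input sample.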
@@ -28,9 +171,99 @@
 from padasip.filters.rls import FilterRLS
 from padasip.filters.ap import FilterAP

+def filter_data(d, x, model="lms", **kwargs):
+    """
+    Function that filters data with the selected adaptive filter.
+
+    **Args:**
+
+    * `d` : desired value (1 dimensional array)
+
+    * `x` : input matrix (2-dimensional array). Rows are samples, columns
+      are input arrays.
+
+    **Kwargs:**
+
+    * Any keyword argument that is accepted by the selected filter model.
+      For more information see the documentation of the desired adaptive
+      filter.
+
+    **Returns:**
+
+    * `y` : output value (1 dimensional array).
+      The size corresponds with the desired value.
+
+    * `e` : filter error for every sample (1 dimensional array).
+      The size corresponds with the desired value.
+
+    * `w` : history of all weights (2 dimensional array).
+      Every row is the set of weights for the given sample.
+
+    """
+    # overwrite n with the correct size
+    kwargs["n"] = x.shape[1]
+    # create filter according to model
+    if model in ["LMS", "lms"]:
+        f = FilterLMS(**kwargs)
+    elif model in ["NLMS", "nlms"]:
+        f = FilterNLMS(**kwargs)
+    elif model in ["RLS", "rls"]:
+        f = FilterRLS(**kwargs)
+    elif model in ["GNGD", "gngd"]:
+        f = FilterGNGD(**kwargs)
+    elif model in ["AP", "ap"]:
+        f = FilterAP(**kwargs)
+    else:
+        raise ValueError('Unknown filter model: {}'.format(model))
+    # calculate and return the values
+    y, e, w = f.run(d, x)
+    return y, e, w
+
+def AdaptiveFilter(model="lms", **kwargs):
+    """
+    Function that creates an adaptive filter of the selected model.
+
+    **Args:**
+
+    * `model` : name of the filter model (string): "LMS", "NLMS", "RLS",
+      "GNGD" or "AP" (case insensitive)
+
+    **Kwargs:**
+
+    * Any keyword argument that is accepted by the selected filter model.
+      For more information see the documentation of the desired adaptive
+      filter.
+
+    * It should contain at least the filter size `n`.
+
+    **Returns:**
+
+    * `f` : instance of the selected filter class, created with the given
+      parameters.
+
+    """
+    # check if the filter size was specified
+    if not "n" in kwargs:
+        raise ValueError('Filter size is not defined (n=?).')
+    # create filter according to model
+    if model in ["LMS", "lms"]:
+        f = FilterLMS(**kwargs)
+    elif model in ["NLMS", "nlms"]:
+        f = FilterNLMS(**kwargs)
+    elif model in ["RLS", "rls"]:
+        f = FilterRLS(**kwargs)
+    elif model in ["GNGD", "gngd"]:
+        f = FilterGNGD(**kwargs)
+    elif model in ["AP", "ap"]:
+        f = FilterAP(**kwargs)
+    else:
+        raise ValueError('Unknown filter model: {}'.format(model))
+    # return filter
+    return f
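The model-selection chain that both new functions repeat could also be written as a single dictionary lookup. The sketch below shows the pattern under the assumption that every filter class accepts the same keyword interface; `DummyLMS`, `DummyNLMS`, and `make_filter` are illustrative stand-ins, not padasip names.

```python
# Dictionary-based dispatch: an alternative to the if/elif chains above.
# The Dummy* classes stand in for the real Filter* classes.

class DummyLMS:
    def __init__(self, **kwargs):
        self.kind, self.kwargs = "lms", kwargs

class DummyNLMS:
    def __init__(self, **kwargs):
        self.kind, self.kwargs = "nlms", kwargs

FILTER_CLASSES = {
    "lms": DummyLMS,
    "nlms": DummyNLMS,
}

def make_filter(model="lms", **kwargs):
    """Create a filter instance by model name (case insensitive)."""
    if "n" not in kwargs:
        raise ValueError('Filter size is not defined (n=?).')
    try:
        # lower() covers both "LMS" and "lms", like the original in-lists
        cls = FILTER_CLASSES[model.lower()]
    except KeyError:
        raise ValueError('Unknown filter model: {}'.format(model))
    return cls(**kwargs)

f = make_filter(model="NLMS", n=4, mu=0.1)
print(f.kind)  # nlms
```

Registering each class once in a dict keeps the validation and error message in one place, so adding a new filter model is a one-line change.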
