
Commit 165b3cc

Merge pull request #216 from ToFuProject/devel
[#devel] Prepare 0.0.50
2 parents: 2ae7403 + 06c477f

5 files changed (+64, -63 lines)


.github/workflows/python-publish.yml

Lines changed: 8 additions & 11 deletions
@@ -18,21 +18,18 @@ on:
     types: [created]
 
 jobs:
-  deploy:
+  pypi:
     name: Publish sdist to Pypi
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
       - uses: astral-sh/setup-uv@v5
         with:
           python-version: '3.11'
-      - run: uv build
-      # Check that basic features work and we didn't miss to include crucial files
-      - name: Smoke test (wheel)
-        run: uv run --isolated --no-project -p 3.11 --with dist/*.whl datastock/tests
-      - name: Smoke test (source distribution)
-        run: uv run --isolated --no-project -p 3.11 --with dist/*.tar.gz datastock/tests
-      - run: uv publish --trusted-publishing always
-        with:
-          user: __token__
-          password: ${{ secrets.PYPI_API_TOKEN }}
+      - run: uv build
+      # Check that basic features work and we didn't miss to include crucial files
+      - name: import test (wheel)
+        run: uv run --isolated --no-project -p 3.11 --with dist/*.whl datastock/tests/prepublish.py
+      - name: import test (source distribution)
+        run: uv run --isolated --no-project -p 3.11 --with dist/*.tar.gz datastock/tests/prepublish.py
+      - run: uv publish -t ${{ secrets.PYPI_API_TOKEN }}
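The publish job now builds with uv and runs the new import test against both the wheel and the source distribution before publishing. Below is a minimal local reproduction of that check, as a sketch only: it assumes `uv` is installed and that `dist/` ends up holding exactly one wheel and one sdist; the use of `subprocess` and `glob` here is illustrative and not part of the commit.

```python
import glob
import subprocess

# build the wheel and the source distribution into dist/
subprocess.run(["uv", "build"], check=True)

# run the prepublish import test against each artifact, mirroring the workflow
for pattern in ("dist/*.whl", "dist/*.tar.gz"):
    artifact = sorted(glob.glob(pattern))[-1]  # newest matching artifact
    subprocess.run(
        [
            "uv", "run", "--isolated", "--no-project", "-p", "3.11",
            "--with", artifact,
            "datastock/tests/prepublish.py",
        ],
        check=True,
    )
```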

CLASSIFIERS.txt

Lines changed: 10 additions & 11 deletions
@@ -1,11 +1,10 @@
-"Development Status :: 5 - Production/Stable"
-"Intended Audience :: Science/Research"
-"Programming Language :: Python :: 3"
-"Programming Language :: Python :: 3.6"
-"Programming Language :: Python :: 3.7"
-"Programming Language :: Python :: 3.8"
-"Programming Language :: Python :: 3.9"
-"Programming Language :: Python :: 3.10"
-"Programming Language :: Python :: 3.11"
-"Natural Language :: English"
-"License :: OSI Approved :: MIT License"
+Development Status :: 5 - Production/Stable
+Intended Audience :: Science/Research
+Programming Language :: Python :: 3
+Programming Language :: Python :: 3.6
+Programming Language :: Python :: 3.7
+Programming Language :: Python :: 3.8
+Programming Language :: Python :: 3.9
+Programming Language :: Python :: 3.10
+Programming Language :: Python :: 3.11
+Natural Language :: English
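The quotes were presumably dropped because the dynamic `classifiers = {file = ["CLASSIFIERS.txt"]}` entry in pyproject.toml takes the file content line by line, so quotation marks would leak into the metadata. A small sanity check one could run locally (not part of the commit; only the file name is taken from the diff):

```python
# Read CLASSIFIERS.txt the way a line-based metadata loader would and make sure
# every entry is a bare "A :: B" style classifier with no stray quotes.
with open("CLASSIFIERS.txt") as f:
    classifiers = [line.strip() for line in f if line.strip()]

for c in classifiers:
    assert not c.startswith('"') and not c.endswith('"'), c
    assert " :: " in c, c

print(f"{len(classifiers)} classifiers look well-formed")
```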

README.md

Lines changed: 34 additions & 34 deletions
@@ -41,15 +41,15 @@ Examples:
 Straightforward array visualization:
 ------------------------------------
 
-```
+``
 import datastock as ds
 
 # any 1d, 2d or 3d array
-aa = np.np.random.random((100, 100, 100))
+aa = np.random((100, 100, 100))
 
 # plot interactive figure using shortcut to method
 dax = ds.plot_as_array(aa)
-```
+``
 
 Now do **shift + left clic** on any axes, the rest of the interactive commands are automatically printed in your python console
 
@@ -75,7 +75,7 @@ Thanks to dref, the class knows the relationaships between all numpy arrays.
 In particular it knows which arrays share the same references / dimensions
 
 
-```
+```python
 import numpy as np
 import datastock as ds
 
@@ -96,24 +96,24 @@ lprof = [(1 + np.cos(t)[:, None]) * x[None, :] for t in lt]
 # Populate DataStock
 
 # instanciate
-st = ds.DataStock()
+coll = ds.DataStock()
 
 # add references (i.e.: store size of each dimension under a unique key)
-st.add_ref(key='nc', size=nc)
-st.add_ref(key='nx', size=nx)
+coll.add_ref(key='nc', size=nc)
+coll.add_ref(key='nx', size=nx)
 for ii, nt in enumerate(lnt):
-    st.add_ref(key=f'nt{ii}', size=nt)
+    coll.add_ref(key=f'nt{ii}', size=nt)
 
 # add data dependening on these references
 # you can, optionally, specify units, physical dimensionality (ex: distance, time...), quantity (ex: radius, height, ...) and name (to your liking)
 
-st.add_data(key='x', data=x, dimension='distance', quant='radius', units='m', ref='nx')
+coll.add_data(key='x', data=x, dimension='distance', quant='radius', units='m', ref='nx')
 for ii, nt in enumerate(lnt):
-    st.add_data(key=f't{ii}', data=lt[ii], dimension='time', units='s', ref=f'nt{ii}')
-    st.add_data(key=f'prof{ii}', data=lprof[ii], dimension='velocity', units='m/s', ref=(f'nt{ii}', 'x'))
+    coll.add_data(key=f't{ii}', data=lt[ii], dimension='time', units='s', ref=f'nt{ii}')
+    coll.add_data(key=f'prof{ii}', data=lprof[ii], dimension='velocity', units='m/s', ref=(f'nt{ii}', 'x'))
 
 # print in the console the content of st
-st
+coll
 ```
 
 <p align="center">
@@ -124,22 +124,22 @@ You can see that DataStock stores the relationships between each array and each
 Specifying explicitly the references is only necessary if there is an ambiguity (i.e.: several references have the same size, like nx and nt2 in our case)
 
 
-```
+``
 # plot any array interactively
-dax = st.plot_as_array('x')
-dax = st.plot_as_array('t0')
-dax = st.plot_as_array('prof0')
-dax = st.plot_as_array('prof0', keyX='t0', keyY='x', aspect='auto')
-```
+dax = coll.plot_as_array('x')
+dax = coll.plot_as_array('t0')
+dax = coll.plot_as_array('prof0')
+dax = coll.plot_as_array('prof0', keyX='t0', keyY='x', aspect='auto')
+``
 
 You can then decide to store any object category
 Let's create a 'campaign' category to store the characteristics of each measurements campaign
 and let's add a 'campaign' parameter to each profile data
 
-```
+``
 # add arbitrary object category as sub-dict of self.dobj
 for ii in range(nc):
-    st.add_obj(
+    coll.add_obj(
         which='campaign',
         key=f'c{ii}',
         start_date=f'{ii}.04.2022',
@@ -150,16 +150,16 @@ for ii in range(nc):
     )
 
 # create new 'campaign' parameter for data arrays
-st.add_param('campaign', which='data')
+coll.add_param('campaign', which='data')
 
 # tag each data with its campaign
 for ii in range(nc):
-    st.set_param(which='data', key=f't{ii}', param='campaign', value=f'c{ii}')
-    st.set_param(which='data', key=f'prof{ii}', param='campaign', value=f'c{ii}')
+    coll.set_param(which='data', key=f't{ii}', param='campaign', value=f'c{ii}')
+    coll.set_param(which='data', key=f'prof{ii}', param='campaign', value=f'c{ii}')
 
 # print in the console the content of st
-st
-```
+coll
+``
 
 <p align="center">
   <img align="middle" src="https://github.com/ToFuProject/datastock/blob/devel/README_figures/DataStock_Obj.png" width="600" alt="Direct 3d array visualization"/>
@@ -168,31 +168,31 @@ st
 DataStock also provides built-in object selection method to allow return all
 objects matching a criterion, as lits of int indices, bool indices or keys.
 
-```
-In [9]: st.select(which='campaign', index=2, returnas=int)
+``
+In [9]: coll.select(which='campaign', index=2, returnas=int)
 Out[9]: array([2])
 
 # list of 2 => return all matches inside the interval
-In [10]: st.select(which='campaign', index=[2, 4], returnas=int)
+In [10]: coll.select(which='campaign', index=[2, 4], returnas=int)
 Out[10]: array([2, 3, 4])
 
 # tuple of 2 => return all matches outside the interval
-In [11]: st.select(which='campaign', index=(2, 4), returnas=int)
+In [11]: coll.select(which='campaign', index=(2, 4), returnas=int)
 Out[11]: array([0, 1])
 
 # return as keys
-In [12]: st.select(which='campaign', index=(2, 4), returnas=str)
+In [12]: coll.select(which='campaign', index=(2, 4), returnas=str)
 Out[12]: array(['c0', 'c1'], dtype='<U2')
 
 # return as bool indices
-In [13]: st.select(which='campaign', index=(2, 4), returnas=bool)
+In [13]: coll.select(which='campaign', index=(2, 4), returnas=bool)
 Out[13]: array([ True, True, False, False, False])
 
 # You can combine as many constraints as needed
-In [17]: st.select(which='campaign', index=[2, 4], operator='Barnaby', returnas=str)
+In [17]: coll.select(which='campaign', index=[2, 4], operator='Barnaby', returnas=str)
 Out[17]: array(['c3', 'c4'], dtype='<U2')
 
-```
+``
 
 You can also decide to sub-class DataStock to implement methods and visualizations specific to your needs
 
@@ -205,6 +205,6 @@ DataStock provides built-in methods like:
 - size is the total size of all data stored in the instance in bytes
 - dsize is a dict with the detail (size for each item in each sub-dict of the instance)
 * `save()`: will save the instance
-* `ds.load()`: will load a saved instance
+* `coll.load()`: will load a saved instance
 
 
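Read together, the README hunks above amount to a single rename of the example instance from `st` to `coll`. Here is a condensed, self-contained recap of that example as it reads after the change, using only calls that appear in the diff; the array sizes and key names are arbitrary illustration values.

```python
import numpy as np
import datastock as ds

# small synthetic profile data
nx, nt = 80, 50
x = np.linspace(1, 2, nx)
t = np.linspace(0, 10, nt)
prof = (1 + np.cos(t)[:, None]) * x[None, :]

# populate a DataStock instance, now conventionally named coll
coll = ds.DataStock()
coll.add_ref(key='nx', size=nx)
coll.add_ref(key='nt', size=nt)
coll.add_data(key='x', data=x, dimension='distance', quant='radius', units='m', ref='nx')
coll.add_data(key='t', data=t, dimension='time', units='s', ref='nt')
coll.add_data(key='prof', data=prof, dimension='velocity', units='m/s', ref=('nt', 'x'))

# interactive plot of any stored array
dax = coll.plot_as_array('prof', keyX='t', keyY='x', aspect='auto')
```
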
datastock/tests/prepublish.py

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+print('test import datastock')
+import datastock as ds
+print('import datastock ok')
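The new prepublish.py is deliberately tiny: it only proves the package imports from the built artifact. A hypothetical, slightly stronger variant (not in this commit) could also instantiate the central class, so a broken install fails loudly before `uv publish`:

```python
print('test import datastock')
import datastock as ds

# instantiate the main class from the README example as an extra smoke check
coll = ds.DataStock()
print('import datastock ok')
```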

pyproject.toml

Lines changed: 9 additions & 7 deletions
@@ -3,25 +3,28 @@ requires = ["setuptools", "setuptools_scm"]
 build-backend = "setuptools.build_meta"
 
 
-[tool.setuptools.packages.find]
-where = ["datastock"]
-include = ["datastock*"]
-namespaces = false
+#[tool.setuptools.packages.find]
+#where = ["datastock"]
+#include = ["datastock*"]
+#namespaces = false
+
+[tool.setuptools]
+packages = ["datastock", "datastock.tests"]
 
 
 [tool.setuptools_scm]
 version_file = "datastock/_version.py"
 
 
 [tool.setuptools.dynamic]
-readme = {file = ["README.md"]}
 classifiers = {file = ["CLASSIFIERS.txt"]}
 
 
 [project]
 name = "datastock"
+readme = "README.md"
 license = {text = "MIT"}
-dynamic = ["version", "readme", "classifiers"]
+dynamic = ["version", "classifiers"]
 description = "Generic handler for multiple heterogenous numpy arrays and subclasses"
 authors = [
     {name = "Didier VEZINET", email = "[email protected]"},

@@ -38,7 +41,6 @@ dependencies = [
     "scipy",
     "matplotlib",
     "PyQt5 ; platform_system != 'Windows'",
-    # "PySide2; platform_system == 'Windows'",
     "astropy",
 ]
 
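The switch from `[tool.setuptools.packages.find]` to an explicit `packages = ["datastock", "datastock.tests"]` is what makes `datastock/tests/prepublish.py` available to the workflow's import test. A quick way to verify that locally after `uv build`, as a sketch assuming a single freshly built wheel in `dist/` (stdlib only, not part of the commit):

```python
import glob
import zipfile

# pick the most recent wheel produced by `uv build`
wheel = sorted(glob.glob("dist/datastock-*.whl"))[-1]

# the wheel must ship the tests sub-package, including the prepublish script
names = zipfile.ZipFile(wheel).namelist()
assert any(n.endswith("datastock/tests/prepublish.py") for n in names), wheel
print(wheel, "contains datastock/tests/prepublish.py")
```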
