
Commit 0df23da

@`posts/CnsCompress.md`: improves

Renames `#related-posts` to `#synopsis--similar-posts`. Follow-up to commit 2a44ec8 (@`posts/CnsCompress.md#related-posts`; use local). Plus other improvements. TODO: search for past `git commit`s to `squash` this into.
1 parent e01c971 commit 0df23da


posts/CnsCompress.md

Lines changed: 14 additions & 7 deletions
@@ -1,15 +1,21 @@
 **\[Preview\] Have computers do most of the central nervous system, such as thalamus, auditory cortex, visual cortices, homunculus**
-_Parse inputs of 1024*1280@60fps (2.6gbps), output text at a few kbps, reproduce originals from text (with small losses.)_
 
-\[This post [from _SubStack_](https://swudususuwu.substack.com/p/future-plans-have-computers-do-most) allows [_all uses_](https://creativecommons.org/licenses/by/2.0/).]
+How to process *1024\*1280@60fps* (*standard definition*, *2.6 gigabits-per-second*) into compressed structures (such as text) which use just a *few kilobits-per-second*, then reproduce the originals from those compressed structures (with small [losses](https://deepai.org/machine-learning-glossary-and-terms/perceptual-loss-function)).
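As a quick sanity check on those numbers, here is a minimal arithmetic sketch. The bits-per-pixel value and the exact compressed bitrate are assumptions (the post just says "a few kilobits-per-second"); they are chosen only to show that the raw stream is roughly 2.5 gigabits-per-second (which agrees with the 2.6 figure to within the choice of bits-per-pixel), plus that the implied compression ratio is close to the `1,000,000 -> 2` guess given further below.

```cpp
/* Minimal sketch: back-of-envelope arithmetic for the claimed throughputs.
 * Assumptions (not from the post): 32 bits per pixel for the raw stream,
 * 2.6 kilobits-per-second as "a few kilobits-per-second". */
#include <cstdio>

int main() {
	const double pixelsPerFrame = 1024.0 * 1280.0; /* 1,310,720 pixels */
	const double framesPerSecond = 60.0;
	const double bitsPerPixel = 32.0; /* assumption */
	const double rawBitsPerSecond = pixelsPerFrame * framesPerSecond * bitsPerPixel;
	const double compressedBitsPerSecond = 2.6e3; /* assumption: "a few kilobits-per-second" */
	std::printf("raw: %.2f gigabits-per-second\n", rawBitsPerSecond / 1e9); /* prints ~2.52 */
	std::printf("ratio: %.0f to 1\n", rawBitsPerSecond / compressedBitsPerSecond); /* prints roughly 968000 to 1 */
	return 0;
}
```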
 
+\[[This post](./CnsCompress.md) \[[<sup>2</sup>](https://swudususuwu.substack.com/p/future-plans-have-computers-do-most)\] is [released through *Creative Commons Attribution 2.0 Generic* (which allows all uses)](https://creativecommons.org/licenses/by/2.0/).\] \[This post is a work-in-progress.\]
+
+# Table of Contents
+- [Discussion](#Discussion)
+- [Synopsis + similar posts](#synopsis--similar-posts)
+
+******************************************
+# Discussion
 For the newest sources, use programs such as [_iSH_](https://apps.apple.com/us/app/ish-shell/id1436902243) (for _iOS_) or [_Termux_](https://play.google.com/store/apps/details?id=com.termux) (for _Android OS_) to run this:
 ```
 git clone https://github.com/SwuduSusuwu/SusuLib.git
 cd ./SusuLib/cxx && git switch preview && git pull --rebase && ls
 ```
-To [contribute](https://github.com/SwuduSusuwu/SusuLib?tab=readme-ov-file#how-to-contribute) to this, submit [new pull requests](https://github.com/SwuduSusuwu/SusuLib/pulls) which reference <https://github.com/SwuduSusuwu/SusuLib/issues/2>.
-
+To [contribute](https://github.com/SwuduSusuwu/SusuLib?tab=readme-ov-file#how-to-contribute) to this, submit [new pull requests](https://github.com/SwuduSusuwu/SusuLib/pulls) which reference <https://github.com/SwuduSusuwu/SusuLib/issues/2>. Progress:
 - [`cxx/ClassResultList.cxx`](https://github.com/SwuduSusuwu/SusuLib/blob/preview/cxx/ClassResultList.cxx) has correspondences to [neocortex](https://wikipedia.org/wiki/Neocortex), which is what humans use as databases.
 - [`cxx/AssistantCns.cxx`](https://github.com/SwuduSusuwu/SusuLib/blob/preview/cxx/AssistantCns.cxx) uses correspondences to [_Broca's area_](https://wikipedia.org/wiki/Broca's_area) (produces language through recursive processes), [_Wernicke’s area_](https://wikipedia.org/wiki/Wernicke's_area) (parses languages through [recursive](https://wikipedia.org/wiki/recursion) processes), plus [hippocampus](https://wikipedia.org/wiki/Hippocampus) (integration to the neocortex + [imagination](https://wikipedia.org/wiki/Procedural_generation) through numerous regions).
 - [`cxx/ClassCns.cxx`](https://github.com/SwuduSusuwu/SusuLib/blob/preview/cxx/ClassCns.cxx) (which can use "backends" (implementations) such as [`tensorflow`](https://github.com/tensorflow/tensorflow), [`apxr_run`](https://github.com/Rober-t/apxr_run) or [`HSOM`](https://github.com/CarsonScott/HSOM)) is a [pure-virtual](https://en.cppreference.com/book/intro/abstract_classes), general-purpose [emulation (heuristic approximate)](https://wikipedia.org/wiki/emulation) of neural tissue (a hedged sketch of such an interface follows this list).
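To make the `cxx/ClassCns.cxx` bullet concrete, here is a minimal, hedged C++ sketch of what a pure-virtual neural-tissue class with swappable backends could look like. The names (`Cns`, `CnsBackend`, `makeCns`, `train`, `process`) are hypothetical placeholders, not the actual interfaces of `cxx/ClassCns.cxx`:

```cpp
/* Hedged sketch (hypothetical names; not the actual `cxx/ClassCns.cxx` interfaces). */
#include <memory>
#include <vector>

/* Pure-virtual, general-purpose emulation (heuristic approximate) of neural tissue. */
class Cns {
public:
	virtual ~Cns() = default;
	/* Trains on pairs of (input activations, expected output activations). */
	virtual void train(const std::vector<std::vector<float>> &inputs,
			const std::vector<std::vector<float>> &expectedOutputs) = 0;
	/* Produces output activations from input activations. */
	virtual std::vector<float> process(const std::vector<float> &input) const = 0;
};

/* Which "backend" (implementation) performs the actual computation. */
enum class CnsBackend { Tensorflow, ApxrRun, Hsom };

/* A factory such as this would hide the backend choice from callers. */
std::unique_ptr<Cns> makeCns(CnsBackend backend);
```

Such a factory would let callers (for example, whatever `cxx/AssistantCns.cxx` uses) stay independent of whether `tensorflow`, `apxr_run`, or `HSOM` performs the computation.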
@@ -37,7 +43,7 @@ Sources: <https://wikipedia.org/wiki/Visual_cortex>, [*Neuroscience for Dummies*
 
 ______
 
-Don't know if the 2 `arxiv.org` documents (\[1\]\[2\]) are about finished programs which do this:
+Found 2 `arxiv.org` documents (\[1\]\[2\]) through searches for finished programs which do this (but those documents are difficult to follow, so do not know if they discuss actual finished programs):
 - \[1\] [A Computationally Efficient Neural Video Compression Accelerator Based on a Sparse CNN-Transformer Hybrid Network](https://arxiv.org/html/2312.10716v1)
 - \[2\] [Advances In Video Compression System Using Deep Neural Network: A Review And Case Studies](https://arxiv.org/abs/2101.06341)
 
@@ -53,10 +59,11 @@ Those compression ratio guesses are based on the amount of visuals which human c
 - This section was introduced in response to [_Solar-Pro-2_'s review](https://github.com/SwuduSusuwu/SusuLib/issues/2#issuecomment-3141463645) (which suggests giving specifics of the compression ratio guesses).
 - Losses in simple computer visual formats (such as `mpeg-2`) cause noticeable "artifacts" ([accidental corruption](https://wikipedia.org/wiki/Compression_artifact)) such that viewers knew that the throughput was not sufficient; knew that specifics were lost/altered --- [accidental misidentification is common since human neural tissue does not have this attribute](https://swudususuwu.substack.com/p/do-some-jurisdictions-still-trust#%C2%A7fallible-witnesses).
 - Expect huge compression ratios due to the fact that human neural systems are not mature until given numerous years of training datasets. Guess that artificial intelligences which have processed years of visuals + sounds will have produced databases (of primitive symbols, concepts, meshes, ...) which allow compression ratios of close to 1,000,000 -> 2 for large visual inputs which are not random. But due to how lossy biological neural tissue is, such super-compressed records cannot come close to 100% accuracy, so traditional formulas will still have use.
-- This will process input such as [_Fanuc_'s videos of somewhat-autonomous tools](https://youtu.be/7lI-PY7InV8) into text (or symbolic / conceptual nodes) which allow to insert such as `Plus, produces \*.`, to produce outputs which shows _Fanuc_'s tools `*` (`*` is a wildcard --- can use symbols such as `Teslas` or `houses`).
+- This will process input such as [_Fanuc_'s videos of somewhat-autonomous tools](https://youtu.be/7lI-PY7InV8) into text (or symbolic / conceptual nodes) which allows inserts such as `Plus, produces houses.`, to produce outputs which show such tools produce houses (a hedged sketch of such a pipeline follows this list).
 - Or process input such as [this show about simple autonomous tools which mass-produce simple autonomous tools](https://youtu.be/hLDbRm-98cs); once processed, this can accept inserts such as `As opposed to the standalone robotic arms shown, produces tools with locomotive systems + 2 arms`, which will output a show about such improved tools.
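Here is a hedged, interface-level C++ sketch of the loop which the 2 bullets above describe (parse visuals into symbolic/conceptual nodes, compress those to text, edit the text, reproduce visuals). All names (`Frame`, `ConceptNode`, `CnsVideoCodec`, `exampleEdit`) are hypothetical placeholders rather than actual `SusuLib` interfaces, and the frame layout is an assumption:

```cpp
/* Hedged sketch (hypothetical names; not actual SusuLib interfaces) of the
 * "visuals -> symbolic/conceptual nodes -> text -> edited text -> visuals" loop. */
#include <string>
#include <vector>

struct Frame { std::vector<unsigned char> pixels; }; /* one raw 1024x1280 frame (layout is an assumption) */
struct ConceptNode { std::string symbol; std::vector<ConceptNode> parts; }; /* e.g. "robotic arm" with parts "welder", "chassis" */

class CnsVideoCodec { /* abstract; a backend built from `ClassCns`-style emulations would implement this */
public:
	virtual ~CnsVideoCodec() = default;
	virtual std::vector<ConceptNode> parseToConcepts(const std::vector<Frame> &frames) = 0; /* visual cortices analogue */
	virtual std::string conceptsToText(const std::vector<ConceptNode> &nodes) = 0; /* Broca's area analogue */
	virtual std::vector<ConceptNode> textToConcepts(const std::string &text) = 0; /* Wernicke's area analogue */
	virtual std::vector<Frame> conceptsToFrames(const std::vector<ConceptNode> &nodes) = 0; /* imagination analogue */
};

/* Compress to text, append an instruction, then reproduce altered visuals. */
std::vector<Frame> exampleEdit(CnsVideoCodec &codec, const std::vector<Frame> &input) {
	const std::string synopsis = codec.conceptsToText(codec.parseToConcepts(input));
	return codec.conceptsToFrames(codec.textToConcepts(synopsis + " Plus, produces houses."));
}
```

An actual implementation would have to produce each of those 4 stages from trained `ClassCns`-style emulations; this sketch just fixes the shape of the data flow.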
 
-## Synopsis + related posts/resources
+******************************************
+# Synopsis + similar posts
 - [\[Preview\] How to use _FLOSS_ code to produce autonomous _Arduino_/_Elegoo_ tools](./ArduinoElegooTools.md)
 - [\[Preview\] How to produce general-use autonomous tools through calculus (continuous formulas), + use TensorFlow for synthesis of close-to-human consciousness.](https://github.com/SwuduSusuwu/SusuPosts/blob/preview/posts/Autonomous-tools_+_human-consciousness.md)
 - [How to improve how fast desktops/laptops/phones execute code](./SimdGpgpuTpu.md)
